diff --git a/authorization_services/topics/getting-started-overview.adoc b/authorization_services/topics/getting-started-overview.adoc
index 1254af1af6..2d5fb7b5c0 100644
--- a/authorization_services/topics/getting-started-overview.adoc
+++ b/authorization_services/topics/getting-started-overview.adoc
@@ -6,7 +6,6 @@ There is one caveat to this. You have to run a separate {appserver_name} instanc
To boot {project_name} Server:
-ifeval::["{kc_dist}" == "quarkus"]
.Linux/Unix
[source]
----
@@ -18,21 +17,6 @@ $ .../bin/kc.sh start-dev --http-port 8180
----

.Windows
[source]
----
> ...\bin\kc.bat start-dev --http-port 8180
----
-endif::[]
-
-ifeval::["{kc_dist}" == "wildfly"]
-.Linux/Unix
-[source]
-----
-$ .../bin/standalone.sh -Djboss.socket.binding.port-offset=100
-----
-
-.Windows
-[source]
-----
-> ...\bin\standalone.bat -Djboss.socket.binding.port-offset=100
-----
-endif::[]
After installing and booting both servers, you should be able to access the {project_name} Admin Console at http://localhost:8180/auth/admin/ and also the {appserver_name} instance at
http://localhost:8080.
diff --git a/getting_started/.asciidoctorconfig b/getting_started/.asciidoctorconfig
deleted file mode 100644
index acb09a0956..0000000000
--- a/getting_started/.asciidoctorconfig
+++ /dev/null
@@ -1 +0,0 @@
-:imagesdir: {asciidoctorconfigdir}
diff --git a/getting_started/docinfo-footer.html b/getting_started/docinfo-footer.html
deleted file mode 120000
index a39d3bd0f6..0000000000
--- a/getting_started/docinfo-footer.html
+++ /dev/null
@@ -1 +0,0 @@
-../aggregation/navbar.html
\ No newline at end of file
diff --git a/getting_started/docinfo.html b/getting_started/docinfo.html
deleted file mode 120000
index 14514f94d2..0000000000
--- a/getting_started/docinfo.html
+++ /dev/null
@@ -1 +0,0 @@
-../aggregation/navbar-head.html
\ No newline at end of file
diff --git a/getting_started/images/account-console.png b/getting_started/images/account-console.png
deleted file mode 100644
index 155673392c..0000000000
Binary files a/getting_started/images/account-console.png and /dev/null differ
diff --git a/getting_started/images/add-client.png b/getting_started/images/add-client.png
deleted file mode 100644
index 363a2bb464..0000000000
Binary files a/getting_started/images/add-client.png and /dev/null differ
diff --git a/getting_started/images/add-demo-realm.png b/getting_started/images/add-demo-realm.png
deleted file mode 100644
index 4062243a80..0000000000
Binary files a/getting_started/images/add-demo-realm.png and /dev/null differ
diff --git a/getting_started/images/add-user.png b/getting_started/images/add-user.png
deleted file mode 100644
index 25dc0404a4..0000000000
Binary files a/getting_started/images/add-user.png and /dev/null differ
diff --git a/getting_started/images/admin-console.png b/getting_started/images/admin-console.png
deleted file mode 100644
index babb332243..0000000000
Binary files a/getting_started/images/admin-console.png and /dev/null differ
diff --git a/getting_started/images/admin-login.png b/getting_started/images/admin-login.png
deleted file mode 100644
index 57263465b3..0000000000
Binary files a/getting_started/images/admin-login.png and /dev/null differ
diff --git a/getting_started/images/client-install-selected.png b/getting_started/images/client-install-selected.png
deleted file mode 100644
index fc9dd1746b..0000000000
Binary files a/getting_started/images/client-install-selected.png and /dev/null differ
diff --git a/getting_started/images/clients.png b/getting_started/images/clients.png
deleted file mode 100644
index f4e95a70d1..0000000000
Binary files a/getting_started/images/clients.png and /dev/null differ
diff --git a/getting_started/images/demo-login.png b/getting_started/images/demo-login.png
deleted file mode 100644
index 58df0aaf6e..0000000000
Binary files a/getting_started/images/demo-login.png and /dev/null differ
diff --git a/getting_started/images/demo-realm.png b/getting_started/images/demo-realm.png
deleted file mode 100644
index e48cd19630..0000000000
Binary files a/getting_started/images/demo-realm.png and /dev/null differ
diff --git a/getting_started/images/keycloak-json.png b/getting_started/images/keycloak-json.png
deleted file mode 100644
index 99c1d0254d..0000000000
Binary files a/getting_started/images/keycloak-json.png and /dev/null differ
diff --git a/getting_started/images/keycloak_logo.png b/getting_started/images/keycloak_logo.png
deleted file mode 100755
index 4883f52302..0000000000
Binary files a/getting_started/images/keycloak_logo.png and /dev/null differ
diff --git a/getting_started/images/master_realm.png b/getting_started/images/master_realm.png
deleted file mode 100644
index e02c0f4cc6..0000000000
Binary files a/getting_started/images/master_realm.png and /dev/null differ
diff --git a/getting_started/images/success.png b/getting_started/images/success.png
deleted file mode 100644
index 2dcbd3026d..0000000000
Binary files a/getting_started/images/success.png and /dev/null differ
diff --git a/getting_started/images/update-password.png b/getting_started/images/update-password.png
deleted file mode 100644
index c3e68b6e18..0000000000
Binary files a/getting_started/images/update-password.png and /dev/null differ
diff --git a/getting_started/images/user-credentials.png b/getting_started/images/user-credentials.png
deleted file mode 100644
index 723058d5a7..0000000000
Binary files a/getting_started/images/user-credentials.png and /dev/null differ
diff --git a/getting_started/images/vanilla.png b/getting_started/images/vanilla.png
deleted file mode 100644
index 7b610a280c..0000000000
Binary files a/getting_started/images/vanilla.png and /dev/null differ
diff --git a/getting_started/images/welcome.png b/getting_started/images/welcome.png
deleted file mode 100644
index 3cfd5bcc67..0000000000
Binary files a/getting_started/images/welcome.png and /dev/null differ
diff --git a/getting_started/index.adoc b/getting_started/index.adoc
deleted file mode 100644
index d5f1f21bfc..0000000000
--- a/getting_started/index.adoc
+++ /dev/null
@@ -1,17 +0,0 @@
-:toc: left
-:toclevels: 3
-:sectanchors:
-:linkattrs:
-:context:
-
-include::topics/templates/document-attributes-community.adoc[]
-
-:getting_started_guide:
-
-= {gettingstarted_name}
-
-:release_header_guide: {gettingstarted_name_short}
-:release_header_latest_link: {gettingstarted_link_latest}
-include::topics/templates/release-header.adoc[]
-
-include::topics.adoc[]
diff --git a/getting_started/master-docinfo.xml b/getting_started/master-docinfo.xml
deleted file mode 100644
index 81c7e5894f..0000000000
--- a/getting_started/master-docinfo.xml
+++ /dev/null
@@ -1,26 +0,0 @@
-{project_name_full}
-{project_versionDoc}
-For Use with {project_name_full} {project_versionDoc}
-{gettingstarted_name}
-{project_versionDoc}
-
-This guide helps you practice using {project_name} to evaluate it before you use it in a production environment. It includes instructions for installing the {project_name} server in standalone mode, creating accounts and realms for managing users and applications, and securing a {appserver_name} server application.
-
-
-
- Red Hat Customer Content Services
-
-
- Copyright 2021 Red Hat, Inc.
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
-
diff --git a/getting_started/master.adoc b/getting_started/master.adoc
deleted file mode 100644
index 889762316b..0000000000
--- a/getting_started/master.adoc
+++ /dev/null
@@ -1,13 +0,0 @@
-:toc:
-:toclevels: 3
-:numbered:
-:linkattrs:
-:context:
-
-include::topics/templates/document-attributes-product.adoc[]
-
-:getting_started_guide:
-
-= {gettingstarted_name}
-
-include::topics.adoc[]
\ No newline at end of file
diff --git a/getting_started/pom.xml b/getting_started/pom.xml
deleted file mode 100644
index b973b3301a..0000000000
--- a/getting_started/pom.xml
+++ /dev/null
@@ -1,46 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-    <modelVersion>4.0.0</modelVersion>
-
-    <parent>
-        <groupId>org.keycloak.documentation</groupId>
-        <artifactId>documentation-parent</artifactId>
-        <version>999-SNAPSHOT</version>
-        <relativePath>../</relativePath>
-    </parent>
-
-    <name>Getting Started</name>
-    <artifactId>getting-started</artifactId>
-    <packaging>pom</packaging>
-
-    <build>
-        <plugins>
-            <plugin>
-                <groupId>org.keycloak.documentation</groupId>
-                <artifactId>header-maven-plugin</artifactId>
-                <executions>
-                    <execution>
-                        <id>add-file-headers</id>
-                    </execution>
-                </executions>
-            </plugin>
-            <plugin>
-                <groupId>org.asciidoctor</groupId>
-                <artifactId>asciidoctor-maven-plugin</artifactId>
-                <executions>
-                    <execution>
-                        <id>asciidoc-to-html</id>
-                    </execution>
-                </executions>
-            </plugin>
-            <plugin>
-                <artifactId>maven-antrun-plugin</artifactId>
-                <executions>
-                    <execution>
-                        <id>echo-output</id>
-                    </execution>
-                </executions>
-            </plugin>
-        </plugins>
-    </build>
-</project>
diff --git a/getting_started/topics.adoc b/getting_started/topics.adoc
deleted file mode 100644
index db184ae2fe..0000000000
--- a/getting_started/topics.adoc
+++ /dev/null
@@ -1,7 +0,0 @@
-ifeval::[{project_community}==true]
-include::topics/introduction-keycloak.adoc[]
-endif::[]
-include::topics/templates/making-open-source-more-inclusive.adoc[]
-include::topics/assembly-installing-standalone.adoc[]
-include::topics/assembly-creating-first-realm.adoc[]
-include::topics/assembly-securing-sample-app.adoc[]
diff --git a/getting_started/topics/assembly-creating-first-realm.adoc b/getting_started/topics/assembly-creating-first-realm.adoc
deleted file mode 100644
index 8901a6a685..0000000000
--- a/getting_started/topics/assembly-creating-first-realm.adoc
+++ /dev/null
@@ -1,22 +0,0 @@
-// UserStory: As an RH SSO customer, I want to perform initial admin procedures
-
-// This assembly is included in the following assemblies:
-//
-//
-
-// Retains the context of the parent assembly if this assembly is nested within another assembly.
-// See also the complementary step on the last line of this file.
-ifdef::context[:parent-context: {context}]
-
-[id="creating-first-realm_{context}"]
-== Creating a realm and a user
-The first use of the {project_name} admin console is to create a realm and create a user in that realm. You use that user to log in to your new realm and visit the built-in account console, to which all users have access.
-
-include::first-realm/con-realms-apps.adoc[leveloffset=2]
-include::first-realm/proc-create-realm.adoc[leveloffset=2]
-include::first-realm/proc-create-user.adoc[leveloffset=2]
-include::first-realm/proc-view-account.adoc[leveloffset=2]
-
-// Restore the context to what it was before this assembly.
-ifdef::parent-context[:context: {parent-context}]
-ifndef::parent-context[:!context:]
diff --git a/getting_started/topics/assembly-installing-standalone.adoc b/getting_started/topics/assembly-installing-standalone.adoc
deleted file mode 100644
index 62aca067df..0000000000
--- a/getting_started/topics/assembly-installing-standalone.adoc
+++ /dev/null
@@ -1,36 +0,0 @@
-// UserStory: As an RH SSO customer, I want to perform a quick setup of SSO.
-
-// This assembly is included in the following assemblies:
-//
-//
-
-// Retains the context of the parent assembly if this assembly is nested within another assembly.
-// See also the complementary step on the last line of this file.
-ifdef::context[:parent-context: {context}]
-
-[id="installing-standalone_{context}"]
-== Installing a sample instance of {project_name}
-
-This section describes how to install and start a {project_name} server in standalone mode, set up the initial admin user, and log in to the {project_name} Admin Console.
-
-ifeval::[{project_product}==true]
-.Additional Resources
-This installation is intended for practice use of {project_name}. For instructions on installation in a production environment and full details on all product features, see the other guides in the link:https://access.redhat.com/documentation/en-us/red_hat_single_sign-on/{project_versionDoc}[{project_name}] documentation.
-endif::[]
-
-ifeval::[{project_community}==true]
-
-include::standalone/proc-installing-server-community.adoc[leveloffset=2]
-endif::[]
-
-ifeval::[{project_product}==true]
-include::standalone/proc-installing-server-product.adoc[leveloffset=2]
-endif::[]
-
-include::standalone/proc-starting-server.adoc[leveloffset=2]
-include::standalone/proc-creating-admin.adoc[leveloffset=2]
-include::standalone/proc-logging-in-admin-console.adoc[leveloffset=2]
-
-// Restore the context to what it was before this assembly.
-ifdef::parent-context[:context: {parent-context}]
-ifndef::parent-context[:!context:]
diff --git a/getting_started/topics/assembly-securing-sample-app.adoc b/getting_started/topics/assembly-securing-sample-app.adoc
deleted file mode 100644
index d5b36c5bca..0000000000
--- a/getting_started/topics/assembly-securing-sample-app.adoc
+++ /dev/null
@@ -1,28 +0,0 @@
-// UserStory: As an RH SSO customer, I want to complete the initial configuration of my standalone server
-
-// This assembly is included in the following assemblies:
-//
-//
-
-// Retains the context of the parent assembly if this assembly is nested within another assembly.
-// See also the complementary step on the last line of this file.
-ifdef::context[:parent-context: {context}]
-
-[id="securing-sample-app_{context}"]
-== Securing a sample application
-
-Now that you have an admin account, a realm, and a user, you can use {project_name} to secure a sample {appserver_name} servlet application. You install a {appserver_name} client adapter, register the application in the admin console, modify the {appserver_name} instance to work with {project_name}, and use {project_name} with some sample code to secure the application.
-
-.Prerequisites
-
-* You need to adjust the port used by {project_name} to avoid port conflicts with {appserver_name}.
-
-include::sample-app/proc-adjusting-ports.adoc[leveloffset=2]
-include::sample-app/proc-installing-client-adapter.adoc[leveloffset=2]
-include::sample-app/proc-registering-app.adoc[leveloffset=2]
-include::sample-app/proc-modifying-app.adoc[leveloffset=2]
-include::sample-app/proc-installing-sample-code.adoc[leveloffset=2]
-
-// Restore the context to what it was before this assembly.
-ifdef::parent-context[:context: {parent-context}]
-ifndef::parent-context[:!context:]
diff --git a/getting_started/topics/first-realm/con-realms-apps.adoc b/getting_started/topics/first-realm/con-realms-apps.adoc
deleted file mode 100644
index 1f7708181c..0000000000
--- a/getting_started/topics/first-realm/con-realms-apps.adoc
+++ /dev/null
@@ -1,12 +0,0 @@
-// UserStory: As an RH SSO customer, I need to know what are the purposes of different realms
-
-[id="realms-apps_{context}"]
-= Realms and users
-When you log in to the admin console, you work in a realm, which is a space where you manage objects. Two types of realms exist:
-
-* `Master realm` - This realm was created for you when you first started {project_name}. It contains the admin account you created at the first login. You use this realm only to create other realms.
-
-* `Other realms` - These realms are created by the admin in the master realm. In these realms, administrators create users and applications. The applications are owned by the users.
-
-image:images/master_realm.png[Realms and applications]
-
diff --git a/getting_started/topics/first-realm/proc-create-realm.adoc b/getting_started/topics/first-realm/proc-create-realm.adoc
deleted file mode 100644
index b9c437b8f9..0000000000
--- a/getting_started/topics/first-realm/proc-create-realm.adoc
+++ /dev/null
@@ -1,33 +0,0 @@
-// UserStory: As an RH SSO customer, I need to know how to create a realm that protects applications
-
-[id="create-realm_{context}"]
-= Creating a realm
-
-As the admin in the master realm, you create the realms where administrators create users and applications.
-
-.Prerequisites
-
-* {project_name} is installed.
-* You have the initial admin account for the admin console.
-
-.Procedure
-
-. Go to http://localhost:8080/auth/admin/ and log in to the {project_name} admin console using the admin account.
-
-. From the *Master* menu, click *Add Realm*. When you are logged in to the master realm, this menu lists all other realms.
-
-. Type `demo` in the *Name* field.
-+
-.A new realm
-image:images/add-demo-realm.png[A new realm]
-+
-NOTE: The realm name is case-sensitive, so make note of the case that you use.
-
-. Click *Create*.
-+
-The main admin console page opens with the realm set to `demo`.
-+
-.Demo realm
-image:images/demo-realm.png[Demo realm]
-
-. Switch between managing the `master` realm and the realm you just created by clicking entries in the *Select realm* drop-down list.
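-
-If you prefer the command line over the console, the same realm can be created with the `kcadm.sh` admin CLI that ships in the server `bin` directory. This is a minimal sketch; the admin credentials are the ones you created earlier:
-
-[source,bash,subs=+attributes]
-----
-# Authenticate the CLI against the master realm first
-$ ./kcadm.sh config credentials --server http://localhost:8080/auth --realm master --user admin
-
-# Create and enable the demo realm
-$ ./kcadm.sh create realms -s realm=demo -s enabled=true
-----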
diff --git a/getting_started/topics/first-realm/proc-create-user.adoc b/getting_started/topics/first-realm/proc-create-user.adoc
deleted file mode 100644
index 5968fb0189..0000000000
--- a/getting_started/topics/first-realm/proc-create-user.adoc
+++ /dev/null
@@ -1,37 +0,0 @@
-// UserStory: As an RH SSO customer, I want to create a user in my first realm
-
-[id="create-user_{context}"]
-= Creating a user
-
-In the `demo` realm, you create a new user and a temporary password for that new user.
-
-.Procedure
-
-. From the menu, click *Users* to open the user list page.
-
-. On the right side of the empty user list, click *Add User* to open the Add user page.
-
-. Enter a name in the *Username* field.
-+
-This is the only required field.
-+
-.Add user page
-image:images/add-user.png[Add user page]
-
-. Flip the *Email Verified* switch to *On* and click *Save*.
-+
-The management page for the new user opens.
-
-. Click the *Credentials* tab to set a temporary password for the new user.
-
-. Type a new password and confirm it.
-
-. Click *Set Password* to set the user password to the new one you specified.
-+
-.Manage Credentials page
-image:images/user-credentials.png[Manage Credentials page]
-+
-[NOTE]
-====
-This password is temporary and the user will be required to change it at the first login. If you prefer to create a password that is persistent, flip the *Temporary* switch to *Off* and click *Set Password*.
-====
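-
-Equivalently, a hedged command-line sketch using the `kcadm.sh` admin CLI; the username and password shown here are placeholders:
-
-[source,bash,subs=+attributes]
-----
-# Create an enabled user in the demo realm
-$ ./kcadm.sh create users -r demo -s username=myuser -s enabled=true -s emailVerified=true
-
-# Assign a temporary password that must be changed at first login
-$ ./kcadm.sh set-password -r demo --username myuser --new-password changeme --temporary
-----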
diff --git a/getting_started/topics/first-realm/proc-view-account.adoc b/getting_started/topics/first-realm/proc-view-account.adoc
deleted file mode 100644
index a340e1fff7..0000000000
--- a/getting_started/topics/first-realm/proc-view-account.adoc
+++ /dev/null
@@ -1,26 +0,0 @@
-// UserStory: As an RH SSO customer, I want to test the login for the first user
-
-[id="view-account_{context}"]
-= Logging into the Account Console
-Every user in a realm has access to the account console. You use this console to update your profile information and change your credentials. You can now test logging in with that user in the realm that you created.
-
-.Procedure
-. Log out of the admin console by opening the user menu and selecting *Sign Out*.
-
-. Go to http://localhost:8080/auth/realms/demo/account and log in to your `demo` realm as the user that you just created.
-
-. When you are asked to supply a new password, enter a password that you can remember.
-+
-.Update password
-image:images/update-password.png[Update password]
-+
-The account console opens for this user.
-+
-.Account console
-image:images/account-console.png[]
-
-. Complete the required fields with any values to try out this page.
-
-.Next steps
-
-You are now ready for the final procedure, which is to secure a sample application that runs on {appserver_name}. See xref:securing-sample-app_{context}[Securing a sample application].
diff --git a/getting_started/topics/introduction-keycloak.adoc b/getting_started/topics/introduction-keycloak.adoc
deleted file mode 100644
index 089cf20c23..0000000000
--- a/getting_started/topics/introduction-keycloak.adoc
+++ /dev/null
@@ -1,4 +0,0 @@
-
-== Trying out Keycloak
-
-This guide helps you practice using {project_name} to evaluate it before you use it in a production environment. It includes instructions for installing the {project_name} server in standalone mode, creating accounts and realms for managing users and applications, and securing a {appserver_name} server application.
\ No newline at end of file
diff --git a/getting_started/topics/sample-app/proc-adjusting-ports.adoc b/getting_started/topics/sample-app/proc-adjusting-ports.adoc
deleted file mode 100644
index 6e3defa39b..0000000000
--- a/getting_started/topics/sample-app/proc-adjusting-ports.adoc
+++ /dev/null
@@ -1,59 +0,0 @@
-
-[id="adjusting-ports_{context}"]
-= Adjusting the port used by {project_name}
-
-The instructions in this guide apply to running {appserver_name} on the same machine as the {project_name} server. In this situation, even though {appserver_name} is bundled with {project_name}, you cannot use {appserver_name} as an application container. You must run a separate {appserver_name} instance for your servlet application.
-
-To avoid port conflicts, you need different ports to run {project_name} and {appserver_name}.
-
-.Prerequisites
-
-* You have an admin account for the admin console.
-* You created a demo realm.
-* You created a user in the demo realm.
-
-.Procedure
-
-ifeval::[{project_community}==true]
-. Download WildFly from link:https://www.wildfly.org/[WildFly.org].
-endif::[]
-ifeval::[{project_product}==true]
-. Download JBoss EAP 7.3 from the https://access.redhat.com/jbossnetwork/restricted/listSoftware.html?product=appplatform&downloadType=distributions[Red Hat customer portal].
-endif::[]
-
-. Unzip the downloaded {appserver_name}.
-+
-[source,bash,subs=+attributes]
-----
-$ unzip <filename>.zip
-----
-
-. Change to the {project_name} root directory.
-
-. Start the {project_name} server by supplying a value for the `jboss.socket.binding.port-offset` system property. This value is added to the base value of every port opened by the {project_name} server. In this example, *100* is the value.
-
-+
-.Linux/Unix
-[source,bash,subs=+attributes]
-----
-$ cd bin
-$ ./standalone.sh -Djboss.socket.binding.port-offset=100
-----
-
-+
-.Windows
-[source,bash,subs=+attributes]
-----
-> ...\bin\standalone.bat -Djboss.socket.binding.port-offset=100
-----
-
-+
-.Windows PowerShell
-[source,bash,subs=+attributes]
-----
-> ...\bin\standalone.bat -D"jboss.socket.binding.port-offset=100"
-----
-
-. Confirm that the {project_name} server is running. Go to http://localhost:8180/auth/admin/.
-+
-If the admin console opens, you are ready to install a client adapter that enables {appserver_name} to work with {project_name}.
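-
-Because the offset is added to every base port, HTTP moves from 8080 to 8180 and the management interface from 9990 to 10090. A minimal check from the command line, assuming `curl` is installed:
-
-[source,bash,subs=+attributes]
-----
-# 8080 (base HTTP port) + 100 (offset) = 8180
-$ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8180/auth/
-----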
diff --git a/getting_started/topics/sample-app/proc-installing-client-adapter.adoc b/getting_started/topics/sample-app/proc-installing-client-adapter.adoc
deleted file mode 100644
index 55711244c2..0000000000
--- a/getting_started/topics/sample-app/proc-installing-client-adapter.adoc
+++ /dev/null
@@ -1,111 +0,0 @@
-
-[id="installing-client-adapter_{context}"]
-= Installing the {appserver_name} client adapter
-
-When {appserver_name} and {project_name} are installed on the same machine, {appserver_name} requires some modification. To make this modification, you install a {project_name} client adapter.
-
-.Prerequisites
-
-* {appserver_name} is installed.
-
-* You have a backup of the `../standalone/configuration/standalone.xml` file if you have customized this file.
-
-.Procedure
-ifeval::[{project_community}==true]
-. Download the *WildFly OpenID Connect Client Adapter* distribution from link:https://www.keycloak.org/downloads.html[keycloak.org].
-endif::[]
-
-ifeval::[{project_product}==true]
-. Download the *Client Adapter for EAP 7* from the https://access.redhat.com/jbossnetwork/restricted/listSoftware.html?downloadType=distributions&product=core.service.rhsso[Red Hat customer portal].
-endif::[]
-
-. Change to the root directory of {appserver_name}.
-
-. Unzip the downloaded client adapter in this directory. For example:
-+
-[source,bash,subs=+attributes]
-----
-$ unzip <filename>.zip
-----
-
-. Change to the bin directory.
-+
-[source,bash,subs=+attributes]
-----
-$ cd bin
-----
-
-. Run the appropriate script for your platform.
-+
-[NOTE]
-====
-If you receive a `file not found` error, make sure that you used `unzip` in the previous step. This method of extraction installs the files in the right place.
-====
-
-ifeval::[{project_product}==true]
-+
-.Linux/Unix
-[source,bash,subs=+attributes]
-----
-$ ./jboss-cli.sh --file=adapter-elytron-install-offline.cli
-----
-
-+
-.Windows
-[source,bash,subs=+attributes]
-----
-> jboss-cli.bat --file=adapter-elytron-install-offline.cli
-----
-endif::[]
-
-ifeval::[{project_community}==true]
-+
-.WildFly 10 on Linux/Unix
-[source,bash,subs=+attributes]
-----
-$ ./jboss-cli.sh --file=adapter-install-offline.cli
-----
-
-+
-.WildFly 10 on Windows
-[source,bash,subs=+attributes]
-----
-> jboss-cli.bat --file=adapter-install-offline.cli
-----
-
-+
-.WildFly 11 on Linux/Unix
-[source,bash,subs=+attributes]
-----
-$ ./jboss-cli.sh --file=adapter-elytron-install-offline.cli
-----
-
-+
-.WildFly 11 on Windows
-[source,bash,subs=+attributes]
-----
-> jboss-cli.bat --file=adapter-elytron-install-offline.cli
-----
-endif::[]
-
-+
-[NOTE]
-====
-This script makes the necessary edits to the `.../standalone/configuration/standalone.xml` file.
-====
-
-. Start the application server.
-
-+
-.Linux/Unix
-[source,bash,subs=+attributes]
-----
-$ ./standalone.sh
-----
-
-+
-.Windows
-[source,bash,subs=+attributes]
-----
-> ...\standalone.bat
-----
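-
-To verify that the CLI script edited the configuration, you can search for the adapter subsystem. A minimal sketch, assuming a POSIX shell and the default configuration file location:
-
-[source,bash,subs=+attributes]
-----
-# A non-zero count indicates that the keycloak subsystem was added
-$ grep -c "urn:jboss:domain:keycloak" standalone/configuration/standalone.xml
-----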
diff --git a/getting_started/topics/sample-app/proc-installing-sample-code.adoc b/getting_started/topics/sample-app/proc-installing-sample-code.adoc
deleted file mode 100644
index 598b0193df..0000000000
--- a/getting_started/topics/sample-app/proc-installing-sample-code.adoc
+++ /dev/null
@@ -1,54 +0,0 @@
-
-[id="installing-sample-code_{context}"]
-= Installing sample code to secure the application
-
-The final procedure is to make this application secure by installing some sample code from the {quickstartRepo_link} repository. The quickstarts work with the most recent {project_name} release.
-
-The sample code is the *app-profile-jee-vanilla* quickstart. It demonstrates how to change a Jakarta EE application that is secured with basic authentication without changing the WAR. The {project_name} client adapter subsystem changes the authentication method and injects the configuration.
-
-.Prerequisites
-
-You have the following installed on your machine and available in your PATH.
-
-* Java JDK 8
-* Apache Maven 3.1.1 or higher
-* Git
-
-You have a *keycloak.json* file.
-
-.Procedure
-
-. Make sure your {appserver_name} application server is started.
-. Download the code and change directories using the following commands.
-+
-[source, subs="attributes"]
-----
-$ git clone {quickstartRepo_link}
-$ cd {quickstartRepo_dir}/app-profile-jee-vanilla/config
-----
-
-. Copy the `keycloak.json` file to the current directory.
-
-. Move one level up to the `app-profile-jee-vanilla` directory.
-
-. Install the code using the following command.
-+
-[source, subs="attributes"]
-----
-$ mvn clean wildfly:deploy
-----
-
-. Confirm that the application installation succeeded. Go to http://localhost:8080/vanilla where a login page is displayed.
-+
-.Login page confirming success
-image:images/vanilla.png[Login page confirming success]
-
-. Log in using the account that you created in the demo realm.
-+
-.Login page to demo realm
-image:images/demo-login.png[Login page to demo realm]
-+
-A message appears indicating that you have successfully used {project_name} to protect a sample {appserver_name} application. Congratulations!
-+
-.Complete success
-image:images/success.png[Complete success]
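-
-You can also verify the protection from the command line. A hedged sketch, assuming `curl` is available; an unauthenticated request should now be redirected to the {project_name} login page:
-
-[source,bash,subs=+attributes]
-----
-# Expect a redirect (302) toward the Keycloak authorization endpoint
-$ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/vanilla/
-----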
diff --git a/getting_started/topics/sample-app/proc-modifying-app.adoc b/getting_started/topics/sample-app/proc-modifying-app.adoc
deleted file mode 100644
index 2c7b41019e..0000000000
--- a/getting_started/topics/sample-app/proc-modifying-app.adoc
+++ /dev/null
@@ -1,57 +0,0 @@
-
-[id="modifying-app_{context}"]
-= Modifying the {appserver_name} instance
-
-The {appserver_name} servlet application requires additional configuration before it is secured by {project_name}.
-
-.Prerequisites
-
-* You created a client named *vanilla* in the *demo* realm.
-
-* You saved a template XML file for this client.
-
-.Procedure
-
-. Go to the `standalone/configuration` directory in your {appserver_name} root directory.
-. Open the `standalone.xml` file and search for the following text:
-+
-[source,xml]
-----
-<subsystem xmlns="urn:jboss:domain:keycloak:1.1"/>
-----
-
-. Change the XML entry from self-closing to using a pair of opening and closing tags as shown here:
-+
-[source,xml]
-----
-<subsystem xmlns="urn:jboss:domain:keycloak:1.1">
-</subsystem>
-----
-
-. Paste the contents of the XML template within the `<subsystem>` element, as shown in this example:
-+
-[source,xml]
-----
-<subsystem xmlns="urn:jboss:domain:keycloak:1.1">
-    <secure-deployment name="WAR MODULE NAME.war">
-        <realm>demo</realm>
-        <auth-server-url>http://localhost:8180/auth</auth-server-url>
-        <public-client>true</public-client>
-        <ssl-required>EXTERNAL</ssl-required>
-        <resource>vanilla</resource>
-    </secure-deployment>
-</subsystem>
-----
-
-. Change `WAR MODULE NAME.war` to `vanilla.war`:
-+
-[source,xml]
-----
-<subsystem xmlns="urn:jboss:domain:keycloak:1.1">
-    <secure-deployment name="vanilla.war">
-    ...
-</subsystem>
-----
-
-. Reboot the application server.
-
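-As an alternative to editing `standalone.xml` by hand, the same secure deployment can be added with `jboss-cli` while the server is running. This is a hedged sketch, assuming the client adapter subsystem is already installed; the attribute names mirror the XML elements above:
-
-[source,bash,subs=+attributes]
-----
-$ ./bin/jboss-cli.sh --connect
-[standalone@localhost:9990 /] /subsystem=keycloak/secure-deployment=vanilla.war:add(realm=demo,auth-server-url=http://localhost:8180/auth,public-client=true,ssl-required=EXTERNAL,resource=vanilla)
-----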
diff --git a/getting_started/topics/sample-app/proc-registering-app.adoc b/getting_started/topics/sample-app/proc-registering-app.adoc
deleted file mode 100644
index a249bbad88..0000000000
--- a/getting_started/topics/sample-app/proc-registering-app.adoc
+++ /dev/null
@@ -1,46 +0,0 @@
-
-[id="registering-app_{context}"]
-= Registering the {appserver_name} application
-
-You can now define and register the client in the {project_name} admin console.
-
-.Prerequisites
-
-* You installed a client adapter to work with {appserver_name}.
-
-.Procedure
-
-. Log in to the admin console with your admin account: http://localhost:8180/auth/admin/
-
-. In the top left drop-down list, select the `demo` realm.
-
-. Click `Clients` in the left side menu to open the Clients page.
-+
-.Clients
-image:images/clients.png[Clients]
-
-. On the right side, click *Create*.
-
-. On the Add Client dialog, create a client called *vanilla* by completing the fields as shown below:
-+
-.Add Client
-image:images/add-client.png[Add Client]
-
-. Click *Save*.
-
-. On the *Vanilla* client page that appears, click the *Installation* tab.
-
-. Select *Keycloak OIDC JSON* to generate a file that you need in a later procedure.
-+
-.Keycloak.json file
-image:images/keycloak-json.png[Keycloak.json file]
-
-. Click *Download* to save *keycloak.json* in a location that you can find later.
-
-
-. Select *Keycloak OIDC JBoss Subsystem XML* to generate an XML template.
-+
-.Template XML
-image:images/client-install-selected.png[Template XML]
-
-. Click *Download* to save a copy for use in the next procedure, which involves {appserver_name} configuration.
\ No newline at end of file
diff --git a/getting_started/topics/standalone/proc-creating-admin.adoc b/getting_started/topics/standalone/proc-creating-admin.adoc
deleted file mode 100644
index 016174f8c7..0000000000
--- a/getting_started/topics/standalone/proc-creating-admin.adoc
+++ /dev/null
@@ -1,20 +0,0 @@
-
-[id="create-admin_{context}"]
-= Creating the admin account
-
-Before you can use {project_name}, you need to create an admin account, which you use to log in to the {project_name} admin console.
-
-.Prerequisites
-* You saw no errors when you started the {project_name} server.
-
-.Procedure
-
-. Open http://localhost:8080/auth in your web browser.
-+
-The welcome page opens, confirming that the server is running.
-+
-.Welcome page
-image:images/welcome.png[Welcome page]
-
-. Enter a username and password to create an initial admin user.
-
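-As an alternative to the welcome page, the initial admin account can be created from the command line with the `add-user-keycloak` script in the `bin` directory. A minimal sketch; the password is a placeholder and the server must be restarted afterward:
-
-[source,bash,subs=+attributes]
-----
-# Create the initial admin user (takes effect after a server restart)
-$ ./bin/add-user-keycloak.sh -u admin -p changeme
-----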
diff --git a/getting_started/topics/standalone/proc-installing-server-community.adoc b/getting_started/topics/standalone/proc-installing-server-community.adoc
deleted file mode 100644
index 19f79d19cd..0000000000
--- a/getting_started/topics/standalone/proc-installing-server-community.adoc
+++ /dev/null
@@ -1,30 +0,0 @@
-
-[id="installing-server-community_{context}"]
-= Installing the Server
-You can install the server on Linux or Windows. The server download ZIP file contains the scripts and binaries to run the {project_name} server.
-
-.Procedure
-
-. Download *keycloak-{project_version}.[zip|tar.gz]* from https://www.keycloak.org/downloads.html[Keycloak downloads].
-
-. Place the file in a directory you choose.
-
-. Unpack the file using the appropriate utility for your platform, such as `unzip`, `tar`, or `Expand-Archive`.
-
-+
-.Linux/Unix
-[source,bash,subs=+attributes]
-----
-$ unzip keycloak-{project_version}.zip
-
-or
-
-$ tar -xvzf keycloak-{project_version}.tar.gz
-----
-
-+
-.Windows
-[source,bash,subs=+attributes]
-----
-> Expand-Archive -Path 'C:\Downloads\keycloak-{project_version}.zip' -DestinationPath 'C:\Downloads'
-----
diff --git a/getting_started/topics/standalone/proc-installing-server-product.adoc b/getting_started/topics/standalone/proc-installing-server-product.adoc
deleted file mode 100644
index 982b445b68..0000000000
--- a/getting_started/topics/standalone/proc-installing-server-product.adoc
+++ /dev/null
@@ -1,33 +0,0 @@
-
-[id="installing-server-product_{context}"]
-= Installing the {project_name} server
-
-For this sample instance of {project_name}, this procedure involves installation in standalone mode. The server download ZIP file contains the scripts and binaries to run the {project_name} server. You can install the server on Linux or Windows.
-
-.Procedure
-
-. Go to the https://access.redhat.com/jbossnetwork/restricted/listSoftware.html?downloadType=distributions&product=core.service.rhsso[Red Hat customer portal].
-
-. Download the {project_name} Server: *rh-sso-{project_version_base}.zip*
-
-. Place the file in a directory you choose.
-
-. Unpack the file using the appropriate utility for your platform, such as `unzip`, `tar`, or `Expand-Archive`.
-
-+
-.Linux/Unix
-[source,bash,subs=+attributes]
-----
-$ unzip rh-sso-{project_version_base}.zip
-
-or
-
-$ tar -xvzf rh-sso-{project_version_base}.tar.gz
-----
-
-+
-.Windows
-[source,bash,subs=+attributes]
-----
-> Expand-Archive -Path 'C:\Downloads\rh-sso-{project_version_base}.zip' -DestinationPath 'C:\Downloads'
-----
diff --git a/getting_started/topics/standalone/proc-logging-in-admin-console.adoc b/getting_started/topics/standalone/proc-logging-in-admin-console.adoc
deleted file mode 100644
index e7e554af0f..0000000000
--- a/getting_started/topics/standalone/proc-logging-in-admin-console.adoc
+++ /dev/null
@@ -1,30 +0,0 @@
-[id="logging-in-admin-console_{context}"]
-= Logging into the admin console
-
-After you create the initial admin account, you can log in to the admin console. In this console, you add users and register applications to be secured by {project_name}.
-
-.Prerequisites
-
-* You have an admin account for the admin console.
-
-.Procedure
-. Click the *Administration Console* link on the *Welcome* page or go directly to http://localhost:8080/auth/admin/ (the console URL).
-+
-[NOTE]
-====
-The Administration Console is generally referred to as the admin console for short in {project_name} documentation.
-====
-
-. Enter the username and password you created on the *Welcome* page to open the *admin console*.
-+
-.Admin console login screen
-image:images/admin-login.png[Admin console login screen]
-+
-The initial screen for the admin console appears.
-+
-.Admin console
-image:images/admin-console.png[Admin console]
-
-.Next steps
-
-Now that you can log into the admin console, you can begin creating realms where administrators can create users and give them access to applications. For more details, see xref:creating-first-realm_{context}[Creating a realm and a user].
diff --git a/getting_started/topics/standalone/proc-starting-server.adoc b/getting_started/topics/standalone/proc-starting-server.adoc
deleted file mode 100644
index 50e251c734..0000000000
--- a/getting_started/topics/standalone/proc-starting-server.adoc
+++ /dev/null
@@ -1,26 +0,0 @@
-[id="starting-server_{context}"]
-= Starting the {project_name} server
-
-You start the server on the system where you installed it.
-
-.Prerequisites
-* You saw no errors during the {project_name} server installation.
-
-.Procedure
-. Go to the `bin` directory of the server distribution.
-. Run the `standalone` boot script.
-
-+
-.Linux/Unix
-[source,bash,subs=+attributes]
-----
-$ cd bin
-$ ./standalone.sh
-----
-
-+
-.Windows
-[source,bash,subs=+attributes]
-----
-> ...\bin\standalone.bat
-----
diff --git a/getting_started/topics/templates b/getting_started/topics/templates
deleted file mode 120000
index d191264115..0000000000
--- a/getting_started/topics/templates
+++ /dev/null
@@ -1 +0,0 @@
-../../topics/templates
\ No newline at end of file
diff --git a/openshift-openj9/docinfo-footer.html b/openshift-openj9/docinfo-footer.html
deleted file mode 120000
index 5509bda2dd..0000000000
--- a/openshift-openj9/docinfo-footer.html
+++ /dev/null
@@ -1 +0,0 @@
-../openshift/docinfo-footer.html
\ No newline at end of file
diff --git a/openshift-openj9/docinfo.html b/openshift-openj9/docinfo.html
deleted file mode 120000
index 699b60e03f..0000000000
--- a/openshift-openj9/docinfo.html
+++ /dev/null
@@ -1 +0,0 @@
-../openshift/docinfo.html
\ No newline at end of file
diff --git a/openshift-openj9/images b/openshift-openj9/images
deleted file mode 120000
index 4dbae04539..0000000000
--- a/openshift-openj9/images
+++ /dev/null
@@ -1 +0,0 @@
-../openshift/images
\ No newline at end of file
diff --git a/openshift-openj9/index.adoc b/openshift-openj9/index.adoc
deleted file mode 100644
index b8e5ae40bc..0000000000
--- a/openshift-openj9/index.adoc
+++ /dev/null
@@ -1,19 +0,0 @@
-:toc:
-:toclevels: 3
-:numbered:
-:linkattrs:
-
-include::topics/templates/document-attributes-product.adoc[]
-:project_templates_url: {project_templates_base_url}/openj9
-:project_templates_version: {openshift_openj9_project_templates_version}
-:openshift_name: {openshift_openj9_name}
-:openshift_link: {openshift_openj9_link}
-:openshift_image_platforms: {openshift_openj9_platforms}
-:openshift_name_other: {openshift_openjdk_name}
-:openshift_link_other: {openshift_openjdk_link}
-
-:openshift:
-
-= {openshift_openj9_name}
-
-include::topics.adoc[]
diff --git a/openshift-openj9/master-docinfo.xml b/openshift-openj9/master-docinfo.xml
deleted file mode 100644
index 950565bc45..0000000000
--- a/openshift-openj9/master-docinfo.xml
+++ /dev/null
@@ -1,25 +0,0 @@
-{project_name_full}
-{project_versionDoc}
-For use with {project_name_full} {project_versionDoc}
-{openshift_openj9_name}
-{project_versionDoc}
-
- This guide consists of basic information and instructions to get started with {project_name_full} {project_versionDoc} for OpenShift on OpenJ9
-
-
- Red Hat Customer Content Services
-
-
- Copyright 2021 Red Hat, Inc.
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
-
diff --git a/openshift-openj9/master.adoc b/openshift-openj9/master.adoc
deleted file mode 100644
index f0b3a64517..0000000000
--- a/openshift-openj9/master.adoc
+++ /dev/null
@@ -1,21 +0,0 @@
-:toc:
-:toclevels: 3
-:numbered:
-:linkattrs:
-
-include::topics/templates/document-attributes-product.adoc[]
-:project_templates_url: {project_templates_base_url}/openj9
-:project_templates_version: {openshift_openj9_project_templates_version}
-:openshift_name: {openshift_openj9_name}
-:openshift_link: {openshift_openj9_link}
-:openshift_image_repository: {openshift_openj9_image_repository}
-:openshift_image_stream: {openshift_openj9_image_stream}
-:openshift_image_platforms: {openshift_openj9_platforms}
-:openshift_name_other: {openshift_openjdk_name}
-:openshift_link_other: {openshift_openjdk_link}
-
-:openshift:
-
-= {openshift_openj9_name}
-
-include::topics.adoc[]
diff --git a/openshift-openj9/pom.xml b/openshift-openj9/pom.xml
deleted file mode 100644
index 3c6f0a4e0b..0000000000
--- a/openshift-openj9/pom.xml
+++ /dev/null
@@ -1,46 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-    <modelVersion>4.0.0</modelVersion>
-
-    <parent>
-        <groupId>org.keycloak.documentation</groupId>
-        <artifactId>documentation-parent</artifactId>
-        <version>999-SNAPSHOT</version>
-        <relativePath>../</relativePath>
-    </parent>
-
-    <name>Red Hat Single Sign-On for OpenShift on OpenJ9</name>
-    <artifactId>openshift-openj9</artifactId>
-    <packaging>pom</packaging>
-
-    <build>
-        <plugins>
-            <plugin>
-                <groupId>org.keycloak.documentation</groupId>
-                <artifactId>header-maven-plugin</artifactId>
-                <executions>
-                    <execution>
-                        <id>add-file-headers</id>
-                    </execution>
-                </executions>
-            </plugin>
-            <plugin>
-                <groupId>org.asciidoctor</groupId>
-                <artifactId>asciidoctor-maven-plugin</artifactId>
-                <executions>
-                    <execution>
-                        <id>asciidoc-to-html</id>
-                    </execution>
-                </executions>
-            </plugin>
-            <plugin>
-                <artifactId>maven-antrun-plugin</artifactId>
-                <executions>
-                    <execution>
-                        <id>echo-output</id>
-                    </execution>
-                </executions>
-            </plugin>
-        </plugins>
-    </build>
-</project>
diff --git a/openshift-openj9/topics b/openshift-openj9/topics
deleted file mode 120000
index a71556c2e6..0000000000
--- a/openshift-openj9/topics
+++ /dev/null
@@ -1 +0,0 @@
-../openshift/topics
\ No newline at end of file
diff --git a/openshift-openj9/topics.adoc b/openshift-openj9/topics.adoc
deleted file mode 120000
index 261a98d417..0000000000
--- a/openshift-openj9/topics.adoc
+++ /dev/null
@@ -1 +0,0 @@
-../openshift/topics.adoc
\ No newline at end of file
diff --git a/openshift/.asciidoctorconfig b/openshift/.asciidoctorconfig
deleted file mode 100644
index acb09a0956..0000000000
--- a/openshift/.asciidoctorconfig
+++ /dev/null
@@ -1 +0,0 @@
-:imagesdir: {asciidoctorconfigdir}
diff --git a/openshift/docinfo-footer.html b/openshift/docinfo-footer.html
deleted file mode 120000
index a39d3bd0f6..0000000000
--- a/openshift/docinfo-footer.html
+++ /dev/null
@@ -1 +0,0 @@
-../aggregation/navbar.html
\ No newline at end of file
diff --git a/openshift/docinfo.html b/openshift/docinfo.html
deleted file mode 120000
index 14514f94d2..0000000000
--- a/openshift/docinfo.html
+++ /dev/null
@@ -1 +0,0 @@
-../aggregation/navbar-head.html
\ No newline at end of file
diff --git a/openshift/images/add_from_catalog.png b/openshift/images/add_from_catalog.png
deleted file mode 100644
index 8ea491fda6..0000000000
Binary files a/openshift/images/add_from_catalog.png and /dev/null differ
diff --git a/openshift/images/choose_developer_role.png b/openshift/images/choose_developer_role.png
deleted file mode 100644
index 5425b1a59d..0000000000
Binary files a/openshift/images/choose_developer_role.png and /dev/null differ
diff --git a/openshift/images/choose_template.png b/openshift/images/choose_template.png
deleted file mode 100644
index 160a2ad435..0000000000
Binary files a/openshift/images/choose_template.png and /dev/null differ
diff --git a/openshift/images/import_realm_error.png b/openshift/images/import_realm_error.png
deleted file mode 100644
index 940ca1c2a9..0000000000
Binary files a/openshift/images/import_realm_error.png and /dev/null differ
diff --git a/openshift/images/instantiate_template.png b/openshift/images/instantiate_template.png
deleted file mode 100644
index 89e4211686..0000000000
Binary files a/openshift/images/instantiate_template.png and /dev/null differ
diff --git a/openshift/images/sso_app_jee_jsp.png b/openshift/images/sso_app_jee_jsp.png
deleted file mode 100644
index 2c8726ff1c..0000000000
Binary files a/openshift/images/sso_app_jee_jsp.png and /dev/null differ
diff --git a/openshift/images/sso_app_jee_jsp_logged_in.png b/openshift/images/sso_app_jee_jsp_logged_in.png
deleted file mode 100644
index c5b0933c94..0000000000
Binary files a/openshift/images/sso_app_jee_jsp_logged_in.png and /dev/null differ
diff --git a/openshift/images/sso_keyword.png b/openshift/images/sso_keyword.png
deleted file mode 100644
index 58b397c260..0000000000
Binary files a/openshift/images/sso_keyword.png and /dev/null differ
diff --git a/openshift/images/verify_deployment.png b/openshift/images/verify_deployment.png
deleted file mode 100644
index 766057e5d0..0000000000
Binary files a/openshift/images/verify_deployment.png and /dev/null differ
diff --git a/openshift/index.adoc b/openshift/index.adoc
deleted file mode 100644
index 2d2e3dd385..0000000000
--- a/openshift/index.adoc
+++ /dev/null
@@ -1,19 +0,0 @@
-:toc:
-:toclevels: 3
-:numbered:
-:linkattrs:
-
-include::topics/templates/document-attributes-product.adoc[]
-:project_templates_url: {project_templates_base_url}
-:project_templates_version: {openshift_openjdk_project_templates_version}
-:openshift_name: {openshift_openjdk_name}
-:openshift_link: {openshift_openjdk_link}
-:openshift_image_platforms: {openshift_openjdk_platforms}
-:openshift_name_other: {openshift_openj9_name}
-:openshift_link_other: {openshift_openj9_link}
-
-:openshift:
-
-= {openshift_name}
-
-include::topics.adoc[]
diff --git a/openshift/master-docinfo.xml b/openshift/master-docinfo.xml
deleted file mode 100644
index 840c9ac499..0000000000
--- a/openshift/master-docinfo.xml
+++ /dev/null
@@ -1,25 +0,0 @@
-{project_name_full}
-{project_versionDoc}
-For Use with {project_name_full} {project_versionDoc}
-{openshift_openjdk_name}
-{project_versionDoc}
-
- This guide consists of basic information and instructions to get started with {project_name_full} {project_versionDoc} for OpenShift
-
-
- Red Hat Customer Content Services
-
-
- Copyright {cdate} Red Hat, Inc.
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
-
diff --git a/openshift/master.adoc b/openshift/master.adoc
deleted file mode 100644
index 4d2375465f..0000000000
--- a/openshift/master.adoc
+++ /dev/null
@@ -1,23 +0,0 @@
-:toc:
-:toclevels: 3
-:numbered:
-:linkattrs:
-
-include::topics/templates/document-attributes-product.adoc[]
-:project_templates_url: {project_templates_base_url}
-:project_templates_version: {openshift_openjdk_project_templates_version}
-:openshift_name: {openshift_openjdk_name}
-:openshift_link: {openshift_openjdk_link}
-:openshift_image_platforms: {openshift_openjdk_platforms}
-:openshift_image_repository: {openshift_openjdk_image_repository}
-:openshift_image_stream: {openshift_openjdk_image_stream}
-:openshift_name_other: {openshift_openj9_name}
-:openshift_link_other: {openshift_openj9_link}
-
-:openshift:
-
-= {openshift_name}
-
-include::topics/templates/making-open-source-more-inclusive.adoc[]
-
-include::topics.adoc[]
diff --git a/openshift/pom.xml b/openshift/pom.xml
deleted file mode 100644
index cc3eba2720..0000000000
--- a/openshift/pom.xml
+++ /dev/null
@@ -1,46 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-    <modelVersion>4.0.0</modelVersion>
-
-    <parent>
-        <groupId>org.keycloak.documentation</groupId>
-        <artifactId>documentation-parent</artifactId>
-        <version>999-SNAPSHOT</version>
-        <relativePath>../</relativePath>
-    </parent>
-
-    <name>Red Hat Single Sign-On for OpenShift</name>
-    <artifactId>openshift</artifactId>
-    <packaging>pom</packaging>
-
-    <build>
-        <plugins>
-            <plugin>
-                <groupId>org.keycloak.documentation</groupId>
-                <artifactId>header-maven-plugin</artifactId>
-                <executions>
-                    <execution>
-                        <id>add-file-headers</id>
-                    </execution>
-                </executions>
-            </plugin>
-            <plugin>
-                <groupId>org.asciidoctor</groupId>
-                <artifactId>asciidoctor-maven-plugin</artifactId>
-                <executions>
-                    <execution>
-                        <id>asciidoc-to-html</id>
-                    </execution>
-                </executions>
-            </plugin>
-            <plugin>
-                <artifactId>maven-antrun-plugin</artifactId>
-                <executions>
-                    <execution>
-                        <id>echo-output</id>
-                    </execution>
-                </executions>
-            </plugin>
-        </plugins>
-    </build>
-</project>
diff --git a/openshift/topics.adoc b/openshift/topics.adoc
deleted file mode 100644
index 2cd21289c5..0000000000
--- a/openshift/topics.adoc
+++ /dev/null
@@ -1,9 +0,0 @@
-include::topics/introduction.adoc[leveloffset=+0]
-
-include::topics/get_started.adoc[leveloffset=+0]
-
-include::topics/advanced_concepts.adoc[leveloffset=+0]
-
-include::topics/tutorials.adoc[leveloffset=+0]
-
-include::topics/reference.adoc[leveloffset=+0]
diff --git a/openshift/topics/advanced_concepts.adoc b/openshift/topics/advanced_concepts.adoc
deleted file mode 100644
index 5b8c71a02b..0000000000
--- a/openshift/topics/advanced_concepts.adoc
+++ /dev/null
@@ -1,757 +0,0 @@
-== Performing advanced procedures
-
-[role="_abstract"]
-This chapter describes advanced procedures, such as setting up keystores and a truststore for the {project_name} server and creating an administrator account. It also provides an overview of the available {project_name} client registration methods and guidance on configuring clustering.
-
-=== Deploying passthrough TLS termination templates
-
-Passthrough TLS termination templates require the HTTPS and JGroups keystores and the {project_name} server truststore to already exist. You can therefore use them to instantiate the {project_name} server pod with your custom HTTPS and JGroups keystores and your custom {project_name} server truststore.
-
-==== Preparing the deployment
-
-.Procedure
-. Log in to the OpenShift CLI with a user that holds the _cluster-admin_ role.
-
-. Create a new project:
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc new-project sso-app-demo
-----
-. Add the `view` role to the link:{ocpdocs_default_service_accounts_link}[`default`] service account. This enables the service account to view all the resources in the sso-app-demo namespace, which is necessary for managing the cluster.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc policy add-role-to-user view system:serviceaccount:$(oc project -q):default
-----
-
-[[Configuring-Keystores]]
-==== Creating HTTPS and JGroups Keystores, and Truststore for the {project_name} Server
-
-In this procedure, the *_openssl_* toolkit is used to generate a CA certificate for signing the certificate in the HTTPS keystore and to create a truststore for the {project_name} server. The *_keytool_* utility, which is included with the Java Development Kit, is then used to generate self-signed certificates for these keystores.
-
-The {project_name} application templates using xref:../introduction/introduction.adoc#reencrypt-templates[re-encryption TLS termination] do not *require* or *expect* the HTTPS and JGroups keystores and {project_name} server truststore to be prepared beforehand.
-
-[NOTE]
-====
-If you want to provision the {project_name} server using existing HTTPS or JGroups keystores, use one of the passthrough templates instead.
-====
-
-The re-encryption templates use OpenShift's internal link:{ocpdocs_serving_x509_secrets_link}[service serving x509 certificate secrets] to automatically create the HTTPS and JGroups keystores.
-
-The {project_name} server truststore is also created automatically, containing the */var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt* CA certificate file, which is used to create these cluster certificates. Moreover, the truststore for the {project_name} server is pre-populated with all the known, trusted CA certificate files found on the Java system path.
-
-.Prerequisites
-
-The {project_name} application templates using passthrough TLS termination require the following to be deployed:
-
-* An xref:create-https-keystore[HTTPS keystore] used for encrypting HTTPS traffic,
-* The xref:create-jgroups-keystore[JGroups keystore] used for encrypting JGroups communications between nodes in the cluster, and
-* The xref:create-server-truststore[{project_name} server truststore] used for securing requests to the {project_name} server.
-
-[NOTE]
-====
-For production environments Red Hat recommends that you use your own SSL certificate purchased from a verified Certificate Authority (CA) for SSL-encrypted connections (HTTPS).
-
-See the https://access.redhat.com/documentation/en-us/jboss_enterprise_application_platform/6.1/html-single/security_guide/index#Generate_a_SSL_Encryption_Key_and_Certificate[JBoss Enterprise Application Platform Security Guide] for more information on how to create a keystore with self-signed or purchased SSL certificates.
-====
-
-[[create-https-keystore]]
-*_Create the HTTPS keystore:_*
-
-[[generate-ca-certificate]]
-
-.Procedure
-
-. Generate a CA certificate. Pick and remember the password. Provide the same password when xref:signing-csr-with-ca-certificate[signing the certificate signing request with the CA certificate] below:
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ openssl req -new -newkey rsa:4096 -x509 -keyout xpaas.key -out xpaas.crt -days 365 -subj "/CN=xpaas-sso-demo.ca"
-----
-. Generate a private key for the HTTPS keystore. Provide `mykeystorepass` as the keystore password:
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ keytool -genkeypair -keyalg RSA -keysize 2048 -dname "CN=secure-sso-sso-app-demo.openshift.example.com" -alias jboss -keystore keystore.jks
-----
-. Generate a certificate signing request for the HTTPS keystore. Provide `mykeystorepass` as the keystore password:
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ keytool -certreq -keyalg rsa -alias jboss -keystore keystore.jks -file sso.csr
-----
-
-[[signing-csr-with-ca-certificate]]
-[start=4]
-. Sign the certificate signing request with the CA certificate. Provide the same password that was used to xref:generate-ca-certificate[generate the CA certificate]:
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ openssl x509 -req -extfile <(printf "subjectAltName=DNS:secure-sso-sso-app-demo.openshift.example.com") -CA xpaas.crt -CAkey xpaas.key -in sso.csr -out sso.crt -days 365 -CAcreateserial
-----
-+
-[NOTE]
-====
-To make the preceding command work on one line, it uses the process substitution syntax (`<()`). Be sure that your current shell environment supports this syntax. Otherwise, you can encounter a `syntax error near unexpected token '('` message.
-====
-. Import the CA certificate into the HTTPS keystore. Provide `mykeystorepass` as the keystore password. Reply `yes` to the `Trust this certificate? [no]:` question:
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ keytool -import -file xpaas.crt -alias xpaas.ca -keystore keystore.jks
-----
-. Import the signed certificate signing request into the HTTPS keystore. Provide `mykeystorepass` as the keystore password:
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ keytool -import -file sso.crt -alias jboss -keystore keystore.jks
-----
-
-[[create-jgroups-keystore]]
-*_Generate a secure key for the JGroups keystore:_*
-
-Provide `password` as the keystore password:
-
-[source,bash,subs="attributes+,macros+"]
-----
-$ keytool -genseckey -alias secret-key -storetype JCEKS -keystore jgroups.jceks
-----
-
-[[create-server-truststore]]
-*_Import the CA certificate into a new {project_name} server truststore:_*
-
-Provide `mykeystorepass` as the truststore password. Reply `yes` to the `Trust this certificate? [no]:` question:
-
-[source,bash,subs="attributes+,macros+"]
-----
-$ keytool -import -file xpaas.crt -alias xpaas.ca -keystore truststore.jks
-----
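-
-Optionally, you can confirm that the CA certificate landed in the truststore. A hedged check using the same *_keytool_* utility:
-
-[source,bash,subs="attributes+,macros+"]
-----
-# Lists the entries; expect a trustedCertEntry with alias xpaas.ca
-$ keytool -list -keystore truststore.jks
-Enter keystore password: mykeystorepass
-----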
-
-[[Configuring-Secrets]]
-==== Creating secrets
-
-You create objects called secrets that OpenShift uses to hold sensitive information, such as passwords or keystores.
-
-.Procedure
-
-. Create the secrets for the HTTPS and JGroups keystores, and {project_name} server truststore, generated in the xref:Configuring-Keystores[previous section].
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc create secret generic sso-app-secret --from-file=keystore.jks --from-file=jgroups.jceks --from-file=truststore.jks
-----
-. Link these secrets to the default service account, which is used to run {project_name} pods.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc secrets link default sso-app-secret
-----
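-
-A hedged way to confirm that the secret exists and is linked, using standard `oc` queries:
-
-[source,bash,subs="attributes+,macros+"]
-----
-# The secret should list the three keystore files as data keys
-$ oc describe secret sso-app-secret
-
-# The default service account should list sso-app-secret among its mountable secrets
-$ oc describe serviceaccount default
-----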
-
-[role="_additional-resources"]
-.Additional resources
-* link:{ocpdocs_secrets_link}[What is a secret?]
-* link:{ocpdocs_default_service_accounts_link}[Default project service accounts and roles]
-
-==== Deploying a Passthrough TLS template using the OpenShift CLI
-
-After you create xref:Configuring-Keystores[keystores] and xref:Configuring-Secrets[secrets], deploy a passthrough TLS termination template by using the `oc` command.
-
-===== `oc` command guidelines
-In the following `oc` command, the values of *_SSO_ADMIN_USERNAME_*, *_SSO_ADMIN_PASSWORD_*, *_HTTPS_PASSWORD_*, *_JGROUPS_ENCRYPT_PASSWORD_*, and *_SSO_TRUSTSTORE_PASSWORD_* variables match the default values in the *_{project_templates_version}-https_* {project_name} application template.
-
-For production environments, Red Hat recommends that you consult the on-site policy for your organization for guidance on generating a strong user name and password for the administrator user account of the {project_name} server, and passwords for the HTTPS and JGroups keystores, and the truststore of the {project_name} server.
-
-Also, when you create the template, make the passwords match the passwords provided when you created the keystores. If you used a different username or password, modify the values of the parameters in your template to match your environment.
-
-[NOTE]
-====
-You can determine the alias names associated with the certificate by using the following *_keytool_* commands. *_keytool_* is a utility included with the Java Development Kit.
-
-[source,bash,subs="attributes+,macros+"]
-----
-$ keytool -v -list -keystore keystore.jks | grep Alias
-Enter keystore password: mykeystorepass
-Alias name: xpaas.ca
-Alias name: jboss
-----
-
-[source,bash,subs="attributes+,macros+"]
-----
-$ keytool -v -list -keystore jgroups.jceks -storetype jceks | grep Alias
-Enter keystore password: password
-Alias name: secret-key
-----
-
-The *_SSO_ADMIN_USERNAME_*, *_SSO_ADMIN_PASSWORD_*, and the *_SSO_REALM_* template parameters in the following command are optional.
-====
-
-===== Sample `oc` command
-
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc new-app --template={project_templates_version}-https \
- -p HTTPS_SECRET="sso-app-secret" \
- -p HTTPS_KEYSTORE="keystore.jks" \
- -p HTTPS_NAME="jboss" \
- -p HTTPS_PASSWORD="mykeystorepass" \
- -p JGROUPS_ENCRYPT_SECRET="sso-app-secret" \
- -p JGROUPS_ENCRYPT_KEYSTORE="jgroups.jceks" \
- -p JGROUPS_ENCRYPT_NAME="secret-key" \
- -p JGROUPS_ENCRYPT_PASSWORD="password" \
- -p SSO_ADMIN_USERNAME="admin" \
- -p SSO_ADMIN_PASSWORD="redhat" \
- -p SSO_REALM="demorealm" \
- -p SSO_TRUSTSTORE="truststore.jks" \
- -p SSO_TRUSTSTORE_PASSWORD="mykeystorepass" \
- -p SSO_TRUSTSTORE_SECRET="sso-app-secret"
---> Deploying template "openshift/{project_templates_version}-https" to project sso-app-demo
-
- {project_name} {project_version} (Ephemeral with passthrough TLS)
- ---------
- An example {project_name} 7 application. For more information about using this template, see \https://github.com/jboss-openshift/application-templates.
-
- A new {project_name} service has been created in your project. The admin username/password for accessing the master realm via the {project_name} console is admin/redhat. Please be sure to create the following secrets: "sso-app-secret" containing the keystore.jks file used for serving secure content; "sso-app-secret" containing the jgroups.jceks file used for securing JGroups communications; "sso-app-secret" containing the truststore.jks file used for securing {project_name} requests.
-
- * With parameters:
- * Application Name=sso
- * Custom http Route Hostname=
- * Custom https Route Hostname=
- * Server Keystore Secret Name=sso-app-secret
- * Server Keystore Filename=keystore.jks
- * Server Keystore Type=
- * Server Certificate Name=jboss
- * Server Keystore Password=mykeystorepass
- * Datasource Minimum Pool Size=
- * Datasource Maximum Pool Size=
- * Datasource Transaction Isolation=
- * JGroups Secret Name=sso-app-secret
- * JGroups Keystore Filename=jgroups.jceks
- * JGroups Certificate Name=secret-key
- * JGroups Keystore Password=password
- * JGroups Cluster Password=yeSppLfp # generated
- * ImageStream Namespace=openshift
- * {project_name} Administrator Username=admin
- * {project_name} Administrator Password=redhat
- * {project_name} Realm=demorealm
- * {project_name} Service Username=
- * {project_name} Service Password=
- * {project_name} Trust Store=truststore.jks
- * {project_name} Trust Store Password=mykeystorepass
- * {project_name} Trust Store Secret=sso-app-secret
- * Container Memory Limit=1Gi
-
---> Creating resources ...
- service "sso" created
- service "secure-sso" created
- service "sso-ping" created
- route "sso" created
- route "secure-sso" created
- deploymentconfig "sso" created
---> Success
- Run 'oc status' to view your app.
-----
-
-[role="_additional-resources"]
-.Additional resources
-* xref:../introduction/introduction.adoc#passthrough-templates[Passthrough TLS Termination]
-
-[[advanced-concepts-sso-hostname-spi-setup]]
-=== Customizing the Hostname for the {project_name} Server
-
-The hostname SPI introduced a flexible way to configure the hostname for the {project_name} server.
-The default hostname provider is `default`. This provider offers enhanced functionality over the original `request` provider, which is now deprecated. Without additional settings, it uses the request headers to determine the hostname, similarly to the original `request` provider.
-
-For configuration options of the `default` provider, refer to the {installguide_link}#_hostname[{installguide_name}].
-The `frontendUrl` option can be configured via the `SSO_FRONTEND_URL` environment variable.
-
-[NOTE]
-For backward compatibility, the `SSO_FRONTEND_URL` setting is ignored if `SSO_HOSTNAME` is also set.
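-
-As a minimal sketch, the `frontendUrl` option could be set on an already deployed server as follows; the `sso` deployment config name and the URL are assumptions to adjust to your environment:
-
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc set env dc/sso \
-    -e SSO_FRONTEND_URL="https://rh-sso-server.openshift.example.com/auth"
-----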
-
-Another hostname provider is `fixed`, which allows configuring a fixed hostname. Using a fixed hostname ensures that only valid hostnames can be used and allows internal applications to invoke the {project_name} server through an alternative URL.
-
-.Procedure
-
-Run the following commands to set the `fixed` hostname SPI provider for the {project_name} server:
-
-. Deploy the {project_openshift_product_name} image with the *_SSO_HOSTNAME_* environment variable set to the desired hostname of the {project_name} server.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc new-app --template={project_templates_version}-x509-https \
- -p SSO_HOSTNAME="rh-sso-server.openshift.example.com"
-----
-
-. Identify the name of the route for the {project_name} service.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc get routes
-NAME HOST/PORT
-sso sso-sso-app-demo.openshift.example.com
-----
-
-. Change the `host:` field to match the hostname specified as the value of the *_SSO_HOSTNAME_* environment variable above.
-+
-[NOTE]
-====
-Adjust the `rh-sso-server.openshift.example.com` value in the following command as necessary.
-====
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc patch route/sso --type=json -p '[{"op": "replace", "path": "/spec/host", "value": "rh-sso-server.openshift.example.com"}]'
-----
-+
-If successful, the previous command will return the following output:
-+
-----
-route "sso" patched
-----
-
-[[sso-connecting-to-an-external-database]]
-=== Connecting to an external database
-
-{project_name} can be configured to connect to an external database (one outside the OpenShift cluster). To achieve this, modify the `sso-{database name}` Endpoints object to point to the proper address. The procedure is described in the link:{ocpdocs_ingress_service_external_ip_link}[OpenShift manual].
-
-The easiest way to get started is to deploy {project_name} from a template and then modify the Endpoints object. You might also need to update some of the datasource configuration variables in the DeploymentConfig. When you are done, roll out a new deployment.
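-
-As a minimal sketch, assuming an external PostgreSQL server reachable at `10.0.0.50:5432` and a template-created `sso-postgresql` service (both values are assumptions for this example), the Endpoints object could be pointed at the external address like this:
-
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc patch endpoints/sso-postgresql --type=merge \
-    -p '{"subsets":[{"addresses":[{"ip":"10.0.0.50"}],"ports":[{"port":5432}]}]}'
-----
-
-You might also need to update the `DB_USERNAME` and `DB_PASSWORD` variables in the DeploymentConfig to match the external database before rolling out the new deployment.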
-
-[[sso-using-custom-jdbc-driver]]
-=== Using Custom JDBC Driver
-
-To connect to any database, the JDBC driver for that database must be present and {project_name} must be configured
-properly. Currently, the only JDBC driver available in the image is the PostgreSQL JDBC driver. For any other
-database, you need to extend the {project_name} image with a custom JDBC driver and a CLI script to register
-it and set up the connection properties. The following steps illustrate how to do that, taking the MariaDB
-driver as an example. Update the example accordingly for other database drivers.
-
-.Procedure
-
-. Create an empty directory.
-. Download the JDBC driver binaries into this directory.
-. Create a new `Dockerfile` file in this directory with the following contents. For other databases, replace `mariadb-java-client-2.5.4.jar` with the filename of the respective driver:
-+
-[source,subs="attributes+,macros+,+quotes"]
-----
-*FROM* {openshift_image_repository}:latest
-
-*COPY* sso-extensions.cli /opt/eap/extensions/
-*COPY* mariadb-java-client-2.5.4.jar /opt/eap/extensions/jdbc-driver.jar
-----
-. Create a new `sso-extensions.cli` file in this directory with the following contents. Update the values of the variables in italics according to the deployment needs:
-+
-[source,bash,subs="attributes+,macros+,+quotes"]
-----
-batch
-
-set DB_DRIVER_NAME=_mariadb_
-set DB_USERNAME=_username_
-set DB_PASSWORD=_password_
-set DB_DRIVER=_org.mariadb.jdbc.Driver_
-set DB_XA_DRIVER=_org.mariadb.jdbc.MariaDbDataSource_
-set DB_JDBC_URL=_jdbc:mariadb://jdbc-host/keycloak_
-set DB_EAP_MODULE=_org.mariadb_
-
-set FILE=/opt/eap/extensions/jdbc-driver.jar
-
-module add --name=$DB_EAP_MODULE --resources=$FILE --dependencies=javax.api,javax.resource.api
-/subsystem=datasources/jdbc-driver=$DB_DRIVER_NAME:add( \
- driver-name=$DB_DRIVER_NAME, \
- driver-module-name=$DB_EAP_MODULE, \
- driver-class-name=$DB_DRIVER, \
- driver-xa-datasource-class-name=$DB_XA_DRIVER \
-)
-/subsystem=datasources/data-source=KeycloakDS:remove()
-/subsystem=datasources/data-source=KeycloakDS:add( \
- jndi-name=java:jboss/datasources/KeycloakDS, \
- enabled=true, \
- use-java-context=true, \
- connection-url=$DB_JDBC_URL, \
- driver-name=$DB_DRIVER_NAME, \
- user-name=$DB_USERNAME, \
- password=$DB_PASSWORD \
-)
-
-run-batch
-----
-. In this directory, build your image by typing the following command, replacing `project/name:tag` with an arbitrary name. `docker` can be used instead of `podman`.
-+
-[source,bash,subs="attributes+,macros+,+quotes"]
-----
-podman build -t docker-registry-default/project/name:tag .
-----
-. After the build finishes, push your image to the registry used by OpenShift to deploy your image. Refer to the link:{ocpdocs_cluster_local_registry_access_link}[OpenShift guide] for details.
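-
-As a hedged sketch of this push step, reusing the `docker-registry-default/project/name:tag` placeholder from the build step above and assuming your OpenShift session token is accepted by the registry, the commands might look like this:
-
-[source,bash,subs="attributes+,macros+"]
-----
-podman login -u $(oc whoami) -p $(oc whoami -t) docker-registry-default
-podman push docker-registry-default/project/name:tag
-----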
-
-[[sso-administrator-setup]]
-=== Creating the Administrator Account for {project_name} Server
-
-{project_name} does not provide any pre-configured administrator account out of the box. An administrator account is necessary for logging in to the `master` realm's management console and performing server maintenance operations, such as creating realms or users, or registering applications intended to be secured by {project_name}.
-
-The administrator account can be created:
-
-* By providing values for the xref:sso-admin-template-parameters[*_SSO_ADMIN_USERNAME_* and *_SSO_ADMIN_PASSWORD_* parameters], when deploying the {project_name} application template, or
-* By xref:sso-admin-remote-shell[a remote shell session to particular {project_name} pod], if the {project_openshift_product_name} image is deployed without an application template.
-
-[NOTE]
-====
-{project_name} allows an initial administrator account to be created by the link:{project_doc_base_url}/getting_started_guide/index#create-admin_[Welcome Page] web form, but only if the Welcome Page is accessed from localhost; this method of administrator account creation is not applicable for the {project_openshift_product_name} image.
-====
-
-[[sso-admin-template-parameters]]
-==== Creating the Administrator Account using template parameters
-
-When deploying {project_name} application template, the *_SSO_ADMIN_USERNAME_* and *_SSO_ADMIN_PASSWORD_* parameters denote the username and password of the {project_name} server's administrator account to be created for the `master` realm.
-
-*Both of these parameters are required.* If not specified, they are auto generated and displayed as an OpenShift instructional message when the template is instantiated.
-
-The lifespan of the {project_name} server's administrator account depends upon the storage type used to store the {project_name} server's database:
-
-* For in-memory database mode (*_{project_templates_version}-https_* and *_{project_templates_version}-x509-https_* templates), the account exists throughout the lifecycle of the particular {project_name} pod (stored account data is lost upon pod destruction).
-* For ephemeral database mode (*_{project_templates_version}-postgresql_* template), the account exists throughout the lifecycle of the database pod. Even if the {project_name} pod is destroyed, the stored account data is preserved, under the assumption that the database pod is still running.
-* For persistent database mode (*_{project_templates_version}-postgresql-persistent_* and *_{project_templates_version}-x509-postgresql-persistent_* templates), the account exists throughout the lifecycle of the persistent medium used to hold the database data. This means that the stored account data is preserved even when both the {project_name} and the database pods are destroyed.
-
-It is a common practice to deploy an {project_name} application template to get the corresponding OpenShift deployment config for the application, and then reuse that deployment config multiple times (every time a new {project_name} application needs to be instantiated).
-
-In the case of *ephemeral or persistent database mode*, after creating the {project_name} server's administrator account, remove the *_SSO_ADMIN_USERNAME_* and *_SSO_ADMIN_PASSWORD_* variables from the deployment config before deploying new {project_name} applications.
-
-.Procedure
-
-Run the following commands to prepare the previously created deployment config of the {project_name} application for reuse after the administrator account has been created:
-
-. Identify the deployment config of the {project_name} application.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc get dc -o name
-deploymentconfig/sso
-deploymentconfig/sso-postgresql
-----
-. Clear the *_SSO_ADMIN_USERNAME_* and *_SSO_ADMIN_PASSWORD_* variable settings.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc set env dc/sso \
- -e SSO_ADMIN_USERNAME="" \
- -e SSO_ADMIN_PASSWORD=""
-----
-
-[[sso-admin-remote-shell]]
-==== Creating the Administrator Account via a remote shell session to {project_name} Pod
-
-You use the following commands to create an administrator account for the `master` realm of the {project_name} server when deploying the {project_openshift_product_name} image directly from the image stream without using a template.
-
-.Prerequisite
-
-* {project_name} application pod has been started.
-
-.Procedure
-
-. Identify the {project_name} application pod.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc get pods
-NAME READY STATUS RESTARTS AGE
-sso-12-pt93n 1/1 Running 0 1m
-sso-postgresql-6-d97pf 1/1 Running 0 2m
-----
-. Open a remote shell session to the {project_openshift_product_name} container.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc rsh sso-12-pt93n
-sh-4.2$
-----
-. Create the {project_name} server administrator account for the `master` realm at the command line with the `add-user-keycloak.sh` script.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-sh-4.2$ cd /opt/eap/bin/
-sh-4.2$ ./add-user-keycloak.sh \
- -r master \
- -u sso_admin \
- -p sso_password
-Added 'sso_admin' to '/opt/eap/standalone/configuration/keycloak-add-user.json', restart server to load user
-----
-+
-[NOTE]
-====
-The 'sso_admin' / 'sso_password' credentials in the example above are for demonstration purposes only. Refer to the password policy applicable within your organization for guidance on how to create a secure user name and password.
-====
-. Restart the underlying JBoss EAP server instance to load the newly added user account. Wait for the server to restart properly.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-sh-4.2$ ./jboss-cli.sh --connect ':reload'
-{
- "outcome" => "success",
- "result" => undefined
-}
-----
-+
-[WARNING]
-====
-When restarting the server, it is important to restart only the JBoss EAP process within the running {project_name} container, not the whole container. Restarting the whole container recreates it from scratch, without the {project_name} server administrator account for the `master` realm.
-====
-. Log in to the `master` realm's Admin Console of the {project_name} server using the credentials created in the steps above. In the browser, navigate to *\http://sso-./auth/admin* for the {project_name} web server, or to *\https://secure-sso-./auth/admin* for the encrypted {project_name} web server, and specify the user name and password used to create the administrator user.
-
-[role="_additional-resources"]
-.Additional resources
-* xref:../introduction/introduction.adoc#sso-templates[Templates for use with this software]
-
-=== Customizing the default behavior of the {project_name} image
-
-You can change the default behavior of the {project_name} image, such as enabling TechPreview features or debugging. This section describes how to make such changes by using the *_JAVA_OPTS_APPEND_* variable.
-
-.Prerequisites
-
-This procedure assumes that the {project_openshift_product_name} image has been previously xref:Example-Deploying-SSO[deployed using one of the following templates:]
-
-* *_{project_templates_version}-postgresql_*
-* *_{project_templates_version}-postgresql-persistent_*
-* *_{project_templates_version}-x509-postgresql-persistent_*
-
-.Procedure
-
-You can use the OpenShift web console or the CLI to change the default behavior.
-
-If you use the OpenShift web console, you add the *_JAVA_OPTS_APPEND_* variable to the *sso* deployment config. For example, to enable TechPreview features, you set the variable as follows:
-
-[source,bash,subs=+attributes]
-----
-JAVA_OPTS_APPEND="-Dkeycloak.profile=preview"
-----
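-
-Similarly, as a sketch of the debugging case, you could append the standard JVM remote-debug agent options; the port `8787` here is an assumption:
-
-[source,bash,subs=+attributes]
-----
-JAVA_OPTS_APPEND="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8787"
-----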
-
-If you use the CLI, use the following commands to enable TechPreview features when the {project_name} pod was deployed using one of the templates mentioned under Prerequisites.
-
-. Scale down the {project_name} pod:
-+
-[source,bash,subs=+attributes]
-----
-$ oc get dc -o name
-deploymentconfig/sso
-deploymentconfig/sso-postgresql
-
-$ oc scale --replicas=0 dc sso
-deploymentconfig "sso" scaled
-----
-+
-[NOTE]
-====
-In the preceding command, ``sso-postgresql`` appears because a PostgreSQL template was used to deploy the {project_openshift_product_name} image.
-====
-
-. Edit the deployment config to set the *_JAVA_OPTS_APPEND_* variable. For example, to enable TechPreview features, you set the variable as follows:
-+
-[source,bash,subs=+attributes]
-----
-$ oc set env dc/sso -e "JAVA_OPTS_APPEND=-Dkeycloak.profile=preview"
-----
-
-. Scale up the {project_name} pod:
-+
-[source,bash,subs=+attributes]
-----
-$ oc scale --replicas=1 dc sso
-deploymentconfig "sso" scaled
-----
-
-. Test a TechPreview feature of your choice.
-
-
-=== Deployment process
-
-Once deployed, the *_{project_templates_version}-https_* and *_{project_templates_version}-x509-https_* templates create a single pod that contains both the database and the {project_name} servers. The *_{project_templates_version}-postgresql_*, *_{project_templates_version}-postgresql-persistent_*, and *_{project_templates_version}-x509-postgresql-persistent_* templates create two pods, one for the database server and one for the {project_name} web server.
-
-After the {project_name} web server pod has started, it can be accessed at its custom configured hostnames, or at the default hostnames:
-
-* *\http://sso-__.__/auth/admin*: for the {project_name} web server, and
-* *\https://secure-sso-__.__/auth/admin*: for the encrypted {project_name} web server.
-
-Use the xref:sso-administrator-setup[administrator user credentials] to log in to the `master` realm's Admin Console.
-
-[[SSO-Clients]]
-=== {project_name} clients
-
-Clients are {project_name} entities that request user authentication. A client can be an application that requests {project_name} to provide user authentication, or it can request access tokens to start services on behalf of an authenticated user. See the link:{project_doc_base_url}/server_administration_guide/index#assembly-managing-clients_server_administration_guide[Managing Clients chapter of the {project_name} documentation] for more information.
-
-{project_name} provides link:{project_doc_base_url}/server_administration_guide/clients#oidc_clients[OpenID-Connect] and link:{project_doc_base_url}/server_administration_guide/index#client-saml-configuration[SAML] client protocols.
-
-OpenID-Connect is the preferred protocol and uses three different access types:
-
-- *public*: Useful for JavaScript applications that run directly in the browser and require no server configuration.
-- *confidential*: Useful for server-side clients, such as EAP web applications, that need to perform a browser login.
-- *bearer-only*: Useful for back-end services that allow bearer token requests.
-
-It is required to specify the client type in the `<auth-method>` element of the application *web.xml* file. This file is read by the image at deployment. Set the value of the `<auth-method>` element to:
-
-* *KEYCLOAK* for the OpenID Connect client.
-* *KEYCLOAK-SAML* for the SAML client.
-
-The following is an example snippet for the application *web.xml* to configure an OIDC client:
-
-[source,xml,subs="attributes+,macros+"]
-----
-...
-<login-config>
-    <auth-method>KEYCLOAK</auth-method>
-</login-config>
-...
-----
-
-[[Auto-Man-Client-Reg]]
-==== Automatic and manual {project_name} client registration methods
-A client application can be automatically registered to an {project_name} realm by using credentials passed in variables specific to the *_eap64-sso-s2i_*, *_eap71-sso-s2i_*, and *_datavirt63-secure-s2i_* templates.
-
-Alternatively, you can manually register the client application by configuring and exporting the {project_name} client adapter and including it in the client application configuration.
-
-===== Automatic {project_name} client registration
-
-Automatic {project_name} client registration is determined by {project_name} environment variables specific to the *_eap64-sso-s2i_*, *_eap71-sso-s2i_*, and *_datavirt63-secure-s2i_* templates. The {project_name} credentials supplied in the template are then used to register the client to the {project_name} realm during deployment of the client application.
-
-The {project_name} environment variables included in the *_eap64-sso-s2i_*, *_eap71-sso-s2i_*, and *_datavirt63-secure-s2i_* templates are:
-
-[cols="2*", options="header"]
-|===
-|Variable
-|Description
-|*_HOSTNAME_HTTP_*
-|Custom hostname for http service route. Leave blank for default hostname of ..
-
-|*_HOSTNAME_HTTPS_*
-|Custom hostname for https service route. Leave blank for default hostname of ..
-
-|*_SSO_URL_*
-|The {project_name} web server authentication address: $$https://secure-sso-$$__.__/auth
-
-|*_SSO_REALM_*
-|The {project_name} realm created for this procedure.
-
-|*_SSO_USERNAME_*
-|The name of the _realm management user_.
-
-|*_SSO_PASSWORD_*
-| The password of the user.
-
-|*_SSO_PUBLIC_KEY_*
-|The public key generated by the realm. It is located in the *Keys* tab of the *Realm Settings* in the {project_name} console.
-
-|*_SSO_BEARER_ONLY_*
-|If set to *true*, the OpenID Connect client is registered as bearer-only.
-
-|*_SSO_ENABLE_CORS_*
-|If set to *true*, the {project_name} adapter enables Cross-Origin Resource Sharing (CORS).
-|===
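-
-For illustration only, a deployment of the *_eap71-sso-s2i_* template supplying these variables might look like the following sketch; the URL, realm, and credentials here are assumptions for this example:
-
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc new-app --template=eap71-sso-s2i \
- -p SSO_URL="https://secure-sso-sso-app-demo.openshift.example.com/auth" \
- -p SSO_REALM="demorealm" \
- -p SSO_USERNAME="mgmtuser" \
- -p SSO_PASSWORD="mgmtpassword" \
- -p SSO_ENABLE_CORS="true"
-----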
-
-If the {project_name} client uses the SAML protocol, the following additional variables need to be configured:
-
-[cols="2*", options="header"]
-|===
-|Variable
-|Description
-|*_SSO_SAML_KEYSTORE_SECRET_*
-|Secret to use for access to SAML keystore. The default is _sso-app-secret_.
-
-|*_SSO_SAML_KEYSTORE_*
-|Keystore filename in the SAML keystore secret. The default is _keystore.jks_.
-
-|*_SSO_SAML_KEYSTORE_PASSWORD_*
-|Keystore password for SAML. The default is _mykeystorepass_.
-
-|*_SSO_SAML_CERTIFICATE_NAME_*
-|Alias for keys/certificate to use for SAML. The default is _jboss_.
-|===
-
-See xref:Example-EAP-Auto[Example Workflow: Automatically Registering EAP Application in {project_name} with OpenID-Connect Client] for an end-to-end example of the automatic client registration method using an OpenID-Connect client.
-
-===== Manual {project_name} client registration
-
-Manual {project_name} client registration is determined by the presence of a deployment file in the client application's _../configuration/_ directory. These files are exported from the client adapter in the {project_name} web console. The name of this file is different for OpenID-Connect and SAML clients:
-
-[horizontal]
-*OpenID-Connect*:: _../configuration/secure-deployments_
-*SAML*:: _../configuration/secure-saml-deployments_
-
-These files are copied to the {project_name} adapter configuration section in the _standalone-openshift.xml_ file when the application is deployed.
-
-There are two methods for passing the {project_name} adapter configuration to the client application:
-
-* Modify the deployment file to contain the {project_name} adapter configuration so that it is included in the _standalone-openshift.xml_ file at deployment, or
-* Manually include the OpenID-Connect _keycloak.json_ file, or the SAML _keycloak-saml.xml_ file, in the client application's *../WEB-INF* directory, as shown in the sketch below.
-
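-As a minimal, hypothetical sketch of the second method for an OpenID-Connect client, a _keycloak.json_ adapter configuration could be placed into the application's *WEB-INF* directory before the application is built. The realm name, server URL, and client ID below are assumptions for illustration only:
-
-[source,bash,subs="attributes+,macros+"]
-----
-$ cat > src/main/webapp/WEB-INF/keycloak.json <<'EOF'
-{
-  "realm": "demorealm",
-  "auth-server-url": "https://secure-sso-sso-app-demo.openshift.example.com/auth",
-  "resource": "my-app",
-  "public-client": true
-}
-EOF
-----
-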
-See xref:Example-EAP-Manual[Example Workflow: Manually Configure an Application to Use {project_name} Authentication, Using SAML Client] for an end-to-end example of the manual {project_name} client registration method using a SAML client.
-
-=== Using {project_name} vault with OpenShift secrets
-Several fields in the {project_name} administration console support obtaining the value
-of a secret from an external vault; see the link:{adminguide_link}#_vault-administration[{adminguide_name}].
-The following example shows how to set up the file-based plaintext vault in OpenShift
-and configure it to be used for obtaining an SMTP password.
-
-.Procedure
-
-. Specify a directory for the vault using the *_SSO_VAULT_DIR_* environment variable.
-You can introduce the *_SSO_VAULT_DIR_* environment variable directly in the environment in your deployment configuration. It can also be included in the template by adding the following snippets at the appropriate places in the template:
-+
-[source,json,subs="attributes+,macros+"]
-----
-"parameters": [
- ...
- {
- "displayName": "RH-SSO Vault Secret directory",
- "description": "Path to the RH-SSO Vault directory.",
- "name": "SSO_VAULT_DIR",
- "value": "",
- "required": false
- }
- ...
-]
-
-env: [
- ...
- {
- "name": "SSO_VAULT_DIR",
- "value": "${SSO_VAULT_DIR}"
- }
- ...
-]
-----
-+
-[NOTE]
-====
-The `files-plaintext` vault provider is configured only when you set
- the *_SSO_VAULT_DIR_* environment variable.
-====
-
-. Create a secret in your OpenShift cluster:
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc create secret generic rhsso-vault-secrets --from-literal=master_smtp-password=mySMTPPsswd
-----
-
-. Mount a volume to your deployment config using `${SSO_VAULT_DIR}` as the path.
-For a deployment that is already running:
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc set volume dc/sso --add --mount-path=${SSO_VAULT_DIR} --secret-name=rhsso-vault-secrets
-----
-
-. After a pod is created, you can use a customized string within your {project_name}
-configuration to refer to the secret. For example, to use the `mySMTPPsswd` secret
-created in this tutorial, you can use `${vault.smtp-password}` within the `master`
-realm in the configuration of the SMTP password, and it is replaced by `mySMTPPsswd` when used.
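-
-As a quick, hypothetical verification that the secret is mounted (assuming *_SSO_VAULT_DIR_* was set to `/etc/sso-vault`), each secret key appears as a file in that directory:
-
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc rsh dc/sso ls /etc/sso-vault
-master_smtp-password
-----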
-
-
-=== Limitations
-OpenShift does not currently accept OpenShift role mapping from external providers. If {project_name} is used as an authentication gateway for OpenShift, users created in {project_name} must have their roles added by an OpenShift administrator using the `oc adm policy` command.
-
-For example, to allow an {project_name}-created user to view a project namespace in OpenShift:
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc adm policy add-role-to-user view -n
-----
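-
-For a concrete, hypothetical invocation, with the user and project names as placeholders:
-
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc adm policy add-role-to-user view developer-user -n sso-app-demo
-----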
diff --git a/openshift/topics/get_started.adoc b/openshift/topics/get_started.adoc
deleted file mode 100644
index c18c0dd739..0000000000
--- a/openshift/topics/get_started.adoc
+++ /dev/null
@@ -1,228 +0,0 @@
-== Configuring {project_openshift_product_name}
-
-[id="image-streams-applications-templates"]
-=== Using the {project_openshift_product_name} Image Streams and application templates
-
-[role="_abstract"]
-Red Hat JBoss Middleware for OpenShift images are pulled on demand from the secured Red Hat Registry: link:https://catalog.redhat.com/[registry.redhat.io], which requires authentication. To retrieve content, you need to log in to the registry using your Red Hat account.
-
-To consume container images from *_registry.redhat.io_* in shared environments such as OpenShift, it is recommended that an administrator use a Registry Service Account, also referred to as an authentication token, in place of an individual's Red Hat Customer Portal credentials.
-
-.Procedure
-
-. To create a Registry Service Account, navigate to the link:https://access.redhat.com/terms-based-registry/[Registry Service Account Management Application], and log in if necessary.
-
-. From the *_Registry Service Accounts_* page, click *_Create Service Account_*.
-. Provide a name for the Service Account, for example *_registry.redhat.io-sa_*. It will be prepended with a fixed, random string.
-.. Enter a description for the Service Account, for example *_Service account to consume container images from registry.redhat.io._*.
-.. Click *_Create_*.
-. After the Service Account is created, click the *_registry.redhat.io-sa_* link in the *_Account name_* column of the table presented on the *_Registry Service Accounts_* page.
-. Finally, click the *_OpenShift Secret_* tab, and perform all steps listed on that page.
-
-See the link:https://access.redhat.com/RegistryAuthentication[Red Hat Container Registry Authentication] article for more information.
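-
-The *_OpenShift Secret_* tab provides the exact commands for your account. As a generic sketch (the secret name, username, and token below are placeholders), they typically create a pull secret and link it to the default service account:
-
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc create secret docker-registry registry-redhat-io-pull-secret \
-    --docker-server=registry.redhat.io \
-    --docker-username="12345678|registry.redhat.io-sa" \
-    --docker-password="<service-account-token>"
-$ oc secrets link default registry-redhat-io-pull-secret --for=pull
-----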
-
-.Procedure
-
-. Ensure that you are logged in as a cluster administrator or a user with project administrator access to the global `openshift` project.
-
-. Choose a command based on your version of OpenShift Container Platform.
-
-.. If you are running an OpenShift Container Platform v3 based cluster instance on one of your master hosts, perform the following:
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc login -u system:admin
-----
-.. If you are running an OpenShift Container Platform v4 based cluster instance, link:https://docs.openshift.com/container-platform/latest/cli_reference/openshift_cli/getting-started-cli.html#cli-logging-in_cli-developer-commands[log in to the CLI] as the link:https://docs.openshift.com/container-platform/latest/authentication/remove-kubeadmin.html#understanding-kubeadmin_removing-kubeadmin[kubeadmin] user:
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc login -u kubeadmin -p password \https://openshift.example.com:6443
-----
-
-. Run the following commands to update the core set of {project_name} {project_version} resources for OpenShift in the `openshift` project:
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ for resource in {project_templates_version}-image-stream.json \
- {project_templates_version}-https.json \
- {project_templates_version}-postgresql.json \
- {project_templates_version}-postgresql-persistent.json \
- {project_templates_version}-x509-https.json \
- {project_templates_version}-x509-postgresql-persistent.json
-do
- oc replace -n openshift --force -f \
- {project_templates_url}/$\{resource}
-done
-----
-. Run the following command to install the {project_name} {project_version} OpenShift image streams in the `openshift` project:
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc -n openshift import-image {openshift_image_repository}:{project_latest_image_tag} --from=registry.redhat.io/{openshift_image_repository}:{project_latest_image_tag} --confirm
-----
-
-[[Example-Deploying-SSO]]
-=== Deploying the {project_name} Image
-[[Preparing-SSO-Authentication-for-OpenShift-Deployment]]
-==== Preparing for the deployment
-
-.Procedure
-
-. Log in to the OpenShift CLI with a user that holds the _cluster:admin_ role.
-
-. Create a new project:
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc new-project sso-app-demo
-----
-. Add the `view` role to the link:{ocpdocs_default_service_accounts_link}[`default`] service account. This enables the service account to view all the resources in the *sso-app-demo* namespace, which is necessary for managing the cluster.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc policy add-role-to-user view system:serviceaccount:$(oc project -q):default
-----
-
-==== Deploying the {project_name} Image using the application template
-
-You can deploy the template using one of these interfaces:
-
-* xref:deploy_cli[OpenShift CLI]
-* xref:deploy3x[OpenShift 3.x web console]
-* xref:deploy4x[OpenShift 4.x web console]
-
-[id="deploy_cli"]
-===== Deploying the Template using OpenShift CLI
-
-.Prerequisites
-
-* Perform the steps described in xref:image-streams-applications-templates[Using the {project_openshift_product_name} Image Streams and application templates].
-
-.Procedure
-
-. List the available {project_name} application templates:
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc get templates -n openshift -o name | grep -o '{project_templates_version}.\+'
-{project_templates_version}-https
-{project_templates_version}-postgresql
-{project_templates_version}-postgresql-persistent
-{project_templates_version}-x509-https
-{project_templates_version}-x509-postgresql-persistent
-----
-. Deploy the selected one:
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc new-app --template={project_templates_version}-x509-https
---> Deploying template "openshift/{project_templates_version}-x509-https" to project sso-app-demo
-
- {project_name} {project_versionDoc} (Ephemeral)
- ---------
- An example {project_name} 7 application. For more information about using this template, see https://github.com/jboss-openshift/application-templates.
-
- A new {project_name} service has been created in your project. The admin username/password for accessing the master realm using the {project_name} console is IACfQO8v/nR7llVSVb4Dye3TNRbXoXhRpAKTmiCRc. The HTTPS keystore used for serving secure content, the JGroups keystore used for securing JGroups communications, and server truststore used for securing {project_name} requests were automatically created using OpenShift's service serving x509 certificate secrets.
-
- * With parameters:
- * Application Name=sso
- * JGroups Cluster Password=jg0Rssom0gmHBnooDF3Ww7V4Mu5RymmB # generated
- * Datasource Minimum Pool Size=
- * Datasource Maximum Pool Size=
- * Datasource Transaction Isolation=
- * ImageStream Namespace=openshift
- * {project_name} Administrator Username=IACfQO8v # generated
- * {project_name} Administrator Password=nR7llVSVb4Dye3TNRbXoXhRpAKTmiCRc # generated
- * {project_name} Realm=
- * {project_name} Service Username=
- * {project_name} Service Password=
- * Container Memory Limit=1Gi
-
---> Creating resources ...
- service "sso" created
- service "secure-sso" created
- service "sso-ping" created
- route "sso" created
- route "secure-sso" created
- deploymentconfig "sso" created
---> Success
- Run 'oc status' to view your app.
-----
-
-[id="deploy3x"]
-===== Deploying the Template using the OpenShift 3.x Web Console
-
-.Prerequisites
-
-* Perform the steps described in xref:image-streams-applications-templates[Using the {project_openshift_product_name} Image Streams and application templates].
-
-.Procedure
-
-. Log in to the OpenShift web console and select the *sso-app-demo* project space.
-. Click *Add to Project*, then *Browse Catalog* to list the default image streams and templates.
-. Use the *Filter by Keyword* search bar to limit the list to those that match _sso_. You may need to click *Middleware*, then *Integration* to show the desired application template.
-. Select an {project_name} application template. This example uses *_{project_name} {project_versionDoc} (Ephemeral)_*.
-. Click *Next* in the *Information* step.
-. From the *Add to Project* drop-down menu, select the _sso-app-demo_ project space. Then click *Next*.
-. Select the *Do not bind at this time* radio button in the *Binding* step. Click *Create* to continue.
-. In the *Results* step, click the *Continue to the project overview* link to verify the status of the deployment.
-
-[id="deploy4x"]
-===== Deploying the Template using the OpenShift 4.x Web Console
-
-.Prerequisites
-
-* Perform the steps described in xref:image-streams-applications-templates[Using the {project_openshift_product_name} Image Streams and application templates].
-
-.Procedure
-
-. Log in to the OpenShift web console and select the _sso-app-demo_ project space.
-
-. On the left sidebar, click the *Administrator* perspective switcher and then select *Developer*.
-+
-image:images/choose_developer_role.png[]
-
-. Click *From Catalog*.
-+
-image:images/add_from_catalog.png[]
-
-. Search for *sso*.
-+
-image:images/sso_keyword.png[]
-
-. Choose a template such as *Red Hat Single Sign-On {project_version_base} on OpenJDK (Ephemeral)*.
-+
-image:images/choose_template.png[]
-
-. Click *Instantiate Template*.
-+
-image:images/instantiate_template.png[]
-
-. Adjust the template parameters if necessary and click *Create*.
-
-. Verify the Red Hat Single Sign-On for OpenShift image was deployed.
-+
-image:images/verify_deployment.png[]
-
-=== Accessing the Administrator Console of the {project_name} Pod
-
-.Procedure
-
-. After the template is deployed, identify the available routes.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc get routes
-NAME HOST/PORT
-sso sso-sso-app-demo.openshift.example.com
-----
-
-. Access the {project_name} Admin Console.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-\https://sso-sso-app-demo.openshift.example.com/auth/admin
-----
-
-. Provide the login credentials for the xref:sso-administrator-setup[administrator account].
diff --git a/openshift/topics/introduction.adoc b/openshift/topics/introduction.adoc
deleted file mode 100644
index 0e92c302de..0000000000
--- a/openshift/topics/introduction.adoc
+++ /dev/null
@@ -1,65 +0,0 @@
-== Introduction to {project_openshift_product_name}
-
-=== What is {project_name}?
-[role="_abstract"]
-{project_name} is an integrated sign-on solution available as a Red Hat JBoss Middleware for OpenShift containerized image. The {project_openshift_product_name} image provides an authentication server for users to centrally log in, log out, register, and manage user accounts for web applications, mobile applications, and RESTful web services.
-
-{openshift_name} is available on the following platforms: x86_64, IBM Z, and IBM Power Systems.
-
-=== Comparison: {project_openshift_product_name} Image versus Red Hat Single Sign-On
-The {project_openshift_product_name} image version number {project_version} is based on {project_name} {project_version}. There are some important differences in functionality between the {project_openshift_product_name} image and {project_name} that should be considered:
-
-The {project_openshift_product_name} image includes all of the functionality of {project_name}. In addition, the {project_name}-enabled JBoss EAP image automatically handles OpenID Connect or SAML client registration and configuration for *_.war_* deployments that contain *KEYCLOAK* or *KEYCLOAK-SAML* in their respective *web.xml* files.
-
-[[sso-templates]]
-=== Templates for use with this software
-
-Red Hat offers multiple OpenShift application templates using the {project_openshift_product_name} image version number {project_version}. These templates define the resources needed to develop a {project_name} {project_version} server-based deployment. The templates can mainly be split into two categories: passthrough templates and re-encryption templates. Some other miscellaneous templates also exist.
-
-[[passthrough-templates]]
-==== Passthrough templates
-
-These templates require that HTTPS, JGroups keystores, and a truststore for the {project_name} server exist beforehand. They secure the TLS communication using passthrough TLS termination.
-
-* *_{project_templates_version}-https_*: {project_name} {project_version} backed by internal H2 database on the same pod.
-
-* *_{project_templates_version}-postgresql_*: {project_name} {project_version} backed by ephemeral PostgreSQL database on a separate pod.
-
-* *_{project_templates_version}-postgresql-persistent_*: {project_name} {project_version} backed by persistent PostgreSQL database on a separate pod.
-
-[NOTE]
-Templates for using {project_name} with MySQL / MariaDB databases have been removed and are not available since {project_name} version 7.4.
-
-==== Re-encryption templates
-[[reencrypt-templates]]
-
-These templates use OpenShift's internal link:{ocpdocs_serving_x509_secrets_link}[service serving x509 certificate secrets] to automatically create the HTTPS keystore used for serving secure content. The JGroups cluster traffic is authenticated using the `AUTH` protocol and encrypted using the `ASYM_ENCRYPT` protocol. The {project_name} server truststore is also created automatically, containing the */var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt* CA certificate file, which is used to sign the certificate for HTTPS keystore.
-
-Moreover, the truststore for the {project_name} server is pre-populated with all of the known, trusted CA certificate files found in the Java system path. These templates secure the TLS communication using re-encryption TLS termination.
-
-* *_{project_templates_version}-x509-https_*: {project_name} {project_version} with auto-generated HTTPS keystore and {project_name} truststore, backed by internal H2 database. The `ASYM_ENCRYPT` JGroups protocol is used for encryption of cluster traffic.
-* *_{project_templates_version}-x509-postgresql-persistent_*: {project_name} {project_version} with auto-generated HTTPS keystore and {project_name} truststore, backed by persistent PostgreSQL database. The `ASYM_ENCRYPT` JGroups protocol is used for encryption of cluster traffic.
-
-==== Other templates
-
-Other templates that integrate with {project_name} are also available:
-
-* *_eap64-sso-s2i_*: {project_name}-enabled Red Hat JBoss Enterprise Application Platform 6.4.
-* *_eap71-sso-s2i_*: {project_name}-enabled Red Hat JBoss Enterprise Application Platform 7.1.
-* *_datavirt63-secure-s2i_*: {project_name}-enabled Red Hat JBoss Data Virtualization 6.3.
-
-These templates contain environment variables specific to {project_name} that enable automatic {project_name} client registration when deployed.
-
-[role="_additional-resources"]
-.Additional resources
-
-* xref:Auto-Man-Client-Reg[Automatic and Manual {project_name} Client Registration Methods]
-* link:{ocp311docs_passthrough_route_link}[Passthrough TLS termination]
-* link:{ocp311docs_reencrypt_route_link}[Re-encryption TLS termination]
-
-=== Version compatibility and support
-For details about OpenShift image version compatibility, see the https://access.redhat.com/articles/2342861[Supported Configurations] page.
-
-NOTE: The {project_openshift_product_name} image version numbers between 7.0 and 7.4 are deprecated and no longer receive image and application template updates.
-
-To deploy new applications, use version 7.5 or {project_version} of the {project_openshift_product_name} image along with the application templates specific to these image versions.
diff --git a/openshift/topics/reference.adoc b/openshift/topics/reference.adoc
deleted file mode 100644
index b2f501cdde..0000000000
--- a/openshift/topics/reference.adoc
+++ /dev/null
@@ -1,642 +0,0 @@
-== Reference
-
-[[sso-artifact-repository-mirrors-section]]
-=== Artifact repository mirrors
-
-// This page describes MAVEN_MIRROR_URL variable usage
-// It requires 'bcname' attribute to be set to the name of the product
-
-[role="_abstract"]
-A repository in Maven holds build artifacts and dependencies of various types
-(all the project jars, library jars, plugins, or any other project-specific
-artifacts). It also specifies locations from which to download artifacts
-while performing the S2I build. Besides using central repositories, it is a
-common practice for organizations to deploy a local custom repository (mirror).
-
-Benefits of using a mirror are:
-
-* Availability of a synchronized mirror, which is geographically closer and
- faster.
-* Ability to have greater control over the repository content.
-* Possibility to share artifacts across different teams (developers, CI),
- without the need to rely on public servers and repositories.
-* Improved build times.
-
-Often, a repository manager can serve as a local cache for a mirror. Assuming that
-the repository manager is already deployed and reachable externally at
-*_pass:[http://10.0.0.1:8080/repository/internal/]_*, the S2I build can then use this
-manager by supplying the `MAVEN_MIRROR_URL` environment variable to the
-build configuration of the application, using the following procedure:
-
-.Procedure
-
-. Identify the name of the build configuration against which to apply the `MAVEN_MIRROR_URL` variable.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc get bc -o name
-buildconfig/sso
-----
-. Update the build configuration of `sso` with the `MAVEN_MIRROR_URL` environment variable.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc set env bc/sso \
- -e MAVEN_MIRROR_URL="http://10.0.0.1:8080/repository/internal/"
-buildconfig "sso" updated
-----
-. Verify the setting.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc set env bc/sso --list
-# buildconfigs sso
-MAVEN_MIRROR_URL=http://10.0.0.1:8080/repository/internal/
-----
-. Schedule a new build of the application.
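-+
-As a sketch, a new build of the `sso` build configuration identified above could be started with:
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc start-build sso
-----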
-
-NOTE: During application build, you will notice that Maven dependencies are
-pulled from the repository manager, instead of the default public repositories.
-Also, after the build is finished, you will see that the mirror is filled with
-all the dependencies that were retrieved and used during the build.
-
-[[env_vars]]
-=== Environment variables
-
-==== Information environment variables
-The following information environment variables are designed to convey
-information about the image and should not be modified by the user:
-
-.Information Environment Variables
-[cols="3",options="header"]
-|===
-|Variable Name |Description |Example Value
-|*_AB_JOLOKIA_AUTH_OPENSHIFT_*
-|-
-|*_true_*
-
-|*_AB_JOLOKIA_HTTPS_*
-|-
-|*_true_*
-
-|*_AB_JOLOKIA_PASSWORD_RANDOM_*
-|-
-|*_true_*
-
-|*_JBOSS_IMAGE_NAME_*
-|Image name, same as "name" label.
-|*_{openshift_image_repository}_*
-
-|*_JBOSS_IMAGE_VERSION_*
-|Image version, same as "version" label.
-|*_{project_versionDoc}_*
-
-|*_JBOSS_MODULES_SYSTEM_PKGS_*
-|-
-|*_org.jboss.logmanager,jdk.nashorn.api_*
-
-|===
-
-==== Configuration environment variables
-Configuration environment variables are designed to conveniently adjust the
-image without requiring a rebuild, and should be set by the user as desired.
-
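-As a brief sketch of adjusting such variables on an already deployed image (the `sso` deployment config name is an assumption here), for example:
-
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc set env dc/sso \
-    -e AB_JOLOKIA_PASSWORD_RANDOM="true" \
-    -e JAVA_MAX_MEM_RATIO="50"
-----
-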
-[[conf_env_vars]]
-.Configuration Environment Variables
-[cols="3",options="header"]
-|===
-|Variable Name |Description |Example Value
-|*_AB_JOLOKIA_AUTH_OPENSHIFT_*
-|Switch on client authentication for OpenShift TLS communication. The value of
-this parameter can be a relative distinguished name which must be contained in
-a presented client's certificate. Enabling this parameter will automatically
-switch Jolokia into https communication mode. The default CA cert is set to
-`/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`.
-|*_true_*
-
-|*_AB_JOLOKIA_CONFIG_*
-|If set, uses this file (including path) as Jolokia JVM agent properties (as
-described in Jolokia's
-link:https://jolokia.org/reference/html/agents.html#agents-jvm[reference
-manual]). If not set, the `/opt/jolokia/etc/jolokia.properties` file will be
-created using the settings as defined in this document, otherwise the rest of
-the settings in this document are ignored.
-|*_/opt/jolokia/custom.properties_*
-
-|*_AB_JOLOKIA_DISCOVERY_ENABLED_*
-|Enable Jolokia discovery. Defaults to *_false_*.
-|*_true_*
-
-|*_AB_JOLOKIA_HOST_*
-|Host address to bind to. Defaults to *_0.0.0.0_*.
-|*_127.0.0.1_*
-
-|*_AB_JOLOKIA_HTTPS_*
-|Switch on secure communication with https. By default self-signed server
-certificates are generated if no serverCert configuration is given in
-*_AB_JOLOKIA_OPTS_*. _NOTE: If the value is set to an empty string, https is
-turned `off`. If the value is set to a non-empty string, https is turned `on`._
-|*_true_*
-
-|*_AB_JOLOKIA_ID_*
-|Agent ID to use ($HOSTNAME by default, which is the container id).
-|*_openjdk-app-1-xqlsj_*
-
-|*_AB_JOLOKIA_OFF_*
-|If set, disables activation of Jolokia (that is, echoes an empty value). By default,
-Jolokia is enabled. _NOTE: If the value is set to an empty string, https is
-turned `off`. If the value is set to a non-empty string, https is turned `on`._
-|*_true_*
-
-|*_AB_JOLOKIA_OPTS_*
-|Additional options to be appended to the agent configuration. They should be
-given in the format `"key=value, key=value, …"`
-|*_backlog=20_*
-
-|*_AB_JOLOKIA_PASSWORD_*
-|Password for basic authentication. By default authentication is switched off.
-|*_mypassword_*
-
-|*_AB_JOLOKIA_PASSWORD_RANDOM_*
-|If set, a random value is generated for *_AB_JOLOKIA_PASSWORD_*, and it is
-saved in the *_/opt/jolokia/etc/jolokia.pw_* file.
-|*_true_*
-
-|*_AB_JOLOKIA_PORT_*
-|Port to use (Default: *_8778_*).
-|*_5432_*
-
-|*_AB_JOLOKIA_USER_*
-|User for basic authentication. Defaults to *_jolokia_*.
-|*_myusername_*
-
-|*_CONTAINER_CORE_LIMIT_*
-|A calculated core limit as described in
-link:https://www.kernel.org/doc/Documentation/scheduler/sched-bwc.txt[CFS
-Bandwidth Control.]
-|*_2_*
-
-|*_GC_ADAPTIVE_SIZE_POLICY_WEIGHT_*
-|The weighting given to the current Garbage Collection (GC) time versus previous
-GC times.
-|*_90_*
-
-|*_GC_MAX_HEAP_FREE_RATIO_*
-|Maximum percentage of heap free after GC to avoid shrinking.
-|*_40_*
-
-|*_GC_MAX_METASPACE_SIZE_*
-|The maximum metaspace size.
-|*_100_*
-
-|*_GC_TIME_RATIO_MIN_HEAP_FREE_RATIO_*
-|Minimum percentage of heap free after GC to avoid expansion.
-|*_20_*
-
-|*_GC_TIME_RATIO_*
-|Specifies the ratio of the time spent outside the garbage collection (for
-example, the time spent for application execution) to the time spent in the
-garbage collection.
-|*_4_*
-
-|*_JAVA_DIAGNOSTICS_*
-|Set this to `true` to print diagnostic information to standard output while the
-container is running.
-|*_true_*
-
-|*_JAVA_INITIAL_MEM_RATIO_*
-|This is used to calculate a default initial heap memory based on the maximal
-heap memory. The default is 100, which means 100% of the maximal heap is used
-for the initial heap size. You can skip this mechanism by setting this value
-to 0, in which case no `-Xms` option is added.
-|*_100_*
-
-|*_JAVA_MAX_MEM_RATIO_*
-|It is used to calculate a default maximal heap memory based on a container's
-memory restriction. If used in a Docker container without any memory constraints
-for the container, then this option has no effect. If there is a memory constraint,
-then `-Xmx` is set to a ratio of the container's available memory as set here.
-The default is 50, which means 50% of the available memory is used as an upper
-boundary. You can skip this mechanism by setting this value to 0, in which case
-no `-Xmx` option is added.
-|*_40_*
-
-|*_JAVA_OPTS_APPEND_*
-|Server startup options.
-|*_-Dkeycloak.migration.action=export -Dkeycloak.migration.provider=dir -Dkeycloak.migration.dir=/tmp_*
-
-|*_MQ_SIMPLE_DEFAULT_PHYSICAL_DESTINATION_*
-|For backwards compatibility, set to true to use `MyQueue` and `MyTopic` as
-physical destination name defaults instead of `queue/MyQueue` and `topic/MyTopic`.
-|*_false_*
-
-|*_OPENSHIFT_KUBE_PING_LABELS_*
-|Clustering labels selector.
-|*_app=sso-app_*
-
-|*_OPENSHIFT_KUBE_PING_NAMESPACE_*
-|Clustering project namespace.
-|*_myproject_*
-
-|*_SCRIPT_DEBUG_*
-|If set to `true`, ensures that the bash scripts are executed with the `-x`
-option, printing the commands and their arguments as they are executed.
-|*_true_*
-
-|*_SSO_ADMIN_PASSWORD_*
-|Password of the administrator account for the `master` realm of the {project_name}
-server. *Required.* If no value is specified, it is auto generated and
-displayed as an OpenShift instructional message when the template is
-instantiated.
-|*_adm-password_*
-
-|*_SSO_ADMIN_USERNAME_*
-|Username of the administrator account for the `master` realm of the {project_name}
-server. *Required.* If no value is specified, it is auto generated and
-displayed as an OpenShift instructional message when the template is
-instantiated.
-|*_admin_*
-
-|*_SSO_HOSTNAME_*
-|Custom hostname for the {project_name} server. *Not set by default*. If not
-set, the `request` hostname SPI provider, which uses the request headers to
-determine the hostname of the {project_name} server, is used. If set, the
-`fixed` hostname SPI provider, with the hostname of the {project_name} server
-set to the provided variable value, is used. See the dedicated
-xref:../content/advanced_concepts/advanced_concepts.adoc#advanced-concepts-sso-hostname-spi-setup[Customizing Hostname for the {project_name} Server]
-section for additional steps to be performed, when *_SSO_HOSTNAME_* variable
-is set.
-|*_rh-sso-server.openshift.example.com_*
-
-|*_SSO_REALM_*
-|Name of the realm to be created in the {project_name} server if this environment variable
-is provided.
-|*_demo_*
-
-|*_SSO_SERVICE_PASSWORD_*
-|The password for the {project_name} service user.
-|*_mgmt-password_*
-
-|*_SSO_SERVICE_USERNAME_*
-|The username used to access the {project_name} service. This is used by clients to create
-the application client(s) within the specified {project_name} realm. This user is created
-if this environment variable is provided.
-|*_sso-mgmtuser_*
-
-|*_SSO_TRUSTSTORE_*
-|The name of the truststore file within the secret.
-|*_truststore.jks_*
-
-|*_SSO_TRUSTSTORE_DIR_*
-|Truststore directory.
-|*_/etc/sso-secret-volume_*
-
-|*_SSO_TRUSTSTORE_PASSWORD_*
-|The password for the truststore and certificate.
-|*_mykeystorepass_*
-
-|*_SSO_TRUSTSTORE_SECRET_*
-|The name of the secret containing the truststore file. Used for
-_sso-truststore-volume_ volume.
-|*_truststore-secret_*
-
-|===
-
-Available link:{ocpdocs_templates_link}[application templates]
-for {project_openshift_product_name} can combine the xref:conf_env_vars[aforementioned
-configuration variables] with common OpenShift variables (for example
-*_APPLICATION_NAME_* or *_SOURCE_REPOSITORY_URL_*), product-specific variables
-(e.g. *_HORNETQ_CLUSTER_PASSWORD_*), or configuration variables typical of
-database images (e.g. *_POSTGRESQL_MAX_CONNECTIONS_*). All of these different
-types of configuration variables can be adjusted as desired so that the
-deployed {project_name}-enabled application aligns with the intended use case as closely
-as possible. The list of configuration variables available for each category
-of application templates for {project_name}-enabled applications is described below.
-
-==== Template variables for all {project_name} images
-
-.Configuration Variables Available For All {project_name} Images
-[cols="2*", options="header"]
-|===
-|Variable
-|Description
-|*_APPLICATION_NAME_*
-|The name for the application.
-
-|*_DB_MAX_POOL_SIZE_*
-|Sets xa-pool/max-pool-size for the configured datasource.
-
-|*_DB_TX_ISOLATION_*
-|Sets transaction-isolation for the configured datasource.
-
-|*_DB_USERNAME_*
-|Database user name.
-
-|*_HOSTNAME_HTTP_*
-|Custom hostname for http service route. Leave blank for default hostname,
-e.g.: _.._.
-
-|*_HOSTNAME_HTTPS_*
-|Custom hostname for https service route. Leave blank for default hostname,
-e.g.: _.._.
-
-|*_HTTPS_KEYSTORE_*
-|The name of the keystore file within the secret. If defined along with
-*_HTTPS_PASSWORD_* and *_HTTPS_NAME_*, enable HTTPS and set the SSL certificate
-key file to a relative path under _$JBOSS_HOME/standalone/configuration_.
-
-|*_HTTPS_KEYSTORE_TYPE_*
-|The type of the keystore file (JKS or JCEKS).
-
-|*_HTTPS_NAME_*
-|The name associated with the server certificate (e.g. _jboss_). If defined
-along with *_HTTPS_PASSWORD_* and *_HTTPS_KEYSTORE_*, enable HTTPS and set the
-SSL name.
-
-|*_HTTPS_PASSWORD_*
-|The password for the keystore and certificate (e.g. _mykeystorepass_). If
-defined along with *_HTTPS_NAME_* and *_HTTPS_KEYSTORE_*, enable HTTPS and set
-the SSL key password.
-
-|*_HTTPS_SECRET_*
-|The name of the secret containing the keystore file.
-
-|*_IMAGE_STREAM_NAMESPACE_*
-|Namespace in which the ImageStreams for Red Hat Middleware images are
-installed. These ImageStreams are normally installed in the _openshift_
-namespace. You should only need to modify this if you've installed the
-ImageStreams in a different namespace/project.
-
-|*_JGROUPS_CLUSTER_PASSWORD_*
-|JGroups cluster password.
-
-|*_JGROUPS_ENCRYPT_KEYSTORE_*
-|The name of the keystore file within the secret.
-
-|*_JGROUPS_ENCRYPT_NAME_*
-|The name associated with the server certificate (e.g. _secret-key_).
-
-|*_JGROUPS_ENCRYPT_PASSWORD_*
-|The password for the keystore and certificate (e.g. _password_).
-
-|*_JGROUPS_ENCRYPT_SECRET_*
-|The name of the secret containing the keystore file.
-
-|*_SSO_ADMIN_USERNAME_*
-|Username of the administrator account for the `master` realm of the {project_name}
-server. *Required.* If no value is specified, it is auto generated and
-displayed as an OpenShift instructional message when the template is
-instantiated.
-
-|*_SSO_ADMIN_PASSWORD_*
-|Password of the administrator account for the `master` realm of the {project_name}
-server. *Required.* If no value is specified, it is auto generated and
-displayed as an OpenShift instructional message when the template is
-instantiated.
-
-|*_SSO_REALM_*
-|Name of the realm to be created in the {project_name} server if this environment variable
-is provided.
-
-|*_SSO_SERVICE_USERNAME_*
-|The username used to access the {project_name} service. This is used by clients to create
-the application client(s) within the specified {project_name} realm. This user is created
-if this environment variable is provided.
-
-|*_SSO_SERVICE_PASSWORD_*
-|The password for the {project_name} service user.
-
-|*_SSO_TRUSTSTORE_*
-|The name of the truststore file within the secret.
-
-|*_SSO_TRUSTSTORE_SECRET_*
-|The name of the secret containing the truststore file. Used for
-*_sso-truststore-volume_* volume.
-
-|*_SSO_TRUSTSTORE_PASSWORD_*
-|The password for the truststore and certificate.
-|===
-
-==== Template variables specific to *_{project_templates_version}-postgresql_*, *_{project_templates_version}-postgresql-persistent_*, and *_{project_templates_version}-x509-postgresql-persistent_*
-
-.Configuration Variables Specific To {project_name}-enabled PostgreSQL Applications With Ephemeral Or Persistent Storage
-[cols="2*", options="header"]
-|===
-|Variable
-|Description
-|*_DB_USERNAME_*
-|Database user name.
-
-|*_DB_PASSWORD_*
-|Database user password.
-
-|*_DB_JNDI_*
-|Database JNDI name used by application to resolve the datasource,
-e.g. _java:/jboss/datasources/postgresql_
-
-|*_POSTGRESQL_MAX_CONNECTIONS_*
-|The maximum number of client connections allowed. This also sets the maximum
-number of prepared transactions.
-
-|*_POSTGRESQL_SHARED_BUFFERS_*
-|Configures how much memory is dedicated to PostgreSQL for caching data.
-|===
-
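-As a sketch of an alternative instantiation path (the parameter values are illustrative), these variables can also be passed through `oc process`:
-
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc process {project_templates_version}-postgresql -n openshift \
- -p DB_USERNAME=ssouser \
- -p POSTGRESQL_MAX_CONNECTIONS=100 \
- | oc create -f -
-----
-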
-==== Template variables for general *eap64* and *eap71* S2I images
-
-.Configuration Variables For EAP 6.4 and EAP 7.1 Applications Built Via S2I
-[cols="2*", options="header"]
-|===
-|Variable
-|Description
-|*_APPLICATION_NAME_*
-|The name for the application.
-
-|*_ARTIFACT_DIR_*
-|Artifacts directory.
-
-|*_AUTO_DEPLOY_EXPLODED_*
-|Controls whether exploded deployment content should be automatically deployed.
-
-|*_CONTEXT_DIR_*
-|Path within Git project to build; empty for root project directory.
-
-|*_GENERIC_WEBHOOK_SECRET_*
-|Generic build trigger secret.
-
-|*_GITHUB_WEBHOOK_SECRET_*
-|GitHub trigger secret.
-
-|*_HORNETQ_CLUSTER_PASSWORD_*
-|HornetQ cluster administrator password.
-
-|*_HORNETQ_QUEUES_*
-|Queue names.
-
-|*_HORNETQ_TOPICS_*
-|Topic names.
-
-|*_HOSTNAME_HTTP_*
-|Custom host name for http service route. Leave blank for default host name,
-e.g.: _.._.
-
-|*_HOSTNAME_HTTPS_*
-|Custom host name for https service route. Leave blank for default host name,
-e.g.: _.._.
-
-|*_HTTPS_KEYSTORE_TYPE_*
-|The type of the keystore file (JKS or JCEKS).
-
-|*_HTTPS_KEYSTORE_*
-|The name of the keystore file within the secret. If defined along with
-*_HTTPS_PASSWORD_* and *_HTTPS_NAME_*, enable HTTPS and set the SSL certificate
-key file to a relative path under _$JBOSS_HOME/standalone/configuration_.
-
-|*_HTTPS_NAME_*
-|The name associated with the server certificate (e.g. _jboss_). If defined
-along with *_HTTPS_PASSWORD_* and *_HTTPS_KEYSTORE_*, enable HTTPS and set the
-SSL name.
-
-|*_HTTPS_PASSWORD_*
-|The password for the keystore and certificate (e.g. _mykeystorepass_). If
-defined along with *_HTTPS_NAME_* and *_HTTPS_KEYSTORE_*, enable HTTPS and set
-the SSL key password.
-
-|*_HTTPS_SECRET_*
-|The name of the secret containing the keystore file.
-
-|*_IMAGE_STREAM_NAMESPACE_*
-|Namespace in which the ImageStreams for Red Hat Middleware images are
-installed. These ImageStreams are normally installed in the _openshift_
-namespace. You should only need to modify this if you've installed the
-ImageStreams in a different namespace/project.
-
-|*_JGROUPS_CLUSTER_PASSWORD_*
-|JGroups cluster password.
-
-|*_JGROUPS_ENCRYPT_KEYSTORE_*
-|The name of the keystore file within the secret.
-
-|*_JGROUPS_ENCRYPT_NAME_*
-|The name associated with the server certificate (e.g. _secret-key_).
-
-|*_JGROUPS_ENCRYPT_PASSWORD_*
-|The password for the keystore and certificate (e.g. _password_).
-
-|*_JGROUPS_ENCRYPT_SECRET_*
-|The name of the secret containing the keystore file.
-
-|*_SOURCE_REPOSITORY_REF_*
-|Git branch/tag reference.
-
-|*_SOURCE_REPOSITORY_URL_*
-|Git source URI for application.
-|===
-
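-For example, a hypothetical instantiation of one of these S2I templates (the template name and parameter values are placeholders to be adapted to your environment):
-
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc new-app --template=eap71-sso-s2i \
- -p APPLICATION_NAME=eap-app \
- -p SOURCE_REPOSITORY_URL=https://github.com/keycloak/keycloak-quickstarts.git \
- -p SOURCE_REPOSITORY_REF=latest \
- -p CONTEXT_DIR=app-jee-jsp
-----
-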
-==== Template variables specific to *_eap64-sso-s2i_* and *_eap71-sso-s2i_* for automatic client registration
-
-.Configuration Variables For EAP 6.4 and EAP 7.1 {project_name}-enabled Applications Built Via S2I
-[cols="2*", options="header"]
-|===
-|Variable
-|Description
-|*_SSO_URL_*
-|{project_name} server location.
-
-|*_SSO_REALM_*
-|Name of the realm to be created in the {project_name} server if this environment variable
-is provided.
-
-|*_SSO_USERNAME_*
-|The username used to access the {project_name} service. This is used to create the
-application client(s) within the specified {project_name} realm. This should match the
-*_SSO_SERVICE_USERNAME_* specified through one of the *{project_templates_version}-* templates.
-
-|*_SSO_PASSWORD_*
-|The password for the {project_name} service user.
-
-|*_SSO_PUBLIC_KEY_*
-|{project_name} public key. Passing the public key into the template is recommended
-to avoid man-in-the-middle attacks.
-
-|*_SSO_SECRET_*
-|The {project_name} client secret for confidential access.
-
-|*_SSO_SERVICE_URL_*
-|{project_name} service location.
-
-|*_SSO_TRUSTSTORE_SECRET_*
-|The name of the secret containing the truststore file. Used for
-*_sso-truststore-volume_* volume.
-
-|*_SSO_TRUSTSTORE_*
-|The name of the truststore file within the secret.
-
-|*_SSO_TRUSTSTORE_PASSWORD_*
-|The password for the truststore and certificate.
-
-|*_SSO_BEARER_ONLY_*
-|{project_name} client access type.
-
-|*_SSO_DISABLE_SSL_CERTIFICATE_VALIDATION_*
-|If true, SSL communication between EAP and the {project_name} server is insecure
-(that is, certificate validation is disabled with curl).
-
-|*_SSO_ENABLE_CORS_*
-|Enable CORS for {project_name} applications.
-|===
-
-==== Template variables specific to *_eap64-sso-s2i_* and *_eap71-sso-s2i_* for automatic client registration with SAML clients
-
-.Configuration Variables For EAP 6.4 and EAP 7.1 {project_name}-enabled Applications Built Via S2I Using SAML Protocol
-[cols="2*", options="header"]
-|===
-|Variable
-|Description
-|*_SSO_SAML_CERTIFICATE_NAME_*
-|The name associated with the server certificate.
-
-|*_SSO_SAML_KEYSTORE_PASSWORD_*
-|The password for the keystore and certificate.
-
-|*_SSO_SAML_KEYSTORE_*
-|The name of the keystore file within the secret.
-
-|*_SSO_SAML_KEYSTORE_SECRET_*
-|The name of the secret containing the keystore file.
-
-|*_SSO_SAML_LOGOUT_PAGE_*
-|{project_name} logout page for SAML applications.
-|===
-
-=== Exposed ports
-[cols="2",options="header"]
-|===
-|Port Number | Description
-|*_8443_* | HTTPS
-
-|*_8778_* | Jolokia monitoring
-
-|===
-
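-For instance, after a deployment you can confirm which of these ports are exposed on the service objects (an illustrative check, assuming the `application=sso` selector used elsewhere in this guide):
-
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc get svc --selector=application=sso -o wide
-----
-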
-////
-=== Labels
-
-=== Datasources
-
-=== Clustering
-
-=== Security domains
-
-=== HTTPS
-
-=== Source-to-Image (S2I)
-
-=== Known issues
-* There is a known issue with the EAP6 Adapter _HttpServletRequest.logout()_ in which the adapter does not log out from the application, which can create a login loop. The workaround is to call _HttpSession.invalidate();_ after _request.logout()_ to clear the Keycloak token from the session. For more information, see https://issues.redhat.com/browse/KEYCLOAK-2665[KEYCLOAK-2665].
-* The SSO logs throw a duplication error if the SSO pod is restarted while backed by a database pod. This error can be safely ignored.
-* Setting _adminUrl_ to a "https://..." address in an OpenID Connect client will cause *javax.net.ssl.SSLHandshakeException* exceptions on the SSO server if the default secrets (*sso-app-secret* and *eap-app-secret*) are used. The application server must use either CA-signed certificates or configure the SSO trust store to trust the self-signed certificates.
-* If the client route uses a different domain suffix to the SSO service, the client registration script will erroneously configure the client on the SSO side, causing bad redirection.
-* The SSO-enabled JBoss EAP image does not properly set the *adminUrl* property during automatic client registration. As a workaround, log in to the SSO console after the application has started and manually modify the client registration *adminUrl* property to *http://__-__.__/__*.
-////
diff --git a/openshift/topics/templates b/openshift/topics/templates
deleted file mode 120000
index d191264115..0000000000
--- a/openshift/topics/templates
+++ /dev/null
@@ -1 +0,0 @@
-../../topics/templates
\ No newline at end of file
diff --git a/openshift/topics/tutorials.adoc b/openshift/topics/tutorials.adoc
deleted file mode 100644
index 63a4525487..0000000000
--- a/openshift/topics/tutorials.adoc
+++ /dev/null
@@ -1,1725 +0,0 @@
-== Tutorials
-
-[role="_abstract"]
-The tutorials in this chapter assume that you have an OpenShift instance similar to the one created by performing link:{ocpdocs_install_cluster_link}[the installation of the OpenShift Container Platform cluster].
-
-[[upgrading-sso-db-from-previous-version]]
-=== Updating a database for a new {project_openshift_product_name} image version
-
-Note the following points related to the update:
-
-* Rolling updates from a previous version of {project_openshift_product_name} to version {project_version} are not supported, as databases and caches are not backward compatible.
-* Instances running a previous version of the {project_openshift_product_name} image must be stopped before the upgrade, because they cannot run concurrently against the same database.
-* Pre-generated scripts are not available. They are generated dynamically depending on the database.
-
-You have two choices for updating the database:
-
-* Allow {project_name} {project_version} to xref:automatic-db-migration[automatically migrate the database schema]
-* Update the database xref:manual-db-migration[manually]
-
-[NOTE]
-====
-By default the database is automatically migrated when you start {project_name} {project_version} for the first time.
-====
-
-[[automatic-db-migration]]
-==== Automatic database migration
-This process assumes that you are running a previous version of the {project_openshift_product_name} image, backed by a PostgreSQL database (deployed in ephemeral or persistent mode) that is running in a separate pod.
-
-.Prerequisites
-
-* Perform the steps described in xref:Preparing-SSO-Authentication-for-OpenShift-Deployment[Preparing {project_name} Authentication for OpenShift Deployment].
-
-.Procedure
-Use the following steps to automatically migrate the database schema:
-
-. Identify the deployment config used to deploy the containers running the previous version of the {project_openshift_product_name} image.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc get dc -o name --selector=application=sso
-deploymentconfig/sso
-deploymentconfig/sso-postgresql
-----
-. Stop all pods running the previous version of the {project_openshift_product_name} image in the current namespace. They cannot run concurrently against the same database.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc scale --replicas=0 dc/sso
-deploymentconfig "sso" scaled
-----
-. Update the image change trigger in the existing deployment config to reference the {project_name} {project_version} image.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc patch dc/sso --type=json -p '[{"op": "replace", "path": "/spec/triggers/0/imageChangeParams/from/name", "value": "{openshift_image_stream}:{project_latest_image_tag}"}]'
-"sso" patched
-----
-. Start rollout of the new {project_name} {project_version} images based on the latest image defined in the image change triggers.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc rollout latest dc/sso
-deploymentconfig "sso" rolled out
-----
-. Deploy {project_name} {project_version} containers using the modified deployment config.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc scale --replicas=1 dc/sso
-deploymentconfig "sso" scaled
-----
-. (Optional) Verify the database has been successfully updated.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc get pods --selector=application=sso
-NAME READY STATUS RESTARTS AGE
-sso-4-vg21r 1/1 Running 0 1h
-sso-postgresql-1-t871r 1/1 Running 0 2h
-----
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc logs sso-4-vg21r | grep 'Updating'
-11:23:45,160 INFO [org.keycloak.connections.jpa.updater.liquibase.LiquibaseJpaUpdaterProvider] (ServerService Thread Pool -- 58) Updating database. Using changelog META-INF/jpa-changelog-master.xml
-----
-
-[[manual-db-migration]]
-==== Manual database migration
-
-The database migration process updates the data schema and performs manipulation of the data. This process also stops all pods running the previous version of the {project_openshift_product_name} image before dynamic generation of the SQL migration file.
-
-[NOTE]
-====
-This process assumes that you are running a previous version of the {project_openshift_product_name} image that is backed by a PostgreSQL database (deployed in ephemeral or persistent mode) and is running on a separate pod.
-====
-
-.Procedure
-Prepare your environment for script generation.
-
-. Configure {project_name} {project_version} with the correct datasource.
-. Set the following configuration options in the `standalone-openshift.xml` file:
-.. `initializeEmpty=false`
-.. `migrationStrategy=manual`
-.. `migrationExport` set to the location on the file system of the pod where the output SQL migration file should be stored (for example, `migrationExport="${jboss.home.dir}/keycloak-database-update.sql"`)
-
-
-[role="_additional-resources"]
-.Additional resources
-
-* link:{project_doc_base_url}/server_installation_and_configuration_guide/database-1#database_configuration[Database configuration]
-
-.Procedure
-Perform the following to generate the SQL migration file for the database:
-
-. Prepare a template of the OpenShift link:{ocpdocs_jobs_link}[database migration job] to generate the SQL file.
-+
-[source,yaml,subs="verbatim,macros,attributes"]
-----
-$ cat job-to-migrate-db-to-{project_templates_version}.yaml.orig
-apiVersion: batch/v1
-kind: Job
-metadata:
- name: job-to-migrate-db-to-{project_templates_version}
-spec:
- autoSelector: true
- parallelism: 0
- completions: 1
- template:
- metadata:
- name: job-to-migrate-db-to-{project_templates_version}
- spec:
- containers:
- - env:
- - name: DB_SERVICE_PREFIX_MAPPING
- value: pass:[<<DB_SERVICE_PREFIX_MAPPING_VALUE>>]
- - name: pass:[<<PREFIX>>]_JNDI
- value: pass:[<<PREFIX_JNDI_VALUE>>]
- - name: pass:[<<PREFIX>>]_USERNAME
- value: pass:[<<PREFIX_USERNAME_VALUE>>]
- - name: pass:[<<PREFIX>>]_PASSWORD
- value: pass:[<<PREFIX_PASSWORD_VALUE>>]
- - name: pass:[<<PREFIX>>]_DATABASE
- value: pass:[<<PREFIX_DATABASE_VALUE>>]
- - name: TX_DATABASE_PREFIX_MAPPING
- value: pass:[<<TX_DATABASE_PREFIX_MAPPING_VALUE>>]
- - name: pass:[<<SERVICE_HOST>>]
- value: pass:[<<SERVICE_HOST_VALUE>>]
- - name: pass:[<<SERVICE_PORT>>]
- value: pass:[<<SERVICE_PORT_VALUE>>]
- image: pass:[<<SSO_IMAGE_VALUE>>]
- imagePullPolicy: Always
- name: job-to-migrate-db-to-{project_templates_version}
- # Keep the pod running after the SQL migration
- # file was generated, so we can retrieve it
- command:
- - "/bin/bash"
- - "-c"
- - "/opt/eap/bin/openshift-launch.sh || sleep 600"
- restartPolicy: Never
-----
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ cp job-to-migrate-db-to-{project_templates_version}.yaml.orig \
- job-to-migrate-db-to-{project_templates_version}.yaml
-----
-. From the deployment config used to run the previous version of the {project_openshift_product_name} image, copy the datasource definition and database access credentials to the appropriate places in the template of the database migration job.
-+
-Use the following script to copy the `DB_SERVICE_PREFIX_MAPPING` and `TX_DATABASE_PREFIX_MAPPING` variable values, together with the values of the environment variables specific to the particular datasource (`_JNDI`, `_USERNAME`, `_PASSWORD`, and `_DATABASE`), from the deployment config named `sso` to the database migration job template named `job-to-migrate-db-to-{project_templates_version}.yaml`.
-+
-[NOTE]
-====
-Although the `DB_SERVICE_PREFIX_MAPPING` environment variable allows a link:https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.1/html-single/red_hat_jboss_enterprise_application_platform_for_openshift/index#datasources[comma-separated list of pass:[<name>-<database_type>=<PREFIX>] triplets] as its value (for example, `sso-postgresql=DB`), this example script accepts only one datasource triplet definition for demonstration purposes. You can modify the script to handle multiple datasource definition triplets.
-====
-+
-[source,bash,subs="verbatim,macros,attributes"]
-----
-$ cat mirror_sso_dc_db_vars.sh
-#!/bin/bash
-
-# IMPORTANT:
-#
-# If the name of the SSO deployment config differs from 'sso'
-# or if the file name of the YAML definition of the migration
-# job is different, update the following two variables
-SSO_DC_NAME="sso"
-JOB_MIGRATION_YAML="job-to-migrate-db-to-{project_templates_version}.yaml"
-
-# Get existing variables of the $SSO_DC_NAME deployment config
-# in an array
-declare -a SSO_DC_VARS=( \
- $(oc set env dc/${SSO_DC_NAME} --list \
- | sed '/^#/d') \
-)
-
-# Get the PREFIX used in the names of environment variables
-PREFIX=$( \
- grep -oP 'DB_SERVICE_PREFIX_MAPPING=\[^ ]++' \
- <<< "${SSO_DC_VARS[@]}" \
-)
-PREFIX=${PREFIX##*=}
-
-# Substitute:
-# * the pass:[<<PREFIX>>] placeholders with the actual $PREFIX value and
-# * the per-variable value placeholders with the actual values of the
-#   corresponding environment variables
-sed -i "s#pass:[<<PREFIX>>]#${PREFIX}#g" ${JOB_MIGRATION_YAML}
-sed -i "s#<<PREFIX_#<<${PREFIX}_#g" ${JOB_MIGRATION_YAML}
-
-for VAR in "${SSO_DC_VARS[@]}"
-do
-  IFS='=' read -r KEY VALUE <<< "${VAR}"
-  KEY="${KEY}_VALUE"
-  # Process only the datasource related variables
-  if grep -qP '(PREFIX_MAPPING|JNDI|USERNAME|PASSWORD|DATABASE)' <<< "${KEY}"
-  then
-    # Numeric values need to be quoted in the YAML definition
-    if grep -qP '^[0-9]+$' <<< "${VALUE}"
-    then
-      sed -i "s#<<${KEY}>>#\"${VALUE}\"#g" ${JOB_MIGRATION_YAML}
-    # Character values do not need quotes
-    else
-      sed -i "s#<<${KEY}>>#${VALUE}#g" ${JOB_MIGRATION_YAML}
-    fi
-    # Verify that the value has been successfully propagated.
-    if
-      grep -qP '(JNDI|USERNAME|PASSWORD|DATABASE)' <<< "${KEY}" &&
-      pass:[grep -q "<<PREFIX${KEY#${PREFIX}}"] ${JOB_MIGRATION_YAML} ||
-      grep -q "<<${KEY}>>" ${JOB_MIGRATION_YAML}
-    then
-      echo "Failed to update value of ${KEY%_VALUE}! Aborting."
-      exit 1
-    else
-      printf '%-60s%-40s\n' \
-        "Successfully updated ${KEY%_VALUE} to:" \
-        "$VALUE"
-    fi
-  fi
-done
-----
-+
-[[get-db-credentials]]
-Run the script.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ chmod +x ./mirror_sso_dc_db_vars.sh
-$ ./mirror_sso_dc_db_vars.sh
-Successfully updated DB_SERVICE_PREFIX_MAPPING to: sso-postgresql=DB
-Successfully updated DB_JNDI to: java:jboss/datasources/KeycloakDS
-Successfully updated DB_USERNAME to: userxOp
-Successfully updated DB_PASSWORD to: tsWNhQHK
-Successfully updated DB_DATABASE to: root
-Successfully updated TX_DATABASE_PREFIX_MAPPING to: sso-postgresql=DB
-----
-. Build the {project_name} {project_version} database migration image using the link:https://github.com/iankko/openshift-examples/tree/KEYCLOAK-8500/sso-manual-db-migration[pre-configured source] and wait for the build to finish.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc get is -n openshift | grep {project_templates_version} | cut -d ' ' -f1
-{openshift_image_stream}
-----
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc new-build {openshift_image_stream}:{project_latest_image_tag}~https://github.com/iankko/openshift-examples.git#KEYCLOAK-8500 \
- --context-dir=sso-manual-db-migration \
- --name={project_templates_version}-db-migration-image
---> Found image bf45ac2 (7 days old) in image stream "openshift/{openshift_image_stream}" under tag "{project_latest_image_tag}" for "{openshift_image_stream}:{project_latest_image_tag}"
-
- Red Hat SSO {project_version}
- ---------------
- Platform for running Red Hat SSO
-
- Tags: sso, sso7, keycloak
-
- * A source build using source code from \https://github.com/iankko/openshift-examples.git#KEYCLOAK-8500 will be created
- * The resulting image will be pushed to image stream "{project_templates_version}-db-migration-image:latest"
- * Use 'start-build' to trigger a new build
-
---> Creating resources with label build={project_templates_version}-db-migration-image ...
- imagestream "{project_templates_version}-db-migration-image" created
- buildconfig "{project_templates_version}-db-migration-image" created
---> Success
- Build configuration "{project_templates_version}-db-migration-image" created and build triggered.
- Run 'oc logs -f bc/{project_templates_version}-db-migration-image' to stream the build progress.
-----
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc logs -f bc/{project_templates_version}-db-migration-image
-Cloning "https://github.com/iankko/openshift-examples.git#KEYCLOAK-8500" ...
-...
-Push successful
-----
-. Update the template of the database migration job (`job-to-migrate-db-to-{project_templates_version}.yaml`) with reference to the built `{project_templates_version}-db-migration-image` image.
-.. Get the docker pull reference for the image.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ PULL_REF=$(oc get istag -n $(oc project -q) --no-headers | grep {project_templates_version}-db-migration-image | tr -s ' ' | cut -d ' ' -f 2)
-----
-.. Replace the pass:[<<SSO_IMAGE_VALUE>>] field in the job template with the pull specification.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ sed -i "s#pass:[<<SSO_IMAGE_VALUE>>]#$PULL_REF#g" job-to-migrate-db-to-{project_templates_version}.yaml
-----
-.. Verify that the field is updated.
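-+
-For example, an illustrative check (a zero count means no unsubstituted placeholder remains):
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ grep -c 'pass:[<<SSO_IMAGE_VALUE>>]' job-to-migrate-db-to-{project_templates_version}.yaml
-0
-----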
-. Instantiate the database migration job from the job template.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc create -f job-to-migrate-db-to-{project_templates_version}.yaml
-job "job-to-migrate-db-to-{project_templates_version}" created
-----
-+
-[IMPORTANT]
-====
-The database migration process handles the data schema update and performs manipulation of the data. Therefore, stop all pods running the previous version of the {project_openshift_product_name} image before the SQL migration file is dynamically generated.
-====
-+
-. Identify the deployment config used to deploy the containers running the previous version of the {project_openshift_product_name} image.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc get dc -o name --selector=application=sso
-deploymentconfig/sso
-deploymentconfig/sso-postgresql
-----
-. Stop all pods running the previous version of the {project_openshift_product_name} image in the current namespace.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc scale --replicas=0 dc/sso
-deploymentconfig "sso" scaled
-----
-. Run the database migration job and wait for the pod to be running correctly.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc get jobs
-NAME DESIRED SUCCESSFUL AGE
-job-to-migrate-db-to-{project_templates_version} 1 0 3m
-----
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc scale --replicas=1 job/job-to-migrate-db-to-{project_templates_version}
-job "job-to-migrate-db-to-{project_templates_version}" scaled
-----
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc get pods
-NAME READY STATUS RESTARTS AGE
-sso-postgresql-1-n5p16 1/1 Running 1 19h
-job-to-migrate-db-to-{project_templates_version}-b87bb 1/1 Running 0 1m
-{project_templates_version}-db-migration-image-1-build 0/1 Completed 0 27m
-----
-+
-[NOTE]
-====
-By default, the database migration job terminates automatically `600 seconds` after the migration file is generated. You can adjust this time period.
-====
-. Get the dynamically generated SQL database migration file from the pod.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ mkdir -p ./db-update
-$ oc rsync job-to-migrate-db-to-{project_templates_version}-b87bb:/opt/eap/keycloak-database-update.sql ./db-update
-receiving incremental file list
-keycloak-database-update.sql
-
-sent 30 bytes received 29,726 bytes 59,512.00 bytes/sec
-total size is 29,621 speedup is 1.00
-----
-. Inspect the `keycloak-database-update.sql` file to review the changes to be performed during the manual database update to {project_name} {project_version}.
-. Apply the database update manually.
-* Run the following commands if you are running a previous version of the {project_openshift_product_name} image, backed by a PostgreSQL database (deployed in ephemeral or persistent mode) running on a separate pod:
-... Copy the generated SQL migration file to the PostgreSQL pod.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc rsync --no-perms=true ./db-update/ sso-postgresql-1-n5p16:/tmp
-sending incremental file list
-
-sent 77 bytes received 11 bytes 176.00 bytes/sec
-total size is 26,333 speedup is 299.24
-----
-... Start a shell session to the PostgreSQL pod.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc rsh sso-postgresql-1-n5p16
-sh-4.2$
-----
-... Use the `psql` tool to apply the database update manually.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-sh-4.2$ alias psql="/opt/rh/rh-postgresql95/root/bin/psql"
-sh-4.2$ psql --version
-psql (PostgreSQL) 9.5.4
-sh-4.2$ psql -U _USERNAME -d _DATABASE -W -f /tmp/keycloak-database-update.sql
-Password for user _USERNAME:
-INSERT 0 1
-INSERT 0 1
-...
-----
-+
-[IMPORTANT]
-====
-Replace `_USERNAME` and `_DATABASE` with the actual database credentials retrieved xref:get-db-credentials[in the previous section]. Also use the value of `_PASSWORD` as the password for the database when prompted.
-====
-... Close the shell session to the PostgreSQL pod. Continue with the xref:image-change-trigger-update-step[image change trigger update step].
-
-[[image-change-trigger-update-step]]
-[start=12]
-. Update the image change trigger in the existing deployment config to reference the {project_name} {project_version} image.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc patch dc/sso --type=json -p '[{"op": "replace", "path": "/spec/triggers/0/imageChangeParams/from/name", "value": "{openshift_image_stream}:{project_latest_image_tag}"}]'
-"sso" patched
-----
-. Start rollout of the new {project_name} {project_version} images based on the latest image defined in the image change triggers.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc rollout latest dc/sso
-deploymentconfig "sso" rolled out
-----
-. Deploy the {project_name} {project_version} containers using the modified deployment config.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc scale --replicas=1 dc/sso
-deploymentconfig "sso" scaled
-----
-
-=== Migrating the {project_name} server's database across environments
-This tutorial focuses on migrating the {project_name} server database from one environment to another or to a different database.
-
-The export and import of the {project_name} {project_version} database are triggered at {project_name} server boot time, and their parameters are passed in via Java system properties. This means that during one {project_name} server boot, only one of the possible migration actions, *_export_* or *_import_*, is performed.
-
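-For example, an import at boot time could be requested through the same Java system property mechanism, sketched below for illustration (the file path is a placeholder; the export variant of this command is used later in this tutorial, and the tutorial itself performs the import through the Admin Console):
-
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc set env dc/sso \
- -e "JAVA_OPTS_APPEND= \
- -Dkeycloak.migration.action=import \
- -Dkeycloak.migration.provider=singleFile \
- -Dkeycloak.migration.file=/tmp/demorealm-export.json"
-----
-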
-==== Deploying the {project_name} PostgreSQL application template
-
-.Prerequisites
-* The steps described in the xref:Preparing-SSO-Authentication-for-OpenShift-Deployment[Preparing {project_name} Authentication for OpenShift Deployment] section have been performed already.
-
-.Procedure
-. Log in to the OpenShift web console and select the *sso-app-demo* project space.
-. Click *Add to project* to list the default image streams and templates.
-. Use the *Filter by keyword* search bar to limit the list to those that match _sso_. You may need to click *See all* to show the desired application template.
-. Select the *_{project_templates_version}-postgresql_* {project_name} application template. When deploying the template, ensure that the *_SSO_REALM_* variable *remains unset* (its default value).
-+
-[WARNING]
-====
-When the *_SSO_REALM_* configuration variable is set on the {project_openshift_product_name} image, a database import is performed to create the {project_name} server realm requested in the variable. For the database export to be performed correctly, the *_SSO_REALM_* configuration variable must not be defined on such an image at the same time.
-====
-+
-. Click *Create* to deploy the application template and start pod deployment. This may take a couple of minutes.
-+
-Then access the {project_name} web console at *$$https://secure-sso-$$__.__/auth/admin* using the xref:sso-administrator-setup[administrator account].
-+
-[NOTE]
-====
-This example workflow uses a self-generated CA to provide an end-to-end workflow for demonstration purposes. Accessing the {project_name} web console triggers an insecure connection warning. +
-For production environments, Red Hat recommends that you use an SSL certificate purchased from a verified Certificate Authority.
-====
-
-[role="_additional-resources"]
-.Additional resources
-* link:{project_doc_base_url}/server_administration_guide/#assembly-exporting-importing_server_administration_guide[Importing and exporting the database]
-
-==== (Optional) Creating additional realms and users to be exported
-
-When performing a {project_name} {project_version} server database export, only the realms and users currently in the database are exported. If the exported JSON file should also include additional {project_name} realms and users, create them first using these procedures:
-
-. link:{project_doc_base_url}//server_administration_guide/index#proc-creating-a-realm_server_administration_guide[Create a realm]
-. link:{project_doc_base_url}//server_administration_guide/index#proc-creating-user_server_administration_guide[Create a user]
-
-After they are created, the database can be exported.
-
-[role="_additional-resources"]
-.Additional resources
-* link:{project_doc_base_url}//server_administration_guide/#assembly-exporting-importing_server_administration_guide[Importing and exporting the database]
-
-[[sso-export-the-database]]
-==== Export the {project_name} database as a JSON file on the OpenShift pod
-
-.Prerequisites
-
-* New realms and users are created.
-
-.Procedure
-. Get the {project_name} deployment config and scale it down to zero.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc get dc -o name
-deploymentconfig/sso
-deploymentconfig/sso-postgresql
-
-$ oc scale --replicas=0 dc sso
-deploymentconfig "sso" scaled
-----
-. Instruct the {project_name} {project_version} server deployed on the {project_openshift_product_name} image to perform a database export at {project_name} server boot time.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc set env dc/sso \
- -e "JAVA_OPTS_APPEND= \
- -Dkeycloak.migration.action=export \
- -Dkeycloak.migration.provider=singleFile \
- -Dkeycloak.migration.file=/tmp/demorealm-export.json"
-----
-. Scale the {project_name} deployment config back up. This will start the {project_name} server and export its database.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc scale --replicas=1 dc sso
-deploymentconfig "sso" scaled
-----
-. (Optional) Verify that the export was successful.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc get pods
-NAME READY STATUS RESTARTS AGE
-sso-4-ejr0k 1/1 Running 0 27m
-sso-postgresql-1-ozzl0 1/1 Running 0 4h
-
-$ oc logs sso-4-ejr0k | grep 'Export'
-09:24:59,503 INFO [org.keycloak.exportimport.singlefile.SingleFileExportProvider] (ServerService Thread Pool -- 57) Exporting model into file /tmp/demorealm-export.json
-09:24:59,998 INFO [org.keycloak.services] (ServerService Thread Pool -- 57) KC-SERVICES0035: Export finished successfully
-----
-
-==== Retrieve and import the exported JSON file
-
-.Procedure
-
-. Retrieve the JSON file of the {project_name} database from the pod.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc get pods
-NAME READY STATUS RESTARTS AGE
-sso-4-ejr0k 1/1 Running 0 2m
-sso-postgresql-1-ozzl0 1/1 Running 0 4h
-
-$ oc rsync sso-4-ejr0k:/tmp/demorealm-export.json .
-----
-
-. (Optional) Import the JSON file of the {project_name} database into an {project_name} server running in another environment.
-+
-[NOTE]
-====
-For importing into an {project_name} server not running on OpenShift, see the link:{project_doc_base_url}/server_administration_guide/#assembly-exporting-importing_server_administration_guide[Importing and exporting the database].
-====
-+
-When the {project_name} server is running as a {project_name} {project_version} container on OpenShift, use the
-link:{project_doc_base_url}/server_administration_guide/#assembly-exporting-importing_server_administration_guide[Admin Console Export/Import] function to import the resources from a previously exported JSON file into the {project_name} server's database.
-
-.. Log in to the `master` realm's Admin Console of the {project_name} server using the credentials of the administrator user. In the browser, navigate to *\http://sso-./auth/admin* for the {project_name} web server, or to *\https://secure-sso-./auth/admin* for the encrypted {project_name} web server.
-.. At the top of the sidebar, choose the name of the {project_name} realm that the users, clients, realm roles, and client roles should be imported into. This example uses the `master` realm.
-.. Click the *Import* link under the *Manage* section at the bottom of the sidebar.
-.. In the page that opens, click *Select file* and then specify the location of the exported `demorealm-export.json` JSON file on the local file system.
-.. From the *Import from realm* drop-down menu, select the name of the {project_name} realm from which the data should be imported. This example uses `master` realm.
-.. Choose which of the users, clients, realm roles, and client roles should be imported (all of them are imported by default).
-.. Choose a strategy to apply when a resource already exists (one of *Fail*, *Skip*, or *Overwrite*).
-+
-[NOTE]
-====
-The attempt to import an object (user, client, realm role, or client role) fails if an object with the same identifier already exists in the current database. Use the *Skip* strategy to import the objects that are present in the `demorealm-export.json` file but do not exist in the current database.
-====
-.. Click *Import* to perform the import.
-+
-When importing objects from a non-master realm to the `master` realm or vice versa, after clicking the *Import* button, you might encounter an error like the following:
-+
-[[realm-import-error-message]]
-[.text-center]
-image:images/import_realm_error.png[Example of Possible Error Message when Performing Partial Import from Previously Exported JSON File]
-+
-In such cases, first create the missing clients with the *Access Type* set to *bearer-only*. You can create these clients by manually copying their characteristics from the source {project_name} server, on which the export JSON file was created, to the target {project_name} server, where the JSON file is imported. After creating the necessary clients, click the *Import* button again.
-+
-To suppress the error message xref:realm-import-error-message[above], create the missing `realm-management` client with the *bearer-only* *Access Type* and click the *Import* button again.
-+
-For the *Skip* import strategy, the newly added objects are marked as *ADDED* and the objects that were skipped are marked as *SKIPPED* in the *Action* column on the import result page.
-+
-The Admin Console import allows you to *overwrite* resources if you choose (the *Overwrite* strategy). Use this feature with caution on a production system.
-
-
-[[OSE-SSO-AUTH-TUTE]]
-=== Configuring OpenShift 3.11 to use {project_name} for Authentication
-Configure OpenShift 3.11 to use the {project_name} deployment as the authorization gateway for OpenShift.
-
-This example adds {project_name} as an authentication method alongside the identity providers configured during the installation of the OpenShift Container Platform cluster. Once configured, the {project_name} method will also be available (together with the configured identity providers) for user login to your OpenShift web console.
-
-[role="additional-resources"]
-.Additional resources
-* link:{ocpdocs_idp_config_link}[Identity providers]
-* link:{ocpdocs_install_cluster_link}[Installation of the OpenShift Container Platform cluster]
-
-==== Configuring {project_name} Credentials
-
-.Prerequisites
-* The steps described in the xref:Preparing-SSO-Authentication-for-OpenShift-Deployment[Preparing {project_name} Authentication for OpenShift Deployment] section have been performed already.
-
-.Procedure
-Log in to the encrypted {project_name} web server at *$$https://secure-sso-$$_sso-app-demo_._openshift32.example.com_/auth/admin* using the xref:sso-administrator-setup[administrator account] created during the {project_name} deployment.
-
-*Create a Realm*
-
-. Hover your cursor over the realm namespace (default is *Master*) at the top of the sidebar and click *Add Realm*.
-. Enter a realm name (this example uses _OpenShift_) and click *Create*.
-
-*Create a User*
-
-Create a test user that can be used to demonstrate the {project_name}-enabled OpenShift login:
-
-. Click *Users* in the *Manage* sidebar to view the user information for the realm.
-. Click *Add User*.
-. Enter a valid *Username* (this example uses _testuser_) and any additional optional information and click *Save*.
-. Edit the user configuration:
-.. Click the *Credentials* tab in the user space and enter a password for the user.
-.. Ensure the *Temporary Password* option is set to *Off* so that it does not prompt for a password change later on, and click *Reset Password* to set the user password. A pop-up window prompts for additional confirmation.
-
-*Create and Configure an OpenID-Connect Client*
-
-. Click *Clients* in the *Manage* sidebar and click *Create*.
-. Enter the *Client ID*. This example uses _openshift-demo_.
-. Select a *Client Protocol* from the drop-down menu (this example uses *openid-connect*) and click *Save*. You will be taken to the configuration *Settings* page of the _openshift-demo_ client.
-. From the *Access Type* drop-down menu, select *confidential*. This is the access type for server-side applications.
-. In the *Valid Redirect URIs* dialog, enter the URI for the OpenShift web console, which is _$$https://openshift$$.example.com:8443/*_ in this example.
-
-The client *Secret* is needed to configure OpenID-Connect on the OpenShift master in the next section. You can copy it now from under the *Credentials* tab. The secret for this example is _7b0384a2-b832-16c5-9d73-2957842e89h7_.
-
-[role="additional-resources"]
-.Additional resources
-* link:{project_doc_base_url}//server_administration_guide/index#assembly-managing-clients_server_administration_guide[Managing OpenID Connect and SAML Clients]
-
-==== Configuring OpenShift Master for {project_name} authentication
-Log in to the OpenShift master CLI.
-
-.Prerequisites
-You must have permission to edit the */etc/origin/master/master-config.yaml* file.
-
-.Procedure
-
-. Edit the */etc/origin/master/master-config.yaml* file and find the *identityProviders* section. For example, if the OpenShift master is configured with the link:{ocpdocs_htpasswd_idp_link}[HTPasswd identity provider], the *identityProviders* section will look similar to the following:
-+
-[source,bash,subs="attributes+,macros+"]
-----
-identityProviders:
-- challenge: true
- login: true
- name: htpasswd_auth
- provider:
- apiVersion: v1
- file: /etc/origin/openshift-passwd
- kind: HTPasswdPasswordIdentityProvider
-----
-+
-Add {project_name} as a secondary identity provider with content similar to the following snippet:
-+
-[source,bash,subs="attributes+,macros+"]
-----
-- name: rh_sso
- challenge: false
- login: true
- mappingMethod: add
- provider:
- apiVersion: v1
- kind: OpenIDIdentityProvider
- clientID: pass:quotes[_openshift-demo_]
- clientSecret: pass:quotes[_7b0384a2-b832-16c5-9d73-2957842e89h7_]
- pass:quotes[_ca: xpaas.crt_]
- urls:
- authorize: pass:quotes[_https://secure-sso-sso-app-demo.openshift32.example.com/auth/realms/OpenShift/protocol/openid-connect/auth_]
- token: pass:quotes[_https://secure-sso-sso-app-demo.openshift32.example.com/auth/realms/OpenShift/protocol/openid-connect/token_]
- userInfo: pass:quotes[_https://secure-sso-sso-app-demo.openshift32.example.com/auth/realms/OpenShift/protocol/openid-connect/userinfo_]
- claims:
- id:
- - sub
- preferredUsername:
- - preferred_username
- name:
- - name
- email:
- - email
-----
-.. The {project_name} *Secret* hash for the *clientSecret* can be found in the {project_name} web console: *Clients* -> *_openshift-demo_* -> *Credentials*
-.. The endpoints for the *urls* can be found by making a request to the OpenID Connect discovery endpoint of the {project_name} realm. For example:
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ curl -k \https://secure-sso-sso-app-demo.openshift32.example.com/auth/realms/OpenShift/.well-known/openid-configuration
-----
-+
-The response includes the *authorization_endpoint*, *token_endpoint*, and *userinfo_endpoint*.
-+
-.. This example workflow uses a self-generated CA to provide an end-to-end workflow for demonstration purposes. For this reason, the *ca* is provided as _xpaas.crt_. This CA certificate must also be copied into the */etc/origin/master* folder. This is not necessary if you use a certificate purchased from a verified Certificate Authority.
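-+
-For example (assuming the certificate file is present in the working directory):
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ cp xpaas.crt /etc/origin/master/
-----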
-. Save the configuration and restart the OpenShift master:
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ systemctl restart atomic-openshift-master
-----
-
-==== Logging in to OpenShift
-
-.Procedure
-
-. Navigate to the OpenShift web console, which in this example is _https://openshift.example.com:8443/console_.
-+
-The OpenShift login page now offers the option to log in using either the *htpasswd_auth* or the *rh_sso* identity provider. The former is still available because it is present in the */etc/origin/master/master-config.yaml* file.
-
-. Select *rh_sso* and log in to OpenShift with the _testuser_ user created earlier in {project_name}.
-+
-No projects are visible to _testuser_ until they are added through the OpenShift CLI. This is the only way to provide user privileges in OpenShift because it currently does not accept external role mapping.
-
-. To provide _testuser_ `view` privileges for the *sso-app-demo*, use the OpenShift CLI:
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc adm policy add-role-to-user view testuser -n sso-app-demo
-----
-
-[[binary-builds]]
-=== Creating an OpenShift application from maven binaries and securing it using {project_name}
-
-To deploy existing applications on OpenShift, you can use the binary source capability.
-
-==== Deploy Binary Build of EAP 6.4 / 7.1 JSP Service Invocation Application and Secure it Using {project_name}
-
-The following example uses both the link:https://github.com/keycloak/keycloak-quickstarts/tree/latest/app-jee-jsp[app-jee-jsp] and link:https://github.com/keycloak/keycloak-quickstarts/tree/latest/service-jee-jaxrs[service-jee-jaxrs] quickstarts to deploy an EAP 6.4 / 7.1 JSP service application that authenticates using {project_name}.
-
-.Prerequisites
-
-* The {project_openshift_product_name} image has been previously deployed using one of the following templates:
-
-* *_{project_templates_version}-postgresql_*
-* *_{project_templates_version}-postgresql-persistent_*
-* *_{project_templates_version}-x509-postgresql-persistent_*
-
-===== Create {project_name} Realm, Roles, and User for the EAP 6.4 / 7.1 JSP Application
-
-The EAP 6.4 / 7.1 JSP service application requires a dedicated {project_name} realm, username, and password to be able to authenticate using {project_name}. Perform the following steps after the {project_openshift_product_name} image has been deployed:
-
-*Create the {project_name} Realm*
-
-. Log in to the Admin Console of the {project_name} server.
-+
-*\https://secure-sso-sso-app-demo.openshift.example.com/auth/admin*
-+
-Use the xref:sso-administrator-setup[credentials of the {project_name} administrator user].
-
-. Hover your cursor over the realm namespace (default is *Master*) at the top of the sidebar and click *Add Realm*.
-
-. Enter a realm name (this example uses `demo`) and click *Create*.
-
-[[copy-rsa-public-key]]
-*Copy the Public Key*
-
-In the newly created `demo` realm, click the *Keys* tab, then select the *Active* tab, and copy the public key of type *RSA* that has been generated.
-
-[NOTE]
-====
-The {project_openshift_product_name} image version {project_version} generates multiple keys by default, for example *HS256*, *RS256*, or *AES*. To copy the public key information for the {project_openshift_product_name} {project_version} image, click the *Keys* tab, then select the *Active* tab, and click the *Public key* button in the row of the keys table where the key type matches *RSA*. Then select and copy the content of the pop-up window that appears.
-====
-
-The information about the public key is needed xref:sso-public-key-details[later, when deploying] the {project_name}-enabled EAP 6.4 / 7.1 JSP application.
-
-*Create {project_name} Roles*
-
-The service in the link:https://github.com/keycloak/keycloak-quickstarts/tree/latest/service-jee-jaxrs[service-jee-jaxrs] quickstart exposes three endpoints (a sample probe follows the list):
-
-* `public` - Requires no authentication.
-* `secured` - Can be invoked by users with the `user` role.
-* `admin` - Can be invoked by users with the `admin` role.
-
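-Once the application is deployed, these endpoints can be probed directly, for example (an illustrative sketch; the route hostname and context path are placeholders matching the ones used later in this example):
-
-[source,bash,subs="attributes+,macros+"]
-----
-$ curl http://eap-app-eap-app-demo.openshift.example.com/service/public
-----
-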
-Create `user` and `admin` roles in {project_name}. These roles will be assigned to an {project_name} application user to authenticate access to user applications.
-
-. Click *Roles* in the *Configure* sidebar to list the roles for this realm.
-+
-[NOTE]
-====
-This is a new realm, so there should only be the default (`offline_access` and `uma_authorization`) roles.
-====
-. Click *Add Role*.
-. Enter the role name (`user`) and click *Save*.
-
-Repeat these steps for the `admin` role.
-
-*Create the {project_name} Realm Management User*
-
-. Click *Users* in the *Manage* sidebar to view the user information for the realm.
-. Click *Add User.*
-. Enter a valid *Username* (this example uses the user `appuser`) and click *Save*.
-. Edit the user configuration:
-.. Click the *Credentials* tab in the user space and enter a password for the user (this example uses the password `apppassword`).
-.. Ensure the *Temporary Password* option is set to *Off* so that it does not prompt for a password change later on, and click *Reset Password* to set the user password. A pop-up window will prompt you to confirm.
-
-===== Assign the user role to the realm management user
-
-Perform the following steps to associate the previously created `appuser` with the `user` {project_name} role:
-
-. Click *Role Mappings* to list the realm and client role configuration. In *Available Roles*, select the `user` role created earlier, and click *Add selected>*.
-. Click *Client Roles*, select the *realm-management* entry from the list, and select each record in the *Available Roles* list.
-+
-[NOTE]
-====
-You can select multiple items at once by holding the *Ctrl* key and simultaneously clicking the first `impersonation` entry. While keeping the *Ctrl* key and the left mouse button pressed, move to the end of the list to the `view-clients` entry and ensure each record is selected.
-====
-. Click *Add selected>* to assign the roles to the client.
-
-===== Prepare {project_name} Authentication for OpenShift Deployment of the EAP 6.4 / 7.1 JSP Application
-
-.Procedure
-
-. Create a new project for the EAP 6.4 / 7.1 JSP application.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc new-project eap-app-demo
-----
-. Add the `view` role to the link:{ocpdocs_default_service_accounts_link}[`default`] service account. This enables the service account to view all the resources in the `eap-app-demo` namespace, which is necessary for managing the cluster.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc policy add-role-to-user view system:serviceaccount:$(oc project -q):default
-----
-. The EAP template requires an link:{openshift_link}#Configuring-Keystores[SSL keystore and a JGroups keystore]. This example uses `keytool`, a package included with the Java Development Kit, to generate self-signed certificates for these keystores.
-.. Generate a secure key for the SSL keystore (this example uses `password` as the password for the keystore).
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ keytool -genkeypair \
--dname "CN=secure-eap-app-eap-app-demo.openshift.example.com" \
--alias https \
--storetype JKS \
--keystore eapkeystore.jks
-----
-.. Generate a secure key for the JGroups keystore (this example uses `password` as the password for the keystore).
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ keytool -genseckey \
--alias jgroups \
--storetype JCEKS \
--keystore eapjgroups.jceks
-----
-.. Generate the EAP 6.4 / 7.1 for OpenShift secrets with the SSL and JGroups keystore files.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc create secret generic eap-ssl-secret --from-file=eapkeystore.jks
-----
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc create secret generic eap-jgroup-secret --from-file=eapjgroups.jceks
-----
-.. Add the EAP application secret to the link:{ocpdocs_default_service_accounts_link}[`default`] service account.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc secrets link default eap-ssl-secret eap-jgroup-secret
-----
-
-===== Deploy binary build of the EAP 6.4 / 7.1 JSP application
-
-.Procedure
-
-. Clone the source code.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ git clone \https://github.com/keycloak/keycloak-quickstarts.git
-----
-. link:{appserver_doc_base_url}/html-single/development_guide/index#use_the_maven_repository[Configure] the link:https://access.redhat.com/maven-repository[Red Hat JBoss Middleware Maven repository].
-
-. Build both the link:https://github.com/keycloak/keycloak-quickstarts/tree/latest/service-jee-jaxrs[service-jee-jaxrs] and link:https://github.com/keycloak/keycloak-quickstarts/tree/latest/app-jee-jsp[app-jee-jsp] applications.
-.. Build the `service-jee-jaxrs` application.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ cd keycloak-quickstarts/service-jee-jaxrs/
-----
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ mvn clean package -DskipTests
-[INFO] Scanning for projects...
-[INFO]
-[INFO] ------------------------------------------------------------------------
-[INFO] Building Keycloak Quickstart: service-jee-jaxrs 3.1.0.Final
-[INFO] ------------------------------------------------------------------------
-...
-[INFO] ------------------------------------------------------------------------
-[INFO] BUILD SUCCESS
-[INFO] ------------------------------------------------------------------------
-[INFO] Total time: 2.153 s
-[INFO] Finished at: 2017-06-26T12:06:12+02:00
-[INFO] Final Memory: 25M/241M
-[INFO] ------------------------------------------------------------------------
-----
-.. *Comment out* the `app-jee-jsp/config/keycloak.json` requirement of the `maven-enforcer-plugin` plugin and build the `app-jee-jsp` application.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-service-jee-jaxrs]$ cd ../app-jee-jsp/
-----
-+
-[source,bash,subs="attributes+,macros+"]
-----
-app-jee-jsp]$ sed -i '/\(<executions>\)/s/^/<\!--/' pom.xml
-----
-+
-[source,bash,subs="attributes+,macros+"]
-----
-app-jee-jsp]$ sed -i '/\(<\/executions>\)/a\-->' pom.xml
-----
-+
-[source,bash,subs="attributes+,macros+"]
-----
-app-jee-jsp]$ mvn clean package -DskipTests
-[INFO] Scanning for projects...
-[INFO]
-[INFO] ------------------------------------------------------------------------
-[INFO] Building Keycloak Quickstart: app-jee-jsp 3.1.0.Final
-[INFO] ------------------------------------------------------------------------
-...
-[INFO] Building war: /tmp/github/keycloak-quickstarts/app-jee-jsp/target/app-jsp.war
-[INFO] ------------------------------------------------------------------------
-[INFO] BUILD SUCCESS
-[INFO] ------------------------------------------------------------------------
-[INFO] Total time: 3.018 s
-[INFO] Finished at: 2017-06-26T12:22:25+02:00
-[INFO] Final Memory: 35M/310M
-[INFO] ------------------------------------------------------------------------
-----
-+
-[IMPORTANT]
-====
-The link:https://github.com/keycloak/keycloak-quickstarts/tree/latest/app-jee-jsp[app-jee-jsp] quickstart requires you to configure the adapter, and the adapter configuration file (`keycloak.json`) must be present in the `config/` directory in the root of the quickstart for the build to succeed. However, because this example configures the adapter later through selected environment variables available for the EAP 6.4 / 7.1 for OpenShift image, it is not necessary to provide a populated `keycloak.json` adapter configuration file at this point.
-====
-
-[[directory-structure-binary-builds]]
-[start=4]
-. Prepare the directory structure on the local file system.
-+
-Application archives in the *deployments/* subdirectory of the main binary build directory are copied directly to the xref:standard-deployments-directory[standard deployments directory] of the image being built on OpenShift. For the application to deploy, the directory hierarchy containing the web application data must be correctly structured.
-+
-Create the main directory for the binary build on the local file system and *deployments/* subdirectory within it. Copy the previously built WAR archives of both the *service-jee-jaxrs* and *app-jee-jsp* quickstarts to the *deployments/* subdirectory:
-+
-[source,bash,subs="attributes+,macros+"]
-----
-app-jee-jsp]$ ls
-config pom.xml README.md src target
-----
-+
-[source,bash,subs="attributes+,macros+"]
-----
-app-jee-jsp]$ mkdir -p sso-eap7-bin-demo/deployments
-----
-+
-[source,bash,subs="attributes+,macros+"]
-----
-app-jee-jsp]$ cp target/app-jsp.war sso-eap7-bin-demo/deployments/
-----
-+
-[source,bash,subs="attributes+,macros+"]
-----
-app-jee-jsp]$ cp ../service-jee-jaxrs/target/service.war sso-eap7-bin-demo/deployments/
-----
-+
-[source,bash,subs="attributes+,macros+"]
-----
-app-jee-jsp]$ tree sso-eap7-bin-demo/
-sso-eap7-bin-demo/
-|__ deployments
- |__ app-jsp.war
- |__ service.war
-
-1 directory, 2 files
-
-----
-+
-[[standard-deployments-directory]]
-[NOTE]
-====
-The location of the standard deployments directory depends on the underlying base image that was used to deploy the application. See the following table:
-
-.Standard Location of the Deployments Directory
-[cols="2", options="header"]
-|===
-| Name of the Underlying Base Image(s) | Standard Location of the Deployments Directory
-
-| EAP for OpenShift 6.4 and 7.1 | *_$JBOSS_HOME/standalone/deployments_*
-
-| Java S2I for OpenShift | *_/deployments_*
-
-| JWS for OpenShift | *_$JWS_HOME/webapps_*
-
-|===
-====
-. Identify the image stream for the EAP 6.4 / 7.1 image.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc get is -n openshift | grep eap | cut -d ' ' -f 1
-jboss-eap64-openshift
-jboss-eap71-openshift
-----
-
-[[eap-new-binary-build]]
-[start=6]
-. Create a new binary build, specifying the image stream and application name.
-+
-[NOTE]
-====
-Replace the `--image-stream=jboss-eap71-openshift` parameter with `--image-stream=jboss-eap64-openshift` in the following `oc` command to deploy the JSP application on top of the {appserver_name} 6.4 for OpenShift image.
-====
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc new-build --binary=true \
---image-stream=jboss-eap71-openshift \
---name=eap-app
---> Found image 31895a4 (3 months old) in image stream "openshift/jboss-eap71-openshift" under tag "latest" for "jboss-eap71-openshift"
-
- {appserver_name} {appserver_version}
- -------------
- Platform for building and running Jakarta EE applications on {appserver_name} {appserver_version}
-
- Tags: builder, javaee, eap, eap7
-
- * A source build using binary input will be created
- * The resulting image will be pushed to image stream "eap-app:latest"
- * A binary build was created, use 'start-build --from-dir' to trigger a new build
-
---> Creating resources with label build=eap-app ...
- imagestream "eap-app" created
- buildconfig "eap-app" created
---> Success
-----
-. Start the binary build. Instruct the `oc` executable to use the main directory of the binary build created xref:directory-structure-binary-builds[in the previous step] as the directory containing binary input for the OpenShift build. In the working directory of *app-jee-jsp*, issue the following command.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-app-jee-jsp]$ oc start-build eap-app \
---from-dir=./sso-eap7-bin-demo/ \
---follow
-Uploading directory "sso-eap7-bin-demo" as binary input for the build ...
-build "eap-app-1" started
-Receiving source from STDIN as archive ...
-Copying all war artifacts from /home/jboss/source/. directory into /opt/eap/standalone/deployments for later deployment...
-Copying all ear artifacts from /home/jboss/source/. directory into /opt/eap/standalone/deployments for later deployment...
-Copying all rar artifacts from /home/jboss/source/. directory into /opt/eap/standalone/deployments for later deployment...
-Copying all jar artifacts from /home/jboss/source/. directory into /opt/eap/standalone/deployments for later deployment...
-Copying all war artifacts from /home/jboss/source/deployments directory into /opt/eap/standalone/deployments for later deployment...
-'/home/jboss/source/deployments/app-jsp.war' -> '/opt/eap/standalone/deployments/app-jsp.war'
-'/home/jboss/source/deployments/service.war' -> '/opt/eap/standalone/deployments/service.war'
-Copying all ear artifacts from /home/jboss/source/deployments directory into /opt/eap/standalone/deployments for later deployment...
-Copying all rar artifacts from /home/jboss/source/deployments directory into /opt/eap/standalone/deployments for later deployment...
-Copying all jar artifacts from /home/jboss/source/deployments directory into /opt/eap/standalone/deployments for later deployment...
-Pushing image 172.30.82.129:5000/eap-app-demo/eap-app:latest ...
-Pushed 6/7 layers, 86% complete
-Pushed 7/7 layers, 100% complete
-Push successful
-----
-. Create a new OpenShift application based on the build.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc new-app eap-app
---> Found image 6b13d36 (2 minutes old) in image stream "eap-app-demo/eap-app" under tag "latest" for "eap-app"
-
- eap-app-demo/eap-app-1:aa2574d9
- -------------------------------
- Platform for building and running Jakarta EE applications on {appserver_name} {appserver_version}
-
- Tags: builder, javaee, eap, eap7
-
- * This image will be deployed in deployment config "eap-app"
- * Ports 8080/tcp, 8443/tcp, 8778/tcp will be load balanced by service "eap-app"
- * Other containers can access this service through the hostname "eap-app"
-
---> Creating resources ...
- deploymentconfig "eap-app" created
- service "eap-app" created
---> Success
- Run 'oc status' to view your app.
-----
-. Stop all running containers of the EAP 6.4 / 7.1 JSP application in the current namespace.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc get dc -o name
-deploymentconfig/eap-app
-----
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc scale dc/eap-app --replicas=0
-deploymentconfig "eap-app" scaled
-----
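-+
-Optionally, confirm that no application pods remain before reconfiguring the deployment. This check is a sketch; it assumes the pods carry the standard `deploymentconfig` label that OpenShift applies to pods managed by a deployment config:
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc get pods -l deploymentconfig=eap-app
-No resources found.
-----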
-. Further configure the EAP 6.4 / 7.1 JSP application prior to deployment.
-[[sso-public-key-details]]
-.. Configure the application with proper details about the {project_name} server instance.
-+
-[WARNING]
-====
-Ensure that you replace the value of the *_SSO_PUBLIC_KEY_* variable below with the actual content of the RSA public key for the `demo` realm that has been xref:copy-rsa-public-key[copied].
-====
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc set env dc/eap-app \
--e HOSTNAME_HTTP="eap-app-eap-app-demo.openshift.example.com" \
--e HOSTNAME_HTTPS="secure-eap-app-eap-app-demo.openshift.example.com" \
--e SSO_DISABLE_SSL_CERTIFICATE_VALIDATION="true" \
--e SSO_USERNAME="appuser" \
--e SSO_PASSWORD="apppassword" \
--e SSO_REALM="demo" \
--e SSO_URL="https://secure-sso-sso-app-demo.openshift.example.com/auth" \
--e SSO_PUBLIC_KEY="MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAkdhXyKx97oIoO6HwnV/MiX2EHO55Sn+ydsPzbjJevI5F31UvUco9uA8dGl6oM8HrnaWWv+i8PvmlaRMhhl6Xs68vJTEc6d0soP+6A+aExw0coNRp2PDwvzsXVWPvPQg3+iytStxu3Icndx+gC0ZYnxoRqL7rY7zKcQBScGEr78Nw6vZDwfe6d/PQ6W4xVErNytX9KyLFVAE1VvhXALyqEM/EqYGLmpjw5bMGVKRXnhmVo9E88CkFDH8E+aPiApb/gFul1GJOv+G8ySLoR1c8Y3L29F7C81odkVBp2yMm3RVFIGSPTjHqjO/nOtqYIfY4Wyw9mRIoY5SyW7044dZXRwIDAQAB" \
--e SSO_SECRET="0bb8c399-2501-4fcd-a183-68ac5132868d"
-deploymentconfig "eap-app" updated
-----
-.. Configure the application with details about both the SSL and JGroups keystore.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc set env dc/eap-app \
--e HTTPS_KEYSTORE_DIR="/etc/eap-secret-volume" \
--e HTTPS_KEYSTORE="eapkeystore.jks" \
--e HTTPS_PASSWORD="password" \
--e JGROUPS_ENCRYPT_SECRET="eap-jgroup-secret" \
--e JGROUPS_ENCRYPT_KEYSTORE_DIR="/etc/jgroups-encrypt-secret-volume" \
--e JGROUPS_ENCRYPT_KEYSTORE="eapjgroups.jceks" \
--e JGROUPS_ENCRYPT_PASSWORD="password"
-deploymentconfig "eap-app" updated
-----
-.. Define OpenShift volumes for both the SSL and JGroups secrets created earlier.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc volume dc/eap-app --add \
---name="eap-keystore-volume" \
---type=secret \
---secret-name="eap-ssl-secret" \
---mount-path="/etc/eap-secret-volume"
-deploymentconfig "eap-app" updated
-----
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc volume dc/eap-app --add \
---name="eap-jgroups-keystore-volume" \
---type=secret \
---secret-name="eap-jgroup-secret" \
---mount-path="/etc/jgroups-encrypt-secret-volume"
-deploymentconfig "eap-app" updated
-----
-.. Configure the deployment config of the application to run application pods under the `default` OpenShift service account (default setting).
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc patch dc/eap-app --type=json \
--p '[{"op": "add", "path": "/spec/template/spec/serviceAccountName", "value": "default"}]'
-"eap-app" patched
-----
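-+
-To verify that the patch took effect, you can inspect the deployment config; a minimal check:
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc get dc/eap-app -o yaml | grep serviceAccountName
-      serviceAccountName: default
-----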
-. Deploy a container of the EAP 6.4 / 7.1 JSP application using the modified deployment config.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc scale dc/eap-app --replicas=1
-deploymentconfig "eap-app" scaled
-----
-. Expose the service as a route.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc get svc -o name
-service/eap-app
-----
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc get route
-No resources found.
-----
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc expose svc/eap-app
-route "eap-app" exposed
-----
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc get route
-NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
-eap-app eap-app-eap-app-demo.openshift.example.com eap-app 8080-tcp None
-----
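-+
-Optionally, you can verify that the route responds before testing in a browser; a minimal check using the hostname reported by `oc get route` above:
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ curl -I http://eap-app-eap-app-demo.openshift.example.com/app-jsp
-----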
-
-===== Access the application
-
-Access the application in your browser using the URL *\http://eap-app-eap-app-demo.openshift.example.com/app-jsp*. You should see output similar to the following image:
-
-[.text-center]
-image:images/sso_app_jee_jsp.png[{project_name} Example JSP Application]
-
-.Procedure
-Perform the following to test the application:
-
-. Click the *INVOKE PUBLIC* button to access the `public` endpoint that doesn't require authentication.
-+
-You should see the *Message: public* output.
-. Click the *LOGIN* button to be redirected for user authentication to the {project_name} server instance against the `demo` realm.
-+
-Specify the username and password of the {project_name} user configured earlier (`appuser` / `apppassword`). Click *Log in*. The look of the application changes as detailed in the following image:
-+
-[.text-center]
-image:images/sso_app_jee_jsp_logged_in.png[]
-
-. Click the *INVOKE SECURED* button to access the `secured` endpoint.
-+
-You should see the *Message: secured* output.
-. Click the *INVOKE ADMIN* button to access the `admin` endpoint.
-+
-You should see *403 Forbidden* output.
-+
-[NOTE]
-====
-The `admin` endpoint requires users with the `admin` {project_name} role to invoke properly. Access is forbidden for `appuser` because they have only the `user` role, which allows them to access the `secured` endpoint.
-====
-
-.Procedure
-
-Perform the following steps to add the `appuser` to the `admin` {project_name} role:
-
-. Access the Admin Console of the {project_name} server's instance.
-+
-*\https://secure-sso-sso-app-demo.openshift.example.com/auth/admin*.
-+
-Use the xref:sso-administrator-setup[credentials of the {project_name} administrator user].
-. Click *Users* in the *Manage* sidebar to view the user information for the `demo` realm.
-. Click the *View all users* button.
-. Click the ID link for the *appuser* or alternatively click the *Edit* button in the *Actions* column.
-. Click the *Role Mappings* tab.
-. Select `admin` entry from the *Available Roles* list in the *Realm Roles* row.
-. Click the *Add selected>* button to add the `admin` role to the user.
-. Return to the EAP 6.4 / 7.1 JSP service application.
-+
-*\http://eap-app-eap-app-demo.openshift.example.com/app-jsp*.
-. Click the *LOGOUT* button to reload role mappings for the `appuser`.
-. Click the *LOGIN* button again and provide the `appuser` credentials.
-. Click the *INVOKE ADMIN* button again.
-+
-This time you should see the *Message: admin* output.
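-
-Alternatively, the same role assignment can be scripted with the Keycloak Admin CLI instead of the Admin Console. The following is a sketch only; it assumes `kcadm.sh` is available locally and that you enter the {project_name} administrator password when prompted:
-
-[source,bash,subs="attributes+,macros+"]
-----
-$ .../bin/kcadm.sh config credentials \
---server https://secure-sso-sso-app-demo.openshift.example.com/auth \
---realm master --user admin
-$ .../bin/kcadm.sh add-roles -r demo --uusername appuser --rolename admin
-----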
-
-[[Example-EAP-Auto]]
-=== Automatically registering an EAP application in {project_name} with an OpenID-Connect client
-This example prepares a {project_name} realm, role, and user credentials for an EAP project using an OpenID-Connect client adapter. These credentials are then provided to the EAP for OpenShift template for automatic {project_name} client registration. Once deployed, the {project_name} user can be used to authenticate to and access {appserver_name}.
-
-[NOTE]
-====
-This example uses an OpenID-Connect client, but a SAML client could also be used. See xref:../advanced_concepts/advanced_concepts.adoc#SSO-Clients[{project_name} Clients] and xref:../advanced_concepts/advanced_concepts.adoc#Auto-Man-Client-Reg[Automatic and Manual {project_name} Client Registration Methods] for more information on the differences between OpenID-Connect and SAML clients.
-====
-
-.Prerequisites
-* The steps described in the xref:Preparing-SSO-Authentication-for-OpenShift-Deployment[Preparing {project_name} Authentication for OpenShift Deployment] section have been performed already.
-
-
-==== Preparing {project_name} authentication for OpenShift deployment
-
-Log in to the OpenShift CLI with a user that holds the _cluster:admin_ role.
-
-. Create a new project:
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc new-project eap-app-demo
-----
-//. Create a service account to be used for the {project_name} deployment:
-//+
-//[source,bash,subs="attributes+,macros+"]
-//----
-//$ oc create serviceaccount eap-service-account
-//----
-. Add the `view` role to the link:{ocpdocs_default_service_accounts_link}[`default`] service account. This enables the service account to view all the resources in the `eap-app-demo` namespace, which is necessary for managing the cluster.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc policy add-role-to-user view system:serviceaccount:$(oc project -q):default
-----
-. The EAP template requires an xref:Configuring-Keystores[SSL keystore and a JGroups keystore]. +
-This example uses `keytool`, a utility included with the Java Development Kit, to generate self-signed certificates for these keystores. The following commands will prompt for passwords. +
-.. Generate a secure key for the SSL keystore:
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ keytool -genkeypair -alias https -storetype JKS -keystore eapkeystore.jks
-----
-.. Generate a secure key for the JGroups keystore:
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ keytool -genseckey -alias jgroups -storetype JCEKS -keystore eapjgroups.jceks
-----
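-+
-If you want to script these keystore steps non-interactively, `keytool` also accepts the passwords and the certificate subject as arguments. A sketch; the passwords and the `-dname` value below are placeholder examples to replace with your own:
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ keytool -genkeypair -alias https -storetype JKS -keystore eapkeystore.jks \
--storepass password -keypass password -dname "CN=eap-app-demo.openshift.example.com"
-$ keytool -genseckey -alias jgroups -storetype JCEKS -keystore eapjgroups.jceks \
--storepass password -keypass password
-----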
-. Generate the EAP for OpenShift secrets with the SSL and JGroup keystore files:
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc create secret generic eap-ssl-secret --from-file=eapkeystore.jks
-$ oc create secret generic eap-jgroup-secret --from-file=eapjgroups.jceks
-----
-. Add the EAP secrets to the `default` service account:
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc secrets link default eap-ssl-secret eap-jgroup-secret
-----
-
-==== Preparing the {project_name} credentials
-Log in to the encrypted {project_name} web server at *$$https://secure-sso-$$__<project-name>.<hostname>__/auth/admin* using the xref:sso-administrator-setup[administrator account] created during the {project_name} deployment.
-
-.Procedure
-
-*Create a Realm*
-
-. Hover your cursor over the realm namespace at the top of the sidebar and click *Add Realm*.
-. Enter a realm name (this example uses _eap-demo_) and click *Create*.
-
-*Copy the Public Key*
-
-In the newly created _eap-demo_ realm, click the *Keys* tab and copy the generated public key. This example uses the variable _realm-public-key_ for brevity. This is used later to deploy the {project_name}-enabled {appserver_name} image.
-
-*Create a Role*
-
-Create a role in {project_name} with a name that corresponds to the JEE role defined in the *web.xml* of the example EAP application. This role is assigned to an {project_name} _application user_ to authenticate access to user applications.
-
-. Click *Roles* in the *Configure* sidebar to list the roles for this realm. This is a new realm, so there should only be the default _offline_access_ role.
-. Click *Add Role*.
-. Enter the role name (this example uses the role _eap-user-role_) and click *Save*.
-
-*Create Users and Assign Roles*
-
-Create two users:
-- Assign the _realm management user_ the *realm-management* roles to handle automatic {project_name} client registration in the {project_name} server.
-- Assign the _application user_ the JEE role, created in the previous step, to authenticate access to user applications.
-
-Create the _realm management user_:
-
-. Click *Users* in the *Manage* sidebar to view the user information for the realm.
-. Click *Add User*.
-. Enter a valid *Username* (this example uses the user _eap-mgmt-user_) and click *Save*.
-. Edit the user configuration. Click the *Credentials* tab in the user space and enter a password for the user. After the password has been confirmed, you can click *Reset Password* to set the user password. A pop-up window prompts for additional confirmation.
-. Click *Role Mappings* to list the realm and client role configuration. In the *Client Roles* drop-down menu, select *realm-management* and add all of the available roles to the user. This provides the user {project_name} server rights that can be used by the {appserver_name} image to create clients.
-
-Create the _application user_:
-
-. Click *Users* in the *Manage* sidebar to view the user information for the realm.
-. Click *Add User*.
-. Enter a valid *Username* and any additional optional information for the _application user_ and click *Save*.
-. Edit the user configuration. Click the *Credentials* tab in the user space and enter a password for the user. After the password has been confirmed, you can click *Reset Password* to set the user password. A pop-up window prompts for additional confirmation.
-. Click *Role Mappings* to list the realm and client role configuration. In *Available Roles*, add the role created earlier.
-
-==== Deploy the {project_name}-enabled {appserver_name} image
-
-.Procedure
-
-. Return to the OpenShift web console and click *Add to project* to list the default image streams and templates.
-. Use the *Filter by keyword* search bar to limit the list to those that match _sso_. You may need to click *See all* to show the desired application template.
-. Select the *_eap71-sso-s2i_* image to list all of the deployment parameters. Include the following {project_name} parameters to configure the {project_name} credentials during the EAP build:
-+
-[cols="2*", options="header"]
-|===
-|Variable
-|Example Value
-
-|*_APPLICATION_NAME_*
-|_sso_
-
-|*_HOSTNAME_HTTPS_*
-|_secure-sample-jsp.eap-app-demo.openshift32.example.com_
-
-|*_HOSTNAME_HTTP_*
-|_sample-jsp.eap-app-demo.openshift32.example.com_
-
-|*_SOURCE_REPOSITORY_URL_*
-|_$$https://repository-example.com/developer/application$$_
-
-|*_SSO_URL_*
-|_$$https://secure-sso-sso-app-demo.openshift32.example.com/auth$$_
-
-|*_SSO_REALM_*
-|_eap-demo_
-
-|*_SSO_USERNAME_*
-|_eap-mgmt-user_
-
-|*_SSO_PASSWORD_*
-|_password_
-
-|*_SSO_PUBLIC_KEY_*
-|_realm-public-key_
-
-|*_HTTPS_KEYSTORE_*
-|_eapkeystore.jks_
-
-|*_HTTPS_PASSWORD_*
-|_password_
-
-|*_HTTPS_SECRET_*
-|_eap-ssl-secret_
-
-|*_JGROUPS_ENCRYPT_KEYSTORE_*
-|_eapjgroups.jceks_
-
-|*_JGROUPS_ENCRYPT_PASSWORD_*
-|_password_
-
-|*_JGROUPS_ENCRYPT_SECRET_*
-|_eap-jgroup-secret_
-|===
-. Click *Create* to deploy the {appserver_name} image.
-
-It may take several minutes for the {appserver_name} image to deploy.
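-
-If you prefer the CLI over the web console, the template can also be instantiated with `oc new-app`. This is a sketch under the assumption that the *_eap71-sso-s2i_* template is available in the `openshift` namespace; pass the remaining parameters from the table above in the same `-p` form:
-
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc new-app --template=eap71-sso-s2i \
--p APPLICATION_NAME=sso \
--p SSO_REALM=eap-demo \
--p SSO_USERNAME=eap-mgmt-user
-----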
-
-==== Log in to the {appserver_name} Server using {project_name}
-
-.Procedure
-
-. Access the {appserver_name} application server and click *Login*. You are redirected to the {project_name} login.
-. Log in using the {project_name} user created in the example. You are authenticated against the {project_name} server and returned to the {appserver_name} application server.
-
-
-[[Example-EAP-Manual]]
-=== Manually registering an EAP application in {project_name} with a SAML client
-
-This example prepares {project_name} realm, role, and user credentials for an EAP project and configures an EAP for OpenShift deployment. Once deployed, the {project_name} user can be used to authenticate and access {appserver_name}.
-
-[NOTE]
-====
-This example uses a SAML client, but an OpenID-Connect client could also be used. See xref:../advanced_concepts/advanced_concepts.adoc#SSO-Clients[{project_name} Clients] and xref:../advanced_concepts/advanced_concepts.adoc#Auto-Man-Client-Reg[Automatic and Manual {project_name} Client Registration Methods] for more information on the differences between SAML and OpenID-Connect clients.
-====
-
-.Prerequisites
-* The steps described in the xref:Preparing-SSO-Authentication-for-OpenShift-Deployment[Preparing {project_name} Authentication for OpenShift Deployment] section have been performed already.
-
-
-==== Preparing the {project_name} credentials
-
-.Procedure
-
-Log in to the encrypted {project_name} web server at *$$https://secure-sso-$$__<project-name>.<hostname>__/auth/admin* using the xref:sso-administrator-setup[administrator account] created during the {project_name} deployment.
-
-*Create a Realm*
-
-. Hover your cursor over the realm namespace (default is *Master*) at the top of the sidebar and click *Add Realm*.
-. Enter a realm name (this example uses _saml-demo_) and click *Create*.
-
-*Copy the Public Key*
-
-In the newly created _saml-demo_ realm, click the *Keys* tab and copy the generated public key. This example uses the variable _realm-public-key_ for brevity. This is needed later to deploy the {project_name}-enabled {appserver_name} image.
-
-*Create a Role*
-
-Create a role in {project_name} with a name that corresponds to the JEE role defined in the *web.xml* of the example EAP application. This role will be assigned to an {project_name} _application user_ to authenticate access to user applications.
-
-. Click *Roles* in the *Configure* sidebar to list the roles for this realm. This is a new realm, so there should only be the default _offline_access_ role.
-. Click *Add Role*.
-. Enter the role name (this example uses the role _saml-user-role_) and click *Save*.
-
-*Create Users and Assign Roles*
-
-Create two users:
-- Assign the _realm management user_ the *realm-management* roles to handle automatic {project_name} client registration in the {project_name} server.
-- Assign the _application user_ the JEE role, created in the previous step, to authenticate access to user applications.
-
-Create the _realm management user_:
-
-. Click *Users* in the *Manage* sidebar to view the user information for the realm.
-. Click *Add User*.
-. Enter a valid *Username* (this example uses the user _app-mgmt-user_) and click *Save*.
-. Edit the user configuration. Click the *Credentials* tab in the user space and enter a password for the user. After the password has been confirmed, you can click *Reset Password* to set the user password. A pop-up window prompts for additional confirmation.
-////
-Need for the SAML?
-. Click *Role Mappings* to list the realm and client role configuration. In the *Client Roles* drop-down menu, select *realm-management* and add all of the available roles to the user. This provides the user {project_name} server rights that can be used by the {appserver_name} image to create clients.
-////
-
-Create the _application user_:
-
-. Click *Users* in the *Manage* sidebar to view the user information for the realm.
-. Click *Add User*.
-. Enter a valid *Username* and any additional optional information for the _application user_ and click *Save*.
-. Edit the user configuration. Click the *Credentials* tab in the user space and enter a password for the user. After the password has been confirmed, you can click *Reset Password* to set the user password. A pop-up window prompts for additional confirmation.
-. Click *Role Mappings* to list the realm and client role configuration. In *Available Roles*, add the role created earlier.
-
-*Create and Configure a SAML Client*:
-
-Clients are {project_name} entities that request user authentication. This example configures a SAML client to handle authentication for the EAP application. This section saves two files, *keystore-saml.jks* and *keycloak-saml-subsystem.xml*, that are needed later in the procedure.
-
-Create the SAML Client:
-
-. Click *Clients* in the *Configure* sidebar to list the clients in the realm. Click *Create*.
-. Enter a valid *Client ID*. This example uses _sso-saml-demo_.
-. In the *Client Protocol* drop-down menu, select *saml*.
-. Enter the *Root URL* for the application. This example uses _$$https://demoapp-eap-app-demo.openshift32.example.com$$_.
-. Click *Save*.
-
-Configure the SAML Client:
-
-In the *Settings* tab, set the *Root URL* and the *Valid Redirect URLs* for the new *_sso-saml-demo_* client:
-
-. For the *Root URL*, enter the same address used when creating the client. This example uses _$$https://demoapp-eap-app-demo.openshift32.example.com$$_.
-. For the *Valid Redirect URLs*, enter an address for users to be redirected to when they log in or out. This example uses a redirect address relative to the root: _$$https://demoapp-eap-app-demo.openshift32.example.com/*$$_.
-
-Export the SAML Keys:
-
-. Click the *SAML Keys* tab in the _sso-saml-demo_ client space and click *Export*.
-. For this example, leave the *Archive Format* as *JKS*. This example uses the default *Key Alias* of _sso-saml-demo_ and default *Realm Certificate Alias* of _saml-demo_.
-. Enter the *Key Password* and the *Store Password*. This example uses _password_ for both.
-. Click *Download* and save the *keystore-saml.jks* file for use later.
-. Click the *_sso-saml-demo_* client to return to the client space ready for the next step.
-
-Download the Client Adapter:
-
-. Click *Installation*.
-. Use the *Format Option* drop-down menu to select a format. This example uses *Keycloak SAML Wildfly/JBoss Subsystem*.
-. Click *Download* and save the file *keycloak-saml-subsystem.xml*.
-
-The *keystore-saml.jks* file will be used with the other EAP keystores in the next section to create an OpenShift secret for the EAP application project. Copy the *keystore-saml.jks* file to an OpenShift node. +
-The *keycloak-saml-subsystem.xml* file will be modified and used in the application deployment. Copy it into the */configuration* folder of the application as *secure-saml-deployments*.
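-
-For example, assuming the application source is checked out in the current working directory and the file was downloaded there as well, the copy and rename step could look like this (the paths are illustrative):
-
-[source,bash,subs="attributes+,macros+"]
-----
-$ mkdir -p configuration
-$ cp keycloak-saml-subsystem.xml configuration/secure-saml-deployments
-----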
-
-==== Preparing {project_name} authentication for OpenShift deployment
-Log in to the OpenShift CLI with a user that holds the _cluster:admin_ role.
-
-.Procedure
-
-. Create a new project:
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc new-project eap-app-demo
-----
-//. Create a service account to be used for the SSO deployment:
-//+
-//[source,bash,subs="attributes+,macros+"]
-//----
-//$ oc create serviceaccount app-service-account
-//----
-. Add the `view` role to the link:{ocpdocs_default_service_accounts_link}[`default`] service account. This enables the service account to view all the resources in the `eap-app-demo` namespace, which is necessary for managing the cluster.
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc policy add-role-to-user view system:serviceaccount:$(oc project -q):default
-----
-. The EAP template requires an xref:Configuring-Keystores[SSL keystore and a JGroups keystore]. +
-This example uses `keytool`, a utility included with the Java Development Kit, to generate self-signed certificates for these keystores. The following commands will prompt for passwords. +
-.. Generate a secure key for the SSL keystore:
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ keytool -genkeypair -alias https -storetype JKS -keystore eapkeystore.jks
-----
-.. Generate a secure key for the JGroups keystore:
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ keytool -genseckey -alias jgroups -storetype JCEKS -keystore eapjgroups.jceks
-----
-. Generate the EAP for OpenShift secrets with the SSL and JGroup keystore files:
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc create secret generic eap-ssl-secret --from-file=eapkeystore.jks
-$ oc create secret generic eap-jgroup-secret --from-file=eapjgroups.jceks
-----
-. Add the EAP application secrets to the `default` service account:
-+
-[source,bash,subs="attributes+,macros+"]
-----
-$ oc secrets link default eap-ssl-secret eap-jgroup-secret
-----
-
-[[modified-saml-xml]]
-==== Modifying the *secure-saml-deployments* file
-
-.Prerequisites
-
-* The *keycloak-saml-subsystem.xml* file, exported from the {project_name} client in a previous section, should have been copied into the */configuration* folder of the application and renamed *secure-saml-deployments*. EAP searches for this file when it starts and copies its contents into the {project_name} SAML adapter configuration in the *standalone-openshift.xml* file.
-
-.Procedure
-
-. Open the */configuration/secure-saml-deployments* file in a text editor.
-. Replace the *YOUR-WAR.war* value of the *secure-deployment name* tag with the name of the application *.war* file. This example uses _sso-saml-demo.war_.
-. Replace the *SPECIFY YOUR LOGOUT PAGE!* value of the *logout page* tag with the URL to which users are redirected when they log out of the application. This example uses */index.jsp*.
-. Delete the `<PrivateKeyPem>` and `<CertificatePem>` tags and keys and replace them with the keystore information:
-+
-[source,bash,subs="attributes+,macros+"]
-----
-...
-<Keys>
-  <Key signing="true">
-    <KeyStore file="/etc/eap-secret-volume/keystore-saml.jks" password="password">
-      <PrivateKey alias="sso-saml-demo" password="password"/>
-      <Certificate alias="sso-saml-demo"/>
-    </KeyStore>
-  </Key>
-</Keys>
-...
-----
-+
-The mount path of the *keystore-saml.jks* (in this example *_/etc/eap-secret-volume/keystore-saml.jks_*) can be specified in the application template with the parameter *EAP_HTTPS_KEYSTORE_DIR*. +
-The aliases and passwords for the *PrivateKey* and the *Certificate* were configured when the SAML Keys were exported from the {project_name} client.
-. Delete the second `<CertificatePem>` tag and key and replace it with the realm certificate information:
-+
-[source,bash,subs="attributes+,macros+"]
-----
-...
-<IDP entityID="external">
-  <Keys>
-    <Key signing="true">
-      <KeyStore file="/etc/eap-secret-volume/keystore-saml.jks" password="password">
-        <Certificate alias="saml-demo"/>
-      </KeyStore>
-    </Key>
-  </Keys>
-...
-----
-+
-The certificate alias and password were configured when the SAML Keys were exported from the {project_name} client.
-. Save and close the */configuration/secure-saml-deployments* file.
-
-==== Configuring SAML Client Registration in the application *web.xml*
-
-The client type must also be specified by the `<auth-method>` key in the application *web.xml*. This file is read by the image at deployment.
-
-Open the application *web.xml* file and ensure it includes the following:
-[source,bash,subs="attributes+,macros+"]
-----
-...
-<login-config>
-    <auth-method>KEYCLOAK-SAML</auth-method>
-</login-config>
-...
-----
-
-==== Deploying the application
-
-You do not need to include any {project_name} configuration for the image because that has been configured in the application itself. Navigating to the application login page redirects you to the {project_name} login. Log in to the application through {project_name} using the _application user_ created earlier.
-
-
diff --git a/pom.xml b/pom.xml
index 92e28a08fa..3c37b42daa 100644
--- a/pom.xml
+++ b/pom.xml
@@ -36,7 +36,6 @@
<module>securing_apps</module>
<module>server_admin</module>
<module>server_development</module>
-<module>server_installation</module>
<module>release_notes</module>
<module>upgrading</module>
<module>aggregation</module>
@@ -55,11 +54,6 @@
<module>master</module>
<module>rhsso-images</module>
-
-<module>getting_started</module>
-<module>openshift</module>
-<module>openshift-openj9</module>
-<module>tests</module>
diff --git a/release_notes/topics/15_1_0.adoc b/release_notes/topics/15_1_0.adoc
index a9136b627f..bb801ab417 100644
--- a/release_notes/topics/15_1_0.adoc
+++ b/release_notes/topics/15_1_0.adoc
@@ -41,5 +41,4 @@ Thanks to https://github.com/rhyamada[Ronaldo Yamada] for the contribution.
== Deprecated features in the {project_operator}
With this release, we have deprecated and/or marked as unsupported some features in the {project_operator}. This
-concerns the Backup CRD and the operator managed Postgres Database. For more details, please see the
-link:{installguide_link}#_operator_production_usage[related chapter in the {installguide_name}].
+concerns the Backup CRD and the operator-managed Postgres Database.
diff --git a/securing_apps/topics/docker/docker-overview.adoc b/securing_apps/topics/docker/docker-overview.adoc
index 8a02aa2a1c..db1e4a0263 100644
--- a/securing_apps/topics/docker/docker-overview.adoc
+++ b/securing_apps/topics/docker/docker-overview.adoc
@@ -1,7 +1,7 @@
== Configuring a Docker registry to use {project_name}
-NOTE: Docker authentication is disabled by default. To enable see link:{installguide_profile_link}[{installguide_profile_name}].
+NOTE: Docker authentication is disabled by default. To enable it, see the https://www.keycloak.org/server/features[Enabling and disabling features] guide.
This section describes how you can configure a Docker registry to use {project_name} as its authentication server.
diff --git a/securing_apps/topics/oidc/fapi-support.adoc b/securing_apps/topics/oidc/fapi-support.adoc
index 5d742c218d..853eae0c4b 100644
--- a/securing_apps/topics/oidc/fapi-support.adoc
+++ b/securing_apps/topics/oidc/fapi-support.adoc
@@ -45,5 +45,5 @@ the cipher suites and TLS protocol versions used. To match these requirements, y
key-manager="kcKeyManager" trust-manager="kcTrustManager" \
cipher-suite-filter="TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_DHE_RSA_WITH_AES_128_GCM_SHA256,TLS_DHE_RSA_WITH_AES_256_GCM_SHA384" protocols="TLSv1.2" />
-The references to `kcKeyManager` and `kcTrustManager` refers to the corresponding Keystore and Truststore. See the documentation of Wildfly Elytron subsystem for more details and also
-other parts of {project_name} documentation such as link:{installguide_link}#_network[Network Setup Section] or link:{adminguide_link}#_x509[X.509 Authentication Section].
+As confidential information is being exchanged, all interactions shall be encrypted with TLS (HTTPS). Moreover, there are some requirements in the FAPI specification for
+the cipher suites and TLS protocol versions used. To match these requirements, you can consider configuring allowed ciphers. This configuration can be done by setting the `https-protocols` and `https-cipher-suites` options. For more details, see the https://www.keycloak.org/server/enabletls[Configuring TLS] guide.
diff --git a/securing_apps/topics/oidc/javascript-adapter.adoc b/securing_apps/topics/oidc/javascript-adapter.adoc
index 83dc49e297..373298f33e 100644
--- a/securing_apps/topics/oidc/javascript-adapter.adoc
+++ b/securing_apps/topics/oidc/javascript-adapter.adoc
@@ -336,7 +336,7 @@ the browser is restrictive regarding cookies. The adapter tries to detect this s
===== Browsers with "SameSite=Lax by Default" Policy
All features are supported if SSL / TLS connection is configured on the {project_name} side as well as on the application
-side. See link:{installguide_link}#_setting_up_ssl[configuring the SSL / TLS]. Affected is for example Chrome starting with
+side. For example, Chrome is affected starting with
version 84.
===== Browsers with Blocked Third-Party Cookies
diff --git a/securing_apps/topics/token-exchange/token-exchange.adoc b/securing_apps/topics/token-exchange/token-exchange.adoc
index f3665ecbdb..79719584b9 100644
--- a/securing_apps/topics/token-exchange/token-exchange.adoc
+++ b/securing_apps/topics/token-exchange/token-exchange.adoc
@@ -9,7 +9,7 @@ include::../templates/techpreview.adoc[]
[NOTE]
====
-In order to use token exchange you should also enable the `token_exchange` feature. Please, take a look at link:{installguide_profile_link}[{installguide_profile_name}].
+In order to use token exchange you should also enable the `token_exchange` feature. For details, see the https://www.keycloak.org/server/features[Enabling and disabling features] guide.
====
=== How token exchange works
diff --git a/server_admin/keycloak-images/add-event-types.png b/server_admin/images/add-event-types.png
similarity index 100%
rename from server_admin/keycloak-images/add-event-types.png
rename to server_admin/images/add-event-types.png
diff --git a/server_admin/keycloak-images/event-listeners.png b/server_admin/images/event-listeners.png
similarity index 100%
rename from server_admin/keycloak-images/event-listeners.png
rename to server_admin/images/event-listeners.png
diff --git a/server_admin/keycloak-images/search-user-event.png b/server_admin/images/search-user-event.png
similarity index 100%
rename from server_admin/keycloak-images/search-user-event.png
rename to server_admin/images/search-user-event.png
diff --git a/server_admin/keycloak-images/user-events-settings.png b/server_admin/images/user-events-settings.png
similarity index 100%
rename from server_admin/keycloak-images/user-events-settings.png
rename to server_admin/images/user-events-settings.png
diff --git a/server_admin/keycloak-images/user-events.png b/server_admin/images/user-events.png
similarity index 100%
rename from server_admin/keycloak-images/user-events.png
rename to server_admin/images/user-events.png
diff --git a/server_admin/topics.adoc b/server_admin/topics.adoc
index 8dc75c6736..91f0148e2c 100644
--- a/server_admin/topics.adoc
+++ b/server_admin/topics.adoc
@@ -6,9 +6,7 @@ include::topics/assembly-creating-first-admin.adoc[]
include::topics/admin-console.adoc[]
include::topics/user-federation.adoc[]
include::topics/user-federation/ldap.adoc[]
-ifeval::["{kc_dist}" == "wildfly"]
include::topics/user-federation/sssd.adoc[]
-endif::[]
include::topics/user-federation/custom.adoc[]
include::topics/assembly-managing-users.adoc[]
include::topics/sessions.adoc[]
@@ -22,20 +20,10 @@ include::topics/assembly-roles-groups.adoc[]
include::topics/authentication.adoc[]
include::topics/authentication/password-policies.adoc[]
include::topics/authentication/otp-policies.adoc[]
-ifeval::[{project_community}==true]
include::topics/authentication/flows.adoc[]
-endif::[]
-ifeval::[{project_product}==true]
-include::topics/authentication/flows-prod.adoc[]
-endif::[]
include::topics/authentication/kerberos.adoc[]
include::topics/authentication/x509.adoc[]
-ifeval::[{project_community}==true]
include::topics/authentication/webauthn.adoc[]
-endif::[]
-ifeval::[{project_product}==true]
-include::topics/authentication/webauthn-prod.adoc[]
-endif::[]
include::topics/authentication/recovery-codes.adoc[]
include::topics/authentication/conditions.adoc[]
include::topics/identity-broker.adoc[]
@@ -71,14 +59,9 @@ include::topics/admin-console-permissions/fine-grain.adoc[]
include::topics/assembly-managing-clients.adoc[]
include::topics/vault.adoc[]
include::topics/events.adoc[]
-ifeval::["{kc_dist}" == "wildfly"]
-include::topics/export-import.adoc[]
-endif::[]
include::topics/threat.adoc[]
include::topics/threat/host.adoc[]
-ifeval::["{kc_dist}" == "wildfly"]
include::topics/threat/admin.adoc[]
-endif::[]
include::topics/threat/brute-force.adoc[]
include::topics/threat/read-only-attributes.adoc[]
include::topics/threat/clickjacking.adoc[]
diff --git a/server_admin/topics/MigrationFromOlderVersions.adoc b/server_admin/topics/MigrationFromOlderVersions.adoc
deleted file mode 100644
index f3c0dec475..0000000000
--- a/server_admin/topics/MigrationFromOlderVersions.adoc
+++ /dev/null
@@ -1,206 +0,0 @@
-
-// MOVED TO SEPARATE UPGRADE GUIDE!
-// Add Keycloak migration changes to upgrading/topics/keycloak/changes.adoc
-
-
-== Migration from older versions
-
-To upgrade to a new version of Keycloak, first download and install the new version. Once the new version
-is installed, migrate the config files, database, keycloak-server.json, providers, themes, and applications to the new installation.
-
-This chapter contains some general migration details that are applicable to all versions. There are also instructions for
-migration that are only applicable to a specific release. If you are upgrading from a very old version, you need to go
-through all the intermediate version-specific migrations.
-
-It's highly recommended that you back up your database prior to upgrading Keycloak.
-
-Migration from a candidate release (CR) to a Final release is not supported. We do, however, recommend that you test
-migration for a CR so we can resolve any potential issues before the Final is released.
-
-=== Migration of config files using migration scripts
-As part of your migration, you should use your old versions of config files `standalone.xml`, `standalone-ha.xml`, and/or `domain.xml`.
-These files typically contain configuration that is unique to your own environment. So, the first thing to do as part
-of a migration is to copy those files to the new Keycloak server installation, replacing the default versions.
-
-If migrating from Keycloak version 2.1.0 or older, you should also copy `keycloak-server.json` to `standalone/configuration`
-and/or `domain/configuration`.
-
-There will be configuration in those config files that pertains to Keycloak, which will need to be upgraded. For that,
-you should run one or more of the appropriate upgrade scripts. They are `migrate-standalone.cli`, `migrate-standalone-ha.cli`,
-`migrate-domain-standalone.cli` and `migrate-domain-clustered.cli`.
-
-The server should not be running when you execute a migration script.
-
-.Example for running migrate-standalone.cli
-[source]
-----
-$ .../bin/jboss-cli.sh --file=migrate-standalone.cli
-----
-
-Note that for upgrading domain.xml, there are two migration scripts, `migrate-domain-standalone.cli` and
-`migrate-domain-clustered.cli`. These scripts migrate separate profiles within your domain.xml that were
-originally shipped with Keycloak. If you have changed the name of the profile, there is a variable near
-the top of the script that you will need to change.
-
-If you are migrating `keycloak-server.json`, this will also be migrated as needed. If you prefer,
-you can migrate `keycloak-server.json` beforehand using the instructions in the next section.
-
-One thing to note is that the migration scripts only work for Keycloak versions 1.8.1 forward. If migrating an older
-version, you will need to manually upgrade your config files to at least be 1.8.1 compliant.
-
-Lastly, you may want to examine the contents of the scripts before running. They show exactly what will be changed
-for each version. They also have values at the top of the script that you may need to change based on your
-environment.
-
-=== Migrate and convert keycloak-server.json
-If you ran one of the migration scripts from the previous section, then you have probably already migrated
-your keycloak-server.json. This section is kept here for reference. If you prefer, you can follow the
-instructions in this section before running the migration scripts above.
-
-You should copy `standalone/configuration/keycloak-server.json` from the old version to make sure any configuration changes you've done are added to the new installation.
-The version specific section below will list any changes done to this file that you have to do when upgrading from one version to another.
-
-Keycloak is moving away from the use of keycloak-server.json. For this release, the server will still work
-if this file is in `standalone/configuration/keycloak-server.json`, but it is highly recommended that
-you convert to using standalone.xml, standalone-ha.xml, or domain.xml for configuration. We may soon remove
-support for keycloak-server.json.
-
-To convert your keycloak-server.json, you will use a jboss-cli operation called `migrate-json`.
-It is recommended that you run this operation while the server is not running.
-
-The `migrate-json` operation assumes you are migrating with an xml configuration file from an old version. For example,
-you should not try to migrate your keycloak-server.json to a standalone.xml file that already contains `<spi>` and `<theme>`
-tags under the keycloak subsystem. Before migration, your keycloak subsystem should look like the one below:
-
-.standalone.xml
-[source,xml]
-----
-<subsystem xmlns="urn:jboss:domain:keycloak-server:1.1">
-    <web-context>auth</web-context>
-</subsystem>
-----
-
-The jboss-cli tool is discussed in detail in link:{installguide_link}[{installguide_name}].
-
-==== migrate-json in Standalone Mode
-
-For standalone, you will issue the `migrate-json` operation in `embed` mode without
-the server running.
-
-.Standalone keycloak-server.json migration
-[source]
-----
-$ .../bin/jboss-cli.sh
-[disconnected /] embed-server --server-config=standalone.xml
-[standalone@embedded /] /subsystem=keycloak-server/:migrate-json
-----
-The `migrate-json` operation will look for your keycloak-server.json file in
-the `standalone/configuration` directory. You also have the option of using
-the `file` argument as shown in the domain mode example below.
-
-==== migrate-json in Domain Mode
-
-For a domain, you will stop the Keycloak server and
-issue the `migrate-json` operation against the running domain controller.
-If you choose not to stop the Keycloak server, the operation will still work,
-but your changes will not take effect until the Keycloak server is restarted.
-
-Domain mode migration requires that you use the `file` parameter to upload your
-keycloak-server.json from a local directory. The example below shows connecting
-to localhost. You will need to substitute the address of your domain controller.
-
-.Domain mode keycloak-server.json migration
-[source]
-----
-$ .../bin/jboss-cli.sh -c --controller=localhost:9990
-[domain@localhost:9990 /] cd profile=auth-server-clustered
-[domain@localhost:9990 profile=auth-server-clustered] cd subsystem=keycloak-server
-[domain@localhost:9990 subsystem=keycloak-server] :migrate-json(file="./keycloak-server.json")
-----
-You will need to repeat the `migrate-json` operation for each profile containing a `keycloak-server` subsystem.
-
-=== Migrate database
-
-Keycloak can automatically migrate the database schema, or you can choose to do it manually.
-
-==== Relational database
-
-To enable automatic upgrading of the database schema, set the `migrationStrategy` property to `update` for
-the default `connectionsJpa` provider:
-
-.Edit xml
-[source,xml]
-----
-<spi name="connectionsJpa">
-    <provider name="default" enabled="true">
-        <properties>
-            ...
-            <property name="migrationStrategy" value="update"/>
-        </properties>
-    </provider>
-</spi>
-----
-
-.Equivalent CLI command for above
-[source]
-----
-/subsystem=keycloak-server/spi=connectionsJpa/provider=default/:map-put(name=properties,key=migrationStrategy,value=update)
-----
-
-When you start the server with this setting, your database will automatically be migrated if the database schema has
-changed in the new version.
-
-To manually migrate the database, set the `migrationStrategy` to `manual`. When you start the server with this
-configuration, it checks if the database needs migration. If changes are needed, the required changes are written
-to an SQL file that you can review and manually run against the database.
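-
-Mirroring the CLI example above, the `manual` strategy can be set the same way; a sketch, identical except for the value:
-
-.Equivalent CLI command for manual migration
-[source]
-----
-/subsystem=keycloak-server/spi=connectionsJpa/provider=default/:map-put(name=properties,key=migrationStrategy,value=manual)
-----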
-
-There's also the option to disable migration by setting the `migrationStrategy` to `validate`. With this configuration,
-the database will be checked at startup, and if it is not migrated, the server will exit.
-
-==== Mongo
-
-Mongo doesn't have a schema, but there may still be things like collections and indexes that are added in new releases.
-To enable automatic creation of these, set the `migrationStrategy` property to `update` for the default `connectionsMongo`
-provider:
-
-.Edit xml
-[source,xml]
-----
-<spi name="connectionsMongo">
-    <provider name="default" enabled="true">
-        <properties>
-            ...
-            <property name="migrationStrategy" value="update"/>
-        </properties>
-    </provider>
-</spi>
-----
-
-.Equivalent CLI command for above
-[source]
-----
-/subsystem=keycloak-server/spi=connectionsMongo/provider=default/:map-put(name=properties,key=migrationStrategy,value=update)
-----
-
-The Mongo provider does not have the option to manually apply the required changes.
-
-There's also the option to disable migration by setting the `migrationStrategy` to `validate`. With this configuration,
-the database will be checked at startup, and if it is not migrated, the server will exit.
-
-=== Migrate providers
-
-If you have implemented any SPI providers, you need to copy them to the new server.
-The version specific section below will mention if any of the SPIs have changed.
-If they have, you may have to update your code accordingly.
-
-=== Migrate themes
-
-If you have created a custom theme, you need to copy it to the new server.
-The version specific section below will mention if changes have been made to themes.
-If so, you may have to update your themes accordingly.
-
-=== Migrate application
-
-If you deploy applications directly to the Keycloak server, you should copy them to the new server.
-For any applications, including those not deployed directly to the Keycloak server, you should upgrade the adapter.
-The version specific section below will mention if any changes are required to applications.
\ No newline at end of file
diff --git a/server_admin/topics/assembly-creating-first-admin.adoc b/server_admin/topics/assembly-creating-first-admin.adoc
index 2d4ff3a298..ef206dbba5 100644
--- a/server_admin/topics/assembly-creating-first-admin.adoc
+++ b/server_admin/topics/assembly-creating-first-admin.adoc
@@ -19,7 +19,6 @@ image:{project_images}/initial-welcome-page.png[Welcome page]
=== Creating the account remotely
-ifeval::["{kc_dist}" == "quarkus"]
If you cannot access the server from a `localhost` address or just want to start {project_name} from the command line, use the `KEYCLOAK_ADMIN` and `KEYCLOAK_ADMIN_PASSWORD` environment variables to create an initial admin account.
For example:
@@ -30,39 +29,3 @@ export KEYCLOAK_ADMIN_PASSWORD=
bin/kc.[sh|bat] start
----
-endif::[]
-
-ifeval::["{kc_dist}" == "wildfly"]
-If you cannot access the server from a `localhost` address, or just want to start {project_name} from the command line, use the `.../bin/add-user-keycloak` script.
-
-.Add-user-keycloak script
-image:{project_images}/add-user-script.png[]
-
-The parameters differ based on if you use the standalone operation mode or domain operation mode. For standalone mode, here is how you use the script.
-
-.Linux/Unix
-[source]
-----
-$ .../bin/add-user-keycloak.sh -r master -u -p
-----
-
-.Windows
-[source]
-----
-> ...\bin\add-user-keycloak.bat -r master -u -p
-----
-
-For domain mode, you have to point the script to one of your server hosts using the `-sc` switch.
-
-.Linux/Unix
-[source]
-----
-$ .../bin/add-user-keycloak.sh --sc domain/servers/server-one/configuration -r master -u -p
-----
-
-.Windows
-[source]
-----
-> ...\bin\add-user-keycloak.bat --sc domain/servers/server-one/configuration -r master -u -p
-----
-endif::[]
diff --git a/server_admin/topics/assembly-managing-clients.adoc b/server_admin/topics/assembly-managing-clients.adoc
index 94d0c40d2d..1196e9fe4c 100644
--- a/server_admin/topics/assembly-managing-clients.adoc
+++ b/server_admin/topics/assembly-managing-clients.adoc
@@ -10,12 +10,7 @@ of client is one that is requesting an access token so that it can invoke other
This section discusses various aspects around configuring clients and various ways to do it.
include::clients/assembly-client-oidc.adoc[leveloffset=+2]
-ifeval::[{project_community}==true]
include::clients/saml/proc-creating-saml-client.adoc[leveloffset=+2]
-endif::[]
-ifeval::[{project_product}==true]
-include::clients/saml/proc-creating-saml-client-prod.adoc[leveloffset=+2]
-endif::[]
include::clients/saml/idp-initiated-login.adoc[leveloffset=+3]
include::clients/saml/proc-using-an-entity-descriptor.adoc[leveloffset=+3]
include::clients/con-client-links.adoc[leveloffset=+2]
diff --git a/server_admin/topics/authentication/flows-prod.adoc b/server_admin/topics/authentication/flows-prod.adoc
deleted file mode 100644
index bdf78caf0d..0000000000
--- a/server_admin/topics/authentication/flows-prod.adoc
+++ /dev/null
@@ -1,444 +0,0 @@
-[[_authentication-flows]]
-
-=== Authentication flows
-
-An _authentication flow_ is a container of authentications, screens, and actions during log in, registration, and other {project_name} workflows.
-To view all the flows, actions, and checks each flow requires:
-
-.Procedure
-. Click *Authentication* in the menu.
-. Click the *Flows* tab.
-
-==== Built-in flows
-
-{project_name} has several built-in flows. You cannot modify these flows, but you can alter the flow's requirements to suit your needs.
-
-In the drop-down list, select *browser* to display the Browser Flow screen.
-
-.Browser flow
-image:{project_images}/browser-flow.png[Browser Flow]
-
-Hover over the question-mark tooltip of the drop-down list to view a description of the flow. Two sections exist.
-
-===== Auth type
-The name of the authentication or the action to execute. If an authentication is indented, it is in a sub-flow. It may or may not be executed, depending on the behavior of its parent.
-
-. Cookie
-+
-The first time a user logs in successfully, {project_name} sets a session cookie. If the cookie is already set, this authentication type is successful. Since the cookie provider returned success and each execution at this level of the flow is _alternative_, {project_name} does not perform any other execution. This results in a successful login.
-
-. Kerberos
-+
-This authenticator is disabled by default and is skipped during the Browser Flow.
-
-. Identity Provider Redirector
-+
-This action is configured through the *Actions* > *Config* link. It redirects to another IdP for <<_identity_broker, identity brokering>>.
-
-. Forms
-+
-Since this sub-flow is marked as _alternative_, it will not be executed if the *Cookie* authentication type passed. This sub-flow contains an additional authentication type that needs to be executed. {project_name} loads the executions for this sub-flow and processes them.
-
-The first execution is the *Username Password Form*, an authentication type that renders the username and password page. It is marked as _required_, so the user must enter a valid username and password.
-
-The second execution is the *Browser - Conditional OTP* sub-flow. This sub-flow is _conditional_ and executes depending on the result of the *Condition - User Configured* execution. If the result is true, {project_name} loads the executions for this sub-flow and processes them.
-
-The next execution is the *Condition - User Configured* authentication. This authentication checks if {project_name} has configured other executions in the flow for the user. The *Browser - Conditional OTP* sub-flow executes only when the user has a configured OTP credential.
-
-The final execution is the *OTP Form*. {project_name} marks this execution as _required_ but it runs only when the user has an OTP credential set up because of the setup in the _conditional_ sub-flow. If not, the user does not see an OTP form.
-
-===== Requirement
-A set of radio buttons that control how an action executes.
-
-[[_execution-requirements]]
-====== Required
-
-All _Required_ elements in the flow must be executed successfully, in sequence. The flow terminates if a required element fails.
-
-====== Alternative
-
-Only a single element must successfully execute for the flow to evaluate as successful. Because the _Required_ flow elements are sufficient to mark a flow as successful, any _Alternative_ flow element within a flow containing _Required_ flow elements will not execute.
-
-====== Disabled
-
-The element does not count toward marking the flow as successful.
-
-====== Conditional
-
-This requirement type is only set on sub-flows.
-
-* A _Conditional_ sub-flow contains executions. These executions must evaluate to logical statements.
-* If all executions evaluate as _true_, the _Conditional_ sub-flow acts as _Required_.
-* If all executions evaluate as _false_, the _Conditional_ sub-flow acts as _Disabled_.
-* If you do not set an execution, the _Conditional_ sub-flow acts as _Disabled_.
-* If a flow contains executions and the flow is not set to _Conditional_, {project_name} does not evaluate the executions, and the executions are considered functionally _Disabled_.
-
-==== Creating flows
-
-Important functionality and security considerations apply when you design a flow.
-
-To create a flow, perform the following:
-
-.Procedure
-. Click *Authentication* in the menu.
-. Click *New*.
-
-[NOTE]
-====
-You can copy and then modify an existing flow. Select a flow, click *Copy*, and enter a name for the new flow.
-====
-
-When creating a new flow, you must create a top-level flow first with the following options:
-
-Alias::
- The name of the flow.
-Description::
- The description you can set to the flow.
-Top-Level Flow Type::
- The type of flow. The type *client* is used only for the authentication of clients (applications). For all other cases, choose *generic*.
-
-.Create a top-level flow
-image:{project_images}/Create-top-level-flow.png[Top Level Flow]
-
-When {project_name} has created the flow, {project_name} displays the *Delete*, *Add execution*, and *Add flow* buttons.
-
-.An empty new flow
-image:{project_images}/New-flow.png[New Flow]
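-
-Flows can also be created without the Admin Console. The following Keycloak Admin CLI call is a sketch only; it assumes `kcadm.sh` is already authenticated against the server, and that `basic-flow` is the provider id behind the *generic* flow type:
-
-[source]
-----
-$ .../bin/kcadm.sh create authentication/flows -r demorealm \
--s alias="My Flow" -s description="An example flow" \
--s providerId=basic-flow -s topLevel=true -s builtIn=false
-----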
-
-Three factors determine the behavior of flows and sub-flows.
-
-* The structure of the flow and sub-flows.
-* The executions within the flows.
-* The requirements set within the sub-flows and the executions.
-
-Executions have a wide variety of actions, from sending a reset email to validating an OTP. Add executions with the *Add execution* button. Hover over the question mark next to *Provider* to see a description of the execution.
-
-.Adding an authentication execution
-image:{project_images}/Create-authentication-execution.png[Adding an Authentication Execution]
-
-Two types of executions exist, _automatic executions_ and _interactive executions_. _Automatic executions_ are similar to the *Cookie* execution and will automatically
-perform their action in the flow. _Interactive executions_ halt the flow to get input. Executions that complete successfully set their status to _success_. For a flow to complete, it needs at least one execution with a status of _success_.
-
-You can add sub-flows to top-level flows with the *Add flow* button. The *Add flow* button displays the *Create Execution Flow* page. This page is similar to the *Create Top Level Form* page. The difference is that the *Flow Type* can be *generic* (default) or *form*. The *form* type constructs a sub-flow that generates a form for the user, similar to the built-in *Registration* flow.
-A sub-flow's success depends on how its executions evaluate, including its contained sub-flows. See the <<_execution-requirements, execution requirements section>> for an in-depth explanation of how sub-flows work.
-
-[NOTE]
-====
-After adding an execution, check the requirement has the correct value.
-====
-
-All elements in a flow have a *Delete* option in the *Actions* menu. This action removes the element from the flow.
-Executions have a *Config* menu option to configure the execution. It is also possible to add executions and sub-flows to sub-flows with the *Add execution* and *Add flow* menu options.
-
-Since the order of execution is important, you can move executions and sub-flows up and down within their flows using the up and down buttons beside their names.
-
-[WARNING]
-====
-Make sure to properly test your configuration when you configure the authentication flow to confirm that no security holes exist in your setup. We recommend that you test various
-corner cases. For example, consider testing the authentication behavior for a user when you remove various credentials from the user's account before authentication.
-
-As an example, when 2nd-factor authenticators, such as OTP Form or WebAuthn Authenticator, are configured in the flow as REQUIRED and the user does not have a credential of the particular
-type, the user will be able to set up that credential during authentication itself. This means that the user does not really authenticate with this credential, as they set it
-up right during the authentication. So for browser authentication, make sure to configure your authentication flow with some 1st-factor credentials such as Password or WebAuthn
-Passwordless Authenticator.
-====
-
-==== Creating a password-less browser login flow
-
-To illustrate the creation of flows, this section describes creating an advanced browser login flow. The purpose of this flow is to give a user a choice between logging in password-less with xref:webauthn_{context}[WebAuthn], or with two-factor authentication using a password and OTP.
-
-.Procedure
-. Click *Authentication* in the menu.
-. Click the *Flows* tab.
-. Click *New*.
-. Enter `Browser Password-less` as an alias.
-. Click *Save*.
-. Click *Add execution*.
-. Select *Cookie* from the drop-down list.
-. Click *Save*.
-. Click *Alternative* for the *Cookie* authentication type to set its requirement to alternative.
-. Click *Add execution*.
-. Select *Kerberos* from the drop-down list.
-. Click *Add execution*.
-. Select *Identity Provider Redirector* from the drop-down list.
-. Click *Save*.
-. Click *Alternative* for the *Identity Provider Redirector* authentication type to set its requirement to alternative.
-. Click *Add flow*.
-. Enter *Forms* as an alias.
-. Click *Save*.
-. Click *Alternative* for the *Forms* authentication type to set its requirement to alternative.
-+
-.The common part with the browser flow
-image:{project_images}/Passwordless-browser-login-common.png[]
-+
-. Click *Actions* for the *Forms* execution.
-. Select *Add execution*.
-. Select *Username Form* from the drop-down list.
-. Click *Save*.
-. Click *Required* for the *Username Form* authentication type to set its requirement to required.
-
-At this stage, the form requires a username but no password. We must enable password authentication to avoid security risks.
-
-. Click *Actions* for the *Forms* sub-flow.
-. Click *Add flow*.
-. Enter `Authentication` as an alias.
-. Click *Save*.
-. Click *Required* for the *Authentication* authentication type to set its requirement to required.
-. Click *Actions* for the *Authentication* sub-flow.
-. Click *Add execution*.
-. Select *Webauthn Passwordless Authenticator* from the drop-down list.
-. Click *Save*.
-. Click *Alternative* for the *Webauthn Passwordless Authenticator* authentication type to set its requirement to alternative.
-. Click *Actions* for the *Authentication* sub-flow.
-. Click *Add flow*.
-. Enter `Password with OTP` as an alias.
-. Click *Save*.
-. Click *Alternative* for the *Password with OTP* authentication type to set its requirement to alternative.
-. Click *Actions* for the *Password with OTP* sub-flow.
-. Click *Add execution*.
-. Select *Password Form* from the drop-down list.
-. Click *Save*.
-. Click *Required* for the *Password Form* authentication type to set its requirement to required.
-. Click *Actions* for the *Password with OTP* sub-flow.
-. Click *Add execution*.
-. Select *OTP Form* from the drop-down list.
-. Click *Save*.
-. Click *Required* for the *OTP Form* authentication type to set its requirement to required.
-
-Finally, change the bindings.
-
-. Click the *Bindings* tab.
-. Click the *Browser Flow* drop-down list.
-. Select *Browser Password-less* from the drop-down list.
-. Click *Save*.
-
-.A password-less browser login
-image:{project_images}/Passwordless-browser-login.png[]
-
-After entering the username, the flow works as follows:
-
-If users have WebAuthn passwordless credentials recorded, they can use these credentials to log in directly. This is the password-less login. The user can also select *Password with OTP* because the `WebAuthn Passwordless` execution and the `Password with OTP` flow are set to *Alternative*. If they are set to *Required*, the user has to enter WebAuthn, password, and OTP.
-
-If the user selects the *Try another way* link during `WebAuthn passwordless` authentication, the user can choose between `Password` and `Security Key` (WebAuthn passwordless). When selecting the password, the user must then also continue and log in with the assigned OTP. If the user has no WebAuthn credentials, the user must enter the password and then the OTP. If the user has no OTP credential, they will be asked to record one.
-
-[NOTE]
-====
-Since the WebAuthn Passwordless execution is set to *Alternative* rather than *Required*, this flow will never ask the user to register a WebAuthn credential. For a user to have a Webauthn credential, an administrator must add a required action to the user. Do this by:
-
-. Enabling the *Webauthn Register Passwordless* required action in the realm (see the xref:webauthn_{context}[WebAuthn] documentation).
-. Setting the required action using the *Credential Reset* part of a user's xref:ref-user-credentials_{context}[Credentials] management menu.
-
-Creating an advanced flow such as this can have side effects. For example, if you enable the ability to reset the password for users, this would be accessible from the password form. In the default `Reset Credentials` flow, users must enter their username. Since the user has already entered a username earlier in the `Browser Password-less` flow, this action is unnecessary for {project_name} and sub-optimal for user experience. To correct this problem, you can:
-
-* Copy the `Reset Credentials` flow. Set its name to `Reset Credentials for password-less`, for example.
-* Select *Delete* in the *Actions* menu of the *Choose user* execution.
-* In the *Bindings* menu, change the reset credential flow from *Reset Credentials* to *Reset Credentials for password-less*.
-====
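-
-The same copy-and-rebind steps can also be scripted. The following is a sketch against the Admin REST API, assuming an admin access token in `$TOKEN`, your base server URL in `$KC`, a realm named `demo`, and the built-in flow alias `reset credentials`; all of these are illustrative placeholders:
-
-[source,bash]
-----
-# Copy the built-in reset credentials flow under a new alias
-curl -X POST "$KC/admin/realms/demo/authentication/flows/reset%20credentials/copy" \
-  -H "Authorization: Bearer $TOKEN" \
-  -H "Content-Type: application/json" \
-  -d '{"newName": "Reset Credentials for password-less"}'
-----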
-
-[[_step-up-flow]]
-==== Creating a browser login flow with step-up mechanism
-
-This section describes how to create an advanced browser login flow using the step-up mechanism. The purpose of step-up authentication is to allow access to clients or resources based on a specific authentication level of a user.
-
-.Procedure
-. Click *Authentication* in the menu.
-. Click the *Flows* tab.
-. Click *New*.
-. Enter `Browser Incl Step up Mechanism` as an alias.
-. Click *Save*.
-. Click *Add execution*.
-. Select *Cookie* from the item list.
-. Click *Save*.
-. Click *Alternative* for the *Cookie* authentication type to set its requirement to alternative.
-. Click *Add flow*.
-. Enter *Auth Flow* as an alias.
-. Click *Save*.
-. Click *Alternative* for the *Auth Flow* authentication type to set its requirement to alternative.
-
-Now you configure the flow for the first authentication level.
-
-. Click *Actions* for the *Auth Flow*.
-. Click *Add flow*.
-. Enter `1st Condition Flow` as an alias.
-. Click *Save*.
-. Click *Conditional* for the *1st Condition Flow* authentication type to set its requirement to conditional.
-. Click *Actions* for the *1st Condition Flow*.
-. Click *Add execution*.
-. Select *Conditional - Level Of Authentication* from the item list.
-. Click *Save*.
-. Click *Required* for the *Conditional - Level Of Authentication* authentication type to set its requirement to required.
-. Click *Actions* for the *Conditional - Level Of Authentication*.
-. Click *Config*.
-. Enter `Level 1` as an alias.
-. Enter `1` for the Level of Authentication (LoA).
-. Set Max Age to *36000*. This value is in seconds and it is equivalent to 10 hours, which is the default `SSO Session Max` timeout set in the realm.
- As a result, when a user authenticates with this level, subsequent SSO logins can re-use this level and the user does not need to authenticate
- with this level until the end of the user session, which is 10 hours by default.
-. Click *Save*.
-+
-.Configure the condition for the first authentication level
-image:{project_images}/authentication-step-up-condition-1.png[]
-
-. Click *Actions* for the *1st Condition Flow*.
-. Click *Add execution*.
-. Select *Username Password Form* from the item list.
-. Click *Save*.
-. Click *Required* for the *Username Password Form* authentication type to set its requirement to required.
-
-Now you configure the flow for the second authentication level.
-
-. Click *Actions* for the *Auth Flow*.
-. Click *Add flow*.
-. Enter `2nd Condition Flow` as an alias.
-. Click *Save*.
-. Click *Conditional* for the *2nd Condition Flow* authentication type to set its requirement to conditional.
-. Click *Actions* for the *2nd Condition Flow*.
-. Click *Add execution*.
-. Select *Conditional - Level Of Authentication* from the item list.
-. Click *Save*.
-. Click *Required* for the *Conditional - Level Of Authentication* authentication type to set its requirement to required.
-. Click *Actions* for the *Conditional - Level Of Authentication*.
-. Click *Config*.
-. Enter `Level 2` as an alias.
-. Enter `2` for the Level of Authentication (LoA).
-. Set Max Age to *0*. As a result, when a user authenticates, this level is valid just for the current authentication, but not any
- subsequent SSO authentications. So the user will always need to authenticate again with this level when this level is requested.
-. Click *Save*.
-+
-.Configure the condition for the second authentication level
-image:{project_images}/authentication-step-up-condition-2.png[]
-
-. Click *Actions* for the *2nd Condition Flow*.
-. Click *Add execution*.
-. Select *OTP Form* from the item list.
-. Click *Save*.
-. Click *Required* for the *OTP Form* authentication type to set its requirement to required.
-
-Finally, change the bindings.
-
-. Click the *Bindings* tab.
-. Click the *Browser Flow* item list.
-. Select *Browser Incl Step up Mechanism* from the item list.
-. Click *Save*.
-
-.Browser login with step-up mechanism
-image:{project_images}/authentication-step-up-flow.png[]
-
-.Request a certain authentication level
-To use the step-up mechanism, you specify a requested level of authentication (LoA) in your authentication request. The `claims` parameter is used for this purpose:
-
-[source,subs=+attributes]
-----
-https://{DOMAIN}{kc_realms_path}/{REALMNAME}/protocol/openid-connect/auth?client_id={CLIENT-ID}&redirect_uri={REDIRECT-URI}&scope=openid&response_type=code&response_mode=query&nonce=exg16fxdjcu&claims=%7B%22id_token%22%3A%7B%22acr%22%3A%7B%22essential%22%3Atrue%2C%22values%22%3A%5B%22gold%22%5D%7D%7D%7D
-----
-
-The `claims` parameter is specified in a JSON representation:
-[source]
-----
-claims= {
- "id_token": {
- "acr": {
- "essential": true,
- "values": ["gold"]
- }
- }
- }
-----
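-
-Before the JSON is appended to the authorization request, it must be URL-encoded, as in the encoded example above. A minimal shell sketch, assuming `jq` is installed:
-
-[source,bash]
-----
-# URL-encode the claims JSON for use as a query parameter
-CLAIMS='{"id_token":{"acr":{"essential":true,"values":["gold"]}}}'
-printf '%s' "$CLAIMS" | jq -sRr @uri
-----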
-
-The {project_name} JavaScript adapter supports constructing this JSON and sending it in the login request.
-See link:{adapterguide_link_js_adapter}[JavaScript adapter documentation] for more details.
-
-You can also use the simpler `acr_values` parameter instead of the `claims` parameter to request particular levels as non-essential. This option is mentioned
-in the OIDC specification.
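-
-For example, a non-essential request for the `gold` level could look like this (other parameters shortened for readability):
-
-[source,subs=+attributes]
-----
-https://{DOMAIN}{kc_realms_path}/{REALMNAME}/protocol/openid-connect/auth?client_id={CLIENT-ID}&redirect_uri={REDIRECT-URI}&scope=openid&response_type=code&acr_values=gold
-----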
-
-You can also configure a default level for a particular client, which is used when neither the `acr_values` parameter nor a `claims` parameter with the `acr` claim is present.
-For further details, see <<_mapping-acr-to-loa-client,Client ACR configuration>>.
-
-NOTE: To request the acr_values as text (such as `gold`) instead of a numeric value, you configure the mapping between the ACR and the LoA.
-You can configure the mapping at the realm level (recommended) or at the client level. For configuration, see <<_mapping-acr-to-loa-realm,ACR to LoA Mapping>>.
-
-For more details see the https://openid.net/specs/openid-connect-core-1_0.html#acrSemantics[official OIDC specification].
-
-*Flow logic*
-
-The logic for the previously configured authentication flow is as follows: +
-If a client requests a high authentication level, meaning Level of Authentication 2 (LoA 2), a user has to perform full two-factor authentication: Username/Password + OTP.
-However, if a user already has a session in Keycloak that was logged in with username and password (LoA 1), the user is only asked for the second authentication factor (OTP).
-
-The option *Max Age* in the condition determines how long (in seconds) the subsequent authentication level is valid. This setting helps to decide
-whether the user will be asked to present the authentication factor again during a subsequent authentication. If the particular level X is requested
-by the `claims` or `acr_values` parameter and the user already authenticated with level X, but that level has expired (for example, the max age is configured to 300 and the user authenticated 310 seconds ago),
-then the user will be asked to re-authenticate with that level. However, if the level has not yet expired, the user will be automatically
-considered as authenticated with that level.
-
-Using *Max Age* with the value 0 means that the particular level is valid just for this single authentication. Hence every re-authentication requesting that level
-requires the user to authenticate with that level again. This is useful for operations that require higher security in the application (for example, sending a payment) and always require authentication
-with the specific level.
-
-WARNING: Note that parameters such as `claims` or `acr_values` might be changed by the user in the URL when the login request is sent from the client to {project_name} via the user's browser.
-This situation can be mitigated if the client uses PAR (Pushed Authorization Requests), a request object, or another mechanism that prevents the user from rewriting the parameters in the URL.
-Hence after the authentication, clients are encouraged to check the ID Token to double-check that the `acr` in the token corresponds to the expected level.
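-
-A quick way to perform that check is to decode the ID token payload and read the `acr` claim. A sketch, assuming the token is in `$ID_TOKEN` and `jq` is available:
-
-[source,bash]
-----
-# Extract the JWT payload (second dot-separated segment), fix base64url, pad, decode
-PAYLOAD=$(printf '%s' "$ID_TOKEN" | cut -d. -f2 | tr '_-' '/+')
-while [ $(( ${#PAYLOAD} % 4 )) -ne 0 ]; do PAYLOAD="${PAYLOAD}="; done
-printf '%s' "$PAYLOAD" | base64 -d | jq .acr
-----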
-
-If no explicit level is requested by parameters, {project_name} requires authentication with the first LoA
-condition found in the authentication flow, such as the Username/Password in the preceding example. When a user was already authenticated with that level
-and that level has expired, the user is not required to re-authenticate, but the `acr` in the token will have the value 0. This result is considered authentication
-based solely on a `long-lived browser cookie`, as mentioned in section 2 of the OIDC Core 1.0 specification.
-
-NOTE: A conflict situation may arise when an admin specifies several flows, sets different LoA levels to each, and assigns the flows to different clients. However, the rule is always the same: if a user has a certain level, that level alone is sufficient to connect to a client. It is up to the admin to make sure that the LoA is coherent.
-
-*Example scenario*
-
-. Max Age is configured as 300 seconds for level 1 condition.
-. A login request is sent without requesting any acr. Level 1 will be used and the user needs to authenticate with username and password. The token will have `acr=1`.
-. Another login request is sent after 100 seconds. The user is automatically authenticated due to the SSO session, and the token will return `acr=1`.
-. Another login request is sent after another 201 seconds (301 seconds since the authentication in step 2). The user is automatically authenticated due to the SSO session, but the token will return `acr=0` because level 1 is considered expired.
-. Another login request is sent, but now it explicitly requests ACR of level 1 in the `claims` parameter. The user will be asked to re-authenticate with username/password,
-  and then `acr=1` will be returned in the token.
-
-*ACR claim in the token*
-
-The ACR claim is added to the token by the `acr loa level` protocol mapper defined in the `acr` client scope. This client scope is a realm default client scope
-and hence is added to all newly created clients in the realm.
-
-In case you do not want the `acr` claim inside tokens, or you need custom logic for adding it, you can remove the client scope from your client.
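-
-Detaching the client scope can also be done via the Admin REST API. A sketch, assuming an admin token in `$TOKEN`, the base URL in `$KC`, a realm `demo`, and the client's internal id in `$CLIENT_UUID` (all placeholders):
-
-[source,bash]
-----
-# Look up the id of the "acr" client scope, then detach it from the client
-SCOPE_ID=$(curl -s -H "Authorization: Bearer $TOKEN" \
-  "$KC/admin/realms/demo/client-scopes" | jq -r '.[] | select(.name=="acr") | .id')
-curl -X DELETE -H "Authorization: Bearer $TOKEN" \
-  "$KC/admin/realms/demo/clients/$CLIENT_UUID/default-client-scopes/$SCOPE_ID"
-----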
-
-Note that when the login request initiates a request with the `claims` parameter requesting `acr` as an `essential` claim, {project_name} will always return
-one of the specified levels. If it is not able to return one of the specified levels (for example, if the requested level is unknown or higher than the conditions configured
-in the authentication flow), {project_name} will throw an error.
-
-[[_user_session_limits]]
-*User session limits*
-
-Limits on the number of sessions that a user can have can be configured. Sessions can be limited per realm or per client.
-
-To add session limits to a flow, perform the following steps.
-
-. Click *Add execution* for the flow.
-. Select *User Session Count Limiter* from the item list.
-. Click *Save*.
-. Click *Required* for the *User Session Count Limiter* authentication type to set its requirement to required.
-. Click *Actions* for the *User Session Count Limiter*.
-. Click *Config*.
-. Enter an alias for this config.
-. Enter the required maximum number of sessions a user can have in this realm. If a value of 0 is used, this check is disabled.
-. Enter the required maximum number of sessions a user can have for the client. If a value of 0 is used, this check is disabled.
-. Select the behavior that is required when the user tries to create a session after the limit is reached. Available behaviors are:
-+
-* *Deny new session* - when a new session is requested and the session limit is reached, no new sessions can be created.
-* *Terminate oldest session* - when a new session is requested and the session limit has been reached, the oldest session will be removed and the new session created.
-
-. Optionally, add a custom error message to be displayed when the limit is reached.
-
-Note that the user session limits should be added to your bound *Browser Flow*, *Direct Grant Flow*, *Reset Credentials* and also to any *Post Broker Login Flow*.
-Currently, the administrator is responsible for maintaining consistency between the different configurations.
-
-Note also that the user session limit feature is not available for CIBA.
-
-ifeval::[{project_community}==true]
-=== Script Authenticator
-
-The ability to upload scripts through the Admin Console and REST endpoints is deprecated.
-
-For more details see link:{developerguide_jsproviders_link}[{developerguide_jsproviders_name}].
-
-endif::[]
diff --git a/server_admin/topics/authentication/webauthn-prod.adoc b/server_admin/topics/authentication/webauthn-prod.adoc
deleted file mode 100644
index 570945bea1..0000000000
--- a/server_admin/topics/authentication/webauthn-prod.adoc
+++ /dev/null
@@ -1,284 +0,0 @@
-[id="webauthn_{context}"]
-=== W3C Web Authentication (WebAuthn)
-
-{project_name} provides support for https://www.w3.org/TR/webauthn/[W3C Web Authentication (WebAuthn)]. {project_name} works as a WebAuthn's https://www.w3.org/TR/webauthn/#webauthn-relying-party[Relying Party (RP)].
-
-[NOTE]
-====
-The success of WebAuthn operations depends on the user's WebAuthn-supporting authenticator, browser, and platform. Make sure your authenticator, browser, and platform support the WebAuthn specification.
-====
-
-==== Setup
-
-The procedure to set up WebAuthn support for 2FA is as follows:
-
-[[_webauthn-register]]
-===== Enable WebAuthn authenticator registration
-
-. Click *Authentication* in the menu.
-. Click the *Required Actions* tab.
-. Click *Register*.
-. Click the *Required Action* drop-down list.
-. Click *Webauthn Register*.
-. Click *Ok*.
-
-Mark the *Default Action* checkbox if you want all new users to be required to register their WebAuthn credentials.
-
-[[_webauthn-authenticator-setup]]
-===== Adding WebAuthn authentication to a browser flow
-
-. Click *Authentication* in the menu.
-. Click the *Browser* flow.
-. Click *Copy* to make a copy of the built-in *Browser* flow.
-. Enter a name for the copy.
-. Click *Ok*.
-. Click the _Actions_ link for *WebAuthn Browser Browser - Conditional OTP* and click _Delete_.
-. Click *Delete*.
-
-If you require WebAuthn for all users:
-
-. Click on the _Actions_ link for *WebAuthn Browser Forms*.
-. Click *Add execution*.
-. Click the *Provider* drop-down list.
-. Click *WebAuthn Authenticator*.
-. Click *Save*.
-. Click _REQUIRED_ for *WebAuthn Authenticator*
-+
-image:images/webauthn-browser-flow-required.png[]
-+
-. Click the *Bindings* tab.
-. Click the *Browser Flow* drop-down list.
-. Click *WebAuthn Browser*.
-. Click *Save*.
-
-[NOTE]
-====
-If a user does not have WebAuthn credentials, the user must register WebAuthn credentials.
-====
-
-Users can log in with WebAuthn only if they have a registered WebAuthn credential. So instead of adding the *WebAuthn Authenticator* execution, you can:
-
-.Procedure
-. Click on the _Actions_ link for *WebAuthn Browser Forms*.
-. Click *Add flow*.
-. Enter "Conditional 2FA" for the _Alias_ field.
-. Click *Save*.
-. Click _CONDITIONAL_ for *Conditional 2FA*
-. Click on the _Actions_ link for *Conditional 2FA*.
-. Click *Add execution*.
-. Click the *Provider* drop-down list.
-. Click *Condition - User Configured*.
-. Click *Save*.
-. Click _REQUIRED_ for *Conditional 2FA*
-. Click on the _Actions_ link for *Conditional 2FA*.
-. Click *Add execution*.
-. Click the *Provider* drop-down list.
-. Click *WebAuthn Authenticator*.
-. Click *Save*.
-. Click _ALTERNATIVE_ for *Conditional 2FA*
-+
-image:images/webauthn-browser-flow-conditional.png[]
-
-The user can choose between using WebAuthn and OTP for the second factor:
-
-.Procedure
-. Click the _Actions_ link for *Conditional 2FA*.
-. Click *Add execution*.
-. Click the *Provider* drop-down list.
-. Click *OTP Form*.
-. Click *Save*.
-. Click _ALTERNATIVE_ for *Conditional 2FA*
-+
-image:images/webauthn-browser-flow-conditional-with-OTP.png[]
-
-==== Authenticate with WebAuthn authenticator
-
-After registering a WebAuthn authenticator, the user carries out the following operations:
-
-* Open the login form. The user must authenticate with a username and password.
-* The user's browser asks the user to authenticate by using their WebAuthn authenticator.
-
-==== Managing WebAuthn as an administrator
-
-===== Managing credentials
-
-{project_name} manages WebAuthn credentials similarly to other credentials from xref:ref-user-credentials_{context}[User credential management]:
-
-* Administrators can assign a required action to create a WebAuthn credential by choosing *Webauthn Register* from the *Reset Actions* list.
-* Administrators can delete a WebAuthn credential by clicking *Delete*.
-* Administrators can view the credential's data, such as the AAGUID, by selecting *Show data...*.
-* Administrators can set a label for the credential by setting a value in the *User Label* field and saving the data.
-
-[[_webauthn-policy]]
-===== Managing policy
-
-Administrators can configure WebAuthn related operations as *WebAuthn Policy* per realm.
-
-.Procedure
-. Click *Authentication* in the menu.
-. Click the *WebAuthn Policy* tab.
-. Configure the items within the policy (see description below).
-. Click *Save*.
-
-The configurable items and their description are as follows:
-
-|===
-|Configuration|Description
-
-|Relying Party Entity Name
-|The readable server name as a WebAuthn Relying Party. This item is mandatory and applies to the registration of the WebAuthn authenticator. The default setting is "keycloak". For more details, see https://www.w3.org/TR/webauthn/#dictionary-pkcredentialentity[WebAuthn Specification].
-
-|Signature Algorithms
-|The algorithms telling the WebAuthn authenticator which signature algorithms to use for the https://www.w3.org/TR/webauthn/#iface-pkcredential[Public Key Credential]. {project_name} uses the Public Key Credential to sign and verify https://www.w3.org/TR/webauthn/#authentication-assertion[Authentication Assertions]. If no algorithms exist, the default https://datatracker.ietf.org/doc/html/rfc8152#section-8.1[ES256] is used. This is an optional configuration item applying to the registration of WebAuthn authenticators. For more details, see https://www.w3.org/TR/webauthn/#dictdef-publickeycredentialparameters[WebAuthn Specification].
-
-|Relying Party ID
-|The ID of a WebAuthn Relying Party that determines the scope of https://www.w3.org/TR/webauthn/#public-key-credential[Public Key Credentials]. The ID must be the origin's effective domain. This ID is an optional configuration item applied to the registration of WebAuthn authenticators. If this entry is blank, {project_name} uses the host part of {project_name}'s base URL. For more details, see https://www.w3.org/TR/webauthn/[WebAuthn Specification].
-
-|Attestation Conveyance Preference
-|The preference for how the WebAuthn API implementation on the browser (the https://www.w3.org/TR/webauthn/#webauthn-client[WebAuthn Client]) generates Attestation statements. This preference is an optional configuration item applying to the registration of the WebAuthn authenticator. If no option exists, its behavior is the same as selecting "none". For more details, see https://www.w3.org/TR/webauthn/[WebAuthn Specification].
-
-|Authenticator Attachment
-|The acceptable attachment pattern of a WebAuthn authenticator for the WebAuthn Client. This pattern is an optional configuration item applying to the registration of the WebAuthn authenticator. For more details, see https://www.w3.org/TR/webauthn/#enumdef-authenticatorattachment[WebAuthn Specification].
-
-|Require Resident Key
-|The option requiring that the WebAuthn authenticator generates the Public Key Credential as https://www.w3.org/TR/webauthn/[Client-side-resident Public Key Credential Source]. This option applies to the registration of the WebAuthn authenticator. If left blank, its behavior is the same as selecting "No". For more details, see https://www.w3.org/TR/webauthn/#dom-authenticatorselectioncriteria-requireresidentkey[WebAuthn Specification].
-
-|User Verification Requirement
-|The option requiring that the WebAuthn authenticator confirms the verification of a user. This is an optional configuration item applying to the registration of a WebAuthn authenticator and the authentication of a user by a WebAuthn authenticator. If no option exists, its behavior is the same as selecting "preferred". For more details, see https://www.w3.org/TR/webauthn/#dom-authenticatorselectioncriteria-userverification[WebAuthn Specification for registering a WebAuthn authenticator] and https://www.w3.org/TR/webauthn/#dom-publickeycredentialrequestoptions-userverification[WebAuthn Specification for authenticating the user by a WebAuthn authenticator].
-
-|Timeout
-|The timeout value, in seconds, for registering a WebAuthn authenticator and authenticating the user by using a WebAuthn authenticator. If set to zero, its behavior depends on the WebAuthn authenticator's implementation. The default value is 0. For more details, see https://www.w3.org/TR/webauthn/#dom-publickeycredentialcreationoptions-timeout[WebAuthn Specification for registering a WebAuthn authenticator] and https://www.w3.org/TR/webauthn/#dom-publickeycredentialrequestoptions-timeout[WebAuthn Specification for authenticating the user by a WebAuthn authenticator].
-
-|Avoid Same Authenticator Registration
-|If enabled, {project_name} cannot re-register an already registered WebAuthn authenticator.
-
-|Acceptable AAGUIDs
-|The list of AAGUIDs that a WebAuthn authenticator must match to be registered.
-
-|===
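-
-The policy can also be updated through the Admin REST API by setting the corresponding `webAuthnPolicy*` fields of the realm representation. A sketch, assuming an admin token in `$TOKEN` and the base URL in `$KC`; verify the exact field names against the realm representation of your server version:
-
-[source,bash]
-----
-# Partially update the realm's WebAuthn policy
-curl -X PUT "$KC/admin/realms/demo" \
-  -H "Authorization: Bearer $TOKEN" \
-  -H "Content-Type: application/json" \
-  -d '{
-        "webAuthnPolicyRpEntityName": "keycloak",
-        "webAuthnPolicySignatureAlgorithms": ["ES256"],
-        "webAuthnPolicyUserVerificationRequirement": "preferred"
-      }'
-----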
-
-==== Attestation statement verification
-
-When registering a WebAuthn authenticator, {project_name} verifies the trustworthiness of the attestation statement generated by the WebAuthn authenticator. {project_name} requires the trust anchor's certificates for this. {project_name} uses the link:{installguide_truststore_link}[{installguide_truststore_name}], so you must import these certificates into it in advance.
-
-To omit this validation, disable this truststore or set the WebAuthn policy's configuration item "Attestation Conveyance Preference" to "none".
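-
-Importing a trust anchor with the keytool CLI typically looks like the following sketch; the keystore path, password, and certificate file are placeholders:
-
-[source,bash]
-----
-# Import the attestation trust anchor CA certificate into the truststore
-keytool -importcert -keystore truststore.jks -storepass <password> \
-  -alias webauthn-ca -file webauthn-ca.crt -noprompt
-----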
-
-
-==== Managing WebAuthn credentials as a user
-
-===== Register WebAuthn authenticator
-
-The appropriate method to register a WebAuthn authenticator depends on whether the user has already registered an account on {project_name}.
-
-===== New user
-
-If the *WebAuthn Register* required action is set as a *Default Action* in a realm, new users must set up the WebAuthn security key after their first login.
-
-.Procedure
-
-. Open the login form.
-. Click *Register*.
-. Fill in the items on the form.
-. Click *Register*.
-
-After successful registration, the browser asks the user to enter a label for their WebAuthn authenticator.
-
-===== Existing user
-
-If `WebAuthn Authenticator` is set up as required, as shown in the first example, existing users are automatically required to register their WebAuthn authenticator when they try to log in:
-
-.Procedure
-
-. Open the login form.
-. Fill in the items on the form.
-. Click *Save*.
-. Click *Login*.
-
-After successful registration, the user's browser asks the user to enter a label for their WebAuthn authenticator.
-
-[[_webauthn_passwordless]]
-==== Passwordless WebAuthn together with Two-Factor
-
-{project_name} uses WebAuthn for two-factor authentication, but you can use WebAuthn as the first-factor authentication. In this case, users with `passwordless` WebAuthn credentials can authenticate to {project_name} without a password. {project_name} can use WebAuthn as both the passwordless and two-factor authentication mechanism in the context of a realm and a single authentication flow.
-
-An administrator typically requires that Security Keys registered by users for the WebAuthn passwordless authentication meet different requirements. For example, the security keys may require users to authenticate to the security key using a PIN, or the security key attests with a stronger certificate authority.
-
-Because of this, {project_name} permits administrators to configure a separate `WebAuthn Passwordless Policy`. There is a separate required action of type `Webauthn Register Passwordless` and a separate authenticator of type `WebAuthn Passwordless Authenticator`.
-
-===== Setup
-
-.Procedure
-Set up WebAuthn passwordless support as follows:
-
-. Register a new required action for WebAuthn passwordless support. Use the steps described in <<_webauthn-register, Enable WebAuthn Authenticator Registration>>. Register the `Webauthn Register Passwordless` action.
-
-. Configure the policy. You can use the steps and configuration options described in <<_webauthn-policy, Managing Policy>>. Perform the configuration in the Admin Console in the tab *WebAuthn Passwordless Policy*. Typically the requirements for the security key will be stronger than for the two-factor policy. For example, you can set the *User Verification Requirement* to *Required* when you configure the passwordless policy.
-
-. Configure the authentication flow. Use the *WebAuthn Browser* flow described in <<_webauthn-authenticator-setup, Adding WebAuthn Authentication to a Browser Flow>>. Configure the flow as follows:
-+
-** The *WebAuthn Browser Forms* subflow contains *Username Form* as the first authenticator. Delete the default *Username Password Form* authenticator and add the *Username Form* authenticator. This action requires the user to provide a username as the first step.
-+
-** There will be a required subflow, which can be named *Passwordless Or Two-factor*, for example. This subflow indicates the user can authenticate with Passwordless WebAuthn credential or with Two-factor authentication.
-+
-** The flow contains *WebAuthn Passwordless Authenticator* as the first alternative.
-+
-** The second alternative will be a subflow named *Password And Two-factor Webauthn*, for example. This subflow contains a *Password Form* and a *WebAuthn Authenticator*.
-
-The final configuration of the flow looks similar to this:
-
-.PasswordLess flow
-image:images/webauthn-passwordless-flow.png[PasswordLess flow]
-
-You can now add *WebAuthn Register Passwordless* as the required action to a user, already known to {project_name}, to test this. During the first authentication, the user must use the password and second-factor WebAuthn credential. The user does not need to provide the password and second-factor WebAuthn credential if they use the WebAuthn Passwordless credential.
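-
-Assigning that required action can also be scripted. A sketch via the Admin REST API, assuming the action alias is `webauthn-register-passwordless`, an admin token is in `$TOKEN`, and the user's id is in `$USER_ID`; fetch the user representation first and resubmit it with the action appended if you want to preserve existing required actions:
-
-[source,bash]
-----
-# Queue passwordless WebAuthn registration for an existing user
-curl -X PUT "$KC/admin/realms/demo/users/$USER_ID" \
-  -H "Authorization: Bearer $TOKEN" \
-  -H "Content-Type: application/json" \
-  -d '{"requiredActions": ["webauthn-register-passwordless"]}'
-----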
-
-[[_webauthn_loginless]]
-==== LoginLess WebAuthn
-
-{project_name} uses WebAuthn for two-factor authentication, but you can use WebAuthn as the first-factor authentication. In this case, users with `passwordless` WebAuthn credentials can authenticate to {project_name} without submitting a login or a password. {project_name} can use WebAuthn as both the loginless/passwordless and two-factor authentication mechanism in the context of a realm.
-
-An administrator typically requires that Security Keys registered by users for WebAuthn loginless authentication meet different requirements. Loginless authentication requires users to authenticate to the security key (for example, by using a PIN code or a fingerprint) and requires that the cryptographic keys associated with the loginless credential are stored physically on the security key. Not all security keys meet those requirements. Check with your security key vendor whether your device supports 'user verification' and 'resident key'. See <<_webauthn-supported-keys, Supported Security Keys>>.
-
-{project_name} permits administrators to configure the `WebAuthn Passwordless Policy` in a way that allows loginless authentication. Note that loginless authentication can only be configured with `WebAuthn Passwordless Policy` and with `WebAuthn Passwordless` credentials. WebAuthn loginless authentication and WebAuthn passwordless authentication can be configured on the same realm but will share the same policy `WebAuthn Passwordless Policy`.
-
-===== Setup
-
-.Procedure
-Set up WebAuthn Loginless support as follows:
-
-. Register a new required action for WebAuthn passwordless support. Use the steps described in <<_webauthn-register, Enable WebAuthn Authenticator Registration>>. Register the `Webauthn Register Passwordless` action.
-
-. Configure the `WebAuthn Passwordless Policy`. Perform the configuration in the Admin Console, `Authentication` section, in the tab `WebAuthn Passwordless Policy`. You have to set *User Verification Requirement* to *required* and *Require Resident Key* to *Yes* when you configure the policy for the loginless scenario. Note that since there is no dedicated loginless policy, it is not possible to mix authentication scenarios with user verification=no/resident key=no and loginless scenarios (user verification=yes/resident key=yes). Storage capacity is usually very limited on security keys, meaning that you will not be able to store many resident keys on your security key.
-
-. Configure the authentication flow. Create a new authentication flow, add the "WebAuthn Passwordless" execution, and set the Requirement setting of the execution to *Required*.
-
-The final configuration of the flow looks similar to this:
-
-.LoginLess flow
-image:images/webauthn-loginless-flow.png[LoginLess flow]
-
-You can now add the required action `WebAuthn Register Passwordless` to a user, already known to {project_name}, to test this. The user with the required action configured will have to authenticate (with a username/password for example) and will then be prompted to register a security key to be used for loginless authentication.
-
-===== Vendor specific remarks
-
-====== Compatibility check list
-
-Loginless authentication with {project_name} requires the security key to provide the following features:
-
-** FIDO2 compliance: not to be confused with FIDO/U2F
-** User verification: the ability of the security key to authenticate the user (prevents someone who finds your security key from authenticating loginless and passwordless)
-** Resident key: the ability of the security key to store the login and the cryptographic keys associated with the client application
-
-====== Windows Hello
-
-To use Windows Hello based credentials to authenticate against {project_name}, configure the *Signature Algorithms* setting of the `WebAuthn Passwordless Policy` to include the *RS256* value. Note that some browsers don't allow access to platform security key (like Windows Hello) inside private windows.
-
-[[_webauthn-supported-keys]]
-====== Supported security keys
-
-The following security keys have been successfully tested for loginless authentication with {project_name}:
-
-* Windows Hello (Windows 10 21H1/21H2)
-* Yubico Yubikey 5 NFC
-* Feitian ePass FIDO-NFC
-
-
diff --git a/server_admin/topics/authentication/webauthn.adoc b/server_admin/topics/authentication/webauthn.adoc
index dd2d5ba533..8f63ff8693 100644
--- a/server_admin/topics/authentication/webauthn.adoc
+++ b/server_admin/topics/authentication/webauthn.adoc
@@ -150,7 +150,7 @@ The configurable items and their description are as follows:
==== Attestation statement verification
-When registering a WebAuthn authenticator, {project_name} verifies the trustworthiness of the attestation statement generated by the WebAuthn authenticator. {project_name} requires the trust anchor's certificates for this. {project_name} uses the link:{installguide_truststore_link}[{installguide_truststore_name}], so you must import these certificates into it in advance.
+When registering a WebAuthn authenticator, {project_name} verifies the trustworthiness of the attestation statement generated by the WebAuthn authenticator. {project_name} requires the trust anchor's certificates to be imported into the https://www.keycloak.org/server/keycloak-truststore[truststore].
To omit this validation, disable this truststore or set the WebAuthn policy's configuration item "Attestation Conveyance Preference" to "none".
diff --git a/server_admin/topics/authentication/x509.adoc b/server_admin/topics/authentication/x509.adoc
index c4fbd13649..c3489f5236 100644
--- a/server_admin/topics/authentication/x509.adoc
+++ b/server_admin/topics/authentication/x509.adoc
@@ -72,94 +72,6 @@ The certificate identity mapping can map the extracted user identity to an exist
* Certificate KeyUsage validation.
* Certificate ExtendedKeyUsage validation.
-ifeval::["{kc_dist}" == "wildfly"]
-==== Enable X.509 client certificate user authentication
-
-The following sections describe how to configure {appserver_name}/Undertow and the {project_name} Server to enable X.509 client certificate authentication.
-
-[[_enable-mtls-wildfly]]
-===== Enable mutual SSL in {appserver_name}
-
-See link:https://docs.wildfly.org/24/Admin_Guide.html#enable-ssl[Enable SSL] for the instructions to enable SSL in {appserver_name}.
-
-* Open {project_dirref}/standalone/configuration/standalone.xml and add a new realm:
-```xml
-<security-realm name="ssl-realm">
-    <server-identities>
-        <ssl>
-            <keystore path="servercert.jks"
-                      relative-to="jboss.server.config.dir"
-                      keystore-password="servercert password"/>
-        </ssl>
-    </server-identities>
-    <authentication>
-        <truststore path="truststore.jks"
-                    relative-to="jboss.server.config.dir"
-                    keystore-password="truststore password"/>
-    </authentication>
-</security-realm>
-```
-
-`ssl/keystore`::
-The `ssl` element contains the `keystore` element that contains the details to load the server public key pair from a JKS keystore.
-
-`ssl/keystore/path`::
-The path to the JKS keystore.
-
-`ssl/keystore/relative-to`::
-The path that the keystore path is relative to.
-
-`ssl/keystore/keystore-password`::
-The password to open the keystore.
-
-`ssl/keystore/alias` (optional)::
-The alias of the entry in the keystore. Set if the keystore contains multiple entries.
-
-`ssl/keystore/key-password` (optional)::
-The private key password, if different from the keystore password.
-
-`authentication/truststore`::
-Defines how to load a trust store to verify the certificate presented by the remote side of the inbound/outgoing connection. Typically, the truststore contains a collection of trusted CA certificates.
-
-`authentication/truststore/path`::
-The path to the JKS keystore containing the certificates of the trusted certificate authorities.
-
-`authentication/truststore/relative-to`::
-The path that the truststore path is relative to.
-
-`authentication/truststore/keystore-password`::
-The password to open the truststore.
-
-
-===== Enable HTTPS listener
-
-See link:https://docs.wildfly.org/24/Admin_Guide.html#https-listener[HTTPS Listener] for the instructions to enable HTTPS in WildFly.
-
-* Add the `https-listener` element.
-
-[source,xml,subs="attributes+"]
-----
-<subsystem xmlns="urn:jboss:domain:undertow:12.0">
-    ....
-    <server name="default-server">
-        <https-listener name="default" socket-binding="https" security-realm="ssl-realm" verify-client="REQUESTED"/>
-    </server>
-</subsystem>
-----
-
-`https-listener/security-realm`::
-This value must match the name of the realm from the previous section.
-
-`https-listener/verify-client`::
-If set to *REQUESTED*, the server optionally asks for a client certificate.
-If set to *REQUIRED*, the server refuses inbound connections if no client certificate has been provided.
-endif::[]
-
[[_browser_flow]]
==== Adding X.509 client certificate authentication to browser flows
@@ -317,158 +229,3 @@ image:{project_images}/x509-directgrant-execution.png[X509 Direct Grant Executio
+
.X509 direct grant flow bindings
image:{project_images}/x509-directgrant-flow-bindings.png[X509 Direct Grant Flow Bindings]
-
-==== Client certificate lookup
-
-When the {project_name} server receives a direct HTTP request, the {appserver_name} undertow subsystem establishes an SSL handshake and extracts the client certificate. The {appserver_name} saves the client certificate to the `javax.servlet.request.X509Certificate` attribute of the HTTP request, as specified in the servlet specification. The {project_name} X509 authenticator can look up the certificate from this attribute.
-
-However, when the {project_name} server listens to HTTP requests behind a load balancer or reverse proxy, the proxy server may extract the client certificate and establish a mutual SSL connection. A reverse proxy generally puts the authenticated client certificate in the HTTP header of the underlying request. The proxy forwards the request to the back end {project_name} server. In this case, {project_name} must look up the X.509 certificate chain from the HTTP headers rather than the attribute of the HTTP request.
-
-If {project_name} is behind a reverse proxy, you generally need to configure the alternative provider of the `x509cert-lookup` SPI in {project_dirref}/standalone/configuration/standalone.xml. In addition to the `default` provider, which looks up the certificate from the attribute of the HTTP request, three additional built-in providers exist: `haproxy`, `apache`, and `nginx`.
-
-===== HAProxy certificate lookup provider
-
-You use this provider when your {project_name} server is behind an HAProxy reverse proxy. Use the following configuration for your server:
-
-[source,xml]
-----
-<spi name="x509cert-lookup">
-    <default-provider>haproxy</default-provider>
-    <provider name="haproxy" enabled="true">
-        <properties>
-            <property name="sslClientCert" value="SSL_CLIENT_CERT"/>
-            <property name="sslCertChainPrefix" value="CERT_CHAIN"/>
-            <property name="certificateChainLength" value="10"/>
-        </properties>
-    </provider>
-</spi>
-----
-
-In this example configuration, the client certificate is looked up from the HTTP header `SSL_CLIENT_CERT`, and the other certificates from its chain are looked up from HTTP headers such as `CERT_CHAIN_0` through `CERT_CHAIN_9`. The attribute `certificateChainLength` is the maximum length of the chain, so the last header is `CERT_CHAIN_9`.
-
-Consult the HAProxy documentation for the details of configuring the HTTP Headers for the client certificate and client certificate chain.
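-
-As an illustration only (the exact fetch methods depend on your HAProxy version), the client certificate header could be populated like this:
-
-[source,txt]
-----
-# HAProxy frontend: forward the client certificate (DER, base64-encoded) to the back end
-http-request set-header SSL_CLIENT_CERT %[ssl_c_der,base64]
-----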
-
-===== Apache certificate lookup provider
-
-You can use this provider when your {project_name} server is behind an Apache reverse proxy. Use the following configuration for your server:
-
-[source,xml]
-----
-<spi name="x509cert-lookup">
-    <default-provider>apache</default-provider>
-    <provider name="apache" enabled="true">
-        <properties>
-            <property name="sslClientCert" value="SSL_CLIENT_CERT"/>
-            <property name="sslCertChainPrefix" value="CERT_CHAIN"/>
-            <property name="certificateChainLength" value="10"/>
-        </properties>
-    </provider>
-</spi>
-----
-
-This configuration is the same as the `haproxy` provider. Consult the Apache documentation on link:https://httpd.apache.org/docs/current/mod/mod_ssl.html[mod_ssl] and link:https://httpd.apache.org/docs/current/mod/mod_headers.html[mod_headers] for details on how the HTTP Headers for the client certificate and client certificate chain are configured.
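-
-As an illustration only (directives and variable names depend on your Apache version and module configuration), the header could be populated like this:
-
-[source,txt]
-----
-# httpd.conf: expose the validated client certificate to the back end as a header
-SSLVerifyClient optional
-SSLOptions +ExportCertData
-RequestHeader set SSL_CLIENT_CERT "%{SSL_CLIENT_CERT}s"
-----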
-
-===== NGINX certificate lookup provider
-
-You can use this provider when your {project_name} server is behind an NGINX reverse proxy. Use the following configuration for your server:
-
-[source,xml]
-----
-<spi name="x509cert-lookup">
-    <default-provider>nginx</default-provider>
-    <provider name="nginx" enabled="true">
-        <properties>
-            <property name="sslClientCert" value="ssl-client-cert"/>
-            <property name="sslCertChainPrefix" value="USELESS"/>
-            <property name="certificateChainLength" value="2"/>
-        </properties>
-    </provider>
-</spi>
-----
-
-[NOTE]
-====
-The NGINX link:http://nginx.org/en/docs/http/ngx_http_ssl_module.html#variables[SSL/TLS module] does not expose the client certificate chain. {project_name}'s NGINX certificate lookup provider rebuilds it by using the link:{installguide_truststore_link}[{installguide_truststore_name}]. Populate the {project_name} truststore by using the keytool CLI with all root and intermediate CAs needed for rebuilding the client certificate chain.
-====
-
-Consult the NGINX documentation for the details of configuring the HTTP Headers for the client certificate.
-
-Example of an NGINX configuration file:
-[source,txt]
-----
- ...
- server {
- ...
- ssl_client_certificate trusted-ca-list-for-client-auth.pem;
- ssl_verify_client optional_no_ca;
- ssl_verify_depth 2;
- ...
- location / {
- ...
- proxy_set_header ssl-client-cert $ssl_client_escaped_cert;
- ...
- }
- ...
-}
-----
-
-[NOTE]
-====
-All certificates in trusted-ca-list-for-client-auth.pem must be added to link:{installguide_truststore_link}[{installguide_truststore_name}].
-====
-
-===== Other reverse proxy implementations
-
-{project_name} does not have built-in support for other reverse proxy implementations. However, you can make other reverse proxies behave in a similar way to `apache` or `haproxy`. If none of these work, create your implementation of the `org.keycloak.services.x509.X509ClientCertificateLookupFactory` and `org.keycloak.services.x509.X509ClientCertificateLookup` providers. See the link:{developerguide_link}[{developerguide_name}] for details on how to add your provider.
-
-ifeval::["{kc_dist}" == "wildfly"]
-==== Troubleshooting
-
-Dumping HTTP headers::
-To view what the reverse proxy sends to Keycloak, enable the `RequestDumpingHandler` Undertow filter and consult the `server.log` file.
-
-Enable TRACE logging under the logging subsystem::
-[source,xml]
-----
-...
-<subsystem xmlns="urn:jboss:domain:logging:8.0">
-...
-    <logger category="org.keycloak.authentication.authenticators.x509">
-        <level name="TRACE"/>
-    </logger>
-    <logger category="org.keycloak.services.x509">
-        <level name="TRACE"/>
-    </logger>
-</subsystem>
-----
-[WARNING]
-====
-Do not use RequestDumpingHandler or TRACE logging in production.
-====
-
-Direct Grant authentication with X.509::
-You can use the following template to request a token by using the Resource Owner Password Credentials Grant:
-
-[source,subs=+attributes]
-----
-$ curl https://[host][:port]{kc_realms_path}/master/protocol/openid-connect/token \
-      --insecure \
-      --data "grant_type=password&scope=openid profile&username=<username>&password=<password>&client_id=CLIENT_ID&client_secret=CLIENT_SECRET" \
-      -E /path/to/client_cert.crt \
-      --key /path/to/client_cert.key
-----
-
-`[host][:port]`::
-The host and the port number of the remote {project_name} server.
-
-`CLIENT_ID`::
-The client id.
-
-`CLIENT_SECRET`::
-For confidential clients, a client secret.
-
-`client_cert.crt`::
-A public key certificate to verify the identity of the client in mutual SSL authentication. The certificate must be in PEM format.
-
-`client_cert.key`::
-A private key in the public key pair. This key must be in PEM format.
-endif::[]
\ No newline at end of file
diff --git a/server_admin/topics/clients/assembly-client-oidc.adoc b/server_admin/topics/clients/assembly-client-oidc.adoc
index e250fd34bf..223ef48ac5 100644
--- a/server_admin/topics/clients/assembly-client-oidc.adoc
+++ b/server_admin/topics/clients/assembly-client-oidc.adoc
@@ -4,17 +4,9 @@
[role="_abstract"]
xref:con-oidc_{context}[OpenID Connect] is the recommended protocol to secure applications. It was designed from the ground up to be web friendly and it works best with HTML5/JavaScript applications.
-ifeval::[{project_community}==true]
include::oidc/proc-creating-oidc-client.adoc[leveloffset=+1]
include::oidc/con-basic-settings.adoc[leveloffset=+1]
include::oidc/con-advanced-settings.adoc[leveloffset=+1]
-endif::[]
-
-ifeval::[{project_product}==true]
-include::oidc/proc-creating-oidc-client-prod.adoc[leveloffset=+1]
-include::oidc/con-basic-settings-prod.adoc[leveloffset=+1]
-include::oidc/con-advanced-settings-prod.adoc[leveloffset=+1]
-endif::[]
include::oidc/con-confidential-client-credentials.adoc[leveloffset=+1]
diff --git a/server_admin/topics/clients/oidc/con-advanced-settings-prod.adoc b/server_admin/topics/clients/oidc/con-advanced-settings-prod.adoc
deleted file mode 100644
index d1dc4edd5b..0000000000
--- a/server_admin/topics/clients/oidc/con-advanced-settings-prod.adoc
+++ /dev/null
@@ -1,111 +0,0 @@
-[id="con-advanced-settings_{context}"]
-= Advanced settings
-[role="_abstract"]
-When you click _Advanced Settings_, additional fields are displayed.
-
-[[_mtls-client-certificate-bound-tokens]]
-*OAuth 2.0 Mutual TLS Certificate Bound Access Tokens Enabled*
-
-ifeval::["{kc_dist}" == "wildfly"]
-[NOTE]
-====
-To enable mutual TLS in {project_name}, see <<_enable-mtls-wildfly, Enable mutual SSL in WildFly>>.
-====
-endif::[]
-
-Mutual TLS binds an access token and a refresh token together with a client certificate, which is exchanged during a TLS handshake. This binding prevents an attacker from using stolen tokens.
-
-This type of token is a holder-of-key token. Unlike bearer tokens, the recipient of a holder-of-key token can verify if the sender of the token is legitimate.
-
-If this setting is on, the workflow is:
-
-. A token request is sent to the token endpoint in an authorization code flow or hybrid flow.
-. {project_name} requests a client certificate.
-. {project_name} receives the client certificate.
-. {project_name} successfully verifies the client certificate.
-
-If verification fails, {project_name} rejects the token.
-
-In the following cases, {project_name} will verify the client sending the access token or the refresh token:
-
-* A token refresh request is sent to the token endpoint with a holder-of-key refresh token.
-* A UserInfo request is sent to UserInfo endpoint with a holder-of-key access token.
-* A logout request is sent to Logout endpoint with a holder-of-key refresh token.
-
-See https://datatracker.ietf.org/doc/html/draft-ietf-oauth-mtls-08#section-3[Mutual TLS Client Certificate Bound Access Tokens] in the OAuth 2.0 Mutual TLS Client Authentication and Certificate Bound Access Tokens for more details.
-
-[NOTE]
-====
-Currently, {project_name} client adapters do not support holder-of-key token verification. {project_name} adapters treat access and refresh tokens as bearer tokens.
-====
-
-[[_proof-key-for-code-exchange]]
-*Proof Key for Code Exchange Code Challenge Method*
-
-If an attacker steals an authorization code of a legitimate client, Proof Key for Code Exchange (PKCE) prevents the attacker from receiving the tokens that apply to the code.
-
-An administrator can select one of these options:
-
-*(blank)*:: {project_name} does not apply PKCE unless the client sends appropriate PKCE parameters to {project_name}'s authorization endpoint.
-*S256*:: {project_name} requires the client to use PKCE with the S256 code challenge method.
-*plain*:: {project_name} requires the client to use PKCE with the plain code challenge method.
-
-See https://datatracker.ietf.org/doc/html/rfc7636[RFC 7636 Proof Key for Code Exchange by OAuth Public Clients] for more details.
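-
-To illustrate what a client does when S256 is enforced, the following shell sketch derives a `code_challenge` from a freshly generated `code_verifier` (not specific to {project_name}):
-
-[source,bash]
-----
-# Generate a code_verifier and derive the S256 code_challenge
-CODE_VERIFIER=$(openssl rand -base64 48 | tr -d '=+/' | cut -c1-43)
-CODE_CHALLENGE=$(printf '%s' "$CODE_VERIFIER" | openssl dgst -sha256 -binary \
-  | openssl base64 | tr '+/' '-_' | tr -d '=')
-echo "code_challenge=$CODE_CHALLENGE&code_challenge_method=S256"
-----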
-
-[[_jwe-id-token-encryption]]
-*Signed and Encrypted ID Token Support*
-
-{project_name} can encrypt ID tokens according to the https://datatracker.ietf.org/doc/html/rfc7516[JSON Web Encryption (JWE)] specification. The administrator determines if ID tokens are encrypted for each client.
-
-The key used for encrypting the ID token is the Content Encryption Key (CEK). {project_name} and a client must negotiate which CEK is used and how it is delivered. The method used to determine the CEK is the Key Management Mode. The Key Management Mode that {project_name} supports is Key Encryption.
-
-In Key Encryption:
-
-. The client generates an asymmetric cryptographic key pair.
-. The public key is used to encrypt the CEK.
-. {project_name} generates a CEK per ID token.
-. {project_name} encrypts the ID token using this generated CEK.
-. {project_name} encrypts the CEK using the client's public key.
-. The client decrypts this encrypted CEK using their private key.
-. The client decrypts the ID token using the decrypted CEK.
-
-No party, other than the client, can decrypt the ID token.
-
-The client must pass its public key for encrypting the CEK to {project_name}. {project_name} supports downloading public keys from a URL provided by the client. The client must provide public keys according to the https://datatracker.ietf.org/doc/html/rfc7517[JSON Web Key (JWK)] specification.
-
-The procedure is:
-
-. Open the client's *Keys* tab.
-. Toggle *JWKS URL* to ON.
-. Input the client's public key URL in the *JWKS URL* textbox.
-
-Key Encryption's algorithms are defined in the https://datatracker.ietf.org/doc/html/rfc7518#section-4.1[JSON Web Algorithms (JWA)] specification. {project_name} supports:
-
-* RSAES-PKCS1-v1_5(RSA1_5)
-* RSAES OAEP using default parameters (RSA-OAEP)
-* RSAES OAEP 256 using SHA-256 and MFG1 (RSA-OAEP-256)
-
-The procedure to select the algorithm is:
-
-. Open the client's *Settings* tab.
-. Open *Fine Grain OpenID Connect Configuration*.
-. Select the algorithm from the *ID Token Encryption Content Encryption Algorithm* pulldown menu.
-
-[[_mapping-acr-to-loa-client]]
-*ACR to Level of Authentication (LoA) Mapping*
-
-In the advanced settings of a client, you can define which `Authentication Context Class Reference (ACR)` value is mapped to which `Level of Authentication (LoA)`.
-This mapping can also be specified at the realm level, as mentioned in <<_mapping-acr-to-loa-realm,ACR to LoA Mapping>>. A best practice is to configure this mapping at the
-realm level, which allows you to share the same settings across multiple clients.
-
-The `Default ACR Values` can be used to specify the default values when the login request is sent from this client to {project_name} without the `acr_values` parameter and without
-a `claims` parameter that has an `acr` claim attached. See the https://openid.net/specs/openid-connect-registration-1_0.html#ClientMetadata[official OIDC dynamic client registration specification].
-
-WARNING: Note that default ACR values are used as the default level; however, they cannot be reliably used to enforce login with a particular level.
-For example, assume that you configure the `Default ACR Values` to level 2. Then by default, users will be required to authenticate with level 2.
-However, when the user explicitly attaches the parameter to the login request, such as `acr_values=1`, level 1 will be used. As a result, if the client
-really requires level 2, the client is encouraged to check the presence of the `acr` claim inside the ID Token and double-check that it contains the requested level 2.
-
-image:{project_images}/client-oidc-map-acr-to-loa.png[alt="ACR to LoA mapping"]
-
-For further details see <<_step-up-flow,Step-up Authentication>> and https://openid.net/specs/openid-connect-core-1_0.html#acrSemantics[the official OIDC specification].
\ No newline at end of file
diff --git a/server_admin/topics/clients/oidc/con-advanced-settings.adoc b/server_admin/topics/clients/oidc/con-advanced-settings.adoc
index 7aa3b859f6..e896d79817 100644
--- a/server_admin/topics/clients/oidc/con-advanced-settings.adoc
+++ b/server_admin/topics/clients/oidc/con-advanced-settings.adoc
@@ -68,13 +68,6 @@ This section exists for backward compatibility. Click the question mark icons f
[[_mtls-client-certificate-bound-tokens]]
*OAuth 2.0 Mutual TLS Certificate Bound Access Tokens Enabled*
-ifeval::["{kc_dist}" == "wildfly"]
-[NOTE]
-====
-To enable mutual TLS in {project_name}, see <<_enable-mtls-wildfly, Enable mutual SSL in WildFly>>.
-====
-endif::[]
-
Mutual TLS binds an access token and a refresh token together with a client certificate, which is exchanged during a TLS handshake. This binding prevents an attacker from using stolen tokens.
This type of token is a holder-of-key token. Unlike bearer tokens, the recipient of a holder-of-key token can verify if the sender of the token is legitimate.
diff --git a/server_admin/topics/clients/oidc/con-basic-settings-prod.adoc b/server_admin/topics/clients/oidc/con-basic-settings-prod.adoc
deleted file mode 100644
index 8e9729bd1e..0000000000
--- a/server_admin/topics/clients/oidc/con-basic-settings-prod.adoc
+++ /dev/null
@@ -1,92 +0,0 @@
-[id="con-basic-settings_{context}"]
-= Basic settings
-[role="_abstract"]
-
-On the *Settings* tab, you can make basic changes. For details on a specific field, click the question mark icon for that field. However, certain fields are described in detail in this section.
-
-*Client ID*:: The alpha-numeric ID string that is used in OIDC requests and in the {project_name} database to identify the client.
-
-*Name*:: The name for the client in a {project_name} UI screen. To localize
-the name, set up a replacement string value. For example, a string value such as $\{myapp}. See the link:{developerguide_link}[{developerguide_name}] for more information.
-
-*Description*:: The description of the client. This setting can also be localized.
-
-*Enabled*:: When turned off, the client cannot request authentication.
-
-*Always Display in Console*:: Always list this client in the Account Console even if the user does not have an active session.
-
-*Consent Required*:: When turned on, users see a consent page that they can use to grant access to that application. It will also display metadata so the user knows the exact information that the client can access.
-
-*Login theme*:: A theme to use for login, OTP, grant registration and forgotten password pages.
-
-*Client Protocol*:: The OpenID Connect protocol.
-
-[[_access-type]]
-*Access Type*:: The type of OIDC client.
-_Confidential_::
- For server-side clients that perform browser logins and require client secrets when making an Access Token Request. This setting should be used for server-side applications.
-
-_Public_::
- For client-side clients that perform browser logins. As it is not possible to ensure that secrets can be kept safe with client-side clients, it is important to restrict access by configuring correct redirect URIs.
-
-_Bearer-only_::
- The application allows only bearer token requests. When turned on, this application cannot participate in browser logins.
-
-*Standard Flow Enabled*:: When enabled, clients can use the OIDC xref:_oidc-auth-flows-authorization[Authorization Code Flow].
-
-*Implicit Flow Enabled*:: When enabled, clients can use the OIDC xref:_oidc-auth-flows-implicit[Implicit Flow].
-
-*Direct Access Grants Enabled*:: When enabled, clients can use the OIDC xref:_oidc-auth-flows-direct[Direct Access Grants].
-
-*OAuth 2.0 Device Authorization Grant Enabled*:: If this is on, clients are allowed to use the OIDC xref:con-oidc-auth-flows_server_administration_guide[Device Authorization Grant].
-
-*OpenID Connect Client Initiated Backchannel Authentication Grant Enabled*::
-If this is on, clients are allowed to use the OIDC xref:con-oidc-auth-flows_{context}[Client Initiated Backchannel Authentication Grant].
-
-*Root URL*:: If {project_name} uses any configured relative URLs, this value is prepended to them.
-
-*Valid Redirect URIs*:: Required field. Enter a URL pattern and click *+* to add it or *-* to remove an existing URL, and then click *Save*. You can use wildcards at the end of the URL pattern. For example, $$http://host.com/*$$.
-+
-Exclusive redirect URL patterns are typically more secure. See xref:unspecific-redirect-uris_{context}[Unspecific Redirect URIs] for more information.
-
-*Base URL*:: This URL is used when {project_name} needs to link to the client.
-
-[[_admin-url]]
-*Admin URL*:: Callback endpoint for a client. The server uses this URL to make callbacks like pushing revocation policies, performing backchannel logout, and other administrative operations. For {project_name} servlet adapters, this URL can be the root URL of the servlet application.
-For more information, see link:{adapterguide_link}[{adapterguide_name}].
-
-*Web Origins*:: Enter a URL pattern and click *+* to add it or *-* to remove an existing URL, and then click *Save*.
-+
-This option handles link:https://fetch.spec.whatwg.org/[Cross-Origin Resource Sharing (CORS)].
-If browser JavaScript attempts an AJAX HTTP request to a server whose domain is different from the one that the
-JavaScript code came from, the request must use CORS. The server must handle CORS requests; otherwise, the browser will not display or allow the request to be processed. This protocol protects against XSS, CSRF, and other JavaScript-based attacks.
-+
-Domain URLs listed here are embedded within the access token sent to the client application. The client application uses this information to decide whether to allow a CORS request to be invoked on it. Only {project_name} client adapters support this feature. See link:{adapterguide_link}[{adapterguide_name}] for more information.
-
-
-[[_front-channel-logout]]
-*Front Channel Logout*:: If *Front Channel Logout* is enabled, the application should be able to log out users through the front channel as per link:https://openid.net/specs/openid-connect-frontchannel-1_0.html[OpenID Connect Front-Channel Logout] specification. If enabled, you should also provide the `Front-Channel Logout URL`.
-
-*Front-Channel Logout URL*:: URL that will be used by {project_name} to send logout requests to clients through the front-channel.
-
-[[_back-channel-logout-url]]
-*Backchannel Logout URL*:: URL that will cause the client to log itself out when a logout request is sent to this realm (via end_session_endpoint). If omitted, no logout requests are sent to the client.
-
-*Backchannel Logout Session Required*::
-Specifies whether a session ID Claim is included in the Logout Token when the *Backchannel Logout URL* is used.
-
-*Backchannel logout revoke offline sessions*:: Specifies whether a revoke_offline_access event is included in the Logout Token when the Backchannel Logout URL is used. {project_name} will revoke offline sessions when receiving a Logout Token with this event.
-
-
-[NOTE]
-====
-After completing the fields on the *Settings* tab, you can use the other tabs to perform advanced configuration, such as configuring fine-grained authentication for administrators. See <<_fine_grain_permissions, Fine grain admin permissions>>. Also, see the following sections for other procedures.
-====
\ No newline at end of file
diff --git a/server_admin/topics/clients/oidc/con-confidential-client-credentials.adoc b/server_admin/topics/clients/oidc/con-confidential-client-credentials.adoc
index c6754b14d1..257480c6bf 100644
--- a/server_admin/topics/clients/oidc/con-confidential-client-credentials.adoc
+++ b/server_admin/topics/clients/oidc/con-confidential-client-credentials.adoc
@@ -86,10 +86,6 @@ The client secret will be used to sign the JWT by the client.
{project_name} will validate whether the client uses a proper X509 certificate during the TLS handshake.
-ifeval::["{kc_dist}" == "wildfly"]
-NOTE: This option requires mutual TLS in {project_name}. See <<_enable-mtls-wildfly, Enable mutual SSL in WildFly>>.
-endif::[]
-
.X509 certificate
image:{project_images}/x509-client-auth.png[]
diff --git a/server_admin/topics/clients/oidc/proc-creating-oidc-client-prod.adoc b/server_admin/topics/clients/oidc/proc-creating-oidc-client-prod.adoc
deleted file mode 100644
index 1c45c19b70..0000000000
--- a/server_admin/topics/clients/oidc/proc-creating-oidc-client-prod.adoc
+++ /dev/null
@@ -1,26 +0,0 @@
-[id="proc-creating-oidc-client_{context}"]
-= Creating an OpenID Connect Client
-[role="_abstract"]
-To protect an application that uses the OpenID Connect protocol, you create a client.
-
-.Procedure
-. Click *Clients* in the menu.
-
-. Click *Create* to go to the *Add Client* page.
-+
-.Add client
-image:{project_images}/add-client-oidc.png[Add Client]
-
-. Enter any name for *Client ID*.
-
-. Enter the base URL of your application in the *Root URL* field.
-. Click *Save*.
-+
-This action creates the client and brings you to the *Settings* tab.
-+
-.Client settings tab
-image:{project_images}/client-settings-oidc.png[Client Settings tab]
-
-[role="_additional-resources"]
-.Additional resources
-* For more information about the OIDC protocol, see xref:con-oidc_{context}[OpenID Connect].
diff --git a/server_admin/topics/clients/saml/proc-creating-saml-client-prod.adoc b/server_admin/topics/clients/saml/proc-creating-saml-client-prod.adoc
deleted file mode 100644
index a54fee268b..0000000000
--- a/server_admin/topics/clients/saml/proc-creating-saml-client-prod.adoc
+++ /dev/null
@@ -1,112 +0,0 @@
-
-[[_client-saml-configuration]]
-= Creating a SAML client
-[role="_abstract"]
-{project_name} supports <<_saml,SAML 2.0>> for registered applications.
-POST and Redirect bindings are supported. You can choose to require client signature validation. You can have the server sign and/or encrypt responses as well.
-
-.Procedure
-. Click *Clients* in the menu.
-. Click *Create* to go to the *Add Client* page.
-. Enter the *Client ID* of the client. This is often a URL and is the expected *issuer* value in SAML requests sent by the application.
-. Select *saml* in the *Client Protocol* drop-down box.
-+
-.Add client
-image:{project_images}/add-client-saml.png[]
-
-. Enter the *Client SAML Endpoint* URL. This URL is where you want the {project_name} server to send SAML requests and responses. Generally, applications have one URL for processing SAML requests. Multiple URLs can be set in the *Settings* tab of the client.
-
-. Click *Save*. This action creates the client and brings you to the *Settings* tab.
-+
-.Client settings
-image:{project_images}/client-settings-saml.png[]
-+
-The following list describes each setting:
-+
-*Client ID*:: The alpha-numeric ID string that is used in SAML requests and in the {project_name} database to identify the client. This value must match the issuer value sent with AuthNRequests. {project_name} pulls the issuer from the Authn SAML request and matches it to a client by this value.
-
-*Name*:: The name for the client in the {project_name} UI screen. To localize
-the name, set up a replacement string value. For example, a string value such as $\{myapp}. See the link:{developerguide_link}[{developerguide_name}] for more information.
-
-*Description*:: The description of the client. This setting can also be localized.
-
-*Enabled*:: When set to OFF, the client cannot request authentication.
-
-*Consent Required*:: When set to ON, users see a consent page that grants access to that application. The page also displays the metadata of the information that the client can access. If you have ever done a social login to Facebook, you often see a similar page. {project_name} provides the same functionality.
-
-*Include AuthnStatement*:: SAML login responses may specify the authentication method used, such as password, as well as timestamps of the login and the session expiration.
-*Include AuthnStatement* is enabled by default, so that the *AuthnStatement* element will be included in login responses. Setting this to OFF prevents clients from determining the maximum session length, which can create client sessions that do not expire.
-
-*Sign Documents*:: When set to ON, {project_name} signs the document using the realm's private key.
-
-*Optimize REDIRECT signing key lookup*:: When set to ON, the SAML protocol messages include the {project_name} native extension. This extension contains a hint with the signing key ID. The SP uses the extension for signature validation instead of attempting to validate the signature using keys.
-+
-This option applies to REDIRECT bindings where the signature is transferred in query parameters and this information is not found in the signature information. This is contrary to POST binding messages where the key ID is always included in the document signature.
-+
-This option is used when {project_name} server and adapter provide the IDP and SP. This option is only relevant when *Sign Documents* is set to ON.
-
-*Sign Assertions*:: The assertion is signed and embedded in the SAML XML Auth response.
-
-*Signature Algorithm*:: The algorithm used in signing SAML documents.
-
-*SAML Signature Key Name*:: Signed SAML documents sent using POST binding contain the identification of the signing key in the *KeyName* element. The *SAML Signature Key Name* option controls the contents of the *KeyName* element.
-+
---
-* *KEY_ID* The *KeyName* contains the key ID. This option is the default option.
-* *CERT_SUBJECT* The *KeyName* contains the subject from the certificate corresponding to the realm key. This option is expected by Microsoft Active Directory Federation Services.
-* *NONE* The *KeyName* hint is completely omitted from the SAML message.
---
-+
-*Canonicalization Method*:: The canonicalization method for XML signatures.
-
-*Encrypt Assertions*:: Encrypts the assertions in SAML documents with the realm's private key. The AES algorithm uses a key size of 128 bits.
-
-*Client Signature Required*:: If *Client Signature Required* is enabled, documents coming from a client are expected to be signed. {project_name} will validate this signature using the client public key or cert set up in the `Keys` tab.
-
-*Force POST Binding*:: By default, {project_name} responds using the initial SAML binding of the original request. By enabling *Force POST Binding*, {project_name} responds using the SAML POST binding even if the original request used the redirect binding.
-
-*Front Channel Logout*:: If *Front Channel Logout* is enabled, the application requires a browser redirect to perform a logout. For example, the application may require a cookie to be reset which could only be done via a redirect. If *Front Channel Logout* is disabled, {project_name} invokes a background SAML request to log out of the application.
-
-*Force Name ID Format*:: If a request has a name ID policy, {project_name} ignores it and uses the value configured in the Admin Console under *Name ID Format*.
-
-*Allow ECP Flow*:: If true, this application is allowed to use SAML ECP profile for authentication.
-
-*Name ID Format*:: The Name ID Format for the subject. This format is used if no name ID policy is specified in a request, or if the Force Name ID Format attribute is set to ON.
-
-*Root URL*:: When {project_name} uses a configured relative URL, this value is prepended to the URL.
-
-*Valid Redirect URIs*:: Enter a URL pattern and click the + sign to add. Click the - sign to remove. Click *Save* to save these changes.
-Wildcard values are allowed only at the end of a URL. For example, $$http://host.com/*$$.
-This field is used when the exact SAML endpoints are not registered and {project_name} pulls the Assertion Consumer URL from a request.
-
-*Base URL*:: If {project_name} needs to link to a client, this URL is used.
-
-*Logo URL*:: URL that references a logo for the Client application.
-
-*Policy URL*:: URL that the Relying Party Client provides to the End-User to read about how the profile data will be used.
-
-*Terms of Service URL*:: URL that the Relying Party Client provides to the End-User to read about the Relying Party's terms of service.
-
-*Master SAML Processing URL*:: This URL is used for all SAML requests and the response is directed to the SP. It is used as the Assertion Consumer Service URL and the Single Logout Service URL.
-+
-If a login request contains the Assertion Consumer Service URL, that URL takes precedence. This URL must be validated by a registered Valid Redirect URI pattern.
-
-*Assertion Consumer Service POST Binding URL*:: POST Binding URL for the Assertion Consumer Service.
-
-*Assertion Consumer Service Redirect Binding URL*:: Redirect Binding URL for the Assertion Consumer Service.
-
-*Logout Service POST Binding URL*:: POST Binding URL for the Logout Service.
-
-*Logout Service Redirect Binding URL*:: Redirect Binding URL for the Logout Service.
-
-*Logout Service Artifact Binding URL*:: _Artifact_ Binding URL for the Logout Service. When set together with the `Force Artifact Binding` option, _Artifact_ binding is forced for both login and logout flows. _Artifact_ binding is not used for logout unless this property is set.
-
-*Artifact Binding URL*:: URL to send the HTTP artifact messages to.
-
-*Artifact Resolution Service*:: URL of the client SOAP endpoint to which `ArtifactResolve` messages are sent.
diff --git a/server_admin/topics/events.adoc b/server_admin/topics/events.adoc
index 0dd0fce5e2..9b602c63f4 100644
--- a/server_admin/topics/events.adoc
+++ b/server_admin/topics/events.adoc
@@ -4,13 +4,6 @@
[role="_abstract"]
{project_name} includes a suite of auditing capabilities. You can record every login and administrator action and review those actions in the Admin Console. {project_name} also includes a Listener SPI that listens for events and can trigger actions. Examples of built-in listeners include log files and sending emails if an event occurs.
-ifeval::[{project_product}==true]
-include::events/login-prod.adoc[]
-include::events/admin-prod.adoc[]
-endif::[]
-
-ifeval::[{project_community}==true]
include::events/login.adoc[]
include::events/admin.adoc[]
-endif::[]
diff --git a/server_admin/topics/events/admin-prod.adoc b/server_admin/topics/events/admin-prod.adoc
deleted file mode 100644
index 169f672c13..0000000000
--- a/server_admin/topics/events/admin-prod.adoc
+++ /dev/null
@@ -1,33 +0,0 @@
-
-=== Auditing admin events
-
-You can record all actions that are performed by an administrator in the Admin Console. The Admin Console performs administrative actions by invoking the {project_name} REST interface and {project_name} audits these REST invocations. You can view the resulting events in the Admin Console.
-
-To enable auditing of Admin actions:
-
-.Procedure
-. Click *Events* in the menu.
-. Click the *Config* tab.
-. Toggle *Save Events* to *ON* in the *Admin Events Settings* section. {project_name} displays the *Include Representation* switch.
-+
-.Admin event configuration
-image:{project_images}/admin-events-settings.png[Admin Event Configuration]
-+
-. Toggle *Include Representation* to *ON*.
-+
-The `Include Representation` switch includes JSON documents sent through the admin REST API so you can view the administrator's actions. To clear the database of stored actions, click *Clear admin events*.
-
-. To view the admin events, click the *Admin Events* tab.
-+
-.Admin events
-image:{project_images}/admin-events.png[Admin Events]
-
-. If the `Details` column has a *Representation* button, click the *Representation* button to view the JSON {project_name} sent with the operation.
-+
-.Admin representation
-image:{project_images}/admin-events-representation.png[Admin Representation]
-
-. Click `Filter` to view specific events.
-+
-.Admin event filter
-image:{project_images}/admin-events-filter.png[Admin Event Filter]
diff --git a/server_admin/topics/events/login-prod.adoc b/server_admin/topics/events/login-prod.adoc
deleted file mode 100644
index a1217f2ee3..0000000000
--- a/server_admin/topics/events/login-prod.adoc
+++ /dev/null
@@ -1,213 +0,0 @@
-
-=== Auditing login events
-
-You can record and view every event that affects users. {project_name} triggers login events for actions such as successful user login, a user entering an incorrect password, or a user account being updated. By default, {project_name} does not store or display events in the Admin Console. Only the error events are logged to the Admin Console and the server's log file.
-
-To start saving events, enable storage.
-
-.Procedure
-. Click *Events* in the menu.
-. Click the *Config* tab.
-. Toggle *Save Events* to *ON*.
-+
-.Save Events
-image:{project_images}/login-events-settings.png[Save Events]
-
-. Specify the length of time to store events in the *Expiration* field.
-
-. Click *Add saved types* if you want to store other types of events.
-. Click *Save*.
-
-You can click the *Clear events* button to delete all events.
-
-You can now view login events.
-
-.Procedure
-
-. Click the *Login Events* tab.
-+
-.Login Events
-image:{project_images}/login-events.png[Login Events]
-
-. Filter events using the *Filter* button.
-+
-.Login Events Filter
-image:{project_images}/login-events-filter.png[Login Events Filter]
-+
-In this example, we filter only `Login` events.
-
-. Click *Update* to run the filter.
-
-==== Event types
-
-*Login events:*
-
-[cols="2",options="header"]
-|===
-|Event |Description
-|Login
-|A user logs in.
-
-|Register
-|A user registers.
-
-|Logout
-|A user logs out.
-
-|Code to Token
-|An application, or client, exchanges a code for a token.
-
-|Refresh Token
-|An application, or client, refreshes a token.
-
-|===
-
-*Account events:*
-
-[cols="2",options="header"]
-|===
-|Event |Description
-|Social Link
-|A user account links to a social media provider.
-
-|Remove Social Link
-|The link from a social media account to a user account is removed.
-
-|Update Email
-|An email address for an account changes.
-
-|Update Profile
-|A profile for an account changes.
-
-|Send Password Reset
-|{project_name} sends a password reset email.
-
-|Update Password
-|The password for an account changes.
-
-|Update TOTP
-|The Time-based One-time Password (TOTP) settings for an account change.
-
-|Remove TOTP
-|{project_name} removes TOTP from an account.
-
-|Send Verify Email
-|{project_name} sends an email verification email.
-
-|Verify Email
-|{project_name} verifies the email address for an account.
-
-|===
-
-Each event has a corresponding error event.
-
-==== Event listener
-
-Event listeners listen for events and perform actions based on those events. {project_name} includes two built-in listeners: the Logging Event Listener and the Email Event Listener.
-
-===== The Logging Event Listener
-When the Logging Event Listener is enabled, this listener writes to a log file when an error event occurs.
-
-An example log message from a Logging Event Listener:
-
-----
-11:36:09,965 WARN [org.keycloak.events] (default task-51) type=LOGIN_ERROR, realmId=master,
- clientId=myapp,
- userId=19aeb848-96fc-44f6-b0a3-59a17570d374, ipAddress=127.0.0.1,
- error=invalid_user_credentials, auth_method=openid-connect, auth_type=code,
- redirect_uri=http://localhost:8180/myapp,
- code_id=b669da14-cdbb-41d0-b055-0810a0334607, username=admin
-----
-
-You can use the Logging Event Listener to protect against hacker bot attacks:
-
-. Parse the log file for the `LOGIN_ERROR` event.
-. Extract the IP Address of the failed login event.
-. Send the IP address to an intrusion prevention software framework tool.
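-
-A minimal sketch of the first two steps, assuming the log format shown above (the log path is illustrative):
-
-[source,bash]
-----
-# Scan the server log for failed logins and print the unique client IPs.
-grep "type=LOGIN_ERROR" standalone/log/server.log \
-  | sed -n 's/.*ipAddress=\([^,]*\).*/\1/p' \
-  | sort -u
-----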
-
-The Logging Event Listener logs events to the `org.keycloak.events` log category. By default, {project_name} does not include debug log events in server logs.
-
-To include debug log events in server logs:
-
-. Edit the `standalone.xml` file.
-. Change the log level used by the Logging Event listener.
-
-Alternately, you can configure the log level for `org.keycloak.events`.
-
-For example, to change the log level add the following:
-
-[source,xml]
-----
-<subsystem xmlns="urn:jboss:domain:logging:...">
-    ...
-    <logger category="org.keycloak.events">
-        <level name="DEBUG"/>
-    </logger>
-</subsystem>
-----
-
-To change the log level used by the Logging Event listener, add the following:
-
-[source,xml]
-----
-<subsystem xmlns="urn:jboss:domain:keycloak-server:...">
-    ...
-    <spi name="eventsListener">
-        <provider name="jboss-logging" enabled="true">
-            <properties>
-                <property name="success-level" value="info"/>
-                <property name="error-level" value="error"/>
-            </properties>
-        </provider>
-    </spi>
-</subsystem>
-----
-
-The valid values for log levels are `debug`, `info`, `warn`, `error`, and `fatal`.
-
-===== The Email Event Listener
-
-The Email Event Listener sends an email to the user's account when an event occurs and supports the following events:
-
-* Login Error.
-* Update Password.
-* Update Time-based One-time Password (TOTP).
-* Remove Time-based One-time Password (TOTP).
-
-To enable the Email Listener:
-
-.Procedure
-
-. Click *Events* from the menu.
-. Click the *Config* tab.
-. Click the *Event Listeners* field.
-. Select `email`.
-
-You can exclude events by editing the `standalone.xml`, `standalone-ha.xml`, or `domain.xml` configuration files included in your distribution. For example:
-
-[source,xml]
-----
-<spi name="eventsListener">
-    <provider name="email" enabled="true">
-        <properties>
-            <property name="exclude-events" value="[&quot;UPDATE_TOTP&quot;,&quot;REMOVE_TOTP&quot;]"/>
-        </properties>
-    </provider>
-</spi>
-----
-
-You can set a maximum length of the Event detail in the database by editing the `standalone.xml`, `standalone-ha.xml`, or `domain.xml` configuration files. This setting is useful if a field (for example, redirect_uri) is long. For example:
-
-[source,xml]
-----
-<spi name="eventsStore">
-    <provider name="jpa" enabled="true">
-        <properties>
-            <property name="max-detail-length" value="1000"/>
-        </properties>
-    </provider>
-</spi>
-----
-
-See the link:{installguide_link}[{installguide_name}] for more details on the location of the `standalone.xml`, `standalone-ha.xml`, or `domain.xml` files.
diff --git a/server_admin/topics/events/login.adoc b/server_admin/topics/events/login.adoc
index 1dd504a2e9..44e233adfc 100644
--- a/server_admin/topics/events/login.adoc
+++ b/server_admin/topics/events/login.adoc
@@ -13,14 +13,14 @@ Use this procedure to start auditing user events.
. Toggle *Save events* to *ON*.
+
.User events settings
-image:{project_images}/user-events-settings.png[User events settings]
+image:images/user-events-settings.png[User events settings]
. Specify the length of time to store events in the *Expiration* field.
. Click *Add saved types* to see other events you can save.
+
.Add types
-image:{project_images}/add-event-types.png[Add types]
+image:images/add-event-types.png[Add types]
. Click *Add*.
@@ -33,12 +33,12 @@ You can now view events.
. Click the *Events* tab in the menu.
+
.User events
-image:{project_images}/user-events.png[Login Events]
+image:images/user-events.png[Login Events]
. To filter events, click *Search user event*.
+
.Search user event
-image:{project_images}/search-user-event.png[Search user event]
+image:images/search-user-event.png[Search user event]
==== Event types
@@ -162,7 +162,7 @@ To enable the Email Listener:
. Select `email`.
+
.Event listeners
-image:{project_images}/event-listeners.png[Event listeners]
+image:images/event-listeners.png[Event listeners]
You can exclude events by using the `--spi-events-listener-email-exclude-events` argument. For example:
diff --git a/server_admin/topics/realms/ssl.adoc b/server_admin/topics/realms/ssl.adoc
index 91a0bcdac8..d3fc207ba3 100644
--- a/server_admin/topics/realms/ssl.adoc
+++ b/server_admin/topics/realms/ssl.adoc
@@ -5,7 +5,6 @@
Each realm has an associated SSL Mode, which defines the SSL/HTTPS requirements for interacting with the realm.
Browsers and applications that interact with the realm honor the SSL/HTTPS requirements defined by the SSL Mode or they cannot interact with the server.
-WARNING: {project_name} generates a self-signed certificate the first time it runs. Please note that self-signed certificates are not secure, and should only be used for testing purposes. It is highly recommended that you install a CA-signed certificate on the {project_name} server itself or on a reverse proxy in front of the {project_name} server. See the link:{installguide_link}[{installguide_name}].
.Procedure
diff --git a/server_admin/topics/sessions/preloading.adoc b/server_admin/topics/sessions/preloading.adoc
index d3a9af5488..9deed1a5e7 100644
--- a/server_admin/topics/sessions/preloading.adoc
+++ b/server_admin/topics/sessions/preloading.adoc
@@ -12,35 +12,7 @@ It can be achieved by setting `preloadOfflineSessionsFromDatabase` property in t
The following example shows how to configure offline sessions preloading.
-ifeval::["{kc_dist}" == "quarkus"]
[source,bash]
----
bin/kc.[sh|bat] start --spi-user-sessions-infinispan-preload-offline-sessions-from-database=true
----
-endif::[]
-
-ifeval::["{kc_dist}" == "wildfly"]
-[source,xml]
-----
-<subsystem xmlns="urn:jboss:domain:keycloak-server:...">
-    ...
-    <spi name="userSessions">
-        <default-provider>infinispan</default-provider>
-        <provider name="infinispan" enabled="true">
-            <properties>
-                <property name="preloadOfflineSessionsFromDatabase" value="true"/>
-            </properties>
-        </provider>
-    </spi>
-    ...
-</subsystem>
-----
-
-Equivalent configuration using CLI commands:
-
-[source,bash]
-----
-/subsystem=keycloak-server/spi=userSessions:add(default-provider=infinispan)
-/subsystem=keycloak-server/spi=userSessions/provider=infinispan:add(properties={preloadOfflineSessionsFromDatabase => "true"},enabled=true)
-----
-endif::[]
diff --git a/server_admin/topics/sso-protocols/con-oidc-auth-flows.adoc b/server_admin/topics/sso-protocols/con-oidc-auth-flows.adoc
index b27a34df52..3cc0ca9e88 100644
--- a/server_admin/topics/sso-protocols/con-oidc-auth-flows.adoc
+++ b/server_admin/topics/sso-protocols/con-oidc-auth-flows.adoc
@@ -131,37 +131,17 @@ The CIBA grant uses the following two providers.
{project_name} has both default providers. However, the administrator needs to set up the Authentication Channel Provider, as follows:
-ifeval::["{kc_dist}" == "quarkus"]
[source,bash,subs="attributes+"]
----
kc.[sh|bat] start --spi-ciba-auth-channel-ciba-http-auth-channel-http-authentication-channel-uri=https://backend.internal.example.com{kc_base_path}
----
-endif::[]
-ifeval::["{kc_dist}" == "wildfly"]
-[source,xml,subs="attributes+"]
-----
-<spi name="ciba-auth-channel">
-    <default-provider>ciba-http-auth-channel</default-provider>
-    <provider name="ciba-http-auth-channel" enabled="true">
-        <properties>
-            <property name="httpAuthenticationChannelUri" value="https://backend.internal.example.com{kc_base_path}"/>
-        </properties>
-    </provider>
-</spi>
-----
-endif::[]
The configurable items and their description follow.
|===
|Configuration|Description
-ifeval::["{kc_dist}" == "quarkus"]
|http-authentication-channel-uri
-endif::[]
-ifeval::["{kc_dist}" == "wildfly"]
-|httpAuthenticationChannelUri
-endif::[]
|Specifies the URI of the entity that actually authenticates the user via AD (Authentication Device).
|===
diff --git a/server_admin/topics/sso-protocols/con-sso-docker.adoc b/server_admin/topics/sso-protocols/con-sso-docker.adoc
index 050c9bb923..885a75a41f 100644
--- a/server_admin/topics/sso-protocols/con-sso-docker.adoc
+++ b/server_admin/topics/sso-protocols/con-sso-docker.adoc
@@ -6,7 +6,7 @@
[NOTE]
====
-Docker authentication is disabled by default. To enable docker authentication, see link:{installguide_profile_link}[{installguide_profile_name}].
+Docker authentication is disabled by default. To enable Docker authentication, see the https://www.keycloak.org/server/features[Enabling and disabling features] guide.
====
[role="_abstract"]
link:https://docs.docker.com/registry/spec/auth/[Docker Registry V2 Authentication] is a protocol, similar to OIDC, that authenticates users against Docker registries. {project_name}'s implementation of this protocol lets Docker clients use a {project_name} authentication server to authenticate against a registry. This protocol uses standard token and signature mechanisms but it does deviate from a true OIDC implementation. It deviates by using a very specific JSON format for requests and responses as well as mapping repository names and permissions to the OAuth scope mechanism.
diff --git a/server_admin/topics/sso-protocols/docker.adoc b/server_admin/topics/sso-protocols/docker.adoc
index 9b661175eb..1f16df23fa 100644
--- a/server_admin/topics/sso-protocols/docker.adoc
+++ b/server_admin/topics/sso-protocols/docker.adoc
@@ -2,7 +2,7 @@
=== Docker Registry v2 Authentication
-NOTE: Docker authentication is disabled by default. To enable see link:{installguide_profile_link}[{installguide_profile_name}].
+NOTE: Docker authentication is disabled by default. To enable it, see the https://www.keycloak.org/server/features[Enabling and disabling features] guide.
link:https://docs.docker.com/registry/spec/auth/[Docker Registry V2 Authentication] is an OIDC-Like protocol used to authenticate users against a Docker registry. {project_name}'s implementation of this protocol allows for a {project_name} authentication server to be used by a Docker client to authenticate against a registry. While this protocol uses fairly standard token and signature mechanisms, it has a few wrinkles that prevent it from being treated as a true OIDC implementation. The largest deviations include a very specific JSON format for requests and responses as well as the ability to understand how to map repository names and permissions to the OAuth scope mechanism.
diff --git a/server_admin/topics/sso-protocols/oidc.adoc b/server_admin/topics/sso-protocols/oidc.adoc
index 7387231724..743e93ceee 100644
--- a/server_admin/topics/sso-protocols/oidc.adoc
+++ b/server_admin/topics/sso-protocols/oidc.adoc
@@ -155,25 +155,10 @@ The CIBA grant uses the following two providers.
{project_name} has both default providers. However, the administrator needs to set up the Authentication Channel Provider, as follows:
-ifeval::["{kc_dist}" == "quarkus"]
[source,bash,subs="attributes+"]
----
kc.[sh|bat] start --spi-ciba-auth-channel-ciba-http-auth-channel-http-authentication-channel-uri=https://backend.internal.example.com{kc_base_path}
----
-endif::[]
-ifeval::["{kc_dist}" == "wildfly"]
-[source,xml,subs="attributes+"]
-----
-<spi name="ciba-auth-channel">
-    <default-provider>ciba-http-auth-channel</default-provider>
-    <provider name="ciba-http-auth-channel" enabled="true">
-        <properties>
-            <property name="httpAuthenticationChannelUri" value="https://backend.internal.example.com{kc_base_path}"/>
-        </properties>
-    </provider>
-</spi>
-----
-endif::[]
The configurable items and their description follow.
diff --git a/server_admin/topics/threat/admin.adoc b/server_admin/topics/threat/admin.adoc
index 95ee87b20c..bc5aa6b128 100644
--- a/server_admin/topics/threat/admin.adoc
+++ b/server_admin/topics/threat/admin.adoc
@@ -1,90 +1,5 @@
=== Admin endpoints and Admin Console
-{project_name} exposes the administrative REST API and the web console on the same port as non-administrative usage by default. Do not expose administrative endpoints externally if external access is not necessary. If you need to expose administrative endpoints externally, you can expose them directly in {project_name} or use a proxy.
-
-To expose endpoints by using a proxy, consult the documentation for the proxy. You need to control access to requests to the `{kc_admins_path}` endpoint.
-
-Two options are available in {project_name} to expose endpoints directly: IP restriction and separate ports.
-
-When the Admin Console becomes inaccessible on the frontend URL of {project_name}, configure a fixed admin URL in the default hostname provider.
-
-==== IP restriction
-
-You can restrict access to `{kc_admins_path}` to only specific IP addresses. For example, restrict access to `{kc_admins_path}` to IP addresses in the range `10.0.0.1` to `10.0.0.255`.
-
-[source,xml,subs="attributes+"]
-----
-<subsystem xmlns="urn:jboss:domain:undertow:...">
-    ...
-    <server name="default-server">
-        ...
-        <host name="default-host" alias="localhost">
-            ...
-            <filter-ref name="ipAccess"/>
-        </host>
-    </server>
-    <filters>
-        <expression-filter name="ipAccess" expression="path-prefix[{kc_admins_path}] -> ip-access-control(acl={'10.0.0.0/24 allow'})"/>
-    </filters>
-    ...
-</subsystem>
-----
-
-You can also restrict access to specific IP addresses by using these CLI commands:
-
-[source,bash,subs="attributes+"]
-----
-/subsystem=undertow/configuration=filter/expression-filter=ipAccess:add(,expression="path-prefix[{kc_admins_path}] -> ip-access-control(acl={'10.0.0.0/24 allow'})")
-/subsystem=undertow/server=default-server/host=default-host/filter-ref=ipAccess:add()
-----
-
-[NOTE]
-====
-For IP restriction using a proxy, configure the proxy to ensure {project_name} receives the client IP address and not the proxy IP address.
-====
-
-==== Port restriction
-
-You can expose `{kc_admins_path}` on a different, unexposed port. For example, expose `{kc_admins_path}` on port `8444` and prevent access to the default port `8443`.
-
-[source,xml,subs="attributes+"]
-----
-<subsystem xmlns="urn:jboss:domain:undertow:...">
-    ...
-    <server name="default-server">
-        ...
-        <https-listener name="https-admin" socket-binding="https-admin" security-realm="ApplicationRealm" enable-http2="true"/>
-        ...
-        <host name="default-host" alias="localhost">
-            ...
-            <filter-ref name="portAccess"/>
-        </host>
-    </server>
-    <filters>
-        <expression-filter name="portAccess" expression="path-prefix('{kc_admins_path}') and not equals(%p, 8444) -> response-code(403)"/>
-    </filters>
-    ...
-</subsystem>
-
-...
-
-<socket-binding-group name="standard-sockets" default-interface="public">
-    ...
-    <socket-binding name="https-admin" port="8444"/>
-    ...
-</socket-binding-group>
-----
-
-You can expose `{kc_admins_path}` on port `8444` and prevent access to the default port `8443` by using CLI commands:
-
-[source,bash,subs="attributes+"]
-----
-/socket-binding-group=standard-sockets/socket-binding=https-admin/:add(port=8444)
-
-/subsystem=undertow/server=default-server/https-listener=https-admin:add(socket-binding=https-admin, security-realm=ApplicationRealm, enable-http2=true)
-
-/subsystem=undertow/configuration=filter/expression-filter=portAccess:add(,expression="path-prefix('{kc_admins_path}') and not equals(%p, 8444) -> response-code(403)")
-/subsystem=undertow/server=default-server/host=default-host/filter-ref=portAccess:add()
-----
+{project_name} exposes the administrative REST API and the web console on the same port as non-administrative usage.
+Do not expose administrative endpoints externally if external access is not necessary.
diff --git a/server_admin/topics/threat/auth-sessions-limit.adoc b/server_admin/topics/threat/auth-sessions-limit.adoc
index 1549e24aa9..988d18ced7 100644
--- a/server_admin/topics/threat/auth-sessions-limit.adoc
+++ b/server_admin/topics/threat/auth-sessions-limit.adoc
@@ -18,7 +18,7 @@ under control, using an advanced firewall control to limit ingress network traff
Higher memory usage may occur for deployments where there are many active `RootAuthenticationSessionEntity` with a lot of `AuthenticationSessionEntity`.
-If the load balancer does not support or is not configured for link:{installguide_stickysessions_link}[session stickiness], the load over network in a cluster can
+If the load balancer does not support or is not configured for session stickiness, the load over network in a cluster can
increase significantly. The reason for this load is that each request that lands on a node that does not own the appropriate authentication session needs to retrieve
and update the authentication session record in the owner node which involves a separate network transmission for both the retrieval and the storage.
@@ -26,7 +26,6 @@ The maximum number of `AuthenticationSessionEntity` per `RootAuthenticationSessi
The following example shows how to limit the number of active `AuthenticationSessionEntity` per `RootAuthenticationSessionEntity` to 100.
-ifeval::["{kc_dist}" == "quarkus"]
[source,bash]
----
bin/kc.[sh|bat] start --spi-authentication-sessions-infinispan-auth-sessions-limit=100
@@ -38,31 +37,3 @@ The equivalent command for the new map storage:
----
bin/kc.[sh|bat] start --spi-authentication-sessions-map-auth-sessions-limit=100
----
-
-endif::[]
-
-ifeval::["{kc_dist}" == "wildfly"]
-[source,xml]
-----
-<subsystem xmlns="urn:jboss:domain:keycloak-server:...">
-    ...
-    <spi name="authenticationSessions">
-        <default-provider>infinispan</default-provider>
-        <provider name="infinispan" enabled="true">
-            <properties>
-                <property name="authSessionsLimit" value="100"/>
-            </properties>
-        </provider>
-    </spi>
-    ...
-</subsystem>
-----
-
-Equivalent configuration using CLI commands:
-
-[source,bash]
-----
-/subsystem=keycloak-server/spi=authenticationSessions:add(default-provider=infinispan)
-/subsystem=keycloak-server/spi=authenticationSessions/provider=infinispan:add(properties={authSessionsLimit => "100"},enabled=true)
-----
-endif::[]
diff --git a/server_admin/topics/threat/read-only-attributes.adoc b/server_admin/topics/threat/read-only-attributes.adoc
index 3820a9502b..1dcefbe622 100644
--- a/server_admin/topics/threat/read-only-attributes.adoc
+++ b/server_admin/topics/threat/read-only-attributes.adoc
@@ -29,41 +29,12 @@ This is the list of the read-only attributes, which are used internally by the {
System administrators have a way to add additional attributes to this list. The configuration is currently available at the server level.
-ifeval::["{kc_dist}" == "quarkus"]
You can add this configuration by using the `spi-user-profile-legacy-user-profile-read-only-attributes` and `spi-user-profile-legacy-user-profile-admin-read-only-attributes` options. For example:
[source,bash,options="nowrap"]
----
kc.[sh|bat] start --spi-user-profile-legacy-user-profile-read-only-attributes=foo,bar*
----
-endif::[]
-
-ifeval::["{kc_dist}" == "wildfly"]
-You can add this configuration to your `standalone(-*).xml` files to the configuration of the {project_name} server subsystem:
-
-[options="nowrap"]
-----
-
-
-
-
-
-
-
-
-----
-
-The same can be configured using the JBoss CLI with the following commands:
-
-
-[options="nowrap"]
-----
-/subsystem=keycloak-server/spi=userProfile/:add
-/subsystem=keycloak-server/spi=userProfile/provider=legacy-user-profile/:add(properties={},enabled=true)
-/subsystem=keycloak-server/spi=userProfile/provider=legacy-user-profile/:map-put(name=properties,key=read-only-attributes,value=[foo,bar*])
-/subsystem=keycloak-server/spi=userProfile/provider=legacy-user-profile/:map-put(name=properties,key=admin-read-only-attributes,value=[foo])
-----
-endif::[]
For this example, users and administrators would not be able to update the attribute `foo`. Users would also not be able to edit any attribute starting with `bar`,
such as `bar` or `barrier`. The configuration is case insensitive, so attributes like `FOO` or `BarRier` will be denied as well. The wildcard character `\*` is supported
diff --git a/server_admin/topics/user-federation/ldap.adoc b/server_admin/topics/user-federation/ldap.adoc
index 69b14f004f..579004f884 100644
--- a/server_admin/topics/user-federation/ldap.adoc
+++ b/server_admin/topics/user-federation/ldap.adoc
@@ -80,7 +80,7 @@ Hover the mouse pointer over the tooltips in the Admin Console to see more detai
When you configure a secure connection URL to your LDAP store (for example, `ldaps://myhost.com:636`), {project_name} uses SSL to communicate with the LDAP server. Configure a truststore on the {project_name} server side so that {project_name} can trust the SSL connection to LDAP.
-Configure the global truststore for {project_name} with the Truststore SPI. For more information about configuring the global truststore, see the link:{installguide_link}[{installguide_name}]. If you do not configure the Truststore SPI, the truststore falls back to the default mechanism provided by Java, which can be the file supplied by the `javax.net.ssl.trustStore` system property or the cacerts file from the JDK if the system property is unset.
+Configure the global truststore for {project_name} with the Truststore SPI. For more information about configuring the global truststore, see the link:https://www.keycloak.org/server/keycloak-truststore[Configuring a Truststore] guide. If you do not configure the Truststore SPI, the truststore falls back to the default mechanism provided by Java, which can be the file supplied by the `javax.net.ssl.trustStore` system property or the cacerts file from the JDK if the system property is unset.
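
As a minimal sketch of that fallback, the standard JVM system properties can be supplied when starting the server (the `JAVA_OPTS_APPEND` variable, path, and password below are illustrative, not required values):

[source,bash]
----
# Point the default Java truststore at a custom file when no
# Truststore SPI is configured; path and password are examples only.
export JAVA_OPTS_APPEND="-Djavax.net.ssl.trustStore=/path/to/truststore.jks -Djavax.net.ssl.trustStorePassword=changeit"
bin/kc.sh start
----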
The `Use Truststore SPI` configuration property, in the LDAP federation provider configuration, controls the truststore SPI. By default, {project_name} sets the property to `Only for ldaps`, which is adequate for most deployments. {project_name} uses the Truststore SPI if the connection URL to LDAP starts with `ldaps` only.
diff --git a/server_admin/topics/users/user-profile.adoc b/server_admin/topics/users/user-profile.adoc
index a8ec7c7bcd..3302a1ebc3 100644
--- a/server_admin/topics/users/user-profile.adoc
+++ b/server_admin/topics/users/user-profile.adoc
@@ -1,4 +1,4 @@
-
+[[user-profile]]
= Defining a user profile
In {project_name} a user is associated with a set of attributes. These attributes are used to better describe and identify users within {project_name} as well as to pass over additional information about them to applications.
diff --git a/server_admin/topics/vault.adoc b/server_admin/topics/vault.adoc
index 0b95fd6dcd..d382eb07cb 100644
--- a/server_admin/topics/vault.adoc
+++ b/server_admin/topics/vault.adoc
@@ -24,127 +24,14 @@ In the <<_ldap,LDAP settings>> of LDAP-based user federation.
OIDC identity provider secret::
In the _Client Secret_ inside the identity provider <<_identity_broker_oidc,OpenID Connect Config>>.
-ifeval::["{kc_dist}" == "wildfly"]
-To use a vault, register a vault provider in {project_name}. You can use the providers described <<_providers, here>> or implement your own provider. See the link:{developerguide_link}[{developerguide_name}] for more information.
-
-[NOTE]
-====
-{project_name} permits a maximum of one active vault provider per {project_name} instance at a time. Configure the vault provider in each instance within the cluster consistently.
-====
-endif::[]
-
-ifeval::["{kc_dist}" == "wildfly"]
-[[_providers]]
-
-=== Kubernetes / OpenShift files plain-text vault provider
-
-{project_name} supports vault implementation for https://kubernetes.io/docs/concepts/configuration/secret/[Kubernetes secrets]. You can mount Kubernetes secrets as data volumes, and they appear as a directory with a flat-file structure. {project_name} represents each secret as a file with the file's name as the secret name and the file's contents as the secret value.
-
-You must name the files within this directory as the secret name prefixed by the realm name and an underscore. Double all underscores within the secret name or the realm name in the file name. For example, for a field within a realm named `sso_realm`, a reference to a secret with the name `secret-name` would be written as `${vault.secret-name}`, and the file name looked up would be `sso+++__+++realm+++_+++secret-name`. Note the doubled underscore in the realm name.
-
-To use this type of secret store, you must declare the `files-plaintext` vault provider in the standalone.xml file and set its parameter for the directory containing the mounted volume. This example shows the `files-plaintext` provider with the directory where vault files are searched set to `standalone/configuration/vault` relative to the {project_name} base directory:
-
-[source, xml]
-----
-<spi name="vault">
-    <default-provider>files-plaintext</default-provider>
-    <provider name="files-plaintext" enabled="true">
-        <properties>
-            <property name="dir" value="${jboss.home.dir}/standalone/configuration/vault"/>
-        </properties>
-    </provider>
-</spi>
-----
-
-Here is the equivalent configuration using CLI commands:
-
-[source,bash]
-----
-/subsystem=keycloak-server/spi=vault/:add
-/subsystem=keycloak-server/spi=vault/provider=files-plaintext/:add(enabled=true,properties={dir => "${jboss.home.dir}/standalone/configuration/vault"})
-/subsystem=keycloak-server/spi=vault:write-attribute(name=default-provider,value=files-plaintext)
-----
-
-=== Elytron credential store vault provider
-
-{project_name} also provides support for reading secrets stored in an Elytron credential store. The `elytron-cs-keystore` vault provider can retrieve secrets from the credential store's keystore-based implementation, which is also the default implementation Elytron provides.
-
-A keystore backs this credential store. `JCEKS` is the default format, but you can use other formats such as `PKCS12`. Users can create and manage the store contents using the `elytron` subsystem in WildFly/JBoss EAP, or the `elytron-tool.sh` script.
-
-To use this provider, you must declare the `elytron-cs-keystore` in the `keycloak-server` subsystem and set the location and master secret of the keystore created by Elytron. An example of the minimal configuration for the provider follows:
-
-[source, xml]
-----
-<spi name="vault">
-    <default-provider>elytron-cs-keystore</default-provider>
-    <provider name="elytron-cs-keystore" enabled="true">
-        <properties>
-            <property name="location" value="${jboss.home.dir}/standalone/configuration/vault/credential-store.p12"/>
-            <property name="secret" value="secretpw1!@#"/>
-        </properties>
-    </provider>
-</spi>
-----
-
-If the underlying keystore has a format different from `JCEKS`, you must specify this format by using the `keyStoreType` property:
-
-[source, xml]
-----
-<spi name="vault">
-    <default-provider>elytron-cs-keystore</default-provider>
-    <provider name="elytron-cs-keystore" enabled="true">
-        <properties>
-            <property name="location" value="${jboss.home.dir}/standalone/configuration/vault/credential-store.p12"/>
-            <property name="secret" value="secretpw1!@#"/>
-            <property name="keyStoreType" value="PKCS12"/>
-        </properties>
-    </provider>
-</spi>
-----
-
-For the secret, the `elytron-cs-keystore` provider supports clear-text values and values masked by using the `elytron-tool.sh` script:
-
-[source, xml]
-----
-<spi name="vault">
-    ...
-    <property name="secret" value="MASK-3BUbFEyWu0lRAu8.fCqyUk;12345678;123"/>
-    ...
-</spi>
-----
-
-For more information about creating and managing elytron credential stores and masking keystore secrets, see the Elytron documentation.
-
-[NOTE]
-====
-{project_name} implements the `elytron-cs-keystore` vault provider as a WildFly extension; it is available only if the {project_name} server runs on WildFly/JBoss EAP.
-====
-endif::[]
-
=== Key resolvers
All built-in providers support the configuration of key resolvers. A key resolver implements the algorithm or strategy for combining the realm name with the key, obtained from the `${vault.key}` expression, into the final entry name used to retrieve the secret from the vault. {project_name} uses the `keyResolvers` property to configure the resolvers that the provider uses. The value is a comma-separated list of resolver names. An example of the configuration for the `files-plaintext` provider follows:
-ifeval::["{kc_dist}" == "quarkus"]
[source,bash]
----
kc.[sh|bat] start --spi-vault-file-key-resolvers=REALM_UNDERSCORE_KEY,KEY_ONLY
----
-endif::[]
-ifeval::["{kc_dist}" == "wildfly"]
-[source, xml]
-----
-<spi name="vault">
-    <default-provider>files-plaintext</default-provider>
-    <provider name="files-plaintext" enabled="true">
-        <properties>
-            <property name="dir" value="${jboss.home.dir}/standalone/configuration/vault"/>
-            <property name="keyResolvers" value="REALM_UNDERSCORE_KEY, KEY_ONLY"/>
-        </properties>
-    </provider>
-</spi>
-----
-endif::[]
The resolvers run in the same order you declare them in the configuration. For each resolver, {project_name} uses the last entry name the resolver produces, which combines the realm with the vault key to search for the vault's secret. If {project_name} finds a secret, it returns the secret. If not, {project_name} uses the next resolver. This search continues until {project_name} finds a non-empty secret or runs out of resolvers. If {project_name} finds no secret, {project_name} returns an empty secret.
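
As an illustration, `REALM_UNDERSCORE_KEY` combines the realm and the key with an underscore, doubling any underscore that already occurs in either part (the same scheme described for the `files-plaintext` file names above). A minimal sketch of that transformation, using a hypothetical shell function:

[source,bash]
----
# Hypothetical illustration of REALM_UNDERSCORE_KEY: double the
# underscores inside each part, then join realm and key with "_".
resolve() {
  local realm="${1//_/__}" key="${2//_/__}"
  echo "${realm}_${key}"
}
resolve sso_realm secret-name   # prints: sso__realm_secret-name
----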
@@ -171,219 +58,4 @@ endif::[]
|===
-If you have not configured a resolver for the built-in providers, {project_name} selects the `REALM_UNDERSCORE_KEY`.
-
-ifeval::["{kc_dist}" == "wildfly"]
-ifeval::[{project_community}==true]
-The `FACTORY_PROVIDED` resolver provides a hook that you can use to implement a custom resolver by extending the provider factory of choice and overriding the `getFactoryResolver` method, so it returns the custom resolver. For example, if you want to use the `elytron-cs-keystore` provider but the built-in resolvers do not match the format used in your keystore, you can extend the `ElytronCSKeystoreProvider` and implement the `getFactoryResolver` method:
-
-[source,java]
-----
- public class CustomElytronProviderFactory extends ElytronCSKeyStoreProviderFactory {
- ...
- @Override
- protected VaultKeyResolver getFactoryResolver() {
- return (realm, key) -> realm + "###" + key;
- }
-
- @Override
- public String getId() {
- return "custom-elytron-cs-keystore;
- }
-
- ...
- }
-----
-
-The custom factory returns a key resolver that combines the realm and key with a triple # character. For example, an entry would be `master_realm###smtp_key`. Install this factory like any custom provider.
-
-[NOTE]
-====
-The custom factory must override both the `getFactoryResolver` and `getId` methods. The second method is necessary so that you can properly configure the custom factory in {project_name}.
-====
-
-To install and use the previous custom provider, the configuration would look similar to this:
-
-[source, xml]
-----
-<spi name="vault">
-    <default-provider>custom-elytron-cs-keystore</default-provider>
-    <provider name="custom-elytron-cs-keystore" enabled="true">
-        <properties>
-            <property name="location" value="${jboss.home.dir}/standalone/configuration/vault/credential-store.p12"/>
-            <property name="secret" value="MASK-3BUbFEyWu0lRAu8.fCqyUk;12345678;123"/>
-            <property name="keyStoreType" value="PKCS12"/>
-        </properties>
-    </provider>
-</spi>
-----
-
-This configuration makes {project_name} set up the custom Elytron provider and use the key resolver that the custom factory creates.
-endif::[]
-endif::[]
-
-ifeval::["{kc_dist}" == "wildfly"]
-=== Sample Configuration
-
-The following is an example of configuring a vault and credential store. The procedure involves two parts:
-
-* Creating the credential store and a vault, where the credential store and vault passwords are in plain text.
-* Updating the credential store and vault to have the password use a mask provided by `elytron-tool.sh`.
-
-In this example, the test target used is an LDAP instance with `BIND DN credential: secret12`. The target is mapped using user federation in the realm `ldaptest`.
-
-==== Configuring the credential store and vault without a mask
-
-You create the credential store and a vault where the credential store and vault passwords are in plain text.
-
-.Prerequisites
-
-* A running LDAP instance has `BIND DN credential: secret12`.
-
-* The alias uses the format `<realm-name>_<key-value>` when using the default key resolver. In this case, the instance is running in the realm `ldaptest` and `ldaptest_ldap_secret` is the alias that corresponds to the value `ldap_secret` in that realm.
-
-NOTE: The resolver replaces underscore characters with double underscore characters in the realm and key names. For example, for the key `ldaptest_ldap_secret`, the final key will be `ldaptest_ldap__secret`.
-
-.Procedure
-
-. Create the Elytron credential store.
-+
-[source,bash,subs=+attributes]
-----
-[standalone@localhost:9990 /] /subsystem=elytron/credential-store=test-store:add(create=true, location=/home/test/test-store.p12, credential-reference={clear-text=testpwd1!},implementation-properties={keyStoreType=PKCS12})
-----
-
-. Add an alias to the credential store.
-+
-[source,bash,subs=+attributes]
-----
-/subsystem=elytron/credential-store=test-store:add-alias(alias=ldaptest_ldap__secret,secret-value=secret12)
-----
-+
-Notice how the resolver causes the key `ldaptest_ldap__secret` to use double underscores.
-
-. List the aliases from the credential store to inspect the contents of the keystore that is produced by Elytron.
-+
-[source,bash,subs=+attributes]
-----
-keytool -list -keystore /home/test/test-store.p12 -storetype PKCS12 -storepass testpwd1!
-Keystore type: PKCS12
-Keystore provider: SUN
-
-Your keystore contains 1 entries
-
-ldaptest_ldap__secret/passwordcredential/clear/, Oct 12, 2020, SecretKeyEntry,
-----
-
-. Configure the vault SPI in {project_name}.
-+
-[source,bash,subs=+attributes]
-----
-/subsystem=keycloak-server/spi=vault:add(default-provider=elytron-cs-keystore)
-
-/subsystem=keycloak-server/spi=vault/provider=elytron-cs-keystore:add(enabled=true, properties={location=>/home/test/test-store.p12, secret=>testpwd1!, keyStoreType=>PKCS12})
-----
-+
-At this point, the vault and credentials store passwords are not masked.
-+
-[source,xml,subs=+attributes]
-----
-<spi name="vault">
-    <default-provider>elytron-cs-keystore</default-provider>
-    <provider name="elytron-cs-keystore" enabled="true">
-        <properties>
-            <property name="location" value="/home/test/test-store.p12"/>
-            <property name="secret" value="testpwd1!"/>
-            <property name="keyStoreType" value="PKCS12"/>
-        </properties>
-    </provider>
-</spi>
-
-...
-
-<credential-stores>
-    <credential-store name="test-store" location="/home/test/test-store.p12" create="true">
-        <implementation-properties>
-            <property name="keyStoreType" value="PKCS12"/>
-        </implementation-properties>
-        <credential-reference clear-text="testpwd1!"/>
-    </credential-store>
-</credential-stores>
-----
-
-. In the LDAP provider, replace `binDN credential` with `${vault.ldap_secret}`.
-
-. Test your LDAP connection.
-+
-.LDAP Vault
-image:images/ldap-vault.png[LDAP Vault]
-
-
-==== Masking the password in the credential store and vault
-
-You can now update the credential store and vault to have passwords that use a mask provided by `elytron-tool.sh`.
-
-. Create a masked password using values for the `salt` and the `iteration` parameters:
-+
-[source,bash,subs=+attributes]
-----
-$ EAP_HOME/bin/elytron-tool.sh mask --salt SALT --iteration ITERATION_COUNT --secret PASSWORD
-----
-+
-For example:
-+
-[source,bash,subs=+attributes]
-----
-elytron-tool.sh mask --salt 12345678 --iteration 123 --secret testpwd1!
-MASK-3BUbFEyWu0lRAu8.fCqyUk;12345678;123
-----
-
-. Update the Elytron credential store configuration to use the masked password.
-+
-[source,bash,subs=+attributes]
-----
-/subsystem=elytron/credential-store=test-store:write-attribute(name=credential-reference.clear-text,value="MASK-3BUbFEyWu0lRAu8.fCqyUk;12345678;123")
-----
-
-. Update the {project_name} vault configuration to use the masked password.
-+
-[source,bash,subs=+attributes]
-----
-/subsystem=keycloak-server/spi=vault/provider=elytron-cs-keystore:remove()
-/subsystem=keycloak-server/spi=vault/provider=elytron-cs-keystore:add(enabled=true, properties={location=>/home/test/test-store.p12, secret=>"MASK-3BUbFEyWu0lRAu8.fCqyUk;12345678;123", keyStoreType=>PKCS12})
-----
-+
-The vault and credential store are now masked:
-+
-[source,xml,subs=+attributes]
-----
-<spi name="vault">
-    <default-provider>elytron-cs-keystore</default-provider>
-    <provider name="elytron-cs-keystore" enabled="true">
-        <properties>
-            <property name="location" value="/home/test/test-store.p12"/>
-            <property name="secret" value="MASK-3BUbFEyWu0lRAu8.fCqyUk;12345678;123"/>
-            <property name="keyStoreType" value="PKCS12"/>
-        </properties>
-    </provider>
-</spi>
-
-...
-
-<credential-stores>
-    <credential-store name="test-store" location="/home/test/test-store.p12" create="true">
-        <implementation-properties>
-            <property name="keyStoreType" value="PKCS12"/>
-        </implementation-properties>
-        <credential-reference clear-text="MASK-3BUbFEyWu0lRAu8.fCqyUk;12345678;123"/>
-    </credential-store>
-</credential-stores>
-----
-
-. You can now test the connection to the LDAP using `${vault.ldap_secret}`.
-
-
-[role="_additional-resources"]
-.Additional resources
-
-For more information about the Elytron tool, see link:https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.3/html/how_to_configure_server_security/securely_storing_credentials#cred_store_elytron_client[Using Credential Stores with Elytron Client].
-endif::[]
+If you have not configured a resolver for the built-in providers, {project_name} selects the `REALM_UNDERSCORE_KEY`.
\ No newline at end of file
diff --git a/server_development/topics/auth-spi.adoc b/server_development/topics/auth-spi.adoc
index f2653f440f..aa76543733 100644
--- a/server_development/topics/auth-spi.adoc
+++ b/server_development/topics/auth-spi.adoc
@@ -849,13 +849,7 @@ org.keycloak.examples.authenticator.SecretQuestionRequiredActionFactory
This services/ file is used by {project_name} to scan the providers it has to load into the system.
-ifeval::["{kc_dist}" == "quarkus"]
To deploy this jar, copy it to the `providers/` directory, then run `bin/kc.[sh|bat] build`.
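
For example, a minimal deployment sequence (the jar name and the `KC_HOME` variable are illustrative):

[source,bash]
----
# Copy the provider jar into the server's providers/ directory,
# then rebuild so the new provider is picked up.
cp target/my-provider.jar $KC_HOME/providers/
$KC_HOME/bin/kc.sh build
----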
-endif::[]
-
-ifeval::["{kc_dist}" == "wildfly"]
-To deploy this jar, just copy it to the `standalone/deployments/` directory.
-endif::[]
==== Implement the RequiredActionProvider
@@ -1127,13 +1121,7 @@ org.keycloak.authentication.forms.RegistrationRecaptcha
This services/ file is used by {project_name} to scan the providers it has to load into the system.
-ifeval::["{kc_dist}" == "quarkus"]
To deploy this jar, copy it to the `providers/` directory, then run `bin/kc.[sh|bat] build`.
-endif::[]
-
-ifeval::["{kc_dist}" == "wildfly"]
-To deploy this jar, just copy it to the `standalone/deployments/` directory.
-endif::[]
==== Adding FormAction to the registration flow
diff --git a/server_development/topics/providers.adoc b/server_development/topics/providers.adoc
index bbd7a7b48b..340298851d 100644
--- a/server_development/topics/providers.adoc
+++ b/server_development/topics/providers.adoc
@@ -82,33 +82,14 @@ Example service configuration file (`META-INF/services/org.keycloak.theme.ThemeS
org.acme.provider.MyThemeSelectorProviderFactory
----
-ifeval::["{kc_dist}" == "quarkus"]
-You can configure your provider through server configuring.
+To configure your provider, see the https://www.keycloak.org/server/configuration-provider[Configuring Providers] guide.
-For example by adding starting the server with the following arguments:
+For example, to configure a provider you can set options as follows:
[source,bash]
----
bin/kc.[sh|bat] --spi-theme-selector-my-theme-selector-enabled=true --spi-theme-selector-my-theme-selector-theme=my-theme
----
-endif::[]
-
-ifeval::["{kc_dist}" == "wildfly"]
-You can configure your provider through `standalone.xml`, `standalone-ha.xml`, or `domain.xml`.
-
-For example by adding the following to `standalone.xml`:
-
-[source,xml]
-----
-<spi name="theme-selector">
-    <provider name="my-theme-selector" enabled="true">
-        <properties>
-            <property name="theme" value="my-theme"/>
-        </properties>
-    </provider>
-</spi>
-----
-endif::[]
Then you can retrieve the config in the `ProviderFactory` init method:
@@ -181,24 +162,11 @@ For example `HostnameProvider` specifies the hostname to be used by {project_nam
Hence there can be only a single implementation of this provider active for the {project_name} server. If there are multiple provider implementations available to the server runtime,
one of them needs to be specified as the default one.
-ifeval::["{kc_dist}" == "quarkus"]
For example:
[source,bash]
----
bin/kc.[sh|bat] build --spi-hostname-provider=default
----
-endif::[]
-
-ifeval::["{kc_dist}" == "wildfly"]
-For example:
-[source,xml]
-----
-<spi name="hostname">
-    <default-provider>default</default-provider>
-    ...
-</spi>
-----
-endif::[]
The value `default` used as the value of `default-provider` must match the ID returned by the `ProviderFactory.getId()` of the particular provider factory implementation.
In the code, you can obtain the provider such as `keycloakSession.getProvider(HostnameProvider.class)`
@@ -215,7 +183,6 @@ The provider ID must match the ID returned by the `ProviderFactory.getId()` of t
particular provider factory implementation. Some provider types can be retrieved with the usage of `ComponentModel` as the second argument and some (for example `Authenticator`) even
need to be retrieved with the usage of `KeycloakSessionFactory`. It is not recommended to implement your own providers this way as it may be deprecated in the future.
-ifeval::["{kc_dist}" == "quarkus"]
=== Registering provider implementations
Providers are registered with the server by simply copying them to the `providers` directory.
@@ -233,161 +200,6 @@ For example to disable the Infinispan user cache provider use:
----
bin/kc.[sh|bat] build --spi-user-cache-infinispan-enabled=false
----
-endif::[]
-
-ifeval::["{kc_dist}" == "wildfly"]
-=== Registering provider implementations
-
-There are two ways to register provider implementations. In most cases the simplest way is to use the {project_name} deployer
-approach as this handles a number of dependencies automatically for you. It also supports hot deployment as well as re-deployment.
-
-The alternative approach is to deploy as a module.
-
-If you are creating a custom SPI you will need to deploy it as a module; otherwise, we recommend using the {project_name} deployer approach.
-
-==== Using the {project_name} deployer
-
-If you copy your provider jar to the {project_name} `standalone/deployments/` directory, your provider will automatically be deployed.
-Hot deployment works too. Additionally, your provider jar works similarly to other components deployed in a {appserver_name}
-environment in that they can use facilities like the `jboss-deployment-structure.xml` file. This file allows you to
-set up dependencies on other components and load third-party jars and modules.
-
-Provider jars can also be contained within other deployable units like EARs and WARs. Deploying with an EAR makes
-it easy to use third-party jars because you can just put these libraries in the EAR's `lib/` directory.
-
-==== Register a provider using Modules
-
-.Procedure
-
-. Create a module using the jboss-cli script or manually create a folder.
-+
-.. For example, to add the event listener sysout example provider using the `jboss-cli` script, execute:
-+
-[source]
-----
-KEYCLOAK_HOME/bin/jboss-cli.sh --command="module add --name=org.acme.provider --resources=target/provider.jar --dependencies=org.keycloak.keycloak-core,org.keycloak.keycloak-server-spi"
-----
-
-.. Alternatively, you can manually create the module inside `KEYCLOAK_HOME/modules` and add your jar and a `module.xml`.
-+
-For example, create the folder `KEYCLOAK_HOME/modules/org/acme/provider/main`. Then copy `provider.jar` to this folder and create `module.xml` with the following content:
-+
-[source,xml]
-----
-<?xml version="1.0" encoding="UTF-8"?>
-<module xmlns="urn:jboss:module:1.3" name="org.acme.provider">
-    <resources>
-        <resource-root path="provider.jar"/>
-    </resources>
-    <dependencies>
-        <module name="org.keycloak.keycloak-core"/>
-        <module name="org.keycloak.keycloak-server-spi"/>
-    </dependencies>
-</module>
-----
-
-. Register this module with {project_name} by editing the keycloak-server subsystem section of
-`standalone.xml`, `standalone-ha.xml`, or `domain.xml`, and adding it to the providers:
-+
-[source,xml]
-----
-<subsystem xmlns="urn:jboss:domain:keycloak-server:1.1">
-    <web-context>auth</web-context>
-    <providers>
-        <provider>module:org.keycloak.examples.event-sysout</provider>
-    </providers>
-    ...
-----
-
-==== Disabling a provider
-
-You can disable a provider by setting the `enabled` attribute for the provider to `false`
-in `standalone.xml`, `standalone-ha.xml`, or `domain.xml`.
-For example, to disable the Infinispan user cache provider, add:
-
-[source,xml]
-----
-<spi name="userCache">
-    <provider name="infinispan" enabled="false"/>
-</spi>
-----
-
-=== Leveraging Jakarta EE
-
-The service providers can be packaged within any Jakarta EE component so long as you set up the `META-INF/services`
-file correctly to point to your providers. For example, if your provider needs to use third party libraries, you
-can package up your provider within an ear and store these third party libraries in the ear's `lib/` directory.
-Also note that provider jars can make use of the `jboss-deployment-structure.xml` file that EJBs, WARs, and EARs
-can use in a {appserver_name} environment. See the {appserver_name} documentation for more details on this file. It
-allows you to pull in external dependencies among other fine-grained actions.
-
-`ProviderFactory` implementations are required to be plain Java objects. However, we also currently support
-implementing provider classes as Stateful EJBs. This is how you would do it:
-
-[source,java]
-----
-@Stateful
-@Local(EjbExampleUserStorageProvider.class)
-public class EjbExampleUserStorageProvider implements UserStorageProvider,
- UserLookupProvider,
- UserRegistrationProvider,
- UserQueryProvider,
- CredentialInputUpdater,
- CredentialInputValidator,
- OnUserCache
-{
- @PersistenceContext
- protected EntityManager em;
-
- protected ComponentModel model;
- protected KeycloakSession session;
-
- public void setModel(ComponentModel model) {
- this.model = model;
- }
-
- public void setSession(KeycloakSession session) {
- this.session = session;
- }
-
-
- @Remove
- @Override
- public void close() {
- }
-...
-}
-----
-
-You define the `@Local` annotation and specify your provider class there. If you don't do this, EJB will
-not proxy the provider instance correctly and your provider won't work.
-
-You put the `@Remove` annotation on the `close()` method of your provider. If you don't, the stateful bean
-will never be cleaned up and you may eventually see error messages.
-
-Implementations of `ProviderFactory` are required to be plain Java objects. Your factory class would
-perform a JNDI lookup of the Stateful EJB in its `create()` method.
-
-[source,java]
-----
-public class EjbExampleUserStorageProviderFactory
- implements UserStorageProviderFactory<EjbExampleUserStorageProvider> {
-
- @Override
- public EjbExampleUserStorageProvider create(KeycloakSession session, ComponentModel model) {
- try {
- InitialContext ctx = new InitialContext();
- EjbExampleUserStorageProvider provider = (EjbExampleUserStorageProvider)ctx.lookup(
- "java:global/user-storage-jpa-example/" + EjbExampleUserStorageProvider.class.getSimpleName());
- provider.setModel(model);
- provider.setSession(session);
- return provider;
- } catch (Exception e) {
- throw new RuntimeException(e);
- }
- }
-----
-endif::[]
[[_script_providers]]
=== JavaScript providers
@@ -519,13 +331,7 @@ The name of the script file. This property is *mandatory* and should map to a fi
==== Deploy the script JAR
-ifeval::["{kc_dist}" == "quarkus"]
Once you have a JAR file with a descriptor and the scripts you want to deploy, you just need to copy the JAR to the {project_name} `providers/` directory, then run `bin/kc.[sh|bat] build`.
-endif::[]
-
-ifeval::["{kc_dist}" == "wildfly"]
-Once you have a JAR file with a descriptor and the scripts you want to deploy, you just need to copy the JAR to the {project_name} `standalone/deployments/` directory.
-endif::[]
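For illustration, the layout of such a JAR might look as follows, assuming the `META-INF/keycloak-scripts.json` descriptor described earlier and a hypothetical script file:

[source]
----
META-INF/keycloak-scripts.json
my-script-authenticator.js
----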
===== Deploy the script engine on Java 15 and later
@@ -536,7 +342,6 @@ providers - adding the script engine of your choice to your distribution.
You can use any script engine. However, we only test with the Nashorn JavaScript Engine. The following steps assume that this engine is used:
-ifeval::["{kc_dist}" == "quarkus"]
Install the script engine by copying the nashorn script engine JAR and its dependencies directly to the `KEYCLOAK_HOME/providers` directory. In the `pom.xml` file
of your script project, you can declare the dependency such as this in the `dependencies` section:
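For instance, a sketch of such a declaration (the `org.openjdk.nashorn:nashorn-core` coordinates and version 15.3 match the engine used in the examples in this chapter; adjust the version as needed):

[source,xml]
----
<dependency>
    <groupId>org.openjdk.nashorn</groupId>
    <artifactId>nashorn-core</artifactId>
    <version>15.3</version>
</dependency>
----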
@@ -574,30 +379,6 @@ Once the project is built, copy the script engine and its dependencies to the `K
cp target/keycloak-server-copy/providers/*.jar KEYCLOAK_HOME/providers/
```
After re-augmenting the distribution with `kc.sh build`, the script engine should be deployed and your script providers should work.
-endif::[]
-
-ifeval::["{kc_dist}" == "wildfly"]
-You can install the script engine by adding the new module nashorn-core to your {project_name}. After the server starts, you can run commands similar to these in the `KEYCLOAK_HOME/bin` directory:
-```bash
-export NASHORN_VERSION=15.3
-wget https://repo1.maven.org/maven2/org/openjdk/nashorn/nashorn-core/$NASHORN_VERSION/nashorn-core-$NASHORN_VERSION.jar
-./jboss-cli.sh -c --command="module add --module-root-dir=../modules/system/layers/keycloak/ --name=org.openjdk.nashorn.nashorn-core --resources=./nashorn-core-$NASHORN_VERSION.jar --dependencies=asm.asm,jdk.dynalink"
-rm nashorn-core-$NASHORN_VERSION.jar
-```
-
-If you want to install the script engine into a different module, you can use the configuration property `script-engine-module` of the default scripting provider.
-For example, you can use something such as this in your `KEYCLOAK_HOME/standalone/configuration/standalone-*.xml` file:
-```xml
-<spi name="scripting">
-    <provider name="default" enabled="true">
-        <properties>
-            <property name="script-engine-module" value="org.openjdk.nashorn.nashorn-core"/>
-        </properties>
-    </provider>
-</spi>
-```
-
-endif::[]
==== Using the {project_name} Admin Console to upload scripts
@@ -610,7 +391,7 @@ JAR file to prevent attacks when you run scripts at runtime.
The ability to upload scripts can be explicitly enabled. This should be used with great care, and plans should be made to
deploy all scripts directly to the server as soon as possible.
-For more details about how to enable the `upload_scripts` feature, see link:{installguide_profile_link}[{installguide_profile_name}].
+For more details about how to enable the `upload_scripts` feature, see the https://www.keycloak.org/server/features[Enabling and disabling features] guide.
=== Available SPIs
diff --git a/server_development/topics/themes.adoc b/server_development/topics/themes.adoc
index 5c8cebde5f..9c960cc6ff 100755
--- a/server_development/topics/themes.adoc
+++ b/server_development/topics/themes.adoc
@@ -32,7 +32,6 @@ NOTE: To set the theme for the `master` Admin Console you need to set the Admin
+
. To see the changes to the Admin Console refresh the page.
-ifeval::["{kc_dist}" == "quarkus"]
. Change the welcome theme by using the `spi-theme-welcome-theme` option.
. For example:
@@ -41,23 +40,6 @@ ifeval::["{kc_dist}" == "quarkus"]
----
bin/kc.[sh|bat] start --spi-theme-welcome-theme=custom-theme
----
-endif::[]
-
-ifeval::["{kc_dist}" == "wildfly"]
-. Change the welcome theme by editing `standalone.xml`, `standalone-ha.xml`, or `domain.xml`.
-
-. Add `welcomeTheme` to the theme element, for example:
-+
-[source,xml]
-----
-<theme>
-    ...
-    <welcomeTheme>custom-theme</welcomeTheme>
-    ...
-</theme>
-----
-. Restart the server for the changes to the welcome theme to take effect.
-endif::[]
=== Default themes
@@ -87,31 +69,12 @@ restarting {project_name}.
.Procedure
-ifeval::["{kc_dist}" == "quarkus"]
. Run Keycloak with the following options:
+
[source,bash]
----
bin/kc.[sh|bat] start --spi-theme-static-max-age=-1 --spi-theme-cache-themes=false --spi-theme-cache-templates=false
----
-endif::[]
-
-ifeval::["{kc_dist}" == "wildfly"]
-. Edit `standalone.xml`.
-
-. For `theme` set `staticMaxAge` to `-1` and both
-`cacheTemplates` and `cacheThemes` to `false`:
-+
-[source,xml]
-----
-<theme>
-    <staticMaxAge>-1</staticMaxAge>
-    <cacheThemes>false</cacheThemes>
-    <cacheTemplates>false</cacheTemplates>
-    ...
-</theme>
-----
-endif::[]
. Create a directory in the `themes` directory.
+
@@ -442,12 +405,5 @@ The contents of `META-INF/keycloak-themes.json` in this case would be:
+
A single archive can contain multiple themes and each theme can support one or more types.
-ifeval::["{kc_dist}" == "quarkus"]
To deploy the archive to {project_name}, add it to the `providers/` directory of
{project_name} and restart the server if it is already running.
-endif::[]
-
-ifeval::["{kc_dist}" == "wildfly"]
-To deploy the archive to {project_name}, add it to the `standalone/deployments/` directory of
-{project_name} and it will be automatically loaded.
-endif::[]
diff --git a/server_development/topics/user-storage/packaging.adoc b/server_development/topics/user-storage/packaging.adoc
index 309095584f..54e18b159e 100644
--- a/server_development/topics/user-storage/packaging.adoc
+++ b/server_development/topics/user-storage/packaging.adoc
@@ -8,10 +8,4 @@ org.keycloak.examples.federation.properties.ClasspathPropertiesStorageFactory
org.keycloak.examples.federation.properties.FilePropertiesStorageFactory
----
-ifeval::["{kc_dist}" == "quarkus"]
To deploy this jar, copy it to the `providers/` directory, then run `bin/kc.[sh|bat] build`.
-endif::[]
-
-ifeval::["{kc_dist}" == "wildfly"]
-To deploy this jar, just copy it to the `standalone/deployments/` directory.
-endif::[]
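For illustration, a provider jar built for the factories above might contain (the services file name matches the fully qualified `UserStorageProviderFactory` interface):

[source]
----
META-INF/services/org.keycloak.storage.UserStorageProviderFactory
org/keycloak/examples/federation/properties/ClasspathPropertiesStorageFactory.class
org/keycloak/examples/federation/properties/FilePropertiesStorageFactory.class
----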
diff --git a/server_development/topics/user-storage/simple-example.adoc b/server_development/topics/user-storage/simple-example.adoc
index 64e4048d4c..527f4e16c0 100644
--- a/server_development/topics/user-storage/simple-example.adoc
+++ b/server_development/topics/user-storage/simple-example.adoc
@@ -209,29 +209,12 @@ In our `init()` method implementation, we find the property file containing our
The `Config.Scope` parameter is the factory configuration that is configured through server configuration.
-ifeval::["{kc_dist}" == "quarkus"]
For example, by running the server with the following argument:
[source,bash]
----
kc.[sh|bat] start --spi-storage-readonly-property-file-path=/other-users.properties
----
-endif::[]
-
-ifeval::["{kc_dist}" == "wildfly"]
-For example, by adding the following to `standalone.xml`:
-
-[source,xml]
-----
-<spi name="storage">
-    <provider name="readonly-property-file" enabled="true">
-        <properties>
-            <property name="path" value="/other-users.properties"/>
-        </properties>
-    </provider>
-</spi>
-----
-endif::[]
We can specify the classpath of the user property file instead of hardcoding it. Then you can retrieve the configuration in the `PropertyFileUserStorageProviderFactory.init()`:
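A minimal sketch of such an `init()` implementation (the wrapper class and the classpath fallback value are hypothetical):

[source,java]
----
import java.io.InputStream;
import java.util.Properties;

import org.keycloak.Config;

public class PropertyFileConfigExample {

    private final Properties properties = new Properties();

    // "path" matches the property name used in the CLI option above;
    // the second argument is a default used when the option is not set.
    public void init(Config.Scope config) {
        String path = config.get("path", "/default-users.properties");
        try (InputStream is = getClass().getResourceAsStream(path)) {
            if (is != null) {
                properties.load(is);
            }
        } catch (Exception e) {
            throw new RuntimeException("Failed to load users from " + path, e);
        }
    }
}
----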
@@ -267,13 +250,7 @@ The class files for our provider implementation should be placed in a jar. You
org.keycloak.examples.federation.properties.FilePropertiesStorageFactory
----
-ifeval::["{kc_dist}" == "quarkus"]
To deploy this jar, copy it to the `providers/` directory, then run `bin/kc.[sh|bat] build`.
-endif::[]
-
-ifeval::["{kc_dist}" == "wildfly"]
-To deploy this jar, just copy it to the `standalone/deployments/` directory.
-endif::[]
==== Enabling the provider in the Admin Console
diff --git a/server_installation/.asciidoctorconfig b/server_installation/.asciidoctorconfig
deleted file mode 100644
index acb09a0956..0000000000
--- a/server_installation/.asciidoctorconfig
+++ /dev/null
@@ -1 +0,0 @@
-:imagesdir: {asciidoctorconfigdir}
diff --git a/server_installation/docinfo-footer.html b/server_installation/docinfo-footer.html
deleted file mode 120000
index a39d3bd0f6..0000000000
--- a/server_installation/docinfo-footer.html
+++ /dev/null
@@ -1 +0,0 @@
-../aggregation/navbar.html
\ No newline at end of file
diff --git a/server_installation/docinfo.html b/server_installation/docinfo.html
deleted file mode 120000
index 14514f94d2..0000000000
--- a/server_installation/docinfo.html
+++ /dev/null
@@ -1 +0,0 @@
-../aggregation/navbar-head.html
\ No newline at end of file
diff --git a/server_installation/images/add-provider-dialog.png b/server_installation/images/add-provider-dialog.png
deleted file mode 100755
index 976e5e148d..0000000000
Binary files a/server_installation/images/add-provider-dialog.png and /dev/null differ
diff --git a/server_installation/images/add-provider-select.png b/server_installation/images/add-provider-select.png
deleted file mode 100755
index 1c2e51af89..0000000000
Binary files a/server_installation/images/add-provider-select.png and /dev/null differ
diff --git a/server_installation/images/cli-gui.png b/server_installation/images/cli-gui.png
deleted file mode 100644
index 62511575c7..0000000000
Binary files a/server_installation/images/cli-gui.png and /dev/null differ
diff --git a/server_installation/images/clients.png b/server_installation/images/clients.png
deleted file mode 100644
index 29636e1fbc..0000000000
Binary files a/server_installation/images/clients.png and /dev/null differ
diff --git a/server_installation/images/cr-code.png b/server_installation/images/cr-code.png
deleted file mode 100644
index 37dc856396..0000000000
Binary files a/server_installation/images/cr-code.png and /dev/null differ
diff --git a/server_installation/images/domain-mode.png b/server_installation/images/domain-mode.png
deleted file mode 100755
index 9d96bd0624..0000000000
Binary files a/server_installation/images/domain-mode.png and /dev/null differ
diff --git a/server_installation/images/domain.png b/server_installation/images/domain.png
deleted file mode 100755
index 6a751437d0..0000000000
Binary files a/server_installation/images/domain.png and /dev/null differ
diff --git a/server_installation/images/email-simple-example.png b/server_installation/images/email-simple-example.png
deleted file mode 100755
index 9c341e2e2a..0000000000
Binary files a/server_installation/images/email-simple-example.png and /dev/null differ
diff --git a/server_installation/images/identity_broker_flow.png b/server_installation/images/identity_broker_flow.png
deleted file mode 100755
index 74739a2a3f..0000000000
Binary files a/server_installation/images/identity_broker_flow.png and /dev/null differ
diff --git a/server_installation/images/installed-namespace.png b/server_installation/images/installed-namespace.png
deleted file mode 100644
index 55642349c5..0000000000
Binary files a/server_installation/images/installed-namespace.png and /dev/null differ
diff --git a/server_installation/images/keycloak_installed_cr.png b/server_installation/images/keycloak_installed_cr.png
deleted file mode 100644
index f03e96cc40..0000000000
Binary files a/server_installation/images/keycloak_installed_cr.png and /dev/null differ
diff --git a/server_installation/images/keycloak_logo.png b/server_installation/images/keycloak_logo.png
deleted file mode 100755
index 4883f52302..0000000000
Binary files a/server_installation/images/keycloak_logo.png and /dev/null differ
diff --git a/server_installation/images/login-complete.png b/server_installation/images/login-complete.png
deleted file mode 100644
index 57263465b3..0000000000
Binary files a/server_installation/images/login-complete.png and /dev/null differ
diff --git a/server_installation/images/login-empty.png b/server_installation/images/login-empty.png
deleted file mode 100644
index f84b8cacbe..0000000000
Binary files a/server_installation/images/login-empty.png and /dev/null differ
diff --git a/server_installation/images/new_install_cr.png b/server_installation/images/new_install_cr.png
deleted file mode 100644
index 6d6ccd6e77..0000000000
Binary files a/server_installation/images/new_install_cr.png and /dev/null differ
diff --git a/server_installation/images/realm_user.png b/server_installation/images/realm_user.png
deleted file mode 100644
index d859471eb5..0000000000
Binary files a/server_installation/images/realm_user.png and /dev/null differ
diff --git a/server_installation/images/realms.png b/server_installation/images/realms.png
deleted file mode 100644
index 13b43bd76d..0000000000
Binary files a/server_installation/images/realms.png and /dev/null differ
diff --git a/server_installation/images/route-ocp.png b/server_installation/images/route-ocp.png
deleted file mode 100644
index 5e71204dc6..0000000000
Binary files a/server_installation/images/route-ocp.png and /dev/null differ
diff --git a/server_installation/images/secrets-ocp.png b/server_installation/images/secrets-ocp.png
deleted file mode 100644
index 5dc6d84b9a..0000000000
Binary files a/server_installation/images/secrets-ocp.png and /dev/null differ
diff --git a/server_installation/images/test-realm-cr.png b/server_installation/images/test-realm-cr.png
deleted file mode 100644
index 674d3b2cfc..0000000000
Binary files a/server_installation/images/test-realm-cr.png and /dev/null differ
diff --git a/server_installation/images/update-server-config-dialog.png b/server_installation/images/update-server-config-dialog.png
deleted file mode 100755
index 5b07227859..0000000000
Binary files a/server_installation/images/update-server-config-dialog.png and /dev/null differ
diff --git a/server_installation/images/update-server-config-select.png b/server_installation/images/update-server-config-select.png
deleted file mode 100755
index b9156e6f57..0000000000
Binary files a/server_installation/images/update-server-config-select.png and /dev/null differ
diff --git a/server_installation/images/user_list.png b/server_installation/images/user_list.png
deleted file mode 100644
index 4e2cb58c15..0000000000
Binary files a/server_installation/images/user_list.png and /dev/null differ
diff --git a/server_installation/index.adoc b/server_installation/index.adoc
deleted file mode 100644
index 37d60aeaac..0000000000
--- a/server_installation/index.adoc
+++ /dev/null
@@ -1,26 +0,0 @@
-:toc: left
-:toclevels: 3
-:sectanchors:
-:linkattrs:
-
-include::topics/templates/document-attributes-community.adoc[]
-
-:server_installation_and_configuration_guide:
-:context: server_installation_and_configuration_guide
-
-= {installguide_name}
-
-:release_header_guide: {installguide_name_short}
-:release_header_latest_link: {installguide_link_latest}
-include::topics/templates/release-header.adoc[]
-
-[WARNING]
-====
-This guide is for the legacy WildFly distribution of Keycloak.
-
-For the new Quarkus powered distribution check out the new https://www.keycloak.org/guides#server[Server Guides].
-====
-
-include::topics.adoc[]
-
-:context:
diff --git a/server_installation/keycloak-images/admin-console.png b/server_installation/keycloak-images/admin-console.png
deleted file mode 100755
index f845508fca..0000000000
Binary files a/server_installation/keycloak-images/admin-console.png and /dev/null differ
diff --git a/server_installation/keycloak-images/cross-dc-architecture.png b/server_installation/keycloak-images/cross-dc-architecture.png
deleted file mode 100644
index 75bf0a4101..0000000000
Binary files a/server_installation/keycloak-images/cross-dc-architecture.png and /dev/null differ
diff --git a/server_installation/keycloak-images/cross-dc-architecture.svg b/server_installation/keycloak-images/cross-dc-architecture.svg
deleted file mode 100644
index 4f3c851ed6..0000000000
--- a/server_installation/keycloak-images/cross-dc-architecture.svg
+++ /dev/null
@@ -1,523 +0,0 @@
-<!-- 523 lines of SVG markup omitted -->
diff --git a/server_installation/keycloak-images/db-module.png b/server_installation/keycloak-images/db-module.png
deleted file mode 100755
index 8f2e2b8b34..0000000000
Binary files a/server_installation/keycloak-images/db-module.png and /dev/null differ
diff --git a/server_installation/keycloak-images/domain-boot-files.png b/server_installation/keycloak-images/domain-boot-files.png
deleted file mode 100755
index 23c9144c3d..0000000000
Binary files a/server_installation/keycloak-images/domain-boot-files.png and /dev/null differ
diff --git a/server_installation/keycloak-images/domain-file.png b/server_installation/keycloak-images/domain-file.png
deleted file mode 100755
index 0c9ffbcc93..0000000000
Binary files a/server_installation/keycloak-images/domain-file.png and /dev/null differ
diff --git a/server_installation/keycloak-images/domain-json-config.png b/server_installation/keycloak-images/domain-json-config.png
deleted file mode 100755
index b14297b889..0000000000
Binary files a/server_installation/keycloak-images/domain-json-config.png and /dev/null differ
diff --git a/server_installation/keycloak-images/domain-server-dir.png b/server_installation/keycloak-images/domain-server-dir.png
deleted file mode 100755
index 48ec897af3..0000000000
Binary files a/server_installation/keycloak-images/domain-server-dir.png and /dev/null differ
diff --git a/server_installation/keycloak-images/host-files.png b/server_installation/keycloak-images/host-files.png
deleted file mode 100755
index 3b8b860b4a..0000000000
Binary files a/server_installation/keycloak-images/host-files.png and /dev/null differ
diff --git a/server_installation/keycloak-images/load_balancer.png b/server_installation/keycloak-images/load_balancer.png
deleted file mode 100644
index de190bff09..0000000000
Binary files a/server_installation/keycloak-images/load_balancer.png and /dev/null differ
diff --git a/server_installation/keycloak-images/load_balancer.svg b/server_installation/keycloak-images/load_balancer.svg
deleted file mode 100644
index 30717b328c..0000000000
--- a/server_installation/keycloak-images/load_balancer.svg
+++ /dev/null
@@ -1,612 +0,0 @@
-<!-- 612 lines of SVG markup omitted -->
diff --git a/server_installation/keycloak-images/operator-application-monitoring-routes.png b/server_installation/keycloak-images/operator-application-monitoring-routes.png
deleted file mode 100644
index 9f687ecf01..0000000000
Binary files a/server_installation/keycloak-images/operator-application-monitoring-routes.png and /dev/null differ
diff --git a/server_installation/keycloak-images/operator-components-keycloak.svg b/server_installation/keycloak-images/operator-components-keycloak.svg
deleted file mode 100644
index 106d14784b..0000000000
--- a/server_installation/keycloak-images/operator-components-keycloak.svg
+++ /dev/null
@@ -1,200 +0,0 @@
-<!-- 200 lines of SVG markup omitted -->
diff --git a/server_installation/keycloak-images/operator-components.png b/server_installation/keycloak-images/operator-components.png
deleted file mode 100644
index a5558edc1d..0000000000
Binary files a/server_installation/keycloak-images/operator-components.png and /dev/null differ
diff --git a/server_installation/keycloak-images/operator-graphana-dashboard.png b/server_installation/keycloak-images/operator-graphana-dashboard.png
deleted file mode 100644
index 678853fea7..0000000000
Binary files a/server_installation/keycloak-images/operator-graphana-dashboard.png and /dev/null differ
diff --git a/server_installation/keycloak-images/operator-olm-installation.png b/server_installation/keycloak-images/operator-olm-installation.png
deleted file mode 100644
index 3108aaf9e9..0000000000
Binary files a/server_installation/keycloak-images/operator-olm-installation.png and /dev/null differ
diff --git a/server_installation/keycloak-images/operator-openshift-operatorhub.png b/server_installation/keycloak-images/operator-openshift-operatorhub.png
deleted file mode 100644
index 26b4482f90..0000000000
Binary files a/server_installation/keycloak-images/operator-openshift-operatorhub.png and /dev/null differ
diff --git a/server_installation/keycloak-images/operator-operatorhub-install.png b/server_installation/keycloak-images/operator-operatorhub-install.png
deleted file mode 100644
index 821fbe22ca..0000000000
Binary files a/server_installation/keycloak-images/operator-operatorhub-install.png and /dev/null differ
diff --git a/server_installation/keycloak-images/operator-prometheus-alerts.png b/server_installation/keycloak-images/operator-prometheus-alerts.png
deleted file mode 100644
index 0db98eb609..0000000000
Binary files a/server_installation/keycloak-images/operator-prometheus-alerts.png and /dev/null differ
diff --git a/server_installation/keycloak-images/standalone-boot-files.png b/server_installation/keycloak-images/standalone-boot-files.png
deleted file mode 100755
index 9775ba77bf..0000000000
Binary files a/server_installation/keycloak-images/standalone-boot-files.png and /dev/null differ
diff --git a/server_installation/keycloak-images/standalone-config-file.png b/server_installation/keycloak-images/standalone-config-file.png
deleted file mode 100755
index d4ddd121a1..0000000000
Binary files a/server_installation/keycloak-images/standalone-config-file.png and /dev/null differ
diff --git a/server_installation/keycloak-images/standalone-ha-config-file.png b/server_installation/keycloak-images/standalone-ha-config-file.png
deleted file mode 100755
index 0a27cec4ab..0000000000
Binary files a/server_installation/keycloak-images/standalone-ha-config-file.png and /dev/null differ
diff --git a/server_installation/keycloak-images/standalone-json-config-file.png b/server_installation/keycloak-images/standalone-json-config-file.png
deleted file mode 100755
index 39ea3bfdf2..0000000000
Binary files a/server_installation/keycloak-images/standalone-json-config-file.png and /dev/null differ
diff --git a/server_installation/keycloak-images/welcome.png b/server_installation/keycloak-images/welcome.png
deleted file mode 100755
index b7040cf2c6..0000000000
Binary files a/server_installation/keycloak-images/welcome.png and /dev/null differ
diff --git a/server_installation/master-docinfo.xml b/server_installation/master-docinfo.xml
deleted file mode 100644
index b0cc742088..0000000000
--- a/server_installation/master-docinfo.xml
+++ /dev/null
@@ -1,25 +0,0 @@
-<productname>{project_name_full}</productname>
-<productnumber>{project_versionDoc}</productnumber>
-<subtitle>For Use with {project_name_full} {project_versionDoc}</subtitle>
-<title>{installguide_name}</title>
-<edition>{project_versionDoc}</edition>
-<abstract>
-    <para>This guide consists of information to install and configure {project_name_full} {project_versionDoc}</para>
-</abstract>
-<authorgroup>
-    <orgname>Red Hat Customer Content Services</orgname>
-</authorgroup>
-<legalnotice>
-    <para>Copyright {cdate} Red Hat, Inc.</para>
-    <para>Licensed under the Apache License, Version 2.0 (the "License");
-    you may not use this file except in compliance with the License.
-    You may obtain a copy of the License at</para>
-    <para>
-    http://www.apache.org/licenses/LICENSE-2.0
-    </para>
-    <para>Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.</para>
-</legalnotice>
diff --git a/server_installation/master.adoc b/server_installation/master.adoc
deleted file mode 100644
index 77445d557b..0000000000
--- a/server_installation/master.adoc
+++ /dev/null
@@ -1,17 +0,0 @@
-:toc:
-:toclevels: 3
-:numbered:
-:linkattrs:
-
-include::topics/templates/document-attributes-product.adoc[]
-
-:server_installation_and_configuration_guide:
-:context: server_installation_and_configuration_guide
-
-= {installguide_name}
-
-include::topics/templates/making-open-source-more-inclusive.adoc[]
-include::topics.adoc[]
-
-:context:
-
diff --git a/server_installation/pom.xml b/server_installation/pom.xml
deleted file mode 100644
index 2fd6eacd50..0000000000
--- a/server_installation/pom.xml
+++ /dev/null
@@ -1,46 +0,0 @@
-<?xml version="1.0"?>
-<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-    <modelVersion>4.0.0</modelVersion>
-
-    <parent>
-        <groupId>org.keycloak.documentation</groupId>
-        <artifactId>documentation-parent</artifactId>
-        <version>999-SNAPSHOT</version>
-        <relativePath>../</relativePath>
-    </parent>
-
-    <name>Server Installation and Configuration</name>
-    <artifactId>server-installation</artifactId>
-    <packaging>pom</packaging>
-
-    <build>
-        <plugins>
-            <plugin>
-                <groupId>org.keycloak.documentation</groupId>
-                <artifactId>header-maven-plugin</artifactId>
-                <executions>
-                    <execution>
-                        <id>add-file-headers</id>
-                    </execution>
-                </executions>
-            </plugin>
-            <plugin>
-                <groupId>org.asciidoctor</groupId>
-                <artifactId>asciidoctor-maven-plugin</artifactId>
-                <executions>
-                    <execution>
-                        <id>asciidoc-to-html</id>
-                    </execution>
-                </executions>
-            </plugin>
-            <plugin>
-                <artifactId>maven-antrun-plugin</artifactId>
-                <executions>
-                    <execution>
-                        <id>echo-output</id>
-                    </execution>
-                </executions>
-            </plugin>
-        </plugins>
-    </build>
-</project>
diff --git a/server_installation/rhsso-images/admin-console.png b/server_installation/rhsso-images/admin-console.png
deleted file mode 100644
index f845508fca..0000000000
Binary files a/server_installation/rhsso-images/admin-console.png and /dev/null differ
diff --git a/server_installation/rhsso-images/cross-dc-architecture.png b/server_installation/rhsso-images/cross-dc-architecture.png
deleted file mode 100644
index 81b94d1250..0000000000
Binary files a/server_installation/rhsso-images/cross-dc-architecture.png and /dev/null differ
diff --git a/server_installation/rhsso-images/cross-dc-architecture.svg b/server_installation/rhsso-images/cross-dc-architecture.svg
deleted file mode 100644
index fd71484274..0000000000
--- a/server_installation/rhsso-images/cross-dc-architecture.svg
+++ /dev/null
@@ -1,529 +0,0 @@
-<!-- 529 lines of SVG markup omitted -->
diff --git a/server_installation/rhsso-images/db-module.png b/server_installation/rhsso-images/db-module.png
deleted file mode 100755
index b2207d27c6..0000000000
Binary files a/server_installation/rhsso-images/db-module.png and /dev/null differ
diff --git a/server_installation/rhsso-images/domain-boot-files.png b/server_installation/rhsso-images/domain-boot-files.png
deleted file mode 100755
index d1a3ea34d6..0000000000
Binary files a/server_installation/rhsso-images/domain-boot-files.png and /dev/null differ
diff --git a/server_installation/rhsso-images/domain-file.png b/server_installation/rhsso-images/domain-file.png
deleted file mode 100755
index aa7e087e0c..0000000000
Binary files a/server_installation/rhsso-images/domain-file.png and /dev/null differ
diff --git a/server_installation/rhsso-images/domain-json-config.png b/server_installation/rhsso-images/domain-json-config.png
deleted file mode 100755
index f174acd935..0000000000
Binary files a/server_installation/rhsso-images/domain-json-config.png and /dev/null differ
diff --git a/server_installation/rhsso-images/domain-server-dir.png b/server_installation/rhsso-images/domain-server-dir.png
deleted file mode 100755
index 7ce78c8e77..0000000000
Binary files a/server_installation/rhsso-images/domain-server-dir.png and /dev/null differ
diff --git a/server_installation/rhsso-images/files.png b/server_installation/rhsso-images/files.png
deleted file mode 100755
index f4f5fb4756..0000000000
Binary files a/server_installation/rhsso-images/files.png and /dev/null differ
diff --git a/server_installation/rhsso-images/host-files.png b/server_installation/rhsso-images/host-files.png
deleted file mode 100755
index cf6bd7970e..0000000000
Binary files a/server_installation/rhsso-images/host-files.png and /dev/null differ
diff --git a/server_installation/rhsso-images/load_balancer.png b/server_installation/rhsso-images/load_balancer.png
deleted file mode 100644
index ee093e716e..0000000000
Binary files a/server_installation/rhsso-images/load_balancer.png and /dev/null differ
diff --git a/server_installation/rhsso-images/load_balancer.svg b/server_installation/rhsso-images/load_balancer.svg
deleted file mode 100644
index 8092029ba1..0000000000
--- a/server_installation/rhsso-images/load_balancer.svg
+++ /dev/null
@@ -1,199 +0,0 @@
-<!-- 199 lines of SVG markup omitted -->
diff --git a/server_installation/rhsso-images/login-page.png b/server_installation/rhsso-images/login-page.png
deleted file mode 100644
index c037cecfc7..0000000000
Binary files a/server_installation/rhsso-images/login-page.png and /dev/null differ
diff --git a/server_installation/rhsso-images/operator-components-sso.svg b/server_installation/rhsso-images/operator-components-sso.svg
deleted file mode 100644
index 28b0ab5b10..0000000000
--- a/server_installation/rhsso-images/operator-components-sso.svg
+++ /dev/null
@@ -1,158 +0,0 @@
-<!-- 158 lines of SVG markup omitted -->
diff --git a/server_installation/rhsso-images/operator-components.png b/server_installation/rhsso-images/operator-components.png
deleted file mode 100644
index 4ae9e88cfd..0000000000
Binary files a/server_installation/rhsso-images/operator-components.png and /dev/null differ
diff --git a/server_installation/rhsso-images/operator-olm-installation.png b/server_installation/rhsso-images/operator-olm-installation.png
deleted file mode 100644
index 09c1cf1510..0000000000
Binary files a/server_installation/rhsso-images/operator-olm-installation.png and /dev/null differ
diff --git a/server_installation/rhsso-images/operator-openshift-operatorhub.png b/server_installation/rhsso-images/operator-openshift-operatorhub.png
deleted file mode 100644
index 3283d265e0..0000000000
Binary files a/server_installation/rhsso-images/operator-openshift-operatorhub.png and /dev/null differ
diff --git a/server_installation/rhsso-images/operator-operatorhub-install.png b/server_installation/rhsso-images/operator-operatorhub-install.png
deleted file mode 100644
index 9f309b0365..0000000000
Binary files a/server_installation/rhsso-images/operator-operatorhub-install.png and /dev/null differ
diff --git a/server_installation/rhsso-images/standalone-boot-files.png b/server_installation/rhsso-images/standalone-boot-files.png
deleted file mode 100755
index 0dca7df1dd..0000000000
Binary files a/server_installation/rhsso-images/standalone-boot-files.png and /dev/null differ
diff --git a/server_installation/rhsso-images/standalone-config-file.png b/server_installation/rhsso-images/standalone-config-file.png
deleted file mode 100755
index f53837e83a..0000000000
Binary files a/server_installation/rhsso-images/standalone-config-file.png and /dev/null differ
diff --git a/server_installation/rhsso-images/standalone-ha-config-file.png b/server_installation/rhsso-images/standalone-ha-config-file.png
deleted file mode 100755
index 42f805e149..0000000000
Binary files a/server_installation/rhsso-images/standalone-ha-config-file.png and /dev/null differ
diff --git a/server_installation/rhsso-images/standalone-json-config-file.png b/server_installation/rhsso-images/standalone-json-config-file.png
deleted file mode 100755
index 2f25f3c1ba..0000000000
Binary files a/server_installation/rhsso-images/standalone-json-config-file.png and /dev/null differ
diff --git a/server_installation/topics.adoc b/server_installation/topics.adoc
deleted file mode 100644
index 42f04e1a69..0000000000
--- a/server_installation/topics.adoc
+++ /dev/null
@@ -1,64 +0,0 @@
-include::topics/overview.adoc[]
-include::topics/overview/recommended-reading.adoc[]
-include::topics/installation.adoc[]
-include::topics/installation/system-requirements.adoc[]
-ifeval::[{project_community}==true]
-include::topics/installation/distribution-files-community.adoc[]
-endif::[]
-ifeval::[{project_product}==true]
-include::topics/installation/distribution-files-product.adoc[]
-include::topics/installation/installing-rpm.adoc[]
-endif::[]
-include::topics/installation/directory-structure.adoc[]
-include::topics/operating-mode.adoc[]
-include::topics/operating-mode/standalone.adoc[]
-include::topics/operating-mode/standalone-ha.adoc[]
-include::topics/operating-mode/domain.adoc[]
-include::topics/operating-mode/crossdc.adoc[]
-include::topics/config-subsystem.adoc[]
-include::topics/config-subsystem/configure-spi-providers.adoc[]
-include::topics/config-subsystem/start-cli.adoc[]
-include::topics/config-subsystem/cli-recipes.adoc[]
-include::topics/profiles.adoc[]
-include::topics/database.adoc[]
-include::topics/database/checklist.adoc[]
-include::topics/database/jdbc.adoc[]
-include::topics/database/datasource.adoc[]
-include::topics/database/hibernate.adoc[]
-include::topics/database/unicode-considerations.adoc[]
-include::topics/host.adoc[]
-include::topics/network.adoc[]
-include::topics/network/bind-address.adoc[]
-include::topics/network/ports.adoc[]
-include::topics/network/https.adoc[]
-include::topics/network/outgoing.adoc[]
-include::topics/clustering.adoc[]
-include::topics/clustering/recommended.adoc[]
-include::topics/clustering/example.adoc[]
-include::topics/clustering/load-balancer.adoc[]
-include::topics/clustering/sticky-sessions.adoc[]
-include::topics/clustering/multicast.adoc[]
-include::topics/clustering/securing-cluster-comm.adoc[]
-include::topics/clustering/serialized.adoc[]
-include::topics/clustering/booting.adoc[]
-include::topics/clustering/troubleshooting.adoc[]
-include::topics/cache.adoc[]
-include::topics/cache/eviction.adoc[]
-include::topics/cache/replication.adoc[]
-include::topics/cache/disable.adoc[]
-include::topics/cache/clear.adoc[]
-include::topics/operator.adoc[]
-include::topics/operator/installation.adoc[]
-ifeval::[{project_community}==true]
-include::topics/operator/monitoring-stack.adoc[]
-endif::[]
-include::topics/operator/keycloak-cr.adoc[]
-include::topics/operator/keycloak-realm-cr.adoc[]
-include::topics/operator/keycloak-client-cr.adoc[]
-include::topics/operator/keycloak-user-cr.adoc[]
-include::topics/operator/external-database.adoc[]
-include::topics/operator/external-keycloak.adoc[]
-include::topics/operator/keycloak-backup-cr.adoc[]
-include::topics/operator/extensions.adoc[]
-include::topics/operator/command-options.adoc[]
-include::topics/operator/upgrading.adoc[]
diff --git a/server_installation/topics/cache.adoc b/server_installation/topics/cache.adoc
deleted file mode 100644
index cdbd32a5ad..0000000000
--- a/server_installation/topics/cache.adoc
+++ /dev/null
@@ -1,19 +0,0 @@
-
-[[cache-configuration]]
-== Server cache configuration
-
-{project_name} has two types of caches. One type of cache sits in front of the database to decrease load on the DB
-and to decrease overall response times by keeping data in memory. Realm, client, role, and user metadata is kept in this type of cache.
-This cache is a local cache. Local caches do not use replication even if you are in the cluster with more {project_name} servers.
-Instead, they only keep copies locally and if the entry is updated an invalidation message is sent to the rest of the cluster
-and the entry is evicted. There is a separate replicated cache, `work`, whose task is to send invalidation messages to the whole cluster about which entries
-should be evicted from local caches. This greatly reduces network traffic, makes things efficient, and avoids transmitting sensitive
-metadata over the wire.
-
-The second type of cache handles managing user sessions, offline tokens, and keeping track of login failures so that the
-server can detect password phishing and other attacks. The data held in these caches is temporary, in memory only,
-but is possibly replicated across the cluster.
-
-This chapter discusses some configuration options for these caches for both clustered and non-clustered deployments.
-
-NOTE: More advanced configuration of these caches can be found in the link:{appserver_caching_link}[Infinispan] section of the _{appserver_caching_name}_.
diff --git a/server_installation/topics/cache/clear.adoc b/server_installation/topics/cache/clear.adoc
deleted file mode 100644
index 94d32b0b0a..0000000000
--- a/server_installation/topics/cache/clear.adoc
+++ /dev/null
@@ -1,16 +0,0 @@
-
-=== Clearing cache at runtime
-
-You can clear the realm cache, user cache, or the external public keys.
-
-.Procedure
-
-. Log into the Admin Console.
-
-. Click *Realm Settings*.
-
-. Click the *Cache* tab.
-
-. Clear the realm cache, the user cache, or the cache of external public keys.
-
-NOTE: The cache will be cleared for all realms!
diff --git a/server_installation/topics/cache/disable.adoc b/server_installation/topics/cache/disable.adoc
deleted file mode 100644
index 9c452a9d58..0000000000
--- a/server_installation/topics/cache/disable.adoc
+++ /dev/null
@@ -1,30 +0,0 @@
-
-=== Disabling caching
-
-You can disable the realm or user cache.
-
-.Procedure
-
-. Edit the `standalone.xml`, `standalone-ha.xml`, or `domain.xml` file in your distribution.
-+
-The location of this file depends on your <<_operating-mode, operating mode>>.
-Here is a sample config file.
-+
-[source,xml]
-----
-<spi name="userCache">
-    <provider name="infinispan" enabled="true"/>
-</spi>
-
-<spi name="realmCache">
-    <provider name="infinispan" enabled="true"/>
-</spi>
-----
-
-. Set the `enabled` attribute to false for the cache you want to disable.
-
-. Reboot your server for this change to take effect.
-
-
diff --git a/server_installation/topics/cache/eviction.adoc b/server_installation/topics/cache/eviction.adoc
deleted file mode 100644
index 66f18b0cb3..0000000000
--- a/server_installation/topics/cache/eviction.adoc
+++ /dev/null
@@ -1,58 +0,0 @@
-
-[[_eviction]]
-=== Eviction and expiration
-
-There are multiple different caches configured for {project_name}.
-There is a realm cache that holds information about secured applications, general security data, and configuration options.
-There is also a user cache that contains user metadata. Both caches default to a maximum of 10000 entries and use a least recently used eviction strategy.
-Each of them is also tied to an object revisions cache that controls eviction in a clustered setup.
-This cache is created implicitly and has twice the configured size. The same applies for the `authorization` cache, which holds
-the authorization data. The `keys` cache holds data about external keys and does not need a dedicated revisions cache. Rather,
-it has `expiration` explicitly declared on it, so the keys periodically expire and are forced to be periodically re-downloaded from external clients or
-identity providers.
-
-The eviction policy and max entries for these caches can be configured in the _standalone.xml_, _standalone-ha.xml_, or
-_domain.xml_ depending on your <<_operating-mode, operating mode>>. In the configuration file, there is a part with the infinispan
-subsystem, which looks similar to this:
-
-[source,xml,subs="attributes+"]
-----
-<subsystem xmlns="{subsystem_infinispan_xml_urn}">
-    <cache-container name="keycloak">
-        <local-cache name="realms">
-            <object-memory size="10000"/>
-        </local-cache>
-        <local-cache name="users">
-            <object-memory size="10000"/>
-        </local-cache>
-        ...
-        <local-cache name="keys">
-            <object-memory size="1000"/>
-            <expiration max-idle="3600000"/>
-        </local-cache>
-        ...
-    </cache-container>
-</subsystem>
-----
-
-To limit or expand the number of allowed entries, simply add or edit the `object` element or the `expiration` element of the particular cache
-configuration.
-
-In addition, there are also separate caches `sessions`, `clientSessions`, `offlineSessions`, `offlineClientSessions`,
-`loginFailures` and `actionTokens`. These caches are distributed in a cluster environment and they are unbounded in size by default.
-If they were bounded, it would then be possible for some sessions to be lost. Expired sessions are cleared internally
-by {project_name} itself to avoid growing the size of these caches
-without limit. If you see memory issues due to a large number of sessions, you can try to:
-
-* Increase the size of the cluster (more nodes in the cluster means that sessions are spread more equally among nodes)
-
-* Increase the memory for {project_name} server process
-
-* Decrease the number of owners to ensure that caches are saved in one single place. See <<_replication>> for more details
-
-* Disable l1-lifespan for distributed caches. See Infinispan documentation for more details
-
-* Decrease session timeouts, which could be done individually for each realm in {project_name} admin console. But this could affect
-usability for end users. See link:{adminguide_timeouts_link}[{adminguide_timeouts_name}] for more details.
-
-There is an additional replicated cache, `work`, which is mostly used to send messages among cluster nodes; it is also unbounded
-by default. However, this cache should not cause any memory issues as entries in this cache are very short-lived.
diff --git a/server_installation/topics/cache/replication.adoc b/server_installation/topics/cache/replication.adoc
deleted file mode 100644
index 92d22d0a90..0000000000
--- a/server_installation/topics/cache/replication.adoc
+++ /dev/null
@@ -1,31 +0,0 @@
-
-[[_replication]]
-=== Replication and failover
-
-There are caches like `sessions`, `authenticationSessions`, `offlineSessions`, `loginFailures` and a few others (See <<_eviction>> for more details),
-which are configured as distributed caches when using a clustered setup. Entries are
-not replicated to every single node, but instead one or more nodes is chosen as an owner of that data. If a node is not the owner of a specific cache entry it queries
-the cluster to obtain it. What this means for failover is that if all the nodes that own a piece of data go down, that data
-is lost forever. By default, {project_name} only specifies one owner for data. So if that one node goes down
-that data is lost. This usually means that users will be logged out and will have to login again.
-
-You can change the number of nodes that replicate a piece of data by changing the `owners` attribute in the `distributed-cache` declaration.
-
-.owners
-[source,xml,subs="attributes+"]
-----
-<subsystem xmlns="{subsystem_infinispan_xml_urn}">
-    <cache-container name="keycloak">
-        <distributed-cache name="sessions" owners="2"/>
-...
-----
-
-Here we've changed it so at least two nodes will replicate one specific user login session.
-
-TIP: The number of owners recommended is really dependent on your deployment. If you do not care if users are logged
- out when a node goes down, then one owner is good enough and you will avoid replication.
-
-TIP: It is generally wise to configure your environment to use loadbalancer with sticky sessions. It is beneficial for performance
- as {project_name} server, where the particular request is served, will be usually the owner of the data from the distributed cache
- and will therefore be able to look up the data locally. See <<sticky-sessions,Sticky sessions>> for more details.
-
diff --git a/server_installation/topics/clustering.adoc b/server_installation/topics/clustering.adoc
deleted file mode 100644
index b6062728ee..0000000000
--- a/server_installation/topics/clustering.adoc
+++ /dev/null
@@ -1,14 +0,0 @@
-
-[[_clustering]]
-== Configuring {project_name} to run in a cluster
-
-To configure {project_name} to run in a cluster, you perform these actions:
-
-* <<_operating-mode,Pick an operation mode>>
-* <<_database,Configure a shared external database>>
-* Set up a load balancer
-* Supply a private network that supports IP multicast
-
-Picking an operation mode and configuring a shared database have been discussed earlier in this guide. This chapter describes setting up a load balancer and supplying a private network as well as booting up a host in the cluster.
-
-NOTE: It is possible to cluster {project_name} without IP Multicast, but this topic is beyond the scope of this guide. For more information, see link:{appserver_jgroups_link}[JGroups] chapter of the _{appserver_jgroups_name}_.
diff --git a/server_installation/topics/clustering/booting.adoc b/server_installation/topics/clustering/booting.adoc
deleted file mode 100644
index 98cce8ed0f..0000000000
--- a/server_installation/topics/clustering/booting.adoc
+++ /dev/null
@@ -1,21 +0,0 @@
-
-=== Booting the cluster
-
-Booting {project_name} in a cluster depends on your <<_operating-mode, operating mode>>.
-
-.Standalone Mode
-[source]
-----
-$ bin/standalone.sh --server-config=standalone-ha.xml
-----
-
-.Domain Mode
-[source]
-----
-$ bin/domain.sh --host-config=host-master.xml
-$ bin/domain.sh --host-config=host-slave.xml
-----
-
-You may need to use additional parameters or system properties. For example, the parameter `-b` for the binding host or the system property
-`jboss.node.name` to specify the name of the route, as described in the <<sticky-sessions,Sticky sessions>> section.
-
diff --git a/server_installation/topics/clustering/example.adoc b/server_installation/topics/clustering/example.adoc
deleted file mode 100644
index 57c99b1ae0..0000000000
--- a/server_installation/topics/clustering/example.adoc
+++ /dev/null
@@ -1,6 +0,0 @@
-
-=== Clustering example
-
-{project_name} comes with an out-of-the-box clustering demo that leverages domain mode. Review the
-<<_clustered-domain-example, Clustered Domain Example>> chapter for more details.
-
diff --git a/server_installation/topics/clustering/load-balancer.adoc b/server_installation/topics/clustering/load-balancer.adoc
deleted file mode 100644
index ed60307261..0000000000
--- a/server_installation/topics/clustering/load-balancer.adoc
+++ /dev/null
@@ -1,240 +0,0 @@
-[[_setting-up-a-load-balancer-or-proxy]]
-=== Setting Up a load balancer or proxy
-
-This section discusses a number of things you need to configure before you can put a reverse proxy or load balancer
-in front of your clustered {project_name} deployment. It also covers configuring the built-in load balancer that
-was used in the <<_clustered-domain-example, Clustered Domain Example>>.
-
-The following diagram illustrates the use of a load balancer. In this example, the load balancer serves as a reverse proxy between three clients and a cluster of three {project_name} servers.
-
-[[load-balancer-diagram]]
-.Example Load Balancer Diagram
-image:{project_images}/load_balancer.png[]
-
-==== Identifying client IP addresses
-
-A few features in {project_name} rely on the fact that the remote
-address of the HTTP client connecting to the authentication server is the real IP address of the client machine. Examples include:
-
-* Event logs - a failed login attempt would be logged with the wrong source IP address
-* SSL required - if SSL required is set to external (the default), SSL is required for all external requests
-* Authentication flows - a custom authentication flow that uses the IP address to, for example, show OTP only for external requests
-* Dynamic Client Registration
-
-This can be problematic when you have a reverse proxy or loadbalancer in front of your {project_name} authentication server.
-The usual setup is that you have a frontend proxy sitting on a public network that load balances and forwards requests
-to backend {project_name} server instances located in a private network. There is some extra configuration you have to do in this scenario
-so that the actual client IP address is forwarded to and processed by the {project_name} server instances. Specifically:
-
-* Configure your reverse proxy or loadbalancer to properly set `X-Forwarded-For` and `X-Forwarded-Proto` HTTP headers.
-* Configure your reverse proxy or loadbalancer to preserve the original 'Host' HTTP header.
-* Configure the authentication server to read the client's IP address from `X-Forwarded-For` header.
-
-Configuring your proxy to generate the `X-Forwarded-For` and `X-Forwarded-Proto` HTTP headers and preserving the
- original `Host` HTTP header is beyond the scope of this guide. Take extra precautions to ensure that the
-`X-Forwarded-For` header is set by your proxy. If your proxy isn't configured correctly, then _rogue_ clients can set this header themselves and trick {project_name}
-into thinking the client is connecting from a different IP address than it actually is. This becomes really important if you are doing
-any blacklisting or whitelisting of IP addresses.
-
-Beyond the proxy itself, there are a few things you need to configure on the {project_name} side of things.
-If your proxy is forwarding requests via the HTTP protocol, then you need to configure {project_name} to pull the client's
-IP address from the `X-Forwarded-For` header rather than from the network packet.
-To do this, open up the profile configuration file (_standalone.xml_, _standalone-ha.xml_, or _domain.xml_ depending on your
-<<_operating-mode, operating mode>>) and look for the `{subsystem_undertow_xml_urn}` XML block.
-
-.`X-Forwarded-For` HTTP Config
-[source,xml,subs="attributes+"]
-----
-<subsystem xmlns="{subsystem_undertow_xml_urn}">
-    <buffer-cache name="default"/>
-    <server name="default-server">
-        <http-listener name="default" socket-binding="http"
-            proxy-address-forwarding="true" redirect-socket="https"/>
-        ...
-    </server>
-    ...
-</subsystem>
-----
-
-Add the `proxy-address-forwarding` attribute to the `http-listener` element. Set the value to `true`.
-
-If your proxy is using the AJP protocol instead of HTTP to forward requests (i.e. Apache HTTPD + mod-cluster), then you have
-to configure things a little differently. Instead of modifying the `http-listener`, you need to add a filter to
-pull this information from the AJP packets.
-
-
-.`X-Forwarded-For` AJP Config
-[source,xml,subs="attributes+"]
-----
-<subsystem xmlns="{subsystem_undertow_xml_urn}">
-    <buffer-cache name="default"/>
-    <server name="default-server">
-        <ajp-listener name="ajp" socket-binding="ajp"/>
-        <http-listener name="default" socket-binding="http" redirect-socket="https"/>
-        <host name="default-host" alias="localhost">
-            ...
-            <filter-ref name="proxy-peer"/>
-        </host>
-    </server>
-    ...
-    <filters>
-        ...
-        <filter name="proxy-peer" class-name="io.undertow.server.handlers.ProxyPeerAddressHandler" module="io.undertow.core"/>
-    </filters>
-</subsystem>
-----
-
-==== Enabling HTTPS/SSL with a reverse proxy
-
-Assuming that your reverse proxy doesn't use port 8443 for SSL you also need to configure to what port the HTTPS traffic is redirected.
-[source,xml,subs="attributes+"]
-----
-<subsystem xmlns="{subsystem_undertow_xml_urn}">
-    ...
-    <http-listener name="default" socket-binding="http" proxy-address-forwarding="true" redirect-socket="proxy-https"/>
-    ...
-</subsystem>
-----
-
-.Procedure
-
-. Add the `redirect-socket` attribute to the `http-listener` element. The value should be `proxy-https` which points to a
-socket binding you also need to define.
-
-. Add a new `socket-binding` element to the `socket-binding-group` element:
-+
-[source,xml]
-----
-<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
-    ...
-    <socket-binding name="proxy-https" port="443"/>
-    ...
-</socket-binding-group>
-----
-
-==== Verifying the configuration
-
-You can verify the reverse proxy or load balancer configuration
-
-.Procedure
-
-. Open the path `/auth/realms/master/.well-known/openid-configuration`
-through the reverse proxy.
-+
-For example if the reverse proxy address is `\https://acme.com/` then open the URL
-`\https://acme.com/auth/realms/master/.well-known/openid-configuration`. This will show a JSON document listing a number
-of endpoints for {project_name}.
-
-. Make sure the endpoints start with the address (scheme, domain and port) of your
-reverse proxy or load balancer. By doing this you make sure that {project_name} is using the correct endpoint.
-
-. Verify that {project_name} sees the correct source IP address for requests.
-+
-To check this, you can
-try to log in to the Admin Console with an invalid username and/or password. This should show a warning in the server log
-similar to this:
-+
-[source]
-----
-08:14:21,287 WARN XNIO-1 task-45 [org.keycloak.events] type=LOGIN_ERROR, realmId=master, clientId=security-admin-console, userId=8f20d7ba-4974-4811-a695-242c8fbd1bf8, ipAddress=X.X.X.X, error=invalid_user_credentials, auth_method=openid-connect, auth_type=code, redirect_uri=http://localhost:8080/auth/admin/master/console/?redirect_fragment=%2Frealms%2Fmaster%2Fevents-settings, code_id=a3d48b67-a439-4546-b992-e93311d6493e, username=admin
-----
-
-. Check that the value of `ipAddress` is the IP address of the machine you tried to login with and not the IP address of the reverse proxy or load balancer.
-
-==== Using the built-in load balancer
-
-This section covers configuring the built-in load balancer that is discussed in the
-<<_clustered-domain-example, Clustered Domain Example>>.
-
-The <<_clustered-domain-example, Clustered Domain Example>> is only designed to run
-on one machine. To bring up a slave on another host, you'll need to
-
-. Edit the _domain.xml_ file to point to your new host slave
-. Copy the server distribution. You don't need the _domain.xml_, _host.xml_, or _host-master.xml_ files. Nor do you need
- the _standalone/_ directory.
-. Edit the _host-slave.xml_ file to change the bind addresses used or override them on the command line
-
-.Procedure
-
-. Open _domain.xml_ so you can register the new host slave with the load balancer configuration.
-
-. Go to the undertow configuration in the `load-balancer` profile. Add a new `host` definition called `remote-host3` within the `reverse-proxy` XML block.
-+
-.domain.xml reverse-proxy config
-[source,xml,subs="attributes+"]
-----
-<subsystem xmlns="{subsystem_undertow_xml_urn}">
-    ...
-    <handlers>
-        <reverse-proxy name="lb-handler">
-            <host name="host1" outbound-socket-binding="remote-host1" scheme="ajp" path="/" instance-id="myroute1"/>
-            <host name="host2" outbound-socket-binding="remote-host2" scheme="ajp" path="/" instance-id="myroute2"/>
-            <host name="remote-host3" outbound-socket-binding="remote-host3" scheme="ajp" path="/" instance-id="myroute3"/>
-        </reverse-proxy>
-    </handlers>
-    ...
-</subsystem>
-----
-+
-The `output-socket-binding` is a logical name pointing to a `socket-binding` configured later in the _domain.xml_ file.
-The `instance-id` attribute must also be unique to the new host as this value is used by a cookie to enable sticky
-sessions when load balancing.
-
-. Go down to the `load-balancer-sockets` `socket-binding-group` and add the `outbound-socket-binding` for `remote-host3`.
-+
-This new binding needs to point to the host and port of the new host.
-+
-.domain.xml outbound-socket-binding
-[source,xml]
-----
-<socket-binding-group name="load-balancer-sockets" default-interface="public">
-    ...
-    <outbound-socket-binding name="remote-host1">
-        <remote-destination host="localhost" port="8159"/>
-    </outbound-socket-binding>
-    <outbound-socket-binding name="remote-host2">
-        <remote-destination host="localhost" port="8259"/>
-    </outbound-socket-binding>
-    <outbound-socket-binding name="remote-host3">
-        <remote-destination host="192.168.0.5" port="8080"/>
-    </outbound-socket-binding>
-</socket-binding-group>
-----
-
-===== Master bind addresses
-
-The next thing you'll have to do is change the `public` and `management` bind addresses for the master host. Either
-edit the _domain.xml_ file as discussed in the <<_bind-address, Bind Addresses>> chapter
-or specify these bind addresses on the command line as follows:
-
-[source]
-----
-$ domain.sh --host-config=host-master.xml -Djboss.bind.address=192.168.0.2 -Djboss.bind.address.management=192.168.0.2
-----
-
-===== Host slave bind addresses
-
-Next you'll have to change the `public`, `management`, and domain controller bind addresses (`jboss.domain.master-address`). Either edit the
-_host-slave.xml_ file or specify them on the command line as follows:
-
-[source]
-----
-$ domain.sh --host-config=host-slave.xml
- -Djboss.bind.address=192.168.0.5
- -Djboss.bind.address.management=192.168.0.5
- -Djboss.domain.master.address=192.168.0.2
-----
-
-The values of `jboss.bind.address` and `jboss.bind.address.management` pertain to the host slave's IP address.
-The value of `jboss.domain.master.address` needs to be the IP address of the domain controller, which is the management address of the master host.
-
-[role="_additional-resources"]
-.Additional resources
-
-* See link:{appserver_loadbalancer_link}[the load balancing] section in the _{appserver_loadbalancer_name}_ for information how to use other software-based load balancers.
-
diff --git a/server_installation/topics/clustering/multicast.adoc b/server_installation/topics/clustering/multicast.adoc
deleted file mode 100644
index c218b89646..0000000000
--- a/server_installation/topics/clustering/multicast.adoc
+++ /dev/null
@@ -1,36 +0,0 @@
-
-=== Setting up multicast networking
-
-The default clustering support needs IP Multicast. Multicast is a network broadcast protocol. This protocol is used at boot time to discover and join the cluster. It is also used to broadcast messages for the replication and invalidation of distributed caches used by {project_name}.
-
-The clustering subsystem for {project_name} runs on the JGroups stack. Out of the box, the bind addresses for clustering are bound to a private network interface with 127.0.0.1 as the default IP address.
-
-.Procedure
-
-. Edit the _standalone-ha.xml_ or _domain.xml_ sections discussed in the <<_bind-address,Bind Address>> chapter.
-+
-.private network config
-[source,xml]
-----
-<interfaces>
-    ...
-    <interface name="private">
-        <inet-address value="${jboss.bind.address.private:127.0.0.1}"/>
-    </interface>
-</interfaces>
-<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
-    ...
-    <socket-binding name="jgroups-mping" interface="private" port="0" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45700"/>
-    <socket-binding name="jgroups-tcp" interface="private" port="7600"/>
-    <socket-binding name="jgroups-tcp-fd" interface="private" port="57600"/>
-    <socket-binding name="jgroups-udp" interface="private" port="55200" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45688"/>
-    <socket-binding name="jgroups-udp-fd" interface="private" port="54200"/>
-    <socket-binding name="modcluster" port="0" multicast-address="224.0.1.105" multicast-port="23364"/>
-    ...
-</socket-binding-group>
-----
-
-. Configure the `jboss.bind.address.private` and `jboss.default.multicast.address` as well as the ports of the services on the clustering stack.
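-+
-For example, a startup command overriding these values might look like this (a sketch; the address values are hypothetical):
-+
-[source]
-----
-$ standalone.sh -c standalone-ha.xml -Djboss.bind.address.private=192.168.0.5 -Djboss.default.multicast.address=230.0.0.5
-----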
-+
-NOTE: It is possible to cluster {project_name} without IP Multicast, but this topic is beyond the scope of this guide. For more information, see link:{appserver_jgroups_link}[JGroups] in the _{appserver_jgroups_name}_.
-
diff --git a/server_installation/topics/clustering/recommended.adoc b/server_installation/topics/clustering/recommended.adoc
deleted file mode 100644
index 527afeaf40..0000000000
--- a/server_installation/topics/clustering/recommended.adoc
+++ /dev/null
@@ -1,9 +0,0 @@
-
-=== Recommended network architecture
-
-The recommended network architecture for deploying {project_name} is to set up an HTTP/HTTPS load balancer on
-a public IP address that routes requests to {project_name} servers sitting on a private network. This
-isolates all clustering connections and provides a nice means of protecting the servers.
-
-NOTE: By default, there is nothing to prevent unauthorized nodes from joining the cluster and broadcasting multicast messages.
- This is why cluster nodes should be in a private network, with a firewall protecting them from outside attacks.
diff --git a/server_installation/topics/clustering/securing-cluster-comm.adoc b/server_installation/topics/clustering/securing-cluster-comm.adoc
deleted file mode 100644
index 3627b79c87..0000000000
--- a/server_installation/topics/clustering/securing-cluster-comm.adoc
+++ /dev/null
@@ -1,13 +0,0 @@
-
-=== Secure cluster communication
-
-When cluster nodes are isolated on a private network, access to that private network is required to join the cluster or to view its communication. You can additionally enable authentication and encryption for cluster communication. As long as your private network is secure, enabling authentication and encryption is not necessary; {project_name} does not send very sensitive information over the cluster in either case.
-
-ifeval::[{project_product}==true]
-If you want to enable authentication and encryption for clustering communication, see link:https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/{appserver_version}/html-single/configuration_guide/configuring_high_availability#securing_cluster[Securing a Cluster] in the _{appserver_name} Configuration Guide_.
-endif::[]
-
-ifeval::[{project_community}==true]
-If you want to enable authentication and encryption for clustering communication, see the 'High Availability Guide' in the link:{appserver_doc_base_url}/High_Availability_Guide.html[WildFly documentation].
-endif::[]
-
diff --git a/server_installation/topics/clustering/serialized.adoc b/server_installation/topics/clustering/serialized.adoc
deleted file mode 100644
index 5041ea2ae3..0000000000
--- a/server_installation/topics/clustering/serialized.adoc
+++ /dev/null
@@ -1,24 +0,0 @@
-
-[[_clustering_db_lock]]
-=== Serialized cluster startup
-
-{project_name} cluster nodes are allowed to boot concurrently.
-When a {project_name} server instance boots up, it may perform database migration, importing, or first-time initializations.
-A DB lock is used to prevent these start actions from conflicting with one another when cluster nodes boot up concurrently.
-
-By default, the maximum timeout for this lock is 900 seconds. If a node waits on this lock for more than the timeout,
-it will fail to boot.
-Typically you won't need to change the default value, but if necessary you can configure it in the
-`standalone.xml`, `standalone-ha.xml`, or `domain.xml` file in your distribution. The location of this file
-depends on your <<_operating-mode, operating mode>>.
-
-[source,xml]
-----
-<spi name="dblock">
-    <provider name="jpa" enabled="true">
-        <properties>
-            <property name="lockWaitTimeout" value="900"/>
-        </properties>
-    </provider>
-</spi>
-----
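-
-If you prefer the CLI, a sketch of the equivalent change (following the `map-put` pattern from the <<_cli_recipes,CLI recipes>> section) might be:
-
-[source]
-----
-/subsystem=keycloak-server/spi=dblock/provider=jpa/:map-put(name=properties,key=lockWaitTimeout,value=900)
-----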
\ No newline at end of file
diff --git a/server_installation/topics/clustering/sticky-sessions.adoc b/server_installation/topics/clustering/sticky-sessions.adoc
deleted file mode 100644
index e4a9bc5297..0000000000
--- a/server_installation/topics/clustering/sticky-sessions.adoc
+++ /dev/null
@@ -1,69 +0,0 @@
-[[sticky-sessions]]
-=== Sticky sessions
-
-A typical cluster deployment consists of a load balancer (reverse proxy) and two or more {project_name} servers on a private network. For performance purposes,
-it may be useful if the load balancer forwards all requests related to a particular browser session to the same {project_name} backend node.
-
-The reason is that {project_name} uses an Infinispan distributed cache under the covers to save data related to the current authentication session and user session.
-The Infinispan distributed caches are configured with one owner by default. That means a particular session is saved on just one cluster node, and the other nodes need
-to look the session up remotely if they want to access it.
-
-For example, if the authentication session with ID `123` is saved in the Infinispan cache on `node1`, and `node2` then needs to look up this session,
-it must send a request to `node1` over the network to retrieve the particular session entity.
-
-It is beneficial if a particular session entity is always available locally, which can be done with the help of sticky sessions.
-The workflow in a cluster environment with a public frontend load balancer and two backend {project_name} nodes can look like this:
-
-* The user sends the initial request to see the {project_name} login screen.
-* This request is served by the frontend load balancer, which forwards it to some random node (for example, node1). Strictly speaking, the node doesn't need to be random,
-but can be chosen according to other criteria (client IP address, for example). It all depends on the implementation and configuration of the underlying load balancer (reverse proxy).
-* {project_name} creates an authentication session with a random ID (for example, 123) and saves it to the Infinispan cache.
-* The Infinispan distributed cache assigns the primary owner of the session based on the hash of the session ID.
-See the link:https://infinispan.org/docs/10.1.x/titles/configuring/configuring.html#clustered_caches[Infinispan documentation] for more details around this.
-Let's assume that Infinispan assigned `node2` to be the owner of this session.
-* {project_name} creates the cookie `AUTH_SESSION_ID` in the format `<session-id>.<owner-node-id>`. In our example case, it will be `123.node2`.
-* The response is returned to the user with the {project_name} login screen and the AUTH_SESSION_ID cookie in the browser.
-
-From this point on, it is beneficial if the load balancer forwards all subsequent requests to `node2`, as that is the node that owns the authentication session with ID `123`,
-and hence Infinispan can look up this session locally. After authentication is finished, the authentication session is converted to a user session, which will also be saved on
-`node2` because it has the same ID `123`.
-
-Sticky sessions are not mandatory for the cluster setup, but they are good for performance for the reasons mentioned above. You need to configure your load balancer to stick
-over the `AUTH_SESSION_ID` cookie. Exactly how to do this depends on your load balancer.
-
-On the {project_name} side, it is recommended to use the system property `jboss.node.name` during startup, with the value corresponding
-to the name of your route. For example, `-Djboss.node.name=node1` will use `node1` to identify the route. This route will be used by
-Infinispan caches and will be attached to the AUTH_SESSION_ID cookie when the node is the owner of the particular key. Here is
-an example of the startup command using this system property:
-[source]
-----
-cd $RHSSO_NODE1
-./standalone.sh -c standalone-ha.xml -Djboss.socket.binding.port-offset=100 -Djboss.node.name=node1
-----
-
-Typically in production environments the route name should be the same name as your backend host, but it is not required. You can
-use a different route name, for example if you want to hide the host name of your {project_name} server inside your private network.
-
-==== Disable adding the route
-
-Some load balancers can be configured to add the route information by themselves instead of relying on the backend {project_name} node.
-However, as described above, it is recommended that {project_name} adds the route, because this improves performance:
-{project_name} knows which node is the owner of a particular session and can route to that node, which is not necessarily the local node.
-
-If you prefer, you can disable {project_name} adding route information to the AUTH_SESSION_ID cookie by adding the following
-to the {project_name} subsystem configuration in your `RHSSO_HOME/standalone/configuration/standalone-ha.xml` file:
-
-[source,xml]
-----
-<subsystem xmlns="urn:jboss:domain:keycloak-server:1.1">
-    ...
-    <spi name="stickySessionEncoder">
-        <provider name="infinispan" enabled="true">
-            <properties>
-                <property name="shouldAttachRoute" value="false"/>
-            </properties>
-        </provider>
-    </spi>
-
-</subsystem>
-----
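-
-A jboss-cli sketch of the equivalent change (assuming the `stickySessionEncoder` SPI and `infinispan` provider entries already exist in the configuration) might be:
-
-[source]
-----
-/subsystem=keycloak-server/spi=stickySessionEncoder/provider=infinispan/:map-put(name=properties,key=shouldAttachRoute,value=false)
-----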
diff --git a/server_installation/topics/clustering/troubleshooting.adoc b/server_installation/topics/clustering/troubleshooting.adoc
deleted file mode 100644
index beaaab213f..0000000000
--- a/server_installation/topics/clustering/troubleshooting.adoc
+++ /dev/null
@@ -1,22 +0,0 @@
-
-=== Troubleshooting
-
-* Note that when you run a cluster, you should see messages similar to this in the log of both cluster nodes:
-+
-[source]
-----
-INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-10,shared=udp)
-ISPN000094: Received new cluster view: [node1/keycloak|1] (2) [node1/keycloak, node2/keycloak]
-----
-If you see just one node mentioned, it's possible that your cluster hosts are not joined together.
-+
-Usually it is best practice to have your cluster nodes on a private network, without a firewall restricting communication among them.
-A firewall can instead be enabled just on the public access point to your network.
-If for some reason you still need to have a firewall enabled on the cluster nodes, you will need to open some ports (see the example after this list).
-The default values are UDP port 55200 and multicast port 45688 with multicast address 230.0.0.4.
-Note that you may need more ports opened if you want to enable additional features such as diagnostics for your JGroups stack.
-{project_name} delegates most of the clustering work to Infinispan/JGroups.
-For more information, see link:{appserver_jgroups_link}[JGroups] in the _{appserver_jgroups_name}_.
-
-* If you are interested in failover support (high availability), evictions, expiration, and cache tuning, see the server cache configuration chapter of this guide.
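-
-As mentioned in the firewall discussion above, if you must keep a firewall enabled on the cluster nodes, a sketch of opening the default JGroups ports with firewalld might look like this (assuming a Red Hat Enterprise Linux host running firewalld):
-
-[source]
-----
-$ sudo firewall-cmd --permanent --add-port=55200/udp
-$ sudo firewall-cmd --permanent --add-port=45688/udp
-$ sudo firewall-cmd --reload
-----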
diff --git a/server_installation/topics/config-subsystem.adoc b/server_installation/topics/config-subsystem.adoc
deleted file mode 100644
index 8f8a7d6522..0000000000
--- a/server_installation/topics/config-subsystem.adoc
+++ /dev/null
@@ -1,23 +0,0 @@
-[[_manage_config]]
-
-== Managing the subsystem configuration
-
-Low-level configuration of {project_name} is done by editing the
- `standalone.xml`, `standalone-ha.xml`, or `domain.xml` file
-in your distribution. The location of this file
-depends on your <<_operating-mode, operating mode>>.
-
-While there are endless settings you can configure here, this section will focus on
-configuration of the _keycloak-server_ subsystem. No matter which configuration file
-you are using, configuration of the _keycloak-server_ subsystem is the same.
-
-The keycloak-server subsystem is typically declared toward the end of the file like this:
-[source,xml]
-----
-<subsystem xmlns="urn:jboss:domain:keycloak-server:1.1">
-    <web-context>auth</web-context>
-    ...
-</subsystem>
-----
-
-Note that anything changed in this subsystem will not take effect until the server is rebooted.
\ No newline at end of file
diff --git a/server_installation/topics/config-subsystem/cli-recipes.adoc b/server_installation/topics/config-subsystem/cli-recipes.adoc
deleted file mode 100644
index fb10287635..0000000000
--- a/server_installation/topics/config-subsystem/cli-recipes.adoc
+++ /dev/null
@@ -1,62 +0,0 @@
-[[_cli_recipes]]
-
-=== CLI recipes
-Here are some configuration tasks and how to perform them with CLI commands.
-Note that in all but the first example, we use the wildcard path `**` to mean
-the path to the keycloak-server subsystem.
-
-For standalone, this just means:
-
-`**` = `/subsystem=keycloak-server`
-
-For domain mode, this would mean something like:
-
-`**` = `/profile=auth-server-clustered/subsystem=keycloak-server`
-
-==== Changing the web context of the server
-[source]
-----
-/subsystem=keycloak-server/:write-attribute(name=web-context,value=myContext)
-----
-==== Setting the global default theme
-[source]
-----
-**/theme=defaults/:write-attribute(name=default,value=myTheme)
-----
-==== Adding a new SPI and a provider
-[source]
-----
-**/spi=mySPI/:add
-**/spi=mySPI/provider=myProvider/:add(enabled=true)
-----
-==== Disabling a provider
-[source]
-----
-**/spi=mySPI/provider=myProvider/:write-attribute(name=enabled,value=false)
-----
-==== Changing the default provider for an SPI
-[source]
-----
-**/spi=mySPI/:write-attribute(name=default-provider,value=myProvider)
-----
-==== Configuring the dblock SPI
-[source]
-----
-**/spi=dblock/:add(default-provider=jpa)
-**/spi=dblock/provider=jpa/:add(properties={lockWaitTimeout => "900"},enabled=true)
-----
-==== Adding or changing a single property value for a provider
-[source]
-----
-**/spi=dblock/provider=jpa/:map-put(name=properties,key=lockWaitTimeout,value=3)
-----
-==== Remove a single property from a provider
-[source]
-----
-**/spi=dblock/provider=jpa/:map-remove(name=properties,key=lockRecheckTime)
-----
-==== Setting values on a provider property of type `List`
-[source]
-----
-**/spi=eventsStore/provider=jpa/:map-put(name=properties,key=exclude-events,value=[EVENT1,EVENT2])
-----
\ No newline at end of file
diff --git a/server_installation/topics/config-subsystem/configure-spi-providers.adoc b/server_installation/topics/config-subsystem/configure-spi-providers.adoc
deleted file mode 100644
index 60815e2567..0000000000
--- a/server_installation/topics/config-subsystem/configure-spi-providers.adoc
+++ /dev/null
@@ -1,60 +0,0 @@
-[[_config_spi_providers]]
-
-=== Configure SPI providers
-
-The specifics of each configuration setting are discussed elsewhere in
-context with that setting. However, it is useful to understand the format used
-to declare settings on SPI providers.
-
-{project_name} is a highly modular system that allows great
-flexibility. There are more than 50 service provider interfaces (SPIs), and
-you are allowed to swap out implementations of each SPI. An implementation of
-an SPI is known as a _provider_.
-
-All elements in an SPI declaration are optional, but a full SPI declaration
- looks like this:
-[source,xml]
-----
-<spi name="myspi">
-    <default-provider>myprovider</default-provider>
-    <provider name="myprovider" enabled="true">
-        <properties>
-            <property name="foo" value="bar"/>
-        </properties>
-    </provider>
-    <provider name="mysecondprovider" enabled="true">
-        <properties>
-            <property name="foo" value="foo2"/>
-        </properties>
-    </provider>
-</spi>
-----
-Here we have two providers defined for the SPI `myspi`. The `default-provider`
-is listed as `myprovider`. However, it is up to the SPI to decide how it will treat
-this setting. Some SPIs allow more than one provider and some do not. So
-`default-provider` can help the SPI to choose.
-
-Also notice that each provider defines its own set of configuration properties.
-The fact that both providers above have a property called `foo` is just a
-coincidence.
-
-The type of each property value is interpreted by the provider. However, there
-is one exception. Consider the `jpa` provider for the `eventsStore` SPI:
-[source,xml]
-----
-<spi name="eventsStore">
-    <provider name="jpa" enabled="true">
-        <properties>
-            <property name="exclude-events" value="[&quot;EVENT1&quot;, &quot;EVENT2&quot;]"/>
-        </properties>
-    </provider>
-</spi>
-----
-We see that the value begins and ends with square brackets. That means that
-the value will be passed to the provider as a list. In this example, the system will pass the
-provider a list with two element values _EVENT1_ and _EVENT2_. To add more values
-to the list, just separate each list element with a comma. Unfortunately,
-you do need to escape the quotes surrounding each list element, which is why the XML above uses `&quot;`.
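-
-The equivalent change through the CLI, as shown in the <<_cli_recipes,CLI recipes>> section, avoids the XML escaping (a sketch for standalone mode):
-
-[source]
-----
-/subsystem=keycloak-server/spi=eventsStore/provider=jpa/:map-put(name=properties,key=exclude-events,value=[EVENT1,EVENT2])
-----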
-
-Follow the steps in link:{developerguide_link}#_providers[{developerguide_name}] for more details on custom providers and the configuration of providers.
\ No newline at end of file
diff --git a/server_installation/topics/config-subsystem/start-cli.adoc b/server_installation/topics/config-subsystem/start-cli.adoc
deleted file mode 100644
index e97e2b8a9f..0000000000
--- a/server_installation/topics/config-subsystem/start-cli.adoc
+++ /dev/null
@@ -1,107 +0,0 @@
-[[_start_cli]]
-
-=== Starting the {appserver_name} CLI
-Besides editing the configuration by hand, you also have the option of changing
-the configuration by issuing commands via the _jboss-cli_ tool. CLI allows
-you to configure servers locally or remotely. And it is especially useful when
-combined with scripting.
-
-To start the {appserver_name} CLI, you need to run `jboss-cli`.
-
-.Linux/Unix
-[source]
-----
-$ .../bin/jboss-cli.sh
-----
-
-.Windows
-[source]
-----
-> ...\bin\jboss-cli.bat
-----
-
-This will bring you to a prompt like this:
-
-.Prompt
-[source]
-----
-[disconnected /]
-----
-
-If you wish to execute commands on a running server, you will first
-execute the `connect` command.
-
-.connect
-[source]
-----
-[disconnected /] connect
-connect
-[standalone@localhost:9990 /]
-----
-
-You may be thinking to yourself, "I didn't enter in any username or password!". If you run `jboss-cli` on the same machine
-as your running standalone server or domain controller and your account has appropriate file permissions, you do not have
-to setup or enter in an admin username and password. See the link:{appserver_admindoc_link}[_{appserver_admindoc_name}_]
-for more details on how to make things more secure if you are uncomfortable with that setup.
-
-=== CLI embedded mode
-
-If you do happen to be on the same machine as your standalone server and you want to
-issue commands while the server is not active, you can embed the server into CLI and make
-changes in a special mode that disallows incoming requests. To do this, first
-execute the `embed-server` command with the config file you wish to change.
-
-.embed-server
-[source]
-----
-[disconnected /] embed-server --server-config=standalone.xml
-[standalone@embedded /]
-----
-
-=== Using CLI GUI mode
-
-The CLI can also run in GUI mode. GUI mode launches a Swing application that
-allows you to graphically view and edit the entire management model of a _running_ server.
-GUI mode is especially useful when you need help formatting your CLI commands and learning
-about the options available. The GUI can also retrieve server logs from a local or
-remote server.
-
-.Procedure
-
-. Start the CLI in GUI mode
-+
-[source]
-----
-$ .../bin/jboss-cli.sh --gui
-----
-+
-Note: to connect to a remote server, you pass the `--connect` option as well.
-Use the `--help` option for more details.
-
-. Scroll down to find the node `subsystem=keycloak-server`.
-
-. Right-click the node and select `Explore subsystem=keycloak-server`.
-+
-A new tab displays only the keycloak-server subsystem.
-+
-.keycloak-server subsystem
-image:images/cli-gui.png[keycloak-server subsystem]
-
-=== CLI scripting
-
-The CLI has extensive scripting capabilities. A script is just a text
-file with CLI commands in it. Consider a simple script that turns off theme
-and template caching.
-
-.turn-off-caching.cli
-[source]
-----
-/subsystem=keycloak-server/theme=defaults/:write-attribute(name=cacheThemes,value=false)
-/subsystem=keycloak-server/theme=defaults/:write-attribute(name=cacheTemplates,value=false)
-----
-To execute the script, you can follow the `Scripts` menu in CLI GUI, or execute the
-script from the command line as follows:
-[source]
-----
-$ .../bin/jboss-cli.sh --file=turn-off-caching.cli
-----
diff --git a/server_installation/topics/database.adoc b/server_installation/topics/database.adoc
deleted file mode 100644
index 793659bd1c..0000000000
--- a/server_installation/topics/database.adoc
+++ /dev/null
@@ -1,19 +0,0 @@
-[[_database]]
-
-== Setting up the relational database
-{project_name} comes with its own embedded Java-based relational database called H2.
-This is the default database that {project_name} will use to persist data and it really only exists so that you can run the authentication
-server out of the box. We highly recommend that you replace it with a more production-ready external database. The H2 database
-is not very viable in high-concurrency situations and should not be used in a cluster either. The purpose of this chapter is to
-show you how to connect {project_name} to a more mature database.
-
-{project_name} uses two layered technologies to persist its relational data. The bottom layered technology is JDBC. JDBC
-is a Java API that is used to connect to an RDBMS. There are different JDBC drivers per database type and they are provided
-by your database vendor. This chapter discusses how to configure {project_name} to use one of these vendor-specific drivers.
-
-The top layered technology for persistence is Hibernate JPA. This is an object-relational mapping API that maps Java
-objects to relational data. Most deployments of {project_name} will never have to touch the configuration aspects
-of Hibernate, but we will discuss how that is done if you run into that rare circumstance.
-
-NOTE: Datasource configuration is covered much more thoroughly in link:{appserver_datasource_link}[the datasource configuration chapter]
- in the _{appserver_admindoc_name}_.
diff --git a/server_installation/topics/database/checklist.adoc b/server_installation/topics/database/checklist.adoc
deleted file mode 100644
index 29155fcc93..0000000000
--- a/server_installation/topics/database/checklist.adoc
+++ /dev/null
@@ -1,13 +0,0 @@
-[[_rdbms-setup-checklist]]
-=== Database setup checklist
-
-Following are the steps you perform to get an RDBMS configured for {project_name}.
-
-. Locate and download a JDBC driver for your database
-. Package the driver JAR into a module and install this module into the server
-. Declare the JDBC driver in the configuration profile of the server
-. Modify the datasource configuration to use your database's JDBC driver
-. Modify the datasource configuration to define the connection parameters to your database
-
-This chapter will use PostgreSQL for all its examples. Other databases follow the same steps for installation.
-
diff --git a/server_installation/topics/database/datasource.adoc b/server_installation/topics/database/datasource.adoc
deleted file mode 100644
index 795c78f854..0000000000
--- a/server_installation/topics/database/datasource.adoc
+++ /dev/null
@@ -1,53 +0,0 @@
-
-=== Modifying the {project_name} datasource
-
-You modify the existing datasource configuration that {project_name} uses
-to connect to your new external database. You'll do
-this within the same configuration file and XML block in which you registered your JDBC driver. Here's an example
-that sets up the connection to your new database:
-
-.The {project_name} datasource
-[source,xml,subs="attributes+"]
-----
-<subsystem xmlns="urn:jboss:domain:datasources:6.0">
-    <datasources>
-        ...
-        <datasource jndi-name="java:jboss/datasources/KeycloakDS" pool-name="KeycloakDS" enabled="true" use-java-context="true">
-            <connection-url>jdbc:postgresql://localhost/keycloak</connection-url>
-            <driver>postgresql</driver>
-            <pool>
-                <max-pool-size>20</max-pool-size>
-            </pool>
-            <security>
-                <user-name>William</user-name>
-                <password>password</password>
-            </security>
-        </datasource>
-        ...
-    </datasources>
-</subsystem>
-----
-
-.Prerequisites
-
-* You have already declared your JDBC driver.
-
-.Procedure
-
-. Search for the `datasource` definition for `KeycloakDS`.
-+
-You'll first need to modify the `connection-url`. The
-documentation for your vendor's JDBC implementation should specify the format for this connection URL value.
-
-. Define the `driver` you will use.
-+
-This is the logical name of the JDBC driver you declared in the previous section of this
-chapter.
-+
-It is expensive to open a new connection to a database every time you want to perform a transaction. To compensate, the datasource
-implementation maintains a pool of open connections. The `max-pool-size` specifies the maximum number of connections it will pool.
-You may want to change the value of this depending on the load of your system.
-
-. Define the database username and password that is needed to connect to the database. This step is necessary for at least PostgreSQL. You may be concerned that these credentials are in clear text in the example. Methods exist to obfuscate these credentials, but these methods are beyond the scope of this guide.
-
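-A sketch of tuning the pool size later on a running server with jboss-cli might look like this (resource and attribute names as in the standard WildFly datasources subsystem; a server reload may be needed):
-
-[source]
-----
-/subsystem=datasources/data-source=KeycloakDS:write-attribute(name=max-pool-size,value=50)
-----
-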
-NOTE: For more information about datasource features, see link:{appserver_datasource_link}[the datasource configuration chapter] in the _{appserver_admindoc_name}_.
diff --git a/server_installation/topics/database/hibernate.adoc b/server_installation/topics/database/hibernate.adoc
deleted file mode 100644
index 38036935d1..0000000000
--- a/server_installation/topics/database/hibernate.adoc
+++ /dev/null
@@ -1,60 +0,0 @@
-
-=== Database Configuration
-
-The configuration for this component is found in the `standalone.xml`, `standalone-ha.xml`, or `domain.xml` file in your distribution. The location of this file depends on your <<_operating-mode, operating mode>>.
-
-.Database Config
-[source,xml]
-----
-<subsystem xmlns="urn:jboss:domain:keycloak-server:1.1">
-    ...
-    <spi name="connectionsJpa">
-        <provider name="default" enabled="true">
-            <properties>
-                <property name="dataSource" value="java:jboss/datasources/KeycloakDS"/>
-                <property name="initializeEmpty" value="false"/>
-                <property name="migrationStrategy" value="manual"/>
-                <property name="migrationExport" value="${jboss.home.dir}/keycloak-database-update.sql"/>
-            </properties>
-        </provider>
-    </spi>
-    ...
-</subsystem>
-----
-
-Possible configuration options are:
-
-dataSource::
- JNDI name of the dataSource
-
-jta::
- boolean property to specify if datasource is JTA capable
-
-driverDialect::
- Value of database dialect.
- In most cases you don't need to specify this property as dialect will be autodetected by Hibernate.
-
-initializeEmpty::
- Initialize database if empty. If set to false the database has to be manually initialized. If you want to manually initialize the database set migrationStrategy to `manual` which will create a file with SQL commands to initialize the database. Defaults to true.
-
-migrationStrategy::
- Strategy to use to migrate database. Valid values are `update`, `manual` and `validate`. Update will automatically migrate the database schema. Manual will export the required changes to a file with SQL commands that you can manually execute on the database. Validate will simply check if the database is up-to-date.
-
-migrationExport::
- Path for where to write manual database initialization/migration file.
-
-showSql::
- Specify whether Hibernate should show all SQL commands in the console (false by default). This is very verbose!
-
-formatSql::
- Specify whether Hibernate should format SQL commands (true by default)
-
-globalStatsInterval::
- Will log global statistics from Hibernate about executed DB queries and other things.
- Statistics are always reported to server log at specified interval (in seconds) and are cleared after each report.
-
-schema::
- Specify the database schema to use
-
-NOTE: These configuration switches and more are described in the link:{appserver_jpa_link}[_{appserver_jpa_name}_].
-
diff --git a/server_installation/topics/database/jdbc.adoc b/server_installation/topics/database/jdbc.adoc
deleted file mode 100644
index 5aa832cea0..0000000000
--- a/server_installation/topics/database/jdbc.adoc
+++ /dev/null
@@ -1,102 +0,0 @@
-
-=== Packaging the JDBC driver
-
-Find and download the JDBC driver JAR for your RDBMS. Before you can use this driver, you must package it up into a module and install it into the server. Modules define JARs that are loaded into the {project_name} classpath and the dependencies those JARs have on other modules.
-
-.Procedure
-
-. Create a directory structure to hold your module definition within the _.../modules/_ directory of your {project_name} distribution.
-+
-The convention is to use the Java package name of the JDBC driver for the name of the directory structure. For PostgreSQL, create the directory _org/postgresql/main_.
-
-. Copy your database driver JAR into this directory and create an empty _module.xml_ file within it too.
-+
-.Module Directory
-image:{project_images}/db-module.png[Module Directory]
-
-. Open up the _module.xml_ file and create the following XML:
-+
-.Module XML
-[source,xml]
-----
-<?xml version="1.0" ?>
-<module xmlns="urn:jboss:module:1.3" name="org.postgresql">
-
-    <resources>
-        <resource-root path="postgresql-VERSION.jar"/>
-    </resources>
-
-    <dependencies>
-        <module name="javax.api"/>
-        <module name="javax.transaction.api"/>
-    </dependencies>
-</module>
-----
-+
-* The module name should match the directory structure of your module. So, _org/postgresql_ maps to `org.postgresql`.
-* The `resource-root path` attribute should specify the JAR filename of the driver.
-* The rest are just the normal dependencies that any JDBC driver JAR would have.
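-
-Putting the first two steps of this procedure together for PostgreSQL, the shell commands might look like this (a sketch; `postgresql-VERSION.jar` stands for the actual file name of your downloaded driver):
-
-[source]
-----
-$ cd .../modules
-$ mkdir -p org/postgresql/main
-$ cp postgresql-VERSION.jar org/postgresql/main/
-----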
-
-=== Declaring and loading the JDBC driver
-
-You declare your JDBC driver in your deployment profile so that it loads and becomes available when the server boots up.
-
-.Prerequisites
-
-You have packaged the JDBC driver.
-
-.Procedure
-
-. Declare your JDBC driver by editing one of these files based on your deployment mode:
-
-* For standalone mode, edit _.../standalone/configuration/standalone.xml_.
-* For standalone clustering mode, edit _.../standalone/configuration/standalone-ha.xml_.
-* For domain mode, edit _.../domain/configuration/domain.xml_.
-+
-In domain mode, make sure you edit the profile you are using: either `auth-server-standalone` or `auth-server-clustered`.
-
-. Within the profile, search for the `drivers` XML block within the `datasources` subsystem.
-+
-You should see a pre-defined driver declared for the H2 JDBC driver. This is where you'll declare the JDBC driver for your external database.
-+
-.JDBC Drivers
-[source,xml,subs="attributes+"]
-----
-<subsystem xmlns="urn:jboss:domain:datasources:6.0">
-    <datasources>
-        ...
-        <drivers>
-            <driver name="h2" module="com.h2database.h2">
-                <xa-datasource-class>org.h2.jdbcx.JdbcDataSource</xa-datasource-class>
-            </driver>
-        </drivers>
-    </datasources>
-</subsystem>
-----
-
-. Within the `drivers` XML block, declare an additional JDBC driver.
-
-* Assign any `name` to this driver.
-* Specify the `module` attribute, which points to the module that you created earlier for the driver JAR.
-* Specify the driver's Java class.
-+
-Here's an example of installing a PostgreSQL driver that lives in the module example defined earlier in this chapter.
-+
-.Declare Your JDBC Drivers
-[source,xml,subs="attributes+"]
-<subsystem xmlns="urn:jboss:domain:datasources:6.0">
-    <datasources>
-        ...
-        <drivers>
-            <driver name="postgresql" module="org.postgresql">
-                <xa-datasource-class>org.postgresql.xa.PGXADataSource</xa-datasource-class>
-            </driver>
-            <driver name="h2" module="com.h2database.h2">
-                <xa-datasource-class>org.h2.jdbcx.JdbcDataSource</xa-datasource-class>
-            </driver>
-        </drivers>
-        ...
-    </datasources>
-</subsystem>
-----
-
diff --git a/server_installation/topics/database/unicode-considerations.adoc b/server_installation/topics/database/unicode-considerations.adoc
deleted file mode 100644
index e245a65b58..0000000000
--- a/server_installation/topics/database/unicode-considerations.adoc
+++ /dev/null
@@ -1,70 +0,0 @@
-
-=== Unicode considerations for databases
-
-The database schema in {project_name} accounts for Unicode strings only in the following special fields:
-
-* Realms: display name, HTML display name, localization texts (keys and values)
-* Federation Providers: display name
-* Users: username, given name, last name, attribute names and values
-* Groups: name, attribute names and values
-* Roles: name
-* Descriptions of objects
-
-Otherwise, characters are limited to those contained in the database encoding, which is often 8-bit. However, for some
-database systems, it is possible to enable UTF-8 encoding of Unicode characters and use the full Unicode character set in all
-text fields. Often, this is counterbalanced by a shorter maximum string length than with 8-bit encodings.
-
-Some databases require special settings on the database and/or the JDBC driver to be able to handle Unicode characters.
-Please find the settings for your database below. Note that even if a database is not listed here, it can still work properly
-provided it handles UTF-8 encoding properly both at the level of the database and the JDBC driver.
-
-Technically, the key criterion for Unicode support in all fields is whether the database allows setting a Unicode
-character set for `VARCHAR` and `CHAR` fields. If yes, there is a good chance that Unicode will work, usually at
-the expense of field length. If it only supports Unicode in `NVARCHAR` and `NCHAR` fields, Unicode support for all text
-fields is unlikely, as the Keycloak schema uses `VARCHAR` and `CHAR` fields extensively.
-
-==== Oracle database
-
-Unicode characters are properly handled provided the database was created with Unicode support in `VARCHAR` and `CHAR`
-fields (for example, by using the `AL32UTF8` character set as the database character set). No special settings are needed for the JDBC
-driver.
-
-If the database character set is not Unicode, then to use Unicode characters in the special fields, the JDBC driver needs
-to be configured with the connection property `oracle.jdbc.defaultNChar` set to `true`. It might be wise, though not
-strictly necessary, to also set the `oracle.jdbc.convertNcharLiterals` connection property to `true`. These properties
-can be set either as system properties or as connection properties. Please note that setting `oracle.jdbc.defaultNChar`
-may have a negative impact on performance. For details, please refer to the Oracle JDBC driver configuration documentation.
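-
-As a sketch, setting these as connection properties in the datasource definition might look like the following (using the standard WildFly `connection-property` element; values shown are examples):
-
-[source,xml]
-----
-<datasource jndi-name="java:jboss/datasources/KeycloakDS" pool-name="KeycloakDS" enabled="true">
-    ...
-    <connection-property name="oracle.jdbc.defaultNChar">true</connection-property>
-    <connection-property name="oracle.jdbc.convertNcharLiterals">true</connection-property>
-    ...
-</datasource>
-----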
-
-==== Microsoft SQL Server database
-
-Unicode characters are properly handled only for the special fields. No special settings of the JDBC driver or database are
-necessary.
-
-==== MySQL database
-
-Unicode characters are properly handled provided the database was created with Unicode support in `VARCHAR` and `CHAR`
-fields in the `CREATE DATABASE` command (for example, by using the `utf8` character set as the default database character set in
-MySQL 5.5; note that the `utf8mb4` character set does not work due to different storage requirements from the `utf8`
-character set footnote:[Tracked as https://issues.redhat.com/browse/KEYCLOAK-3873]). Note that in this case, the length
-restriction on non-special fields does not apply, because columns are created to accommodate a given number of characters,
-not bytes. If the database default character set does not allow storing Unicode, only the special fields allow storing
-Unicode values.
-
-On the JDBC driver side, it is necessary to add the connection property `characterEncoding=UTF-8` to the JDBC
-connection settings.
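-
-For example, when the datasource is defined in the profile XML, the property can be appended to the connection URL (a sketch; host and database name are examples):
-
-[source,xml]
-----
-<connection-url>jdbc:mysql://localhost:3306/keycloak?characterEncoding=UTF-8</connection-url>
-----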
-
-==== PostgreSQL database
-
-Unicode is supported when the database character set is `UTF8`. In that case, Unicode characters can be used in any
-field, and there is no reduction of field length for non-special fields. No special settings of the JDBC driver are necessary.
-
-The character set of a PostgreSQL database is determined at the time it is created. You can determine the default
-character set for a PostgreSQL cluster with the following SQL command:
-
-[source,sql]
-----
-show server_encoding;
-----
-
-If the default character set is not UTF8, then you can create the database with UTF8 as its character set like this:
-
-[source,sql]
-----
-create database keycloak with encoding 'UTF8';
-----
diff --git a/server_installation/topics/host.adoc b/server_installation/topics/host.adoc
deleted file mode 100644
index 0d7fac7a81..0000000000
--- a/server_installation/topics/host.adoc
+++ /dev/null
@@ -1,68 +0,0 @@
-
-[[_hostname]]
-== Use of the public hostname
-
-{project_name} uses the public hostname for a number of things, such as the token issuer fields and the URLs sent in
-password reset emails.
-
-The Hostname SPI provides a way to configure the hostname for a request. The default provider allows setting
-a fixed URL for frontend requests, while allowing backend requests to be based on the request URI. It is also possible
-to develop your own provider in case the built-in provider does not provide the functionality you need.
-
-
-=== Default provider
-
-The default hostname provider uses the configured `frontendUrl` as the base URL for frontend requests (requests from
-user-agents) and uses the request URL as the basis for backend requests (direct requests from clients).
-
-Frontend requests do not have to have the same context path as the Keycloak server. This means you can expose Keycloak
-on, for example, `https://auth.example.org` or `https://example.org/keycloak`, while internally its URL could be
-`https://10.0.0.10:8080/auth`.
-
-This makes it possible to have user-agents (browsers) send requests to {project_name} through the public domain name,
-while internal clients can use an internal domain name or IP address.
-
-This is reflected in the OpenID Connect Discovery endpoint, for example, where the `authorization_endpoint` uses the
-frontend URL while the `token_endpoint` uses the backend URL. Note that a public client, for instance, would contact
-Keycloak through the public endpoint, which would result in the base of `authorization_endpoint` and `token_endpoint`
-being the same.
-
-To set the `frontendUrl` for Keycloak, you can either add `-Dkeycloak.frontendUrl=https://auth.example.org` to the
-startup arguments or configure it in `standalone.xml`. See the example below:
-
-[source, xml]
-----
-<spi name="hostname">
-    <default-provider>default</default-provider>
-    <provider name="default" enabled="true">
-        <properties>
-            <property name="frontendUrl" value="https://auth.example.com"/>
-            <property name="forceBackendUrlToFrontendUrl" value="false"/>
-        </properties>
-    </provider>
-</spi>
-----
-
-To update the `frontendUrl` with jboss-cli use the following command:
-
-[source,bash]
-----
-/subsystem=keycloak-server/spi=hostname/provider=default:write-attribute(name=properties.frontendUrl,value="https://auth.example.com")
-----
-
-If you want all requests to go through the public domain name you can force backend requests to use the frontend URL as
-well by setting `forceBackendUrlToFrontendUrl` to `true`.
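-
-Following the same pattern as the `frontendUrl` command above, a jboss-cli sketch for this setting might be:
-
-[source,bash]
-----
-/subsystem=keycloak-server/spi=hostname/provider=default:write-attribute(name=properties.forceBackendUrlToFrontendUrl,value=true)
-----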
-
-It is also possible to override the default frontend URL for individual realms. This can be done in the admin console.
-
-If you do not want to expose the admin endpoints and console on the public domain, use the property `adminUrl` to set
-a fixed URL for the admin console that is different from the `frontendUrl`. It is also required to block access to
-`/auth/admin` externally; for details on how to do that, refer to the link:{adminguide_link}[{adminguide_name}].
-
-=== Custom provider
-
-To develop a custom hostname provider you need to implement `org.keycloak.urls.HostnameProviderFactory` and
-`org.keycloak.urls.HostnameProvider`.
-
-Follow the instructions in the Service Provider Interfaces section in link:{developerguide_link}[{developerguide_name}]
-for more information on how to develop a custom provider.
diff --git a/server_installation/topics/installation.adoc b/server_installation/topics/installation.adoc
deleted file mode 100644
index 89d49ee323..0000000000
--- a/server_installation/topics/installation.adoc
+++ /dev/null
@@ -1,10 +0,0 @@
-
-== Installing the software
-ifeval::[{project_community}==true]
-Installing {project_name} is as simple as downloading it and unzipping it. This chapter reviews system requirements
-as well as the directory structure of the distribution.
-endif::[]
-
-ifeval::[{project_product}==true]
-You can install {project_name} by downloading a ZIP file and unzipping it, or by using an RPM. This chapter reviews system requirements as well as the directory structure.
-endif::[]
\ No newline at end of file
diff --git a/server_installation/topics/installation/directory-structure.adoc b/server_installation/topics/installation/directory-structure.adoc
deleted file mode 100644
index 32cb6bb5c6..0000000000
--- a/server_installation/topics/installation/directory-structure.adoc
+++ /dev/null
@@ -1,23 +0,0 @@
-
-=== Important directories
-
-The following are some important directories in the server distribution.
-
-_bin/_::
- This contains various scripts to either boot the server or perform some other management action on the server.
-
-_domain/_::
- This contains configuration files and working directory when running {project_name} in <<_domain-mode,domain mode>>.
-
-_modules/_::
- These are all the Java libraries used by the server.
-
-_standalone/_::
- This contains configuration files and working directory when running {project_name} in <<_standalone-mode,standalone mode>>.
-
-_standalone/deployments/_::
- If you are writing extensions to {project_name}, you can put your extensions here. See the link:{developerguide_link}[{developerguide_name}] for more information on this.
-
-_themes/_::
-  This directory contains all the HTML files, style sheets, JavaScript files, and images used to render the UI screens displayed by the server.
-  Here you can modify an existing theme or create your own. See the link:{developerguide_link}[{developerguide_name}] for more information on this.
diff --git a/server_installation/topics/installation/distribution-files-community.adoc b/server_installation/topics/installation/distribution-files-community.adoc
deleted file mode 100644
index 4c1df56bbf..0000000000
--- a/server_installation/topics/installation/distribution-files-community.adoc
+++ /dev/null
@@ -1,29 +0,0 @@
-
-=== Installing the Keycloak server
-
-The Keycloak Server has two downloadable distributions:
-
-* `keycloak-{project_version}.[zip|tar.gz]`
-+
-This file is the server only distribution. It contains nothing other than the scripts and binaries
-to run the Keycloak Server.
-
-* `keycloak-overlay-{project_version}.[zip|tar.gz]`
-+
-This file is a WildFly add-on that you can use to install the Keycloak Server on top of an existing
-WildFly distribution. We do not support users who want to run their applications and Keycloak on the same server instance.
-
-.Procedure
-
-. To install the Keycloak server, run your operating system's `unzip` or `gunzip` and `tar` utilities on the `keycloak-{project_version}.[zip|tar.gz]` file.
-
-. To install the Keycloak overlay, which must be installed on a different server instance:
-
- .. Change to the root directory of your WildFly distribution.
- .. Unzip the `keycloak-overlay-{project_version}.[zip|tar.gz]` file.
- .. Open the bin directory in a shell.
- .. Run `./jboss-cli.[sh|bat] --file=keycloak-install.cli`.
-
-
-
-
diff --git a/server_installation/topics/installation/distribution-files-product.adoc b/server_installation/topics/installation/distribution-files-product.adoc
deleted file mode 100644
index e9d5294c43..0000000000
--- a/server_installation/topics/installation/distribution-files-product.adoc
+++ /dev/null
@@ -1,50 +0,0 @@
-
-=== Installing RH-SSO from a ZIP file
-
-The {project_name} server download ZIP file contains the scripts and binaries to run the {project_name} server. You install the {project_version_base} server first, then the {project_version} server patch.
-
-.Procedure
-
-. Go to the https://access.redhat.com/jbossnetwork/restricted/listSoftware.html?downloadType=distributions&product=core.service.rhsso[Red Hat customer portal].
-
-. Download the {project_name} {project_version_base} server.
-
-. Unpack the ZIP file using the appropriate utility, such as `unzip`, `tar`, or `Expand-Archive`.
-
-. Return to the https://access.redhat.com/jbossnetwork/restricted/listSoftware.html?downloadType=distributions&product=core.service.rhsso[Red Hat customer portal].
-
-. Click the `Patches` tab.
-
-. Download the {project_name} {project_version} server patch.
-
-. Place the downloaded file in a directory you choose.
-
-. Go to the `bin` directory of {appserver_name}.
-
-. Start the {appserver_name} command line interface.
-+
-.Linux/Unix
-[source,bash,subs=+attributes]
-----
-$ jboss-cli.sh
-----
-+
-.Windows
-[source,bash,subs=+attributes]
-----
-> jboss-cli.bat
-----
-
-. Apply the patch.
-+
-[source,bash,subs=+attributes]
-----
-$ patch apply <path-to-file>/rh-sso-{project_version}-patch.zip
-----
-
-.Additional resources
-
-For more details on applying patches, see link:https://access.redhat.com/documentation/en-us/red_hat_single_sign-on/{project_version_base}/html/upgrading_guide/upgrading#zip-patching[Patching a ZIP/Installer Installation].
-
-
-
diff --git a/server_installation/topics/installation/installing-rpm.adoc b/server_installation/topics/installation/installing-rpm.adoc
deleted file mode 100644
index 1ec4222613..0000000000
--- a/server_installation/topics/installation/installing-rpm.adoc
+++ /dev/null
@@ -1,69 +0,0 @@
-[[_installing_rpm]]
-
-=== Installing RH-SSO from an RPM
-
-NOTE: With Red Hat Enterprise Linux 7 and 8, the term channel was replaced with the term repository. In these instructions only the term repository is used.
-
-You must subscribe to both the {appserver_name} {appserver_version} and RH-SSO {project_versionDoc} repositories before you can install RH-SSO from an RPM.
-
-NOTE: You cannot continue to receive upgrades to EAP RPMs while stopping updates for RH-SSO.
-
-[[subscribing_EAP_repo]]
-==== Subscribing to the {appserver_name} {appserver_version} repository
-
-.Prerequisites
-
-. Ensure that your Red Hat Enterprise Linux system is registered to your account using Red Hat Subscription Manager. For more information see the link:https://access.redhat.com/documentation/en-us/red_hat_subscription_management/1/html-single/quick_registration_for_rhel/index[Red Hat Subscription Management documentation].
-
-. If you are already subscribed to another {appserver_name} repository, you must unsubscribe from that repository first.
-
-For Red Hat Enterprise Linux 6, 7: Using Red Hat Subscription Manager, subscribe to the {appserver_name} {appserver_version} repository using the following command. Replace `<RHEL_VERSION>` with either 6 or 7, depending on your Red Hat Enterprise Linux version.
-
-[source,bash,subs="attributes+"]
-----
-subscription-manager repos --enable=jb-eap-{appserver_version}-for-rhel-<RHEL_VERSION>-server-rpms --enable=rhel-<RHEL_VERSION>-server-rpms
-----
-
-For Red Hat Enterprise Linux 8: Using Red Hat Subscription Manager, subscribe to the {appserver_name} {appserver_version} repository using the following command:
-
-[source,bash,subs="attributes+"]
-----
-subscription-manager repos --enable=jb-eap-{appserver_version}-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms
-----
-
-==== Subscribing to the RH-SSO {project_versionDoc} repository and installing RH-SSO {project_versionDoc}
-
-.Prerequisites
-
-. Ensure that your Red Hat Enterprise Linux system is registered to your account using Red Hat Subscription Manager. For more information see the link:https://access.redhat.com/documentation/en-us/red_hat_subscription_management/1/html-single/quick_registration_for_rhel/index[Red Hat Subscription Management documentation].
-. Ensure that you have already subscribed to the {appserver_name} {appserver_version} repository. For more information see xref:subscribing_EAP_repo[Subscribing to the {appserver_name} {appserver_version} repository].
-
-.Procedure
-
-. For Red Hat Enterprise Linux 6, 7: Using Red Hat Subscription Manager, subscribe to the RH-SSO {project_versionDoc} repository using the following command. Replace `<RHEL_VERSION>` with either 6 or 7, depending on your Red Hat Enterprise Linux version.
-+
-[source,bash,subs="attributes+"]
-----
-subscription-manager repos --enable=rh-sso-{project_versionDoc}-for-rhel-<RHEL_VERSION>-server-rpms
-----
-
-. For Red Hat Enterprise Linux 8: Using Red Hat Subscription Manager, subscribe to the RH-SSO {project_versionDoc} repository using the following command:
-+
-[source,bash,subs="attributes+"]
-----
-subscription-manager repos --enable=rh-sso-{project_versionDoc}-for-rhel-8-x86_64-rpms
-----
-
-. For Red Hat Enterprise Linux 6, 7: Install RH-SSO from your subscribed RH-SSO {project_versionDoc} repository using the following command:
-
- yum groupinstall rh-sso7
-
-. For Red Hat Enterprise Linux 8: Install RH-SSO from your subscribed RH-SSO {project_versionDoc} repository using the following command:
-
- dnf groupinstall rh-sso7
-
-Your installation is complete. The default RH-SSO_HOME path for the RPM installation is /opt/rh/rh-sso7/root/usr/share/keycloak.
-
-.Additional resources
-
-For details on installing the {project_version} patch for {project_name}, see link:https://access.redhat.com/documentation/en-us/red_hat_single_sign-on/{project_version_base}/html/upgrading_guide/upgrading#rpm-patching[RPM patching].
diff --git a/server_installation/topics/installation/system-requirements.adoc b/server_installation/topics/installation/system-requirements.adoc
deleted file mode 100644
index 2678ebe391..0000000000
--- a/server_installation/topics/installation/system-requirements.adoc
+++ /dev/null
@@ -1,18 +0,0 @@
-
-=== Installation prerequisites
-
-These prerequisites exist for installing the {project_name} server:
-
-* Any operating system that runs Java
-* Java 8 JRE or Java 11 JRE
-* zip or gzip and tar
-* At least 512M of RAM
-* At least 1G of disk space
-* A shared external database like PostgreSQL, MySQL, Oracle, etc. {project_name} requires an external shared
- database if you want to run in a cluster. Please see the <<_database,database configuration>> section of this guide for more information.
-* Network multicast support on your machine if you want to run in a cluster. {project_name} can
-  be clustered without multicast, but this requires several configuration changes. Please see
-  the <<_clustering,clustering>> section of this guide for more information.
-* On Linux, it is recommended to use `/dev/urandom` as a source of random data to prevent {project_name} hanging due to lack of available
- entropy, unless `/dev/random` usage is mandated by your security policy. To achieve that on Oracle JDK 8 and OpenJDK 8, set the `java.security.egd`
- system property on startup to `file:/dev/urandom`.
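-+
-For example (a sketch of the startup flag):
-+
-[source]
-----
-$ standalone.sh -Djava.security.egd=file:/dev/urandom
-----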
diff --git a/server_installation/topics/manage.adoc b/server_installation/topics/manage.adoc
deleted file mode 100644
index 4d76cb6bd9..0000000000
--- a/server_installation/topics/manage.adoc
+++ /dev/null
@@ -1,49 +0,0 @@
-[[_app_server_cli]]
-
-== Manage Configuration at Runtime
-
-In the upcoming chapters, you'll often be provided two options for applying application server configuration changes to your deployment. You'll be
-shown how to edit the _standalone.xml_ or _domain.xml_ directly. This must be done when the server (or servers) are offline.
-Additionally, you may be shown how to apply config changes on a running server using the app server's command line interface ({appserver_name} CLI). This chapter discusses
-how you will do this.
-
-
-=== Start the {appserver_name} CLI
-
-To start the {appserver_name} CLI, you need to run the `jboss-cli` script.
-
-.Linux/Unix
-[source]
-----
-$ .../bin/jboss-cli.sh
-----
-
-.Windows
-[source]
-----
-> ...\bin\jboss-cli.bat
-----
-
-This will bring you to a prompt like this:
-
-.Prompt
-[source]
-----
-[disconnected /]
-----
-
-There are a few commands you can execute without a running standalone server or domain controller, but usually you will
-need to have those services booted up before you can execute CLI commands. To connect to a running server,
-execute the `connect` command.
-
-.connect
-[source]
-----
-[disconnected /] connect
-connect
-[domain@localhost:9990 /]
-----
-
-You may be thinking to yourself, "I didn't enter in any username or password!". If you run `jboss-cli` on the same machine
-as your running standalone server or domain controller and your account has appropriate file permissions, you do not have
-to setup or enter in an admin username and password. See the link:{appserver_admindoc_link}[_{appserver_admindoc_name}_] for more details on how to make things more secure if you are uncomfortable with that setup.
diff --git a/server_installation/topics/network.adoc b/server_installation/topics/network.adoc
deleted file mode 100644
index 633bbfcb28..0000000000
--- a/server_installation/topics/network.adoc
+++ /dev/null
@@ -1,21 +0,0 @@
-
-[[_network]]
-
-== Setting up the network
-
-The default installation of {project_name} can run with some networking limitations. For one, all network endpoints bind to `localhost`,
-so the auth server is really only usable on one local machine. For HTTP-based connections, it does not use default ports
-like 80 and 443. HTTPS/SSL is not configured out of the box, and without it, {project_name} has many security
-vulnerabilities.
-Finally, {project_name}
-may often need to make secure SSL and HTTPS connections to external servers and thus needs a trust store set up so that endpoints can
-be validated correctly. This chapter discusses all of these things.
-
-
-
-
-
-
-
-
-
diff --git a/server_installation/topics/network/bind-address.adoc b/server_installation/topics/network/bind-address.adoc
deleted file mode 100644
index d7cba9115f..0000000000
--- a/server_installation/topics/network/bind-address.adoc
+++ /dev/null
@@ -1,53 +0,0 @@
-
-[[_bind-address]]
-
-=== Bind addresses
-
-By default {project_name} binds to the localhost loopback address `127.0.0.1`. That's not a very useful default if
-you want the authentication server available on your network. Generally, what we recommend is that you deploy a reverse proxy
-or load balancer on a public network and route traffic to individual {project_name} server instances on a private network.
-In either case though, you still need to set up your network interfaces to bind to something other than `localhost`.
-
-Setting the bind address is quite easy and can be done on the command line with either the _standalone.sh_ or
-_domain.sh_ boot scripts discussed in the <<_operating-mode, Choosing an Operating Mode>> chapter.
-
-[source]
-----
-$ standalone.sh -b 192.168.0.5
-----
-
-The `-b` switch sets the IP bind address for any public interfaces.
-
-Alternatively, if you don't want to set the bind address at the command line, you can edit the profile configuration of your deployment.
-Open up the profile configuration file (_standalone.xml_ or _domain.xml_ depending on your
-<<_operating-mode, operating mode>>) and look for the `interfaces` XML block.
-
-[source,xml]
-----
-<interfaces>
-    <interface name="management">
-        <inet-address value="${jboss.bind.address.management:127.0.0.1}"/>
-    </interface>
-    <interface name="public">
-        <inet-address value="${jboss.bind.address:127.0.0.1}"/>
-    </interface>
-</interfaces>
-----
-
-The `public` interface corresponds to subsystems creating sockets that are available publicly. An example of one
-of these subsystems is the web layer which serves up the authentication endpoints of {project_name}. The `management`
-interface corresponds to sockets opened up by the management layer of the {appserver_name}. Specifically the sockets
-which allow you to use the `jboss-cli.sh` command line interface and the {appserver_name} web console.
-
-Looking at the `public` interface, you see that it has the special string `${jboss.bind.address:127.0.0.1}`. This string
-denotes the default value `127.0.0.1`, which can be overridden on the command line by setting a Java system property, i.e.:
-
-[source]
-----
-$ domain.sh -Djboss.bind.address=192.168.0.5
-----
-
-The `-b` switch is just a shorthand notation for this system property. So, you can either change the bind address value directly in the profile configuration, or change it on the command line when
-you boot up.
-
-NOTE: There are many more options available when setting up `interface` definitions. For more information, see link:{appserver_network_link}[the network interface] in the _{appserver_network_name}_.
diff --git a/server_installation/topics/network/https.adoc b/server_installation/topics/network/https.adoc
deleted file mode 100644
index 360e08295e..0000000000
--- a/server_installation/topics/network/https.adoc
+++ /dev/null
@@ -1,168 +0,0 @@
-[[_setting_up_ssl]]
-=== Setting up HTTPS/SSL
-
-WARNING: {project_name} is not set up by default to handle SSL/HTTPS.
- It is highly recommended that you either enable SSL on the {project_name} server itself or on a reverse proxy in front of the {project_name} server.
-
-This default behavior is defined by the SSL/HTTPS mode of each {project_name} realm. This is discussed in more detail in the
-link:{adminguide_link}[{adminguide_name}], but let's give some context and a brief overview of these modes.
-
-external requests::
-{project_name} can run out of the box without SSL so long as you stick to private addresses like `localhost`, `127.0.0.1`, `10.x.x.x`, `192.168.x.x`, and `172.16.x.x`.
-If you don't have SSL/HTTPS configured on the server, or if you try to access {project_name} over HTTP from a non-private IP address, you will get an error.
-
-none::
- {project_name} does not require SSL. This should really only be used in development when you are playing around with things.
-
-all requests::
- {project_name} requires SSL for all IP addresses.
-
-The SSL mode for each realm can be configured in the {project_name} admin console.
-
-==== Enabling SSL/HTTPS for the {project_name} server
-
-If you are not using a reverse proxy or load balancer to handle HTTPS traffic for you, you'll need to enable HTTPS
-for the {project_name} server. This involves
-
-. Obtaining or generating a keystore that contains the private key and certificate for SSL/HTTPS traffic
-. Configuring the {project_name} server to use this keypair and certificate
-
-===== Creating the Certificate and Java Keystore
-
-In order to allow HTTPS connections, you need to obtain a self-signed or third-party-signed certificate and import it into a Java keystore before you can enable HTTPS in the web container where you are deploying the {project_name} Server.
-
-====== Self Signed Certificate
-
-In development, you will probably not have a third-party-signed certificate available to test a {project_name} deployment, so you'll need to generate a self-signed one
-using the `keytool` utility that comes with the Java JDK.
-
-
-[source]
-----
-
-$ keytool -genkey -alias localhost -keyalg RSA -keystore keycloak.jks -validity 10950
- Enter keystore password: secret
- Re-enter new password: secret
- What is your first and last name?
- [Unknown]: localhost
- What is the name of your organizational unit?
- [Unknown]: Keycloak
- What is the name of your organization?
- [Unknown]: Red Hat
- What is the name of your City or Locality?
- [Unknown]: Westford
- What is the name of your State or Province?
- [Unknown]: MA
- What is the two-letter country code for this unit?
- [Unknown]: US
- Is CN=localhost, OU=Keycloak, O=Red Hat, L=Westford, ST=MA, C=US correct?
- [no]: yes
-----
-
-When you see the question `What is your first and last name?`, supply the DNS name of the machine where you are installing the server. For testing purposes, `localhost` should be used.
-After executing this command, the `keycloak.jks` file will be generated in the directory where you executed the `keytool` command.
-
-If you want a third-party signed certificate, but don't have one, you can obtain one for free at http://www.cacert.org[cacert.org]. However, you first need to generate a certificate request, as in the following procedure.
-
-.Procedure
-
-. Generate a Certificate Request:
-+
-[source]
-----
-$ keytool -certreq -alias yourdomain -keystore keycloak.jks > keycloak.careq
-----
-+
-Where `yourdomain` is a DNS name for which this certificate is generated.
-Keytool generates the request:
-+
-[source]
-----
------BEGIN NEW CERTIFICATE REQUEST-----
-MIIC2jCCAcICAQAwZTELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAk1BMREwDwYDVQQHEwhXZXN0Zm9y
-ZDEQMA4GA1UEChMHUmVkIEhhdDEQMA4GA1UECxMHUmVkIEhhdDESMBAGA1UEAxMJbG9jYWxob3N0
-MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAr7kck2TaavlEOGbcpi9c0rncY4HhdzmY
-Ax2nZfq1eZEaIPqI5aTxwQZzzLDK9qbeAd8Ji79HzSqnRDxNYaZu7mAYhFKHgixsolE3o5Yfzbw1
-29RvyeUVe+WZxv5oo9wolVVpdSINIMEL2LaFhtX/c1dqiqYVpfnvFshZQaIg2nL8juzZcBjj4as
-H98gIS7khql/dkZKsw9NLvyxgJvp7PaXurX29fNf3ihG+oFrL22oFyV54BWWxXCKU/GPn61EGZGw
-Ft2qSIGLdctpMD1aJR2bcnlhEjZKDksjQZoQ5YMXaAGkcYkG6QkgrocDE2YXDbi7GIdf9MegVJ35
-2DQMpwIDAQABoDAwLgYJKoZIhvcNAQkOMSEwHzAdBgNVHQ4EFgQUQwlZJBA+fjiDdiVzaO9vrE/i
-n2swDQYJKoZIhvcNAQELBQADggEBAC5FRvMkhal3q86tHPBYWBuTtmcSjs4qUm6V6f63frhveWHf
-PzRrI1xH272XUIeBk0gtzWo0nNZnf0mMCtUBbHhhDcG82xolikfqibZijoQZCiGiedVjHJFtniDQ
-9bMDUOXEMQ7gHZg5q6mJfNG9MbMpQaUVEEFvfGEQQxbiFK7hRWU8S23/d80e8nExgQxdJWJ6vd0X
-MzzFK6j4Dj55bJVuM7GFmfdNC52pNOD5vYe47Aqh8oajHX9XTycVtPXl45rrWAH33ftbrS8SrZ2S
-vqIFQeuLL3BaHwpl3t7j2lMWcK1p80laAxEASib/fAwrRHpLHBXRcq6uALUOZl4Alt8=
------END NEW CERTIFICATE REQUEST-----
-----
-
-. Send this certificate request to your Certificate Authority (CA).
-+
-The CA will issue a signed certificate and send it to you.
-
-. Obtain and import the root certificate of the CA.
-+
-You can download the root certificate from the CA (for example, `root.crt`) and import it as follows:
-+
-[source]
-----
-$ keytool -import -keystore keycloak.jks -file root.crt -alias root
-----
-
-. Import your new CA-generated certificate into your keystore:
-+
-[source]
-----
-$ keytool -import -alias yourdomain -keystore keycloak.jks -file your-certificate.cer
-----
-
-===== Configure {project_name} to Use the Keystore
-
-Now that you have a Java keystore with the appropriate certificates, you need to configure your {project_name} installation to use it.
-
-.Procedure
-
-. Edit the _standalone.xml_, _standalone-ha.xml_, or _host.xml_ file to use the keystore and enable HTTPS.
-
-. Either move the keystore file to the _configuration/_ directory of your deployment, or place the file in a location you choose and provide an absolute path to it.
-+
-If you are using absolute paths, remove the optional `relative-to` parameter from your configuration (see <<_operating-mode, operating mode>>).
-
-. Configure the keystore using the CLI:
-+
-[source]
-----
-$ /subsystem=elytron/key-store=httpsKS:add(relative-to=jboss.server.config.dir,path=keycloak.jks,credential-reference={clear-text=secret},type=JKS)
-$ /subsystem=elytron/key-manager=httpsKM:add(key-store=httpsKS,credential-reference={clear-text=secret})
-$ /subsystem=elytron/server-ssl-context=httpsSSC:add(key-manager=httpsKM,protocols=["TLSv1.3"])
-----
-+
-If using domain mode, the commands should be executed in every host using the `/host=<host name>/` prefix (in order to create these Elytron resources in all of them). Here is an example, which you would repeat for each host:
-+
-[source]
-----
-$ /host=<host name>/subsystem=elytron/key-store=httpsKS:add(relative-to=jboss.server.config.dir,path=keycloak.jks,credential-reference={clear-text=secret},type=JKS)
-----
-+
-. Modify the `https-listener` to use the `server-ssl-context` previously created:
-+
-[source]
-----
-$ /subsystem=undertow/server=default-server/https-listener=https:write-attribute(name=ssl-context, value=httpsSSC)
-----
-+
-If using domain mode, prefix the command with the profile that is being used: `/profile=<profile name>/`.
-+
-The resulting element, `<server name="default-server">`, which is a child element of `<subsystem xmlns="{subsystem_undertow_xml_urn}">`, should contain the following stanza:
-+
-[source,xml,subs="attributes+"]
-----
-<subsystem xmlns="{subsystem_undertow_xml_urn}">
-    <buffer-cache name="default"/>
-    <server name="default-server">
-        <https-listener name="https" socket-binding="https" ssl-context="httpsSSC"/>
-        ...
-    </server>
-</subsystem>
-----
-
-For more information on configuring TLS, refer to the https://docs.wildfly.org/25/WildFly_Elytron_Security.html#configure-ssltls[WildFly documentation].
-
diff --git a/server_installation/topics/network/outgoing.adoc b/server_installation/topics/network/outgoing.adoc
deleted file mode 100644
index cc8320871b..0000000000
--- a/server_installation/topics/network/outgoing.adoc
+++ /dev/null
@@ -1,237 +0,0 @@
-
-=== Outgoing HTTP requests
-
-The {project_name} server often needs to make non-browser HTTP requests to the applications and services it secures.
-The auth server manages these outgoing connections by maintaining an HTTP client connection pool. There are some things
-you'll need to configure in `standalone.xml`, `standalone-ha.xml`, or `domain.xml`. The location of this file
-depends on your <<_operating-mode, operating mode>>.
-
-.HTTP client Config example
-[source,xml]
-----
-<spi name="connectionsHttpClient">
-    <provider name="default" enabled="true">
-        <properties>
-            <property name="connection-pool-size" value="256"/>
-        </properties>
-    </provider>
-</spi>
-----
-Possible configuration options are:
-
-establish-connection-timeout-millis::
- Timeout for establishing a socket connection.
-
-socket-timeout-millis::
- If an outgoing request does not receive data for this amount of time, the connection times out.
-
-connection-pool-size::
- How many connections can be in the pool (128 by default).
-
-max-pooled-per-route::
- How many connections can be pooled per host (64 by default).
-
-connection-ttl-millis::
- Maximum connection time to live in milliseconds.
- Not set by default.
-
-max-connection-idle-time-millis::
- Maximum time a connection may stay idle in the connection pool (900 seconds by default). Enabling this starts a background cleaner thread of the Apache HTTP client.
- Set to `-1` to disable this check and the background thread.
-
-disable-cookies::
- `true` by default.
- When set to true, this will disable any cookie caching.
-
-client-keystore::
- This is the file path to a Java keystore file.
- This keystore contains the client certificate for two-way SSL.
-
-client-keystore-password::
- Password for the client keystore.
- This is _REQUIRED_ if `client-keystore` is set.
-
-client-key-password::
- Password for the client's key.
- This is _REQUIRED_ if `client-keystore` is set.
-
-proxy-mappings::
- Denotes proxy configurations for outgoing HTTP requests.
- See the section on <<_proxymappings, Proxy Mappings for Outgoing HTTP Requests>> for more details.
-
-disable-trust-manager::
- If an outgoing request requires HTTPS and this config option is set to `true` you do not have to specify a truststore.
- This setting should only be used during development and *never* in production as it will disable verification of SSL certificates.
- This is _OPTIONAL_.
- The default value is `false`.
-
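-For example, a single option such as `connection-pool-size` can be set with a `jboss-cli` command following the same pattern as the proxy-mappings commands shown below (the value here is illustrative):
-
-[source]
-----
-/subsystem=keycloak-server/spi=connectionsHttpClient/provider=default:write-attribute(name=properties.connection-pool-size,value=256)
-----
-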
-[[_proxymappings]]
-==== Proxy mappings for outgoing HTTP requests
-
-Outgoing HTTP requests sent by {project_name} can optionally use a proxy server based on a comma-delimited list of proxy-mappings.
-A proxy-mapping denotes the combination of a regex-based hostname pattern and a proxy-uri in the form `hostnamePattern;proxyUri`,
-for example:
-[source]
-----
-.*\.(google|googleapis)\.com;http://www-proxy.acme.com:8080
-----
-
-To determine the proxy for an outgoing HTTP request, the target hostname is matched against the configured
-hostname patterns. The first matching pattern determines the proxy-uri to use.
-If none of the configured patterns match the given hostname, no proxy is used.
-
-If the proxy server requires authentication, include the proxy user's credentials in this format `username:password@`.
-For example:
-
-[source]
-----
-.*\.(google|googleapis)\.com;http://user01:pas2w0rd@www-proxy.acme.com:8080
-----
-
-The special value `NO_PROXY` for the proxy-uri can be used to indicate that no proxy
-should be used for hosts matching the associated hostname pattern.
-It is possible to specify a catch-all pattern at the end of the proxy-mappings to define a default
-proxy for all outgoing requests.
-
-The following example demonstrates the proxy-mapping configuration.
-
-[source]
-----
-# All requests to Google APIs should use http://www-proxy.acme.com:8080 as proxy
-.*\.(google|googleapis)\.com;http://www-proxy.acme.com:8080
-
-# All requests to internal systems should use no proxy
-.*\.acme\.com;NO_PROXY
-
-# All other requests should use http://fallback:8080 as proxy
-.*;http://fallback:8080
-----
-
-This can be configured via the following `jboss-cli` command.
-Note that you need to properly escape the regex-pattern as shown below.
-[source]
-----
-echo SETUP: Configure proxy routes for HttpClient SPI
-
-# In case there is no connectionsHttpClient definition yet
-/subsystem=keycloak-server/spi=connectionsHttpClient/provider=default:add(enabled=true)
-
-# Configure the proxy-mappings
-/subsystem=keycloak-server/spi=connectionsHttpClient/provider=default:write-attribute(name=properties.proxy-mappings,value=[".*\\.(google|googleapis)\\.com;http://www-proxy.acme.com:8080",".*\\.acme\\.com;NO_PROXY",".*;http://fallback:8080"])
-----
-
-The `jboss-cli` command results in the following subsystem configuration.
-Note that the `"` characters need to be encoded as `&quot;` in the XML attribute value.
-[source,xml]
-----
-<spi name="connectionsHttpClient">
-    <provider name="default" enabled="true">
-        <properties>
-            <property name="proxy-mappings"
-                value="[&quot;.*\\.(google|googleapis)\\.com;http://www-proxy.acme.com:8080&quot;,&quot;.*\\.acme\\.com;NO_PROXY&quot;,&quot;.*;http://fallback:8080&quot;]"/>
-        </properties>
-    </provider>
-</spi>
-----
-
-[[_proxy_env_vars]]
-==== Using standard environment variables
-
-Alternatively, it is possible to use standard environment variables to configure the proxy mappings: the `HTTP_PROXY`, `HTTPS_PROXY`,
-and `NO_PROXY` variables.
-
-The `HTTP_PROXY` and `HTTPS_PROXY` variables represent the proxy server that should be used for all outgoing HTTP requests.
-{project_name} does not differentiate between the two. If both are specified, `HTTPS_PROXY` takes precedence regardless of
-the actual scheme the proxy server uses.
-
-The `NO_PROXY` variable is used to define a comma-separated list of hostnames that should not use the proxy.
-If a hostname is specified, all its prefixes (subdomains) are also excluded from using the proxy.
-
-Take the following example:
-[source]
-----
-HTTPS_PROXY=https://www-proxy.acme.com:8080
-NO_PROXY=google.com,login.facebook.com
-----
-In this example, all outgoing HTTP requests will use the `\https://www-proxy.acme.com:8080` proxy server, except for requests
-to hosts such as `login.google.com`, `google.com`, or `auth.login.facebook.com`. However, `groups.facebook.com`, for example, will be routed
-through the proxy.
-
-NOTE: The environment variables can be lowercase or uppercase. Lowercase takes precedence. For example if both `HTTP_PROXY` and
- `http_proxy` are defined, `http_proxy` will be used.
-
-If proxy mappings are defined using the subsystem configuration (as described above), the environment variables are not
-considered by {project_name}. This is useful when no proxy server should be used even though, for example, the `HTTP_PROXY`
-environment variable is defined. In that case, you can specify a generic no-proxy route as follows:
-[source,xml]
-----
-<spi name="connectionsHttpClient">
-    <provider name="default" enabled="true">
-        <properties>
-            <property name="proxy-mappings" value="[&quot;.*;NO_PROXY&quot;]"/>
-        </properties>
-    </provider>
-</spi>
-----
-
-[[_truststore]]
-==== Outgoing HTTPS request truststore
-
-When {project_name} invokes remote HTTPS endpoints, it has to validate the remote server's certificate in order to ensure it is connecting to a trusted server.
-This is necessary to prevent man-in-the-middle attacks. The certificates of these remote servers, or the CA that signed these
-certificates, must be put in a truststore. This truststore is managed by the {project_name} server.
-
-The truststore is used when connecting securely to identity brokers, LDAP identity providers, when sending emails, and for backchannel communication with client applications.
-
-WARNING: By default, a truststore provider is not configured, and any HTTPS connections fall back to the standard Java truststore configuration as described in
- https://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/JSSERefGuide.html[Java's JSSE Reference Guide]. If there is no trust
- established, then these outgoing HTTPS requests will fail.
-
-You can use _keytool_ to create a new truststore file or add trusted host certificates to an existing one:
-
-[source]
-----
-
-$ keytool -import -alias HOSTDOMAIN -keystore truststore.jks -file host-certificate.cer
-----
-
-The truststore is configured within the `standalone.xml`,
-`standalone-ha.xml`, or `domain.xml` file in your distribution. The location of this file
-depends on your <<_operating-mode, operating mode>>.
-You can add your truststore configuration by using the following template:
-
-[source,xml]
-----
-<spi name="truststore">
-    <provider name="file" enabled="true">
-        <properties>
-            <property name="file" value="path to your .jks file containing public certificates"/>
-            <property name="password" value="password"/>
-            <property name="hostname-verification-policy" value="WILDCARD"/>
-        </properties>
-    </provider>
-</spi>
-----
-
-Possible configuration options for this setting are:
-
-file::
- The path to a Java keystore file.
- HTTPS requests need a way to verify the host of the server they are talking to.
- This is what the truststore does.
- The keystore contains one or more trusted host certificates or certificate authorities.
- This truststore file should only contain the public certificates of your secured hosts.
- This is _REQUIRED_ if any of these properties are defined.
-
-password::
- Password of the keystore.
- This is _REQUIRED_ if any of these properties are defined.
-
-hostname-verification-policy::
- `WILDCARD` by default.
- For HTTPS requests, this verifies the hostname of the server's certificate.
- `ANY` means that the hostname is not verified. `WILDCARD` allows wildcards in subdomain names, for example
- `*.foo.com`. With `STRICT`, the CN must match the hostname exactly.
-
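-The truststore SPI can also be configured through `jboss-cli` instead of editing the XML directly; here is a sketch following the same pattern as the HttpClient SPI commands above (the paths and values are illustrative):
-
-[source]
-----
-/subsystem=keycloak-server/spi=truststore/provider=file:add(enabled=true)
-/subsystem=keycloak-server/spi=truststore/provider=file:write-attribute(name=properties.file,value=/path/to/truststore.jks)
-/subsystem=keycloak-server/spi=truststore/provider=file:write-attribute(name=properties.password,value=password)
-----
-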
diff --git a/server_installation/topics/network/ports.adoc b/server_installation/topics/network/ports.adoc
deleted file mode 100644
index 3521b250b0..0000000000
--- a/server_installation/topics/network/ports.adoc
+++ /dev/null
@@ -1,58 +0,0 @@
-
-[[_ports]]
-
-=== Socket port bindings
-
-The ports opened for each socket have a pre-defined default that can be overridden at the command line or within configuration.
-To illustrate this configuration, let's assume you are running in <<_standalone-mode,standalone mode>> and
-open up the _.../standalone/configuration/standalone.xml_ file. Search for `socket-binding-group`.
-
-[source,xml]
-----
-<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
-    <socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/>
-    <socket-binding name="management-https" interface="management" port="${jboss.management.https.port:9993}"/>
-    <socket-binding name="ajp" port="${jboss.ajp.port:8009}"/>
-    <socket-binding name="http" port="${jboss.http.port:8080}"/>
-    <socket-binding name="https" port="${jboss.https.port:8443}"/>
-    <socket-binding name="txn-recovery-environment" port="4712"/>
-    <socket-binding name="txn-status-manager" port="4713"/>
-    <outbound-socket-binding name="mail-smtp">
-        <remote-destination host="localhost" port="25"/>
-    </outbound-socket-binding>
-</socket-binding-group>
-----
-
-`socket-bindings` define socket connections that will be opened by the server. These bindings specify the
-`interface` (bind address) they use as well as what port number they will open. The ones you will be most interested in are:
-
-http::
- Defines the port used for {project_name} HTTP connections
-https::
- Defines the port used for {project_name} HTTPS connections
-ajp::
- This socket binding defines the port used for the AJP protocol. This protocol is used by the Apache HTTPD server
- in conjunction with `mod-cluster` when you are using Apache HTTPD as a load balancer.
-management-http::
- Defines the HTTP connection used by {appserver_name} CLI and web console.
-
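-For example, assuming the default bindings above, you could change the {project_name} HTTP port at boot time by overriding the system property referenced in its binding:
-
-[source]
-----
-$ .../bin/standalone.sh -Djboss.http.port=8180
-----
-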
-When running in <<_domain-mode,domain mode>>, setting the socket configuration
-is a bit trickier because the example _domain.xml_ file has multiple `socket-binding-groups` defined. If you scroll down
-to the `server-group` definitions, you can see which `socket-binding-group` is used for each `server-group`.
-
-.domain socket bindings
-[source,xml]
-----
-<server-groups>
-    <server-group name="load-balancer-group" profile="load-balancer">
-        ...
-        <socket-binding-group ref="load-balancer-sockets"/>
-    </server-group>
-    <server-group name="auth-server-group" profile="auth-server-clustered">
-        ...
-        <socket-binding-group ref="ha-sockets"/>
-    </server-group>
-</server-groups>
-----
-
-NOTE: There are many more options available when setting up `socket-binding-group` definitions. For more information, see link:{appserver_socket_link}[the socket binding group] in the _{appserver_socket_name}_.
diff --git a/server_installation/topics/openshift.adoc b/server_installation/topics/openshift.adoc
deleted file mode 100644
index 48166363d0..0000000000
--- a/server_installation/topics/openshift.adoc
+++ /dev/null
@@ -1,45 +0,0 @@
-
-[[_openshift]]
-
-== Running {project_name} Server on OpenShift
-
-{project_name} provides an OpenShift cartridge to make it easy to get it running on OpenShift.
-If you don't already have an account or don't know how to create applications, go to https://www.openshift.com/ first.
-You can create the {project_name} instance either with the web tool or the command-line tool; both approaches are described below.
-
-WARNING: It's important that immediately after creating a {project_name} instance you open the `Administration Console` and log in to reset the password.
-If this is not done, anyone can easily gain admin rights to your {project_name} instance.
-
-=== Create {project_name} instance with the web tool
-
-. Open https://openshift.redhat.com/app/console/applications and click on `Add Application`.
-. Scroll down to the bottom of the page to find the `Code Anything` section.
-. Insert `http://cartreflect-claytondev.rhcloud.com/github/keycloak/openshift-keycloak-cartridge` into the `URL to a cartridge definition` field and click on `Next`.
-. Fill in the following form and click on `Create Application`.
-. Click on `Continue to the application overview page`.
-. Under the list of applications you should find your {project_name} instance and the status should be `Started`.
-. Click on it to open the {project_name} server's homepage.
-
-=== Create {project_name} instance with the command-line tool
-
-. Run the following command from a terminal:
-
-[source]
-----
-rhc app create <APP_NAME> http://cartreflect-claytondev.rhcloud.com/github/keycloak/openshift-keycloak-cartridge
-----
-
-. Replace `<APP_NAME>` with the name you want (for example, {project_name}).
-
-Once the instance is created, the `rhc` tool outputs details about it.
-Open the returned URL in a browser to open the {project_name} server's homepage.
-
-=== Next steps
-
-The {project_name} server's homepage shows the {project_name} logo and `Welcome to {project_name}`.
-There is also a link to the `Administration Console`.
-Open that and log in using username `admin` and password `admin`.
-On the first login you are required to change the password.
-
-TIP: On OpenShift, {project_name} has been configured to only accept requests over HTTPS.
-If you try to use HTTP, you will be redirected to HTTPS.
diff --git a/server_installation/topics/operating-mode.adoc b/server_installation/topics/operating-mode.adoc
deleted file mode 100644
index 3418727f19..0000000000
--- a/server_installation/topics/operating-mode.adoc
+++ /dev/null
@@ -1,15 +0,0 @@
-
-[[_operating-mode]]
-
-== Using operating modes
-
-Before deploying {project_name} in a production environment you need to decide which type of operating mode you are going to use.
-
-* Will you run {project_name} within a cluster?
-* Do you want a centralized way to manage your server configurations?
-
-Your choice of operating mode affects how you configure databases, configure caching and even how you boot the server.
-
-TIP: {project_name} is built on top of the {appserver_name} Application Server. This guide only
- covers the basics of deploying within a specific mode. For detailed information,
- see the link:{appserver_admindoc_link}[_{appserver_admindoc_name}_].
diff --git a/server_installation/topics/operating-mode/crossdc.adoc b/server_installation/topics/operating-mode/crossdc.adoc
deleted file mode 100644
index cadc0bcc92..0000000000
--- a/server_installation/topics/operating-mode/crossdc.adoc
+++ /dev/null
@@ -1,432 +0,0 @@
-
-[[crossdc-mode]]
-=== Using cross-site replication mode
-
-:tech_feature_name: cross-site replication mode
-:tech_feature_disabled: false
-include::../templates/techpreview.adoc[]
-
-Use cross-site replication mode to run {project_name} in a cluster across multiple data centers. Typically you use data center sites that are in different geographic regions. When using this mode, each data center will have its own cluster of {project_name} servers.
-
-This documentation will refer to the following example architecture diagram to illustrate and describe a simple cross-site replication use case.
-
-[[archdiagram]]
-.Example Architecture Diagram
-image:{project_images}/cross-dc-architecture.png[]
-
-
-[[prerequisites]]
-==== Prerequisites
-
-As this is an advanced topic, we recommend you first read the following, which provide valuable background knowledge:
-
-* link:{installguide_clustering_link}[Clustering with {project_name}]
-When setting up cross-site replication, you will use multiple independent {project_name} clusters, so you must understand how a cluster works and the basic concepts and requirements such as load balancing, shared databases, and multicasting.
-
-ifeval::[{project_product}==true]
-* link:{jdgserver_crossdcdocs_link}[Red Hat Data Grid Cross-Site Replication]
-{project_name} uses Red Hat Data Grid (RHDG) for the replication of data between the data centers.
-endif::[]
-
-ifeval::[{project_community}==true]
-* link:https://infinispan.org/docs/11.0.x/titles/xsite/xsite.html#xsite_replication[Infinispan Cross-Site Replication] replicates data across clusters in separate geographic locations.
-endif::[]
-
-
-[[technicaldetails]]
-==== Technical details
-
-This section provides an introduction to the concepts and details of how {project_name} cross-site replication is accomplished.
-
-.Data
-
-{project_name} is a stateful application. It uses the following as data sources:
-
-* A database is used to persist permanent data, such as user information.
-* An Infinispan cache is used to cache persistent data from the database and also to save some short-lived and frequently-changing metadata, such as for user sessions.
-Infinispan is usually much faster than a database; however, the data saved using Infinispan is not permanent and is not expected to persist across cluster restarts.
-
-In our example architecture, there are two data centers called `site1` and `site2`. For cross-site replication, we must make sure that both sources of data work reliably and that {project_name}
-servers from `site1` are eventually able to read the data saved by {project_name} servers on `site2`.
-
-Based on the environment, you have the option to decide if you prefer:
-
-* Reliability - which is typically used in Active/Active mode. Data written on `site1` must be visible immediately on `site2`.
-* Performance - which is typically used in Active/Passive mode. Data written on `site1` does not need to be visible immediately on `site2`.
-In some cases, the data may not be visible on `site2` at all.
-
-For more details, see <<modes>>.
-
-
-[[requestprocessing]]
-==== Request processing
-
-An end user's browser sends an HTTP request to the link:{installguide_loadbalancer_link}[front end load balancer]. This load balancer is usually HTTPD or WildFly with mod_cluster, NGINX, HA Proxy, or perhaps some other kind of software or hardware load balancer.
-
-The load balancer then forwards the HTTP requests it receives to the underlying {project_name} instances, which can be spread among multiple data centers. Load balancers typically offer support for link:{installguide_stickysessions_link}[sticky sessions], which means that the load balancer is able to always forward all HTTP requests from the same user to the same {project_name} instance in same data center.
-
-HTTP requests that are sent from client applications to the load balancer are called `backchannel requests`.
-These are not seen by an end user's browser and therefore cannot be part of a sticky session between the user and the load balancer. For backchannel requests, the load balancer can forward the HTTP request to any {project_name} instance in any data center. This is challenging as some OpenID Connect and some SAML flows require multiple HTTP requests from both the user and the application. Because we cannot reliably depend on sticky sessions to force all the related requests to be sent to the same {project_name} instance in the same data center, we must instead replicate some data across data centers, so the data are seen by subsequent HTTP requests during a particular flow.
-
-
-[[modes]]
-==== Modes
-
-According to your requirements, there are two basic operating modes for cross-site replication:
-
-* Active/Passive - Here the users and client applications send requests to the {project_name} nodes in a single data center only.
-The second data center is used just as a `backup` for saving the data. In the event of a failure in the main data center, the data can usually be restored from the second data center.
-
-* Active/Active - Here the users and client applications send requests to the {project_name} nodes in both data centers.
-This means that data needs to be visible immediately on both sites and be available for consumption immediately by {project_name} servers on both sites. In particular, if a {project_name} server writes some data on `site1`, that data must be available for reading by {project_name} servers on `site2` immediately after the write on `site1` finishes.
-
-The active/passive mode is better for performance. For more information about how to configure caches for either mode, see <<backups>>.
-
-
-[[database]]
-==== Database
-
-{project_name} uses a relational database management system (RDBMS) to persist some metadata about realms, clients, users, and so on. See link:{installguide_database_link}[this chapter] of the server installation guide for more details. In a cross-site replication setup, we assume that either both data centers talk to the same database or that every data center has its own database node and both database nodes are synchronously replicated across the data centers. In both cases, it is required that when a {project_name} server on `site1` persists some data and commits the transaction, those data are immediately visible by subsequent DB transactions on `site2`.
-
-Details of the database setup are out of scope for {project_name}; however, many RDBMS vendors like MariaDB and Oracle offer replicated databases and synchronous replication. We test {project_name} with these vendors:
-
-* Oracle Database 19c RAC
-* Galera 3.12 cluster for MariaDB server version 10.1.19-MariaDB
-
-
-[[cache]]
-==== Infinispan caches
-
-This section begins with a high level description of the Infinispan caches. More details of the cache setup follow.
-
-.Authentication sessions
-
-In {project_name} we have the concept of authentication sessions. There is a separate Infinispan cache called `authenticationSessions` used to save data during the authentication of a particular user. Requests from this cache usually involve only a browser and the {project_name} server, not the application. Here we can rely on sticky sessions, and the `authenticationSessions` cache content does not need to be replicated across data centers, even if you are in Active/Active mode.
-
-ifeval::[{project_community}==true]
-.Action tokens
-
-We also have the concept of link:{developerguide_actiontoken_link}[action tokens], which are typically used for scenarios when the user needs to confirm an action asynchronously by email. For example, during the `forget password` flow the `actionTokens` Infinispan cache is used to track metadata about related action tokens, such as which action token was already used, so it can't be reused a second time. This usually needs to be replicated across data centers.
-endif::[]
-
-.Caching and invalidation of persistent data
-
-{project_name} uses Infinispan to cache persistent data to avoid many unnecessary requests to the database.
-Caching improves performance, however it adds an additional challenge. When some {project_name} server updates any data, all other {project_name} servers in all data centers need to be aware of it, so they invalidate particular data from their caches. {project_name} uses local Infinispan caches called `realms`, `users`, and `authorization` to cache persistent data.
-
-We use a separate cache, `work`, which is replicated across all data centers. The work cache itself does not cache
-any real data. It is used only for sending invalidation messages between cluster nodes and data centers.
-In other words, when data is updated, such as the user `john`, the {project_name} node sends the invalidation message to all other cluster nodes in the same data center and also to all other data centers. After receiving the invalidation notice, every node then invalidates the appropriate data from their local cache.
-
-
-.User sessions
-
-There are Infinispan caches called `sessions`, `clientSessions`, `offlineSessions`, and `offlineClientSessions`,
-all of which usually need to be replicated across data centers. These caches are used to save data about user
-sessions, which are valid for the length of a user's browser session. The caches must handle the HTTP requests from the end user and from the application. As described above, sticky sessions can not be reliably used in this instance, but we still want to ensure that subsequent HTTP requests can see the latest data. For this reason, the data are usually replicated across data centers.
-
-
-.Brute force protection
-
-Finally the `loginFailures` cache is used to track data about failed logins, such as how many times the user `john`
-entered a bad password. The details are described link:{adminguide_bruteforce_link}[here]. It is up to the admin whether this cache should be replicated across data centers. To have an accurate count of login failures, the replication is needed. On the other hand, not replicating this data can save some performance. So if performance is more important than accurate counts of login failures, the replication can be avoided.
-
-For more detail about how caches can be configured, see <<tuningcache>>.
-
-
-[[communication]]
-==== Communication details
-
-{project_name} uses multiple, separate clusters of Infinispan caches. Every {project_name} node is in the cluster with the other {project_name} nodes in same data center, but not with the {project_name} nodes in different data centers. A {project_name} node does not communicate directly with the {project_name} nodes from different data centers. {project_name} nodes use external JDG (actually {jdgserver_name} servers) for communication across data centers. This is done using the link:https://infinispan.org/docs/10.1.x/titles/server/server.html#hot_rod[Infinispan HotRod protocol].
-
-The Infinispan caches on the {project_name} side must be configured with the link:https://infinispan.org/docs/10.1.x/titles/configuring/configuring.html#remote_cache_store[remoteStore] to ensure that data is saved to the remote cache. There is a separate Infinispan cluster between the JDG servers, so the data saved on JDG1 on `site1` is replicated to JDG2 on `site2`.
-
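-As a minimal sketch, such a `remoteStore` declaration on the {project_name} side looks roughly like the following; the attribute values here are illustrative, and the complete, authoritative configuration is covered in the setup procedures included below:
-
-```xml
-<replicated-cache name="sessions">
-    <remote-store cache="sessions" remote-servers="remote-cache" shared="true"
-                  passivation="false" purge="false" preload="false">
-        <property name="rawValues">true</property>
-        <property name="marshaller">org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory</property>
-    </remote-store>
-</replicated-cache>
-```
-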
-The receiving {jdgserver_name} server notifies the {project_name} servers in its cluster through Client Listeners, which are a feature of the Hot Rod protocol. {project_name} nodes on `site2` then update their Infinispan caches and the particular user session is also visible on {project_name} nodes on `site2`.
-
-See the <<archdiagram,example architecture diagram>> for more details.
-
-include::crossdc/assembly-setting-up-crossdc.adoc[leveloffset=3]
-
-
-[[administration]]
-==== Administration of cross-site deployment
-
-
-This section contains some tips and options related to cross-site replication.
-
-* When you run the {project_name} server inside a data center, it is required that the database referenced in `KeycloakDS` datasource is already running and available in that data center. It is also necessary that the {jdgserver_name} server referenced by the `outbound-socket-binding`, which is referenced from the Infinispan cache `remote-store` element, is already running. Otherwise the {project_name} server will fail to start.
-
-* Every data center can have more database nodes if you want to support database failover and better reliability. Refer to the documentation of your database and JDBC driver for details on how to set this up on the database side and how the `KeycloakDS` datasource on the Keycloak side needs to be configured.
-
-* Every data center can have more {jdgserver_name} servers running in the cluster. This is useful if you want some failover and better fault tolerance. The Hot Rod protocol used for communication between {jdgserver_name} servers and {project_name} servers has a feature whereby {jdgserver_name} servers automatically send the new topology to the {project_name} servers when the {jdgserver_name} cluster changes, so the remote store on the {project_name} side knows which {jdgserver_name} servers it can connect to. Read the {jdgserver_name} and WildFly documentation for more details.
-
-* It is highly recommended that a master {jdgserver_name} server is running in every site before the {project_name} servers in **any** site are started. As in our example, we started both `server1` and `server2` first, before all {project_name} servers. If you still need to run the {project_name} server and the backup site is offline, it is recommended to manually switch the backup site offline on the {jdgserver_name} servers on your site, as described in <<onoffline>>. If you do not manually switch the unavailable site offline, the first startup may fail or there may be some exceptions during startup until the backup site is taken offline automatically due to the configured count of failed operations.
-
-
-[[onoffline]]
-==== Bringing sites offline and online
-
-For example, assume this scenario:
-
-. Site `site2` is entirely offline from the `site1` perspective. This means that all {jdgserver_name} servers on `site2` are off *or* the network between `site1` and `site2` is broken.
-. You run {project_name} servers and the {jdgserver_name} server `server1` in site `site1`.
-. Someone logs in on a {project_name} server on `site1`.
-. The {project_name} server from `site1` will try to write the session to the remote cache on the `server1` server, which is supposed to back up data to the `server2` server in `site2`. See <<communication>> for more information.
-. Server `server2` is offline or unreachable from `server1`. So the backup from `server1` to `server2` will fail.
-. An exception is thrown in the `server1` log and the failure is propagated from the `server1` server to the {project_name} servers as well, because the default `FAIL` backup failure policy is configured. See <<backupfailure>> for details about the backup policies.
-. The error happens on the {project_name} side too, and the user may not be able to finish the login.
-
-Depending on your environment, it may be more or less probable that the network between sites becomes unavailable or temporarily broken (split-brain). If this happens, it is good for the {jdgserver_name} servers on `site1` to be aware that the {jdgserver_name} servers on `site2` are unavailable, so they stop trying to reach them and the backup failures won't happen. This is called `Take site offline`.
-
-.Take site offline
-
-There are two ways to take a site offline.
-
-**Manually by admin** - An admin can use `jconsole` or another tool and run some JMX operations to manually take a particular site offline.
-This is especially useful if the outage is planned. With `jconsole` or the CLI, you can connect to the `server1` server and take `site2` offline.
-More details about this are available in the
-ifeval::[{project_product}==true]
-link:{jdgserver_crossdcdocs_link}[{jdgserver_name} documentation].
-endif::[]
-ifeval::[{project_community}==true]
-link:{jdgserver_crossdcdocs_link}[{jdgserver_name} documentation].
-endif::[]
-
-WARNING: These steps usually need to be done for all the {project_name} caches mentioned in <<cache>>.
-
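-As an illustration only, a hypothetical `jmxterm` session against `server1` could look like the following. The MBean path is an assumption modeled on the statistics MBean naming shown in <<troubleshooting>>, and the operation names (`takeSiteOffline`, `bringSiteOnline`) come from Infinispan's `XSiteAdminOperations`; verify both against your {jdgserver_name} version and repeat for every affected cache:
-
-```
-$> bean jboss.datagrid-infinispan:type=Cache,name="sessions(repl_sync)",manager="clustered",component=XSiteAdmin
-$> run takeSiteOffline site2
-```
-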
-**Automatically** - After some number of failed backups, `site2` will usually be taken offline automatically. This is due to the configuration of the `take-offline` element inside the cache configuration.
-
-```xml
-<take-offline min-wait="60000" after-failures="3" />
-```
-
-This example shows that the site will be taken offline automatically for the particular cache if there are at least 3 subsequent failed backups and no successful backup within 60 seconds.
-
-Automatically taking a site offline is especially useful if the broken network between sites is unplanned. The disadvantage is that there will be some failed backups until the network outage is detected, which could also mean failures on the application side.
-For example, there will be failed logins for some users or long login timeouts, especially if the `failure-policy` value `FAIL` is used.
-
-WARNING: Whether a site is offline is tracked separately for every cache.
-
-
-.Take site online
-
-Once your network is back and `site1` and `site2` can talk to each other, you may need to put the site online. This needs to be done manually through JMX or the CLI, in a similar way to taking a site offline.
-Again, you may need to check all the caches and bring them online.
-
-Once the sites are put online, it's usually good to:
-
-* Do the <<statetransfer,state transfer>>.
-* Manually <<clearcache,clear caches>>.
-
-
-[[statetransfer]]
-==== State transfer
-
-State transfer is a required, manual step. The {jdgserver_name} server does not do this automatically, for example during a split-brain; it is the admin who decides which site has preference, and hence whether state transfer needs to be done bidirectionally between both sites or just unidirectionally, for example only from `site1` to `site2`, but not from `site2` to `site1`.
-
-A bidirectional state transfer will ensure that entities which were created *after* split-brain on `site1` will be transferred to `site2`. This is not an issue as they do not yet exist on `site2`. Similarly, entities created *after* split-brain on `site2` will be transferred to `site1`. Possibly problematic parts are those entities which exist *before* split-brain on both sites and which were updated during split-brain on both sites. When this happens, one of the sites will *win* and will overwrite the updates done during split-brain by the second site.
-
-Unfortunately, there is no universal solution to this. Split-brains and network outages usually cannot be handled 100% correctly with 100% consistent data between sites. In the case of {project_name}, it typically is not a critical issue. In the worst case, users will need to log in again to their clients, or will have an improper count of loginFailures tracked for brute force protection. See the {jdgserver_name}/JGroups documentation for more tips on how to deal with split-brain.
-
-State transfer can also be done on the {jdgserver_name} server side through JMX. The operation name is `pushState`. There are a few other operations to monitor status, cancel the push state, and so on.
-More info about state transfer is available in the link:{jdgserver_crossdcdocs_link}[{jdgserver_name} docs].
-
-
-[[clearcache]]
-==== Clear caches
-
-After a split-brain it is safe to manually clear caches in the {project_name} admin console. This is because some data may have changed in the database on `site1`, while the event that should invalidate the cached data was not transferred to `site2` during the split-brain. Hence {project_name} nodes on `site2` may still have some stale data in their caches.
-
-To clear the caches, see link:{adminguide_clearcache_link}[{adminguide_clearcache_name}].
-
-When the network is back, it is sufficient to clear the cache just on one {project_name} node on any random site. The cache invalidation event will be sent to all the other {project_name} nodes in all sites. However, it needs to be done for all the caches (realms, users, keys). See link:{adminguide_clearcache_link}[{adminguide_clearcache_name}] for more information.
-
-
-[[tuningcache]]
-==== Tuning the {jdgserver_name} cache configuration
-
-This section contains tips and options for configuring your JDG cache.
-
-[[backupfailure]]
-.Backup failure policy
-
-By default, the backup `failure-policy` in the Infinispan cache configuration in the {jdgserver_name} `clustered.xml` file is set to `FAIL`. You may change it to `WARN` or `IGNORE`, as you prefer.
-
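-For example, switching a single cache's backup to `WARN` is a one-attribute change on the `backup` element (surrounding elements omitted; see also the example in <<backups>>):
-
-```xml
-<backup site="site2" failure-policy="WARN" strategy="SYNC" enabled="true">
-```
-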
-The difference between `FAIL` and `WARN` is that when `FAIL` is used and the {jdgserver_name} server tries to back data up to the other site and the backup fails, then the failure is propagated back to the caller (the {project_name} server). The backup might fail because the second site is temporarily unreachable or there is a concurrent transaction that is trying to update the same entity. In this case, the {project_name} server will then retry the operation a few times. However, if the retry fails, then the user might see the error after a longer timeout.
-
-When using `WARN`, the failed backups are not propagated from the {jdgserver_name} server to the {project_name} server. The user won't see the error and the failed backup will just be ignored. There will be a shorter timeout, typically 10 seconds, as that's the default timeout for backup. It can be changed via the `timeout` attribute of the `backup` element. There won't be retries. There will just be a WARNING message in the {jdgserver_name} server log.
-
-The potential issue is that, in some cases, there may be just a short network outage between sites, where a retry (the `FAIL` policy) would help; with `WARN` (without retry), there will be some data inconsistencies across sites. This can also happen if there is an attempt to update the same entity concurrently on both sites.
-
-How bad are these inconsistencies? Usually they only mean that a user will need to re-authenticate.
-
-When using the `WARN` policy, it may happen that the single-use cache, which is provided by the `actionTokens` cache and which ensures that a particular key can be used only once, "successfully" writes the same key twice. But, for example, the OAuth2 specification link:https://datatracker.ietf.org/doc/html/rfc6749#section-10.5[mentions] that a code must be single-use. With the `WARN` policy, this may not be strictly guaranteed and the same code could be written twice if there is an attempt to write it concurrently on both sites.
-
-If there is a longer network outage or split-brain, then with both `FAIL` and `WARN`, the other site will be taken offline after some time and some failures, as described in <<onoffline>>. With the default 1 minute timeout, it is usually 1-3 minutes until all the involved caches are taken offline. After that, all the operations will work fine from an end user perspective.
-You only need to manually restore the site when it is back online, as mentioned in <<onoffline>>.
-
-In summary, if you expect frequent, longer outages between sites and it is acceptable for you to have some data inconsistencies and a not 100% accurate single-use cache, but you never want end-users to see the errors and long timeouts, then switch to `WARN`.
-
-The difference between `WARN` and `IGNORE` is that with `IGNORE`, warnings are not written to the {jdgserver_name} log. See the Infinispan documentation for more details.
-
-
-.Lock acquisition timeout
-
-The default configuration uses transactions in NON_DURABLE_XA mode with an acquire timeout of 0. This means that a transaction will fail fast if there is another transaction in progress for the same key.
-
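-A sketch of the corresponding elements in the {jdgserver_name} cache configuration (element and attribute names follow the Infinispan schema; confirm against your version):
-
-```xml
-<transaction mode="NON_DURABLE_XA"/>
-<locking acquire-timeout="0"/>
-```
-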
-The reason to switch this to 0 instead of the default 10 seconds was to avoid possible deadlock issues. With {project_name}, it can happen that the same entity (typically a session entity or loginFailure) is updated concurrently from both sites. This can cause a deadlock under some circumstances, which would block the transaction for 10 seconds. See link:https://issues.redhat.com/browse/JDG-1318[this JIRA report] for details.
-
-With a timeout of 0, the transaction will immediately fail and then be retried from {project_name} if the backup `failure-policy` value `FAIL` is configured. Once the second concurrent transaction is finished, the retry will usually succeed and the entity will have the updates from both concurrent transactions applied.
-
-We see very good consistency and results for concurrent transactions with this configuration, and it is recommended to keep it.
-
-The only (non-functional) problem is the exception in the {jdgserver_name} server log, which happens every time the lock is not immediately available.
-
-
-[[backups]]
-==== SYNC or ASYNC backups
-
-An important part of the `backup` element is the `strategy` attribute. You must decide whether it needs to be `SYNC` or `ASYNC`. There are 7 caches that might be cross-site replication aware, and each can be configured in 3 different ways with regard to cross-site replication:
-
-. SYNC backup
-. ASYNC backup
-. No backup at all
-
-If the `SYNC` backup is used, then the backup is synchronous and the operation is considered finished on the caller ({project_name} server) side once the backup is processed on the second site. This has worse performance than `ASYNC`, but on the other hand, you are sure that subsequent reads of the particular entity, such as a user session, on `site2` will see the updates from `site1`. Also, `SYNC` is needed if you want data consistency, as with `ASYNC` the caller is not notified at all if the backup to the other site failed.
-
-For some caches, it is even possible to not back up at all and completely skip writing data to the {jdgserver_name} server. To set this up, do not use the `remote-store` element for the particular cache on the {project_name} side (file `KEYCLOAK_HOME/standalone/configuration/standalone-ha.xml`); the particular `replicated-cache` element is then also not needed on the {jdgserver_name} server side.
-
-By default, all 7 caches are configured with `SYNC` backup, which is the safest option. Here are a few things to consider:
-
-* If you are using active/passive mode (all {project_name} servers are in the single site `site1` and the {jdgserver_name} server in `site2` is used purely as a backup; see <<modes>> for more details), then it is usually fine to use the `ASYNC` strategy for all the caches to save performance.
-
-* The `work` cache is used mainly to send some messages, such as cache invalidation events, to the other site. It is also used to ensure that some special events, such as userStorage synchronizations, happen only on a single site. It is recommended to keep this set to `SYNC`.
-
-* The `actionTokens` cache is used as a single-use cache to track that some tokens/tickets were used just once. For example, action tokens or OAuth2 codes. It is possible to set this to `ASYNC` to slightly improve performance, but then it is not guaranteed that a particular ticket is really single-use. For example, if there is a concurrent request for the same ticket in both sites, then it is possible that both requests will be successful with the `ASYNC` strategy. So what you set here will depend on whether you prefer better security (`SYNC` strategy) or better performance (`ASYNC` strategy).
-
-* The `loginFailures` cache may be used in any of the 3 modes. If there is no backup at all, it means that the count of login failures for a user will be counted separately for every site (see <<cache>> for details). This has some security implications; however, it has some performance advantages. It also mitigates the possible risk of denial of service (DoS) attacks. For example, if an attacker simulates 1000 concurrent requests using the username and password of the user on both sites, it will mean lots of messages being passed between the sites, which may result in network congestion. The `ASYNC` strategy might be even worse, as the attacker requests won't be blocked by waiting for the backup to the other site, resulting in potentially even more congested network traffic.
-The count of login failures also will not be accurate with the `ASYNC` strategy.
-
-For environments with a slower network between data centers and a higher probability of DoS attacks, it is recommended not to back up the `loginFailures` cache at all.
-
-* It is recommended to keep the `sessions` and `clientSessions` caches in `SYNC`. Switching them to `ASYNC` is possible only if you are sure that user requests and backchannel requests (requests from client applications to {project_name} as described in <<requestprocessing>>) will always be processed on the same site. This is true, for example, if:
-** You use active/passive mode as described in <<modes>>.
-** All your client applications are using the {project_name} {adapterguide_link_js_adapter}[JavaScript Adapter]. The JavaScript adapter sends the backchannel requests within the browser, and hence they participate in the browser sticky session and will end up on the same cluster node (hence on the same site) as the other browser requests of this user.
-** Your load balancer is able to serve the requests based on client IP address (location) and the client applications are deployed on both sites.
-+
-For example, you have 2 sites, LON and NYC. As long as your applications are deployed in both the LON and NYC sites too, you can ensure that all the requests from London users will be redirected to the applications in the LON site and also to the {project_name} servers in the LON site. Backchannel requests from the LON site client deployments will end up on the {project_name} servers in the LON site too. On the other hand, for the American users, all the {project_name} requests, application requests, and backchannel requests will be processed in the NYC site.
-+
-* For `offlineSessions` and `offlineClientSessions` it is similar, with the difference that you do not need to back them up at all if you never plan to use offline tokens for any of your client applications.
-
-Generally, if you are in doubt and performance is not a blocker for you, it's safer to keep the caches with the `SYNC` strategy.
-
-WARNING: Regarding the switch to SYNC/ASYNC backup, make sure that you edit the `strategy` attribute of the `backup` element. For example:
-```xml
-<backup site="site2" failure-policy="FAIL" strategy="ASYNC" enabled="true">
-```
-Note that the `mode` attribute of the cache configuration element is separate from the backup `strategy`; `mode` controls replication between the nodes within one site.
-
-
-[[troubleshooting]]
-==== Troubleshooting
-
-The following tips are intended to assist you should you need to troubleshoot:
-
-* We recommend that you go through the procedure for xref:assembly-setting-up-crossdc[Setting up cross-site replication] and have it working first, so that you understand how things work. It is also wise to read this entire chapter beforehand.
-
-* For the {project_name} servers, you should see a message such as this during the server startup:
-+
-```
-18:09:30,156 INFO [org.keycloak.connections.infinispan.DefaultInfinispanConnectionProviderFactory] (ServerService Thread Pool -- 54)
-Node name: node11, Site name: site1
-```
-+
-Check that the site name and the node name look as expected during the startup of the {project_name} server.
-
-* Check that {project_name} servers are in a cluster as expected, including that only the {project_name} servers from the same data center are in a cluster with each other.
-This can also be checked in JConsole through the GMS view.
-
-* If there are exceptions during startup of the {project_name} server like this:
-+
-```
-17:33:58,605 ERROR [org.infinispan.client.hotrod.impl.operations.RetryOnFailureOperation] (ServerService Thread Pool -- 59) ISPN004007: Exception encountered. Retry 10 out of 10: org.infinispan.client.hotrod.exceptions.TransportException:: Could not fetch transport
-...
-Caused by: org.infinispan.client.hotrod.exceptions.TransportException:: Could not connect to server: 127.0.0.1:12232
- at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransport.<init>(TcpTransport.java:82)
-
-```
-+
-it usually means that the {project_name} server is not able to reach the {jdgserver_name} server in its own data center. Make sure the firewall is configured as expected and that the {jdgserver_name} server can be reached.
-
-* If there are exceptions during startup of the {project_name} server like this:
-+
-```
-16:44:18,321 WARN [org.infinispan.client.hotrod.impl.protocol.Codec21] (ServerService Thread Pool -- 57) ISPN004005: Error received from the server: javax.transaction.RollbackException: ARJUNA016053: Could not commit transaction.
- ...
-```
-+
-then check the log of the corresponding {jdgserver_name} server of your site to see whether it has failed to back up to the other site. If the backup site is unavailable, it is recommended to switch it offline, so that the {jdgserver_name} server won't try to back up to it, and the operations will pass successfully on the {project_name} server side as well. See <<onoffline>> for more information.
-
-* Check the Infinispan statistics, which are available through JMX. For example, try to login and then see if the new session was successfully written to both {jdgserver_name} servers and is available in the `sessions` cache there. This can be done indirectly by checking the count of elements in the `sessions` cache for the MBean `jboss.datagrid-infinispan:type=Cache,name="sessions(repl_sync)",manager="clustered",component=Statistics` and attribute `numberOfEntries`. After login, there should be one more entry for `numberOfEntries` on both {jdgserver_name} servers on both sites.
-
-* If you updated the entity, such as `user`, on the {project_name} server on `site1` and you do not see that entity updated on the {project_name} server on `site2`, then the issue can be either in the replication of the synchronous database itself, or in the {project_name} caches not being properly invalidated. You may try to temporarily disable the {project_name} caches as described link:{installguide_disablingcaching_link}[here] to determine whether the issue is at the database replication level. Also, it may help to manually connect to the database and check if the data is updated as expected. This is specific to every database, so you will need to consult the documentation for your database.
-
-* Sometimes you may see the exceptions related to locks like this in {jdgserver_name} server log:
-+
-```
-(HotRodServerHandler-6-35) ISPN000136: Error executing command ReplaceCommand,
-writing keys [[B0x033E243034396234..[39]]: org.infinispan.util.concurrent.TimeoutException: ISPN000299: Unable to acquire lock after
-0 milliseconds for key [B0x033E243034396234..[39] and requestor GlobalTx:server1:4353. Lock is held by GlobalTx:server1:4352
-```
-+
-Those exceptions are not necessarily an issue. They may happen anytime a concurrent edit of the same entity is triggered on both DCs. This is common in a deployment. Usually the {project_name} server is notified about the failed operation and will retry it, so from the user's point of view, there is usually no issue.
-
-* If you try to authenticate with {project_name} to your application, but authentication fails with an infinite number
-of redirects in your browser and you see errors like this in the {project_name} server log:
-+
-```
-2017-11-27 14:50:31,587 WARN [org.keycloak.events] (default task-17) type=LOGIN_ERROR, realmId=master, clientId=null, userId=null, ipAddress=aa.bb.cc.dd, error=expired_code, restart_after_timeout=true
-```
-+
-it probably means that your load balancer needs to be configured to support sticky sessions. Make sure that the provided route name used during startup of the {project_name} server (property `jboss.node.name`) contains the correct name used by the load balancer server to identify the current server.
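-+
-For example, a sketch of setting the route name at startup; the value must match the worker/route name configured on your load balancer:
-+
-```
-$ standalone.sh -Djboss.node.name=node11
-```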
-
-* If the {jdgserver_name} `work` cache grows indefinitely, you may be experiencing https://issues.redhat.com/browse/JDG-987[this {jdgserver_name} issue],
-which is caused by cache items not being properly expired. In that case, update the cache declaration with an empty `<expiration />` tag like this:
-+
-```xml
-<replicated-cache name="work">
-    <expiration />
-</replicated-cache>
-```
-
-* If you see warnings in the {jdgserver_name} server log like:
-+
-```
-18:06:19,687 WARN [org.infinispan.server.hotrod.Decoder2x] (HotRod-ServerWorker-7-12) ISPN006011: Operation 'PUT_IF_ABSENT' forced to
- return previous value should be used on transactional caches, otherwise data inconsistency issues could arise under failure situations
-18:06:19,700 WARN [org.infinispan.server.hotrod.Decoder2x] (HotRod-ServerWorker-7-10) ISPN006010: Conditional operation 'REPLACE_IF_UNMODIFIED' should
- be used with transactional caches, otherwise data inconsistency issues could arise under failure situations
-```
-+
-you can just ignore them. To avoid the warnings, the caches on the {jdgserver_name} server side could be changed to transactional caches, but this is not recommended as it can cause
-other issues due to the bug https://issues.redhat.com/browse/ISPN-9323. So for now, the warnings just need to be ignored.
-
-* If you see errors in the {jdgserver_name} server log like:
-+
-```
-12:08:32,921 ERROR [org.infinispan.server.hotrod.CacheDecodeContext] (HotRod-ServerWorker-7-11) ISPN005003: Exception reported: org.infinispan.server.hotrod.InvalidMagicIdException: Error reading magic byte or message id: 7
- at org.infinispan.server.hotrod.HotRodDecoder.readHeader(HotRodDecoder.java:184)
- at org.infinispan.server.hotrod.HotRodDecoder.decodeHeader(HotRodDecoder.java:133)
- at org.infinispan.server.hotrod.HotRodDecoder.decode(HotRodDecoder.java:92)
- at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:411)
- at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248)
-```
-+
-and you see similar errors in the {project_name} log, it can indicate that incompatible versions of the Hot Rod protocol are being used.
-This likely happens when you try to use {project_name} with an old version of the Infinispan server. It
-will help if you add the `protocolVersion` property as an additional property to the `remote-store` element in the {project_name}
-configuration file. For example:
-+
-```xml
-<property name="protocolVersion">2.6</property>
-```
diff --git a/server_installation/topics/operating-mode/crossdc/assembly-setting-up-crossdc.adoc b/server_installation/topics/operating-mode/crossdc/assembly-setting-up-crossdc.adoc
deleted file mode 100644
index fc2ea57b8a..0000000000
--- a/server_installation/topics/operating-mode/crossdc/assembly-setting-up-crossdc.adoc
+++ /dev/null
@@ -1,32 +0,0 @@
-[id="assembly-setting-up-crossdc"]
-:context: xsite
-= Setting up cross-site replication
-
-[role="_abstract"]
-Use the following procedures for {jdgserver_name} {jdgserver_version_latest} to perform a basic setup of cross-site replication.
-
-[NOTE]
-====
-The exact instructions for your version of {jdgserver_name} may be slightly different from these instructions, which apply to {jdgserver_name} 8.1. Check the {jdgserver_name} documentation for any steps that do not work as described.
-====
-
-This example for {jdgserver_name} {jdgserver_version_latest} involves two data centers, `site1` and `site2`. Each data center consists of 1 {jdgserver_name} server and 2 {project_name} servers. We will end up with 2 {jdgserver_name} servers and 4 {project_name} servers in total.
-
-* `Site1` consists of a {jdgserver_name} server, `server1`, and 2 {project_name} servers, `node11` and `node12`.
-
-* `Site2` consists of a {jdgserver_name} server, `server2`, and 2 {project_name} servers, `node21` and `node22`.
-
-* {jdgserver_name} servers `server1` and `server2` are connected to each other through the RELAY2 protocol and `backup` based {jdgserver_name}
-caches in a similar way as described in the link:{jdgserver_crossdcdocs_link}[{jdgserver_name} documentation].
-
-* {project_name} servers `node11` and `node12` form a cluster with each other, but they do not communicate directly with any server in `site2`.
-They communicate with the Infinispan server `server1` using the Hot Rod protocol (Remote cache). See <> for more information.
-
-* The same details apply for `node21` and `node22`. They cluster with each other and communicate only with the `server2` server using the Hot Rod protocol.
-
-Our example setup assumes that the four {project_name} servers talk to the same database. In production, we recommend that you use separate synchronously replicated databases across data centers as described in <>.
-
-include::proc-setting-up-infinispan.adoc[leveloffset=4]
-include::proc-configuring-infinispan.adoc[leveloffset=4]
-include::proc-creating-infinispan-caches.adoc[leveloffset=4]
-include::proc-configuring-remote-cache.adoc[leveloffset=4]
diff --git a/server_installation/topics/operating-mode/crossdc/proc-configuring-infinispan.adoc b/server_installation/topics/operating-mode/crossdc/proc-configuring-infinispan.adoc
deleted file mode 100644
index 03339e266d..0000000000
--- a/server_installation/topics/operating-mode/crossdc/proc-configuring-infinispan.adoc
+++ /dev/null
@@ -1,145 +0,0 @@
-[id='configuring-infinispan-{context}']
-= Configuring {jdgserver_name} Clusters
-Configure {jdgserver_name} clusters to replicate {project_name} data across data centers.
-
-.Prerequisites
-
-* Install and set up {jdgserver_name} Server.
-
-.Procedure
-
-. Open `infinispan.xml` for editing.
-+
-By default, {jdgserver_name} Server uses `server/conf/infinispan.xml` for static configuration such as cluster transport and security mechanisms.
-
-. Create a stack that uses TCPPING as the cluster discovery protocol.
-+
-[source,xml,options="nowrap",subs=attributes+]
-----
-<stack name="global-cluster" extends="tcp">
-    <!-- Remove MPING from the stack and add TCPPING -->
-    <TCPPING initial_hosts="server1[7800],server2[7800]" <1>
-             stack.combine="REPLACE" stack.position="MPING"/>
-</stack>
-----
-<1> Lists the host names for `server1` and `server2`.
-+
-. Configure the {jdgserver_name} cluster transport to perform cross-site replication.
-.. Add the RELAY2 protocol to a JGroups stack.
-+
-[source,xml,options="nowrap",subs=attributes+]
-----
-<jgroups>
-    <stack name="xsite" extends="udp"> <1>
-        <relay.RELAY2 site="site1" <2>
-                      max_site_masters="1000"/> <3>
-        <remote-sites default-stack="tcp"> <4>
-            <remote-site name="site1"/>
-            <remote-site name="site2"/>
-        </remote-sites>
-    </stack>
-</jgroups>
-----
-<1> Creates a stack named `xsite` that extends the default UDP cluster transport.
-<2> Adds the RELAY2 protocol and names the cluster you are configuring as `site1`. The site name must be unique to each {jdgserver_name} cluster.
-<3> Sets 1000 as the number of relay nodes for the cluster. You should set a value that is equal to or greater than the maximum number of nodes in your {jdgserver_name} cluster.
-<4> Names all {jdgserver_name} clusters that backup caches with {jdgserver_name} data and uses the default TCP stack for inter-cluster transport.
-+
-.. Configure the {jdgserver_name} cluster transport to use the stack.
-+
-[source,xml,options="nowrap",subs=attributes+]
-----
-<cache-container name="default" statistics="true">
-    <transport cluster="${infinispan.cluster.name:cluster}" stack="xsite"/> <1>
-</cache-container>
-----
-<1> Uses the `xsite` stack for the cluster.
-+
-. Configure the keystore as an SSL identity in the server security realm.
-+
-[source,xml,options="nowrap",subs=attributes+]
-----
-<server-identities>
-    <ssl>
-        <keystore path="server.pfx" <1>
-                  relative-to="infinispan.server.config.path"
-                  keystore-password="password" <2>
-                  alias="server" /> <3>
-    </ssl>
-</server-identities>
-----
-<1> Specifies the path of the keystore that contains the SSL identity.
-<2> Specifies the password to access the keystore.
-<3> Names the alias of the certificate in the keystore.
-+
-. Configure the authentication mechanism for the Hot Rod endpoint.
-+
-[source,xml,options="nowrap",subs=attributes+]
-----
-<endpoints socket-binding="default" security-realm="default">
-    <hotrod-connector name="hotrod">
-        <authentication>
-            <sasl mechanisms="SCRAM-SHA-512" <1>
-                  server-name="infinispan" /> <2>
-        </authentication>
-    </hotrod-connector>
-    <rest-connector name="rest"/>
-</endpoints>
-----
-<1> Configures the SASL authentication mechanism for the Hot Rod endpoint. SCRAM-SHA-512 is the default SASL mechanism for Hot Rod. However you can use whatever is appropriate for your environment, such as GSSAPI.
-<2> Defines the name that {jdgserver_name} servers present to clients. You specify this name in the Hot Rod client configuration when you set up {project_name}.
-+
-. Create a cache template.
-+
-NOTE: Add the cache template to `infinispan.xml` on each node in the {jdgserver_name} cluster.
-+
-[source,xml,options="nowrap",subs=attributes+]
-----
-<cache-container name="default" statistics="true">
-    <replicated-cache-configuration name="sessions-cfg" <1>
-                                    mode="SYNC"> <2>
-        <locking acquire-timeout="0"/> <3>
-        <backups>
-            <backup site="site2" strategy="SYNC"/> <4>
-        </backups>
-    </replicated-cache-configuration>
-</cache-container>
-----
-<1> Creates a cache template named `sessions-cfg`.
-<2> Defines a cache that synchronously replicates data across the cluster.
-<3> Disables timeout for lock acquisition.
-<4> Names the backup site for the {jdgserver_name} cluster you are configuring.
-+
-. Start {jdgserver_name} server1.
-+
-[source,bash,options="nowrap",subs=attributes+]
-----
-./server.sh -c infinispan.xml -b PUBLIC_IP_ADDRESS -k PUBLIC_IP_ADDRESS -Djgroups.mcast_addr=228.6.7.10
-----
-+
-. Start {jdgserver_name} server2.
-+
-[source,bash,options="nowrap",subs=attributes+]
-----
-./server.sh -c infinispan.xml -b PUBLIC_IP_ADDRESS -k PUBLIC_IP_ADDRESS -Djgroups.mcast_addr=228.6.7.11
-----
-
-. Check {jdgserver_name} server logs to verify the clusters form cross-site views.
-+
-[source,options="nowrap",subs=attributes+]
-----
-INFO [org.infinispan.XSITE] (jgroups-5,${server.hostname}) ISPN000439: Received new x-site view: [site1]
-INFO [org.infinispan.XSITE] (jgroups-7,${server.hostname}) ISPN000439: Received new x-site view: [site1, site2]
-----
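-+
-Optionally, you can also query the cross-site status over the {jdgserver_name} REST API. This is a sketch, assuming the `sessions` cache already exists and the endpoint is reachable on port 11222; adjust the host and credentials to your setup:
-+
-[source,bash,options="nowrap"]
-----
-# Ask server1 for the backup-site status of the sessions cache (REST v2 API)
-curl -s -u myuser:qwer1234! http://server1:11222/rest/v2/caches/sessions/x-site/backups/
-----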
-
-ifeval::[{project_product}==true]
-[role="_additional-resources"]
-.Additional resources
-link:https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_server_guide/index#start_server[Getting Started with Data Grid Server] +
-link:https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_guide_to_cross-site_replication/index#configure_relay-xsite[Configuring Data Grid Clusters for Cross-Site Replication] +
-link:https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_server_guide/index#ssl_identity-server[Setting Up SSL Identities for Data Grid Server] +
-link:https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_server_guide/index#configuring_endpoints[Configuring Data Grid Endpoints] +
-link:https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_server_guide/index#configure_hotrod_authentication-server[Configuring Hot Rod Authentication Mechanisms]
-endif::[]
diff --git a/server_installation/topics/operating-mode/crossdc/proc-configuring-remote-cache.adoc b/server_installation/topics/operating-mode/crossdc/proc-configuring-remote-cache.adoc
deleted file mode 100644
index 5c04eadc8c..0000000000
--- a/server_installation/topics/operating-mode/crossdc/proc-configuring-remote-cache.adoc
+++ /dev/null
@@ -1,214 +0,0 @@
-[id="proc-configuring-remote-cache-{context}"]
-= Configuring Remote Cache Stores on {project_name}
-After you set up remote {jdgserver_name} clusters, you configure the Infinispan subsystem on {project_name} to externalize data to those clusters through remote stores.
-
-.Prerequisites
-
-* Set up remote {jdgserver_name} clusters for cross-site configuration.
-* Create a truststore that contains the SSL certificate with the {jdgserver_name} Server identity.
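-+
-For example, such a truststore could be created from the exported {jdgserver_name} server certificate with `keytool` (a sketch; the certificate file name and the password are placeholders):
-+
-[source,bash,options="nowrap"]
-----
-keytool -importcert -noprompt -alias server \
-    -keystore truststore.jks -storepass password \
-    -file server.crt
-----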
-
-.Procedure
-
-. Add the truststore to the {project_name} deployment.
-. Create a socket binding that points to your {jdgserver_name} cluster.
-+
-[source,xml,options="nowrap",subs=attributes+]
-----
-<outbound-socket-binding name="remote-cache"> <1>
-    <remote-destination host="${remote.cache.host:localhost}" <2>
-                        port="${remote.cache.port:11222}"/> <3>
-</outbound-socket-binding>
-----
-<1> Names the socket binding as `remote-cache`.
-<2> Specifies one or more hostnames for the {jdgserver_name} cluster.
-<3> Defines the port of `11222` where the Hot Rod endpoint listens.
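-+
-The `${...}` syntax means the host and port can be overridden with system properties at startup, which the start commands later in this procedure rely on. For example (a sketch):
-+
-[source,bash,options="nowrap"]
-----
-./standalone.sh -c standalone-ha.xml -Dremote.cache.host=server1 -Dremote.cache.port=11222
-----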
-+
-. Add the `org.keycloak.keycloak-model-infinispan` module to the `keycloak` cache container in the Infinispan subsystem.
-+
-[source,xml,options="nowrap",subs=attributes+]
-----
-<subsystem xmlns="urn:jboss:domain:infinispan:9.0">
-    <cache-container name="keycloak" module="org.keycloak.keycloak-model-infinispan"/>
-----
-
-. Update the `work` cache in the Infinispan subsystem so it has the following configuration:
-+
-[source,xml,options="nowrap",subs=attributes+]
-----
-<replicated-cache name="work"> <1>
-    <remote-store cache="work" <2>
-                  remote-servers="remote-cache" <3>
-                  passivation="false"
-                  fetch-state="false"
-                  purge="false"
-                  preload="false"
-                  shared="true">
-        <property name="rawValues">true</property>
-        <property name="marshaller">org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory</property>
-        <property name="infinispan.client.hotrod.auth_username">myuser</property>
-        <property name="infinispan.client.hotrod.auth_password">qwer1234!</property>
-        <property name="infinispan.client.hotrod.auth_realm">default</property>
-        <property name="infinispan.client.hotrod.auth_server_name">infinispan</property>
-        <property name="infinispan.client.hotrod.sasl_mechanism">SCRAM-SHA-512</property>
-        <property name="infinispan.client.hotrod.trust_store_file_name">/path/to/truststore.jks</property>
-        <property name="infinispan.client.hotrod.trust_store_type">JKS</property>
-        <property name="infinispan.client.hotrod.trust_store_password">password</property>
-    </remote-store>
-</replicated-cache>
-----
-<1> Names the cache in the {jdgserver_name} configuration.
-<2> Names the corresponding cache on the remote {jdgserver_name} cluster.
-<3> Specifies the `remote-cache` socket binding.
-+
-The preceding cache configuration includes recommended settings for {jdgserver_name} caches.
-Hot Rod client configuration properties specify the {jdgserver_name} user credentials and SSL keystore and truststore details.
-+
-Refer to the
-ifeval::[{project_community}==true]
-https://infinispan.org/docs/11.0.x/titles/xsite/xsite.html#configure_clients-xsite[{jdgserver_name} documentation]
-endif::[]
-ifeval::[{project_product}==true]
-https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_guide_to_cross-site_replication/index#configure_clients-xsite[{jdgserver_name} documentation]
-endif::[]
-for descriptions of each property.
-
-. Add distributed caches to the Infinispan subsystem for each of the following caches:
-+
-* sessions
-* clientSessions
-* offlineSessions
-* offlineClientSessions
-* actionTokens
-* loginFailures
-+
-For example, add a cache named `sessions` with the following configuration:
-+
-[source,xml,options="nowrap",subs=attributes+]
-----
-<distributed-cache name="sessions" <1>
-                   owners="1"> <2>
-    <remote-store cache="sessions" <3>
-                  remote-servers="remote-cache" <4>
-                  passivation="false"
-                  fetch-state="false"
-                  purge="false"
-                  preload="false"
-                  shared="true">
-        <property name="rawValues">true</property>
-        <property name="marshaller">org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory</property>
-        <property name="infinispan.client.hotrod.auth_username">myuser</property>
-        <property name="infinispan.client.hotrod.auth_password">qwer1234!</property>
-        <property name="infinispan.client.hotrod.auth_realm">default</property>
-        <property name="infinispan.client.hotrod.auth_server_name">infinispan</property>
-        <property name="infinispan.client.hotrod.sasl_mechanism">SCRAM-SHA-512</property>
-        <property name="infinispan.client.hotrod.trust_store_file_name">/path/to/truststore.jks</property>
-        <property name="infinispan.client.hotrod.trust_store_type">JKS</property>
-        <property name="infinispan.client.hotrod.trust_store_password">password</property>
-    </remote-store>
-</distributed-cache>
-----
-<1> Names the cache in the {jdgserver_name} configuration.
-<2> Configures one replica of each cache entry across the {jdgserver_name} cluster.
-<3> Names the corresponding cache on the remote {jdgserver_name} cluster.
-<4> Specifies the `remote-cache` socket binding.
-
-. Copy the `NODE11` directory to three other directories, referred to later as `NODE12`, `NODE21`, and `NODE22`, as shown in the sketch below.
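-+
-A minimal sketch of the copy step (run from the directory that contains `NODE11`):
-+
-[source,bash,options="nowrap"]
-----
-cp -r NODE11 NODE12
-cp -r NODE11 NODE21
-cp -r NODE11 NODE22
-----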
-
-. Start `NODE11`:
-+
-[source,subs="+quotes"]
-----
-cd NODE11/bin
-./standalone.sh -c standalone-ha.xml -Djboss.node.name=node11 -Djboss.site.name=site1 \
- -Djboss.default.multicast.address=234.56.78.1 -Dremote.cache.host=server1 \
- -Djava.net.preferIPv4Stack=true -b _PUBLIC_IP_ADDRESS_
-----
-+
-If you notice the following warning messages in logs, you can safely ignore them:
-+
-[source,options="nowrap",subs=attributes+]
-----
-WARN [org.infinispan.CONFIG] (MSC service thread 1-5) ISPN000292: Unrecognized attribute 'infinispan.client.hotrod.auth_password'. Please check your configuration. Ignoring!
-WARN [org.infinispan.CONFIG] (MSC service thread 1-5) ISPN000292: Unrecognized attribute 'infinispan.client.hotrod.auth_username'. Please check your configuration. Ignoring!
-----
-+
-. Start `NODE12`:
-+
-[source,subs="+quotes"]
-----
-cd NODE12/bin
-./standalone.sh -c standalone-ha.xml -Djboss.node.name=node12 -Djboss.site.name=site1 \
- -Djboss.default.multicast.address=234.56.78.1 -Dremote.cache.host=server1 \
- -Djava.net.preferIPv4Stack=true -b _PUBLIC_IP_ADDRESS_
-----
-+
-The cluster nodes should now be connected. A message like the following should appear in the log of both NODE11 and NODE12:
-+
-```
-Received new cluster view for channel keycloak: [node11|1] (2) [node11, node12]
-```
-NOTE: The channel name in the log might be different.
-
-. Start `NODE21`:
-+
-[source,subs="+quotes"]
-----
-cd NODE21/bin
-./standalone.sh -c standalone-ha.xml -Djboss.node.name=node21 -Djboss.site.name=site2 \
- -Djboss.default.multicast.address=234.56.78.2 -Dremote.cache.host=server2 \
- -Djava.net.preferIPv4Stack=true -b _PUBLIC_IP_ADDRESS_
-----
-+
-This node should not join the cluster with `NODE11` and `NODE12`; it should form a separate cluster:
-+
-```
-Received new cluster view for channel keycloak: [node21|0] (1) [node21]
-```
-
-. Start `NODE22`:
-+
-[source,subs="+quotes"]
-----
-cd NODE22/bin
-./standalone.sh -c standalone-ha.xml -Djboss.node.name=node22 -Djboss.site.name=site2 \
- -Djboss.default.multicast.address=234.56.78.2 -Dremote.cache.host=server2 \
- -Djava.net.preferIPv4Stack=true -b _PUBLIC_IP_ADDRESS_
-----
-+
-It should form a cluster with `NODE21`:
-+
-```
-Received new cluster view for channel keycloak: [node21|1] (2) [node21, node22]
-```
-
-NOTE: The channel name in the log might be different.
-
-. Test:
-
-.. Go to `http://node11:8080/auth/` and create the initial admin user.
-
-.. Go to `http://node11:8080/auth/admin` and log in as admin to the admin console.
-
-.. Open a second browser and go to any of the other nodes: `http://node12:8080/auth/admin`, `http://node21:8080/auth/admin`, or `http://node22:8080/auth/admin`. After logging in, you should
-see the same sessions in the `Sessions` tab of a particular user, client, or realm on all four servers.
-
-.. After making a change in the {project_name} Admin Console, such as modifying a user or a realm, that change should be immediately visible on any of the four nodes. Caches should be properly invalidated everywhere.
-
-.. Check the server logs if needed. After a login or logout, a message like the following should appear in `NODEXY/standalone/log/server.log` on all the nodes:
-+
-```
-2017-08-25 17:35:17,737 DEBUG [org.keycloak.models.sessions.infinispan.remotestore.RemoteCacheSessionListener] (Client-Listener-sessions-30012a77422542f5) Received event from remote store.
-Event 'CLIENT_CACHE_ENTRY_REMOVED', key '193489e7-e2bc-4069-afe8-f1dfa73084ea', skip 'false'
-```
-
-ifeval::[{project_product}==true]
-[role="_additional-resources"]
-.Additional resources
-link:https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/configuring_data_grid/index[Data Grid Configuration Guide] +
-link:https://access.redhat.com/webassets/avalon/d/red-hat-data-grid/8.1/api/org/infinispan/client/hotrod/configuration/package-summary.html[Hot Rod Client Configuration API] +
-link:https://access.redhat.com/webassets/avalon/d/red-hat-data-grid/8.1/configdocs/[Data Grid Configuration Schema Reference]
-endif::[]
diff --git a/server_installation/topics/operating-mode/crossdc/proc-creating-infinispan-caches.adoc b/server_installation/topics/operating-mode/crossdc/proc-creating-infinispan-caches.adoc
deleted file mode 100644
index 29525034f5..0000000000
--- a/server_installation/topics/operating-mode/crossdc/proc-creating-infinispan-caches.adoc
+++ /dev/null
@@ -1,51 +0,0 @@
-[id='creating-infinispan-caches-{context}']
-= Creating Infinispan Caches
-Create the Infinispan caches that {project_name} requires.
-
-We recommend that you create caches on {jdgserver_name} clusters at runtime rather than adding caches to `infinispan.xml`.
-This strategy ensures that your caches are automatically synchronized across the cluster and permanently stored.
-
-The following procedure uses the {jdgserver_name} Command Line Interface (CLI) to create all the required caches in a single batch command.
-
-.Prerequisites
-
-* Configure your {jdgserver_name} clusters.
-
-.Procedure
-
-. Create a batch file that contains caches, for example:
-+
-[source,bash,options="nowrap",subs=attributes+]
------
-cat > /tmp/caches.batch<<EOF
-create cache --template=sessions-cfg work
-create cache --template=sessions-cfg sessions
-create cache --template=sessions-cfg clientSessions
-create cache --template=sessions-cfg offlineSessions
-create cache --template=sessions-cfg offlineClientSessions
-create cache --template=sessions-cfg actionTokens
-create cache --template=sessions-cfg loginFailures
-EOF
-----
-
-. Create the caches with the {jdgserver_name} CLI:
-+
-[source,bash,options="nowrap",subs=attributes+]
-----
-bin/cli.sh -c localhost:11222 -f /tmp/caches.batch
-----
diff --git a/server_installation/topics/operating-mode/domain.adoc b/server_installation/topics/operating-mode/domain.adoc
deleted file mode 100644
--- a/server_installation/topics/operating-mode/domain.adoc
+++ /dev/null
-
-[[_domain-mode]]
-=== Using domain clustered mode
-
-Domain mode is a way to centrally manage and publish the configuration for your servers. The _domain.xml_ configuration file defines, among other things, the `profiles` used by {project_name} server instances:
-
-.profiles
-[source,xml]
-----
-<profiles>
-    <profile name="auth-server-standalone">
-        ...
-    </profile>
-    <profile name="auth-server-clustered">
-        ...
-    </profile>
-</profiles>
-----
-
-The `auth-server-standalone` profile is a non-clustered setup. The `auth-server-clustered` profile is the clustered setup.
-
-If you scroll down further, you'll see various `socket-binding-groups` defined.
-
-.socket-binding-groups
-[source,xml]
-----
-<socket-binding-groups>
-    <socket-binding-group name="standard-sockets" default-interface="public">
-        ...
-    </socket-binding-group>
-    <socket-binding-group name="ha-sockets" default-interface="public">
-        ...
-    </socket-binding-group>
-    <socket-binding-group name="load-balancer-sockets" default-interface="public">
-        ...
-    </socket-binding-group>
-</socket-binding-groups>
-----
-
-
-This configuration defines the default port mappings for various connectors that are opened with each
-{project_name} server instance. Any value that contains `${...}` can be overridden on the command line
-with the `-D` switch, for example:
-
-----
-$ domain.sh -Djboss.http.port=80
-----
-
-The definition of the server group for {project_name} resides in the `server-groups` XML block. It specifies the domain profile
-that is used (`default`) and also some default boot arguments for the Java VM when the host controller boots an instance. It also
-binds a `socket-binding-group` to the server group.
-
-.server group
-[source,xml]
-----
-<server-groups>
-    <server-group name="load-balancer-group" profile="load-balancer">
-        <jvm name="default">
-            ...
-        </jvm>
-        <socket-binding-group ref="load-balancer-sockets"/>
-    </server-group>
-
-    <server-group name="auth-server-group" profile="auth-server-clustered">
-        <jvm name="default">
-            ...
-        </jvm>
-        <socket-binding-group ref="ha-sockets"/>
-    </server-group>
-</server-groups>
-----
-
-==== Host controller configuration
-
-{project_name} comes with two host controller configuration files that reside in the _.../domain/configuration/_ directory:
-_host-master.xml_ and _host-slave.xml_. _host-master.xml_ is configured to boot up a domain controller, a load balancer, and
-one {project_name} server instance. _host-slave.xml_ is configured to talk to the domain controller and boot up
-one {project_name} server instance.
-
-NOTE: The load balancer is not a required service. It exists so that you can easily test drive clustering on your development
-      machine. While usable in production, you can replace it with a different hardware-based or software-based load balancer
-      if you prefer.
-
-.Host Controller Config
-image:{project_images}/host-files.png[]
-
-To disable the load balancer server instance, edit _host-master.xml_ and comment out or remove the `"load-balancer"` entry.
-
-[source,xml]
-----
-<servers>
-    <!-- <server name="load-balancer" group="load-balancer-group"/> -->
-    <server name="server-one" group="auth-server-group" auto-start="true">
-        ...
-    </server>
-</servers>
-----
-
-Another interesting thing to note about this file is the declaration of the authentication server instance. It has
-a `port-offset` setting. Any network port defined in the _domain.xml_ `socket-binding-group` or the server group
-will have the value of `port-offset` added to it. For this sample domain setup, we do this so that ports opened by
-the load balancer server don't conflict with the authentication server instance that is started.
-
-[source,xml]
-----
-<servers>
-    ...
-    <server name="server-one" group="auth-server-group" auto-start="true">
-        <socket-bindings port-offset="150"/>
-    </server>
-</servers>
-----
-
-==== Server instance working directories
-
-Each {project_name} server instance defined in your host files creates a working directory under _.../domain/servers/{SERVER NAME}_.
-Additional configuration can be put there, and any temporary, log, or data files the server instance needs or creates go there too.
-The structure of these per server directories ends up looking like any other {appserver_name} booted server.
-
-.Working Directories
-image:{project_images}/domain-server-dir.png[]
-
-==== Booting in domain clustered mode
-
-When running the server in domain mode, there is a specific script you need to run to boot the server depending on your
-operating system. These scripts live in the _bin/_ directory of the server distribution.
-
-.Domain Boot Script
-image:{project_images}/domain-boot-files.png[]
-
-To boot the server:
-
-.Linux/Unix
-[source]
-----
-$ .../bin/domain.sh --host-config=host-master.xml
-----
-
-.Windows
-[source]
-----
-> ...\bin\domain.bat --host-config=host-master.xml
-----
-
-When running the boot script you will need to pass in the host controller configuration file you are going to use via the
-`--host-config` switch.
-
-[[_clustered-domain-example]]
-==== Testing with a sample clustered domain
-
-You can test drive clustering using the sample _domain.xml_ configuration. This sample domain is meant to run on one machine and boots up:
-
-* a domain controller
-* an HTTP load balancer
-* two {project_name} server instances
-
-.Procedure
-
-. Run the `domain.sh` script twice to start two separate host controllers.
-+
-The first one is the master host controller that starts a domain controller, an HTTP load balancer, and one
-{project_name} authentication server instance. The second one is a slave host controller that starts
-up only an authentication server instance.
-
-. Configure the slave host controller so that it can talk securely to the domain controller. Perform these steps:
-+
-If you omit these steps, the slave host cannot obtain the centralized configuration from the domain controller.
-
-.. Set up a secure connection by creating a server admin user and a secret that are shared between the master and the slave.
-+
-Run the `.../bin/add-user.sh` script.
-
-.. Select `Management User` when the script asks about the type of user to add.
-+
-This choice generates a secret that you cut and paste into the
-_.../domain/configuration/host-slave.xml_ file.
-+
-.Add App Server Admin
-[source]
-----
-$ add-user.sh
- What type of user do you wish to add?
- a) Management User (mgmt-users.properties)
- b) Application User (application-users.properties)
- (a): a
- Enter the details of the new user to add.
- Using realm 'ManagementRealm' as discovered from the existing property files.
- Username : admin
- Password recommendations are listed below. To modify these restrictions edit the add-user.properties configuration file.
- - The password should not be one of the following restricted values {root, admin, administrator}
- - The password should contain at least 8 characters, 1 alphabetic character(s), 1 digit(s), 1 non-alphanumeric symbol(s)
- - The password should be different from the username
- Password :
- Re-enter Password :
- What groups do you want this user to belong to? (Please enter a comma separated list, or leave blank for none)[ ]:
- About to add user 'admin' for realm 'ManagementRealm'
- Is this correct yes/no? yes
- Added user 'admin' to file '/.../standalone/configuration/mgmt-users.properties'
- Added user 'admin' to file '/.../domain/configuration/mgmt-users.properties'
- Added user 'admin' with groups to file '/.../standalone/configuration/mgmt-groups.properties'
- Added user 'admin' with groups to file '/.../domain/configuration/mgmt-groups.properties'
- Is this new user going to be used for one AS process to connect to another AS process?
- e.g. for a slave host controller connecting to the master or for a Remoting connection for server to server EJB calls.
- yes/no? yes
- To represent the user add the following to the server-identities definition <secret value="bWdtdDEyMyE=" />
-----
-+
-NOTE: The add-user.sh script does not add the user to the {project_name} server but to the underlying JBoss Enterprise Application Platform. The credentials used and generated in this script are only for demonstration purposes. Please use the ones generated on your system.
-
-. Cut and paste the secret value into the _.../domain/configuration/host-slave.xml_ file as follows:
-+
-[source,xml]
-----
-<security-realm name="ManagementRealm">
-    <server-identities>
-        <secret value="bWdtdDEyMyE="/>
-    </server-identities>
-</security-realm>
-----
-
-. Add the _username_ of the created user in the _.../domain/configuration/host-slave.xml_ file:
-+
-[source,xml]
-----
-<remote security-realm="ManagementRealm" username="admin">
-----
-+
-. Run the boot script twice to simulate a two node cluster on one development machine.
-+
-.Boot up master
-[source,shell]
-----
-$ domain.sh --host-config=host-master.xml
-----
-+
-.Boot up slave
-[source,shell]
-----
-$ domain.sh --host-config=host-slave.xml
-----
-
-. Open your browser and go to http://localhost:8080/auth to try it out.
diff --git a/server_installation/topics/operating-mode/standalone-ha.adoc b/server_installation/topics/operating-mode/standalone-ha.adoc
deleted file mode 100644
index 11f7ba7d78..0000000000
--- a/server_installation/topics/operating-mode/standalone-ha.adoc
+++ /dev/null
@@ -1,47 +0,0 @@
-
-[[_standalone-ha-mode]]
-
-=== Using standalone clustered mode
-
-Standalone clustered operation mode applies when you want to run {project_name} within a cluster. This mode
-requires that you have a copy of the {project_name} distribution on each machine where you want to run a server instance.
-This mode can be very easy to deploy initially, but can become quite cumbersome. To make a configuration change,
-you modify each distribution on each machine. For a large cluster, this mode can become time consuming and error prone.
-
-==== Standalone clustered configuration
-
-The distribution has a mostly pre-configured app server configuration file for running within a cluster. It has all the specific
-infrastructure settings for networking, databases, caches, and discovery. This file resides
-in _.../standalone/configuration/standalone-ha.xml_. There are a few things missing from this configuration.
-You can't run {project_name} in a cluster without configuring a shared database connection. You also need to
-deploy some type of load balancer in front of the cluster. The <<_clustering,clustering>> and
-<<_database,database>> sections of this guide walk you through these things.
-
-.Standalone HA Config
-image:{project_images}/standalone-ha-config-file.png[]
-
-WARNING: Any changes you make to this file while the server is running will not take effect and may even be overwritten
- by the server. Instead use the command line scripting or the web console of {appserver_name}. See
- the link:{appserver_admindoc_link}[_{appserver_admindoc_name}_] for more information.
-
-==== Booting in standalone clustered mode
-
-You use the same boot scripts to start {project_name} as you do in standalone mode. The difference is that
-you pass in an additional flag to point to the HA config file.
-
-.Standalone Clustered Boot Scripts
-image:{project_images}/standalone-boot-files.png[]
-
-To boot the server:
-
-.Linux/Unix
-[source]
-----
-$ .../bin/standalone.sh --server-config=standalone-ha.xml
-----
-
-.Windows
-[source]
-----
-> ...\bin\standalone.bat --server-config=standalone-ha.xml
-----
diff --git a/server_installation/topics/operating-mode/standalone.adoc b/server_installation/topics/operating-mode/standalone.adoc
deleted file mode 100644
index 6047c4184d..0000000000
--- a/server_installation/topics/operating-mode/standalone.adoc
+++ /dev/null
@@ -1,44 +0,0 @@
-
-[[_standalone-mode]]
-=== Using standalone mode
-
-Standalone operating mode is only useful when you want to run one, and only one {project_name} server instance.
-It is not usable for clustered deployments and all caches are non-distributed and local-only. It is not recommended that
-you use standalone mode in production as you will have a single point of failure. If your standalone mode server goes down,
-users will not be able to log in. This mode is really only useful to test drive and play with the features of {project_name}.
-
-==== Booting in standalone mode
-
-When running the server in standalone mode, there is a specific script you need to run to boot the server depending on your
-operating system. These scripts live in the _bin/_ directory of the server distribution.
-
-.Standalone Boot Scripts
-image:{project_images}/standalone-boot-files.png[]
-
-To boot the server:
-
-.Linux/Unix
-[source]
-----
-$ .../bin/standalone.sh
-----
-
-.Windows
-[source]
-----
-> ...\bin\standalone.bat
-----
-
-==== Standalone configuration
-
-The bulk of this guide walks you through how to configure infrastructure level aspects of {project_name}. These
-aspects are configured in a configuration file that is specific to the application server that {project_name} is a
-derivative of. In the standalone operation mode, this file lives in _.../standalone/configuration/standalone.xml_. This file
-is also used to configure non-infrastructure level things that are specific to {project_name} components.
-
-.Standalone Config File
-image:{project_images}/standalone-config-file.png[]
-
-WARNING: Any changes you make to this file while the server is running will not take effect and may even be overwritten
- by the server. Instead use the command line scripting or the web console of {appserver_name}. See
- the link:{appserver_admindoc_link}[_{appserver_admindoc_name}_] for more information.
diff --git a/server_installation/topics/operator.adoc b/server_installation/topics/operator.adoc
deleted file mode 100644
index 3e9be152eb..0000000000
--- a/server_installation/topics/operator.adoc
+++ /dev/null
@@ -1,30 +0,0 @@
-
-[[_operator]]
-== {project_operator}
-
-ifeval::[{project_community}==true]
-:tech_feature_name: The {project_name} Operator
-:tech_feature_disabled: false
-include::templates/techpreview.adoc[]
-endif::[]
-
-The {project_operator} automates {project_name} administration in
-ifeval::[{project_community}==true]
-Kubernetes or
-endif::[]
-OpenShift. You use this Operator to create custom resources (CRs), which automate administrative tasks. For example, instead of creating a client or a user in the {project_name} admin console, you can create custom resources to perform those tasks. A custom resource is a YAML file that defines the parameters for the administrative task.
-
-You can create custom resources to perform the following tasks:
-
-* xref:_keycloak_cr[Install {project_name}]
-* xref:_realm-cr[Create realms]
-* xref:_client-cr[Create clients]
-* xref:_user-cr[Create users]
-* xref:_external_database[Connect to an external database]
-* xref:_backup-cr[Schedule database backups]
-* xref:_operator-extensions[Install extensions and themes]
-
-[NOTE]
-After you create custom resources for realms, clients, and users, you can manage them by using the {project_name} admin console or as custom resources using the `{create_cmd_brief}` command. However, you cannot use both methods, because the Operator performs a one-way sync for custom resources that you modify. For example, if you modify a realm custom resource, the changes show up in the admin console. However, if you modify the realm using the admin console, those changes have no effect on the custom resource.
-
-Begin using the Operator by xref:_installing-operator[Installing the {project_operator} on a cluster].
diff --git a/server_installation/topics/operator/command-options.adoc b/server_installation/topics/operator/command-options.adoc
deleted file mode 100644
index 28a879c294..0000000000
--- a/server_installation/topics/operator/command-options.adoc
+++ /dev/null
@@ -1,27 +0,0 @@
-
-[[_command-options]]
-=== Command options for managing custom resources
-
-After you create a custom resource, you can edit or delete it using the `{create_cmd_brief}` command.
-
-* To edit a custom resource, use this command: `{create_cmd_brief} edit <CR_name>`
-* To delete a custom resource, use this command: `{create_cmd_brief} delete <CR_name>`
-
-For example, to edit a realm custom resource named `test-realm`, use this command:
-
-[source,bash,subs=+attributes]
-----
-$ {create_cmd_brief} edit test-realm
-----
-
-A window opens where you can make changes.
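-
-Deleting follows the same form. For example, mirroring the edit example above (the resource name is assumed to resolve the same way):
-
-[source,bash,subs=+attributes]
-----
-$ {create_cmd_brief} delete test-realm
-----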
-
-[NOTE]
-====
-You can update the Keycloak CR YAML file and changes will be applied to the deployment.
-
-Updates to the other resources are limited:
-
-* Keycloak Realm CR only supports basic creation and deletion without sync options.
-* Keycloak User and Client CRs support unidirectional updates (changes to the CR are reflected in Keycloak but changes done in Keycloak are not updated in the CR).
-====
diff --git a/server_installation/topics/operator/extensions.adoc b/server_installation/topics/operator/extensions.adoc
deleted file mode 100644
index 8d8ba6a0c6..0000000000
--- a/server_installation/topics/operator/extensions.adoc
+++ /dev/null
@@ -1,48 +0,0 @@
-
-[[_operator-extensions]]
-=== Installing extensions and themes
-
-You can use the operator to install extensions and themes that you need for your company or organization. The extension or theme can be anything that {project_name} can consume. For example, you can add a metrics extension. You add the extension or theme to the Keycloak custom resource.
-
-.Example YAML file for a Keycloak custom resource
-```yaml
-apiVersion: keycloak.org/v1alpha1
-kind: Keycloak
-metadata:
- name: example-keycloak
- labels:
-ifeval::[{project_community}==true]
- app: keycloak
-endif::[]
-ifeval::[{project_product}==true]
- app: sso
-endif::[]
-spec:
- instances: 1
- extensions:
-  - <url_of_extension_or_theme_jar>
- externalAccess:
- enabled: True
-```
-
-You can package and deploy themes in the same way as any other extension. See the {developerguide_deploying_themes}[Deploying Themes] manual entry for more information.
-
-.Prerequisites
-
-* You have a YAML file for the Keycloak custom resource.
-
-* You have cluster-admin permission or an equivalent level of permissions granted by an administrator.
-
-
-.Procedure
-
-. Edit the YAML file for the Keycloak custom resource: `{create_cmd_brief} edit <CR_name>`
-
-. Add a line called `extensions:` after the `instances` line.
-
-. Add a URL to a JAR file for your custom extension or theme.
-
-. Save the file.
-
-The Operator downloads the extension or theme and installs it.
-
diff --git a/server_installation/topics/operator/external-database.adoc b/server_installation/topics/operator/external-database.adoc
deleted file mode 100644
index 8200819b71..0000000000
--- a/server_installation/topics/operator/external-database.adoc
+++ /dev/null
@@ -1,159 +0,0 @@
-
-[[_external_database]]
-=== Connecting to an external database
-
-You can use the Operator to connect to an external PostgreSQL database by creating a `keycloak-db-secret` YAML file and setting the `externalDatabase` property in the Keycloak CR to enabled. Note that values in a Secret's `data` field are Base64 encoded; the `stringData` field accepts plain values.
-
-.Example YAML file for `keycloak-db-secret`
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
- name: keycloak-db-secret
- namespace: keycloak
-stringData:
-  POSTGRES_DATABASE: <database_name>
-  POSTGRES_EXTERNAL_ADDRESS: <external_database_IP_or_hostname>
-  POSTGRES_EXTERNAL_PORT: <external_database_port>
-  POSTGRES_PASSWORD: <database_password>
-  # Required for AWS Backup functionality
-  POSTGRES_SUPERUSER: "true"
-  POSTGRES_USERNAME: <database_username>
-  SSLMODE: <TLS_mode>
-type: Opaque
-```
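-
-As an alternative to writing the YAML by hand, the same secret could be created from literals (a sketch; all values are placeholders for your database details):
-
-[source,bash,subs=+attributes]
-----
-$ {create_cmd_brief} create secret generic keycloak-db-secret -n keycloak \
-    --from-literal=POSTGRES_DATABASE=keycloak \
-    --from-literal=POSTGRES_EXTERNAL_ADDRESS=192.0.2.3 \
-    --from-literal=POSTGRES_EXTERNAL_PORT=5432 \
-    --from-literal=POSTGRES_USERNAME=keycloak \
-    --from-literal=POSTGRES_PASSWORD=password \
-    --from-literal=POSTGRES_SUPERUSER=true \
-    --from-literal=SSLMODE=disable
-----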
-
-The following properties set the hostname or IP address and port of the database.
-
-* `POSTGRES_EXTERNAL_ADDRESS` - an IP address or a hostname of the external database.
-ifeval::[{project_community}==true]
-This address needs to be resolvable in a Kubernetes cluster.
-endif::[]
-* `POSTGRES_EXTERNAL_PORT` - (Optional) A database port.
-
-The other properties work in the same way for a hosted or external database. Set them as follows:
-
-* `POSTGRES_DATABASE` - Database name to be used.
-* `POSTGRES_USERNAME` - Database username
-* `POSTGRES_PASSWORD` - Database password
-* `POSTGRES_SUPERUSER` - Indicates whether backups should run as super user. Typically `true`.
-* `SSLMODE` - Indicates whether to use TLS on the connection to the external PostgreSQL database. Check the possible https://www.postgresql.org/docs/current/libpq-ssl.html[values].
-
-When `SSLMODE` is enabled, the operator searches for a secret called `keycloak-db-ssl-cert-secret` containing the `root.crt` that has been used by the PostgreSQL database. Creating the secret is optional; the secret is used only when you want to verify the database's certificate (for example, `SSLMODE: verify-ca`). Here is an example:
-
-.Example YAML file for `TLS Secret` to be used by the operator.
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
- name: keycloak-db-ssl-cert-secret
- namespace: keycloak
-type: Opaque
-data:
- root.crt: {root.crt base64}
-```
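-
-This secret could likewise be created directly from the certificate file (a sketch; the local path to `root.crt` is a placeholder):
-
-[source,bash,subs=+attributes]
-----
-$ {create_cmd_brief} create secret generic keycloak-db-ssl-cert-secret -n keycloak \
-    --from-file=root.crt=./root.crt
-----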
-
-The Operator will create a Service named `keycloak-postgresql`. This Service is configured by the Operator to expose the external database based on the content of `POSTGRES_EXTERNAL_ADDRESS`. {project_name} uses this Service to connect to the Database, which means it does not connect to the Database directly but rather through this Service.
-
-The Keycloak custom resource requires updates to enable external database support.
-
-.Example YAML file for `Keycloak` custom resource that supports an external database
-```yaml
-apiVersion: keycloak.org/v1alpha1
-kind: Keycloak
-metadata:
- labels:
-ifeval::[{project_community}==true]
- app: example-keycloak
-endif::[]
-ifeval::[{project_product}==true]
- app: sso
-endif::[]
- name: example-keycloak
- namespace: keycloak
-spec:
- externalDatabase:
- enabled: true
- instances: 1
-```
-
-.Prerequisites
-
-* You have a YAML file for `keycloak-db-secret`.
-* You have modified the Keycloak custom resource to set `externalDatabase.enabled` to `true`.
-* You have cluster-admin permission or an equivalent level of permissions granted by an administrator.
-
-.Procedure
-
-. Locate the secret for your PostgreSQL database: `{create_cmd_brief} get secret <secret_name> -o yaml`. For example:
-+
-[source,bash,subs=+attributes]
-----
-$ {create_cmd_brief} get secret keycloak-db-secret -o yaml
-apiVersion: v1
-data:
-  POSTGRES_DATABASE: cm9vdA==
-  POSTGRES_EXTERNAL_ADDRESS: MTkyLjAuMi4z
-  POSTGRES_EXTERNAL_PORT: NTQzMg==
-----
-+
-The `POSTGRES_EXTERNAL_ADDRESS` is in Base64 format.
-
-. Decode the value for the secret: `echo "<encoded_value>" | base64 --decode`. For example:
-+
-[source,bash,subs=+attributes]
-----
-$ echo "MTkyLjAuMi4z" | base64 --decode
-192.0.2.3
-----
-
-. Confirm that the decoded value matches the IP address for your database:
-+
-[source,bash,subs=+attributes]
-----
-$ {create_cmd_brief} get pods -o wide
-NAME READY STATUS RESTARTS AGE IP
-keycloak-0 1/1 Running 0 13m 192.0.2.0
-keycloak-postgresql-c8vv27m 1/1 Running 0 24m 192.0.2.3
-----
-
-. Confirm that `keycloak-postgresql` appears in a list of running services:
-+
-[source,bash,subs=+attributes]
-----
-$ {create_cmd_brief} get svc
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-keycloak              ClusterIP   203.0.113.0   <none>   8443/TCP   27m
-keycloak-discovery    ClusterIP   None          <none>   8080/TCP   27m
-keycloak-postgresql   ClusterIP   203.0.113.1   <none>   5432/TCP   27m
-----
-+
-The `keycloak-postgresql` service sends requests to a set of IP addresses in the backend. These IP addresses are called endpoints.
-
-. View the endpoints used by the `keycloak-postgresql` service to confirm that they use the IP addresses for your database:
-+
-[source,bash,subs=+attributes]
-----
-$ {create_cmd_brief} get endpoints keycloak-postgresql
-NAME ENDPOINTS AGE
-keycloak-postgresql   192.0.2.3:5432   27m
-----
-
-. Confirm that {project_name} is running with the external database. This example shows that everything is running:
-+
-[source,bash,subs=+attributes]
-----
-$ {create_cmd_brief} get pods -o wide
-NAME READY STATUS RESTARTS AGE IP
-keycloak-0 1/1 Running 0 26m 192.0.2.0
-keycloak-postgresql-c8vv27m 1/1 Running 0 36m 192.0.2.3
-----
-
-ifeval::[{project_community}==true]
-.Additional Resources
-
-* To back up your database using custom resources, see xref:_backup-cr[Scheduling database backups].
-
-
-* For more information on Base64 encoding, see the https://kubernetes.io/docs/concepts/configuration/secret/[Kubernetes Secrets manual].
-endif::[]
diff --git a/server_installation/topics/operator/external-keycloak.adoc b/server_installation/topics/operator/external-keycloak.adoc
deleted file mode 100644
index 0296d4482f..0000000000
--- a/server_installation/topics/operator/external-keycloak.adoc
+++ /dev/null
@@ -1,72 +0,0 @@
-
-[[_external_keycloak]]
-=== Connecting to an external {project_name}
-
-This operator can also be used to partially manage an external {project_name} instance.
-In its current state, it is only able to create clients.
-
-To do this, you need to create unmanaged versions of the `Keycloak` and `KeycloakRealm` custom resources to use for targeting and configuration.
-
-.Example YAML file for `external-keycloak`
-```yaml
-apiVersion: keycloak.org/v1alpha1
-kind: Keycloak
-metadata:
- name: external-ref
- labels:
-ifeval::[{project_community}==true]
- app: external-keycloak
-endif::[]
-ifeval::[{project_product}==true]
- app: external-sso
-endif::[]
-spec:
- unmanaged: true
- external:
- enabled: true
- url: https://some.external.url
-```
-
-In order to authenticate against this {project_name} instance, the operator infers the secret name from the custom resource by prefixing the custom resource name with `credential-`.
-
-.Example YAML file for `credential-external-ref`
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
- name: credential-external-ref
-type: Opaque
-data:
- ADMIN_USERNAME: YWRtaW4=
- ADMIN_PASSWORD: cGFzcw==
-```
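-
-The Base64 values above decode to `admin` and `pass`. The same secret could be created from literals, for example:
-
-[source,bash,subs=+attributes]
-----
-$ {create_cmd_brief} create secret generic credential-external-ref \
-    --from-literal=ADMIN_USERNAME=admin \
-    --from-literal=ADMIN_PASSWORD=pass
-----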
-
-.Example YAML file for `external-realm`
-```yaml
-apiVersion: keycloak.org/v1alpha1
-kind: KeycloakRealm
-metadata:
- name: external-realm
- labels:
-ifeval::[{project_community}==true]
- app: external-keycloak
-endif::[]
-ifeval::[{project_product}==true]
- app: external-sso
-endif::[]
-spec:
- unmanaged: true
- realm:
- id: "basic"
- realm: "basic"
- instanceSelector:
- matchLabels:
-ifeval::[{project_community}==true]
- app: external-keycloak
-endif::[]
-ifeval::[{project_product}==true]
- app: external-sso
-endif::[]
-```
-
-You can now use the realm reference in your client as usual, and it will create the client on the external {project_name} instance.
diff --git a/server_installation/topics/operator/installation.adoc b/server_installation/topics/operator/installation.adoc
deleted file mode 100644
index 021b93236b..0000000000
--- a/server_installation/topics/operator/installation.adoc
+++ /dev/null
@@ -1,185 +0,0 @@
-
-[[_installing-operator]]
-=== Installing the {project_operator} on a cluster
-
-To install the {project_operator}, you can use:
-
-* xref:_install_by_olm[The Operator Lifecycle Manager (OLM)]
-ifeval::[{project_community}==true]
-* xref:_install_by_command[Command line installation]
-endif::[]
-
-[[_install_by_olm]]
-==== Installing using the Operator Lifecycle Manager
-
-ifeval::[{project_community}==true]
-You can install the Operator on an xref:_openshift-olm[OpenShift] or xref:_kubernetes-olm[Kubernetes] cluster.
-
-[[_openshift-olm]]
-===== Installation on an OpenShift cluster
-endif::[]
-
-.Prerequisites
-
-* You have cluster-admin permission or an equivalent level of permissions granted by an administrator.
-
-.Procedure
-
-Perform this procedure on an OpenShift cluster.
-
-. Open the OpenShift Container Platform web console.
-
-. In the left column, click `Operators, OperatorHub`.
-
-. Search for {project_name} Operator.
-+
-.OperatorHub tab in OpenShift
-image:{project_images}/operator-openshift-operatorhub.png[]
-
-. Click the {project_name} Operator icon.
-+
-An Install page opens.
-+
-.Operator Install page on OpenShift
-image:{project_images}/operator-olm-installation.png[]
-
-. Click `Install`.
-
-. Select a namespace and click Subscribe.
-+
-.Namespace selection in OpenShift
-image:images/installed-namespace.png[]
-+
-The Operator starts installing.
-
-.Additional resources
-
-* When the Operator installation completes, you are ready to create your first custom resource. See xref:_keycloak_cr[{project_name} installation using a custom resource].
-ifeval::[{project_community}==true]
-However, if you want to start tracking all Operator activities before creating custom resources, see the xref:_monitoring-operator[Application Monitoring Operator].
-endif::[]
-
-* For more information on OpenShift Operators, see the link:https://docs.openshift.com/container-platform/4.4/operators/olm-what-operators-are.html[OpenShift Operators guide].
-
-ifeval::[{project_community}==false]
-[[_operator_rhsso_compatibility]]
-=== Operator compatibility with different RH-SSO versions
-The RH-SSO Operator 7.6 is fully compatible with RH-SSO 7.5. The 7.6 Operator by default deploys RH-SSO 7.6. To use the RH-SSO 7.5 image instead, override the image coordinates by setting the `RELATED_IMAGE_RHSSO` environment variable in the Operator pod. To make that change, update your OLM `Subscription` for the RH-SSO Operator inside
-your OpenShift cluster, for example:
-```yaml
-apiVersion: operators.coreos.com/v1alpha1
-kind: Subscription
-spec:
- channel: stable
- config:
- env:
- - name: RELATED_IMAGE_RHSSO
- value: 'registry.redhat.io/rh-sso-7/sso75-openshift-rhel8:7.5'
- name: rhsso-operator
- source: redhat-operators
- sourceNamespace: openshift-marketplace
- ...
-```
-The usage of RH-SSO Operator 7.6 with RH-SSO 7.5 is fully supported.
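-
-For example, the same environment override could be applied as a merge patch (a sketch; the Operator's namespace is an assumption to adjust):
-
-[source,bash]
-----
-$ oc -n openshift-operators patch subscription rhsso-operator --type=merge \
-    -p '{"spec":{"config":{"env":[{"name":"RELATED_IMAGE_RHSSO","value":"registry.redhat.io/rh-sso-7/sso75-openshift-rhel8:7.5"}]}}}'
-----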
-endif::[]
-
-ifeval::[{project_community}==true]
-
-[[_kubernetes-olm]]
-===== Installation on a Kubernetes cluster
-
-.Prerequisites
-
-* You have cluster-admin permission or an equivalent level of permissions granted by an administrator.
-
-.Procedure
-
-For a Kubernetes cluster, perform these steps.
-
-. Go to link:https://operatorhub.io/operator/keycloak-operator[Keycloak Operator on OperatorHub.io].
-
-. Click `Install`.
-
-. Follow the instructions on the screen.
-+
-.Operator Install page on Kubernetes
-image:{project_images}/operator-operatorhub-install.png[]
-
-.Additional resources
-
-* When the Operator installation completes, you are ready to create your first custom resource. See xref:_keycloak_cr[{project_name} installation using a custom resource]. However, if you want to start tracking all Operator activities before creating custom resources, see the xref:_monitoring-operator[Application Monitoring Operator].
-
-* For more information on a Kubernetes installation, see link:https://operatorhub.io/how-to-install-an-operator[How to install an Operator from OperatorHub.io].
-
-
-[[_install_by_command]]
-==== Installing from the command line
-
-You can install the {project_operator} from the command line.
-
-.Prerequisites
-
-* You have cluster-admin permission or an equivalent level of permissions granted by an administrator.
-
-.Procedure
-
-. Obtain the software to install from this location: link:{operatorRepo_link}[Github repo].
-
-. Install all required custom resource definitions:
-+
-[source,bash,subs=+attributes]
-----
-$ {create_cmd} -f deploy/crds/
-----
-
-. Create a new namespace (or reuse an existing one) such as the namespace `myproject`:
-+
-[source,bash,subs=+attributes]
-----
-$ {create_cmd_brief} create namespace myproject
-----
-
-. Deploy a role, role binding, and service account for the Operator:
-+
-[source,bash,subs=+attributes]
-----
-$ {create_cmd} -f deploy/role.yaml -n myproject
-$ {create_cmd} -f deploy/role_binding.yaml -n myproject
-$ {create_cmd} -f deploy/service_account.yaml -n myproject
-----
-
-. Deploy the Operator:
-+
-[source,bash,subs=+attributes]
-----
-$ {create_cmd} -f deploy/operator.yaml -n myproject
-----
-
-. Confirm that the Operator is running:
-+
-[source,bash,subs=+attributes]
-----
-$ {create_cmd_brief} get deployment keycloak-operator -n myproject
-NAME READY UP-TO-DATE AVAILABLE AGE
-keycloak-operator 1/1 1 1 41s
-----
-endif::[]
-.Additional resources
-
-* When the Operator installation completes, you are ready to create your first custom resource. See xref:_keycloak_cr[{project_name} installation using a custom resource].
-ifeval::[{project_community}==true]
-However, if you want to start tracking all Operator activities before creating custom resources, see the xref:_monitoring-operator[Application Monitoring Operator].
-
-* For more information on a Kubernetes installation, see link:https://operatorhub.io/how-to-install-an-operator[How to install an Operator from OperatorHub.io].
-endif::[]
-
-* For more information on OpenShift Operators, see the link:https://docs.openshift.com/container-platform/4.4/operators/olm-what-operators-are.html[OpenShift Operators guide].
-
-[[_operator_production_usage]]
-=== Using the {project_operator} in production environment
-
-* The usage of the embedded database is not supported in a production environment.
-* The Backup CRD is deprecated and not supported in a production environment.
-* The `podDisruptionBudget` field in the Keycloak CR is deprecated and will be ignored when the Operator is deployed on Kubernetes version 1.25 and higher.
-* We fully support using the rest of the CRDs in production, despite the `v1alpha1` version. We do not plan to make any breaking changes in this CRD version.
-
diff --git a/server_installation/topics/operator/keycloak-backup-cr.adoc b/server_installation/topics/operator/keycloak-backup-cr.adoc
deleted file mode 100644
index 20fe51584f..0000000000
--- a/server_installation/topics/operator/keycloak-backup-cr.adoc
+++ /dev/null
@@ -1,204 +0,0 @@
-
-[[_backup-cr]]
-=== Scheduling database backups
-
-[WARNING]
-====
-Backup CR is *deprecated* and could be removed in future releases.
-====
-
-You can use the Operator to schedule automatic backups of the database as defined by custom resources. The custom resource triggers a backup job
-ifeval::[{project_community}==true]
-(or a `CronJob` in the case of Periodic Backups)
-endif::[]
-and reports back its status.
-
-ifeval::[{project_community}==true]
-Two options exist to schedule backups:
-
-* xref:_backups-cr-aws[Backing up to AWS S3 storage]
-* xref:_backups-local-cr[Backing up to local storage]
-
-If you have AWS S3 storage, you can perform a one-time backup or periodic backups. If you do not have AWS S3 storage, you can back up to local storage.
-
-[[_backups-cr-aws]]
-==== Backing up to AWS S3 storage
-
-You can back up your database to AWS S3 storage one time or periodically. To back up your data periodically, enter a valid cron schedule expression into the `schedule` property.
-
-For AWS S3 storage, you create a YAML file for the backup custom resource and a YAML file for the AWS secret. The backup custom resource requires a YAML file with the following structure:
-
-```yaml
-apiVersion: keycloak.org/v1alpha1
-kind: KeycloakBackup
-metadata:
-  name: <CR_name>
-spec:
-  aws:
-    # Optional - used only for Periodic Backups.
-    # Follows usual crond syntax (for example, use "0 1 * * *" to perform the backup every day at 1 AM.)
-    schedule: <cron_expression>
-    # Required - the name of the secret containing the credentials to access the S3 storage
-    credentialsSecretName: <secret_name>
-```
-
-The AWS secret requires a YAML file with the following structure:
-
-.AWS S3 `Secret`
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
-  name: <secret_name>
-type: Opaque
-stringData:
-  AWS_S3_BUCKET_NAME: <bucket_name>
-  AWS_ACCESS_KEY_ID: <access_key_id>
-  AWS_SECRET_ACCESS_KEY: <secret_access_key>
-```
-
-.Prerequisites
-
-* Your Backup custom resource YAML file includes a `credentialsSecretName` that references a `Secret` containing AWS S3 credentials.
-
-* Your `KeycloakBackup` custom resource has `aws` sub-properties.
-
-* You have a YAML file for the AWS S3 Secret that includes a `<secret_name>` that matches the one identified in the backup custom resource.
-
-* You have cluster-admin permission or an equivalent level of permissions granted by an administrator.
-
-
-.Procedure
-
-. Create the secret with credentials: `{create_cmd} -f <secret_file>.yaml`. For example:
-+
-[source,bash,subs=+attributes]
-----
-$ {create_cmd} -f secret.yaml
-keycloak.keycloak.org/aws_s3_secret created
-----
-
-. Create a backup job: `{create_cmd} -f <backup_cr_file>.yaml`. For example:
-+
-[source,bash,subs=+attributes]
-----
-$ {create_cmd} -f aws_one-time-backup.yaml
-keycloak.keycloak.org/aws_s3_backup created
-----
-
-. View a list of backup jobs:
-+
-[source,bash,subs=+attributes]
-----
-$ {create_cmd_brief} get jobs
-NAME COMPLETIONS DURATION AGE
-aws_s3_backup 0/1 6s 6s
-----
-
-. View the list of executed backup jobs.
-+
-[source,bash,subs=+attributes]
-----
-$ {create_cmd_brief} get pods
-NAME READY STATUS RESTARTS AGE
-aws_s3_backup-5b4rfdd 0/1 Completed 0 24s
-keycloak-0 1/1 Running 0 52m
-keycloak-postgresql-c824c6-vv27m 1/1 Running 0 71m
-----
-
-. View the log of your completed backup job:
-+
-[source,bash,subs=+attributes]
-----
-$ {create_cmd_brief} logs aws_s3_backup-5b4rf
-==> Component data dump completed
-.
-.
-.
-.
-----
-
-The status of the backup job also appears in the AWS console.
-
-[[_backups-local-cr]]
-==== Backing up to Local Storage
-
-endif::[]
-You can use the Operator to create a backup job that performs a one-time backup to a local Persistent Volume.
-
-.Example YAML file for a Backup custom resource
-```yaml
-apiVersion: keycloak.org/v1alpha1
-kind: KeycloakBackup
-metadata:
- name: test-backup
-```
-
-.Prerequisites
-
-* You have a YAML file for this custom resource.
-ifeval::[{project_community}==true]
-Be sure to omit the `aws` sub-properties from this file.
-endif::[]
-
-* You have a `PersistentVolume` with a `claimRef` to reserve it only for a `PersistentVolumeClaim` created by the {project_name} Operator.
-
-.Procedure
-
-. Create a backup job: `{create_cmd} -f <backup_cr_file>.yaml`. For example:
-+
-[source,bash,subs=+attributes]
-----
-$ {create_cmd} -f one-time-backup.yaml
-keycloak.keycloak.org/test-backup
-----
-+
-The Operator creates a `PersistentVolumeClaim` with the following naming scheme: `keycloak-backup-<CR_name>`.
-
-. View a list of volumes:
-+
-[source,bash,subs=+attributes]
-----
-$ {create_cmd_brief} get pvc
-NAME STATUS VOLUME
-keycloak-backup-test-backup Bound pvc-e242-ew022d5-093q-3134n-41-adff
-keycloak-postgresql-claim     Bound   pvc-e242-vs29202-9bcd7-093q-31-zadj
-----
-
-. View a list of backup jobs:
-+
-[source,bash,subs=+attributes]
-----
-$ {create_cmd_brief} get jobs
-NAME COMPLETIONS DURATION AGE
-test-backup 0/1 6s 6s
-----
-
-. View the list of executed backup jobs:
-+
-[source,bash,subs=+attributes]
-----
-$ {create_cmd_brief} get pods
-NAME READY STATUS RESTARTS AGE
-test-backup-5b4rf 0/1 Completed 0 24s
-keycloak-0 1/1 Running 0 52m
-keycloak-postgresql-c824c6-vv27m 1/1 Running 0 71m
-----
-
-. View the log of your completed backup job:
-+
-[source,bash,subs=+attributes]
-----
-$ {create_cmd_brief} logs test-backup-5b4rf
-==> Component data dump completed
-.
-.
-.
-.
-----
-
-.Additional resources
-
-* For more details on persistent volumes, see link:https://docs.openshift.com/container-platform/4.4/storage/understanding-persistent-storage.html[Understanding persistent storage].
-
diff --git a/server_installation/topics/operator/keycloak-client-cr.adoc b/server_installation/topics/operator/keycloak-client-cr.adoc
deleted file mode 100644
index a38b49cb3f..0000000000
--- a/server_installation/topics/operator/keycloak-client-cr.adoc
+++ /dev/null
@@ -1,118 +0,0 @@
-
-[[_client-cr]]
-=== Creating a client custom resource
-
-You can use the Operator to create clients in {project_name} as defined by a custom resource. You define the properties of the client in a YAML file.
-
-[NOTE]
-====
-You can update the YAML file and changes appear in the {project_name} admin console, however changes to the admin console do not update the custom resource.
-====
-
-.Example YAML file for a Client custom resource
-```yaml
-apiVersion: keycloak.org/v1alpha1
-kind: KeycloakClient
-metadata:
- name: example-client
- labels:
-ifeval::[{project_community}==true]
-    app: example-keycloak
-endif::[]
-ifeval::[{project_product}==true]
- app: sso
-endif::[]
-spec:
-  realmSelector:
-    matchLabels:
-      app: <matching_label_of_KeycloakRealm_CR>
-  client:
-    # auto-generated if not supplied
-    #id: 123
-    clientId: client-secret
-    secret: client-secret
-    # ...
-    # other properties of Keycloak Client
-```
-
-.Prerequisites
-
-* You have a YAML file for this custom resource.
-
-* You have cluster-admin permission or an equivalent level of permissions granted by an administrator.
-
-.Procedure
-
-. Use this command on the YAML file that you created: `{create_cmd} -f <file_name>.yaml`. For example:
-+
-[source,bash,subs=+attributes]
-----
-$ {create_cmd} -f initial_client.yaml
-keycloak.keycloak.org/example-client created
-----
-
-. Log into the {project_name} admin console for the related instance of {project_name}.
-
-. Click Clients.
-+
-The new client appears in the list of clients.
-+
-image:images/clients.png[]
-
-.Results
-After a client is created, the Operator creates a Secret containing the `Client ID` and the client's secret using the following naming pattern: `keycloak-client-secret-<CLIENT_ID>`. For example:
-
-.Client's Secret
-```yaml
-apiVersion: v1
-data:
-  CLIENT_ID: <base64 encoded Client ID>
-  CLIENT_SECRET: <base64 encoded Client Secret>
-kind: Secret
-```
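-
-The values in the Secret are base64 encoded; a short sketch for decoding the client secret on the command line, assuming the Secret name `keycloak-client-secret-client-secret` shown in the status example below:
-
-[source,bash,subs=+attributes]
-----
-$ {create_cmd_brief} get secret keycloak-client-secret-client-secret -o jsonpath='{.data.CLIENT_SECRET}' | base64 -d
-----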
-
-After the Operator processes the custom resource, view the status with this command:
-
-[source,bash,subs=+attributes]
-----
-$ {create_cmd_brief} describe keycloak
-----
-
-.Client custom resource Status
-```yaml
-Name:         client-secret
-Namespace:    keycloak
-ifeval::[{project_community}==true]
-Labels:       app=example-keycloak
-endif::[]
-ifeval::[{project_product}==true]
-Labels:       app=sso
-endif::[]
-API Version:  keycloak.org/v1alpha1
-Kind:         KeycloakClient
-Spec:
-  Client:
-    Client Authenticator Type:  client-secret
-    Client Id:                  client-secret
-    Id:                         keycloak-client-secret
-  Realm Selector:
-    Match Labels:
-ifeval::[{project_community}==true]
-      App:  keycloak
-endif::[]
-ifeval::[{project_product}==true]
-      App:  sso
-endif::[]
-Status:
-  Message:
-  Phase:  reconciling
-  Ready:  true
-  Secondary Resources:
-    Secret:
-      keycloak-client-secret-client-secret
-Events:
-```
-
-.Additional resources
-
-* When the client creation completes, you are ready to xref:_user-cr[create a user custom resource].
\ No newline at end of file
diff --git a/server_installation/topics/operator/keycloak-cr.adoc b/server_installation/topics/operator/keycloak-cr.adoc
deleted file mode 100644
index 03c2362860..0000000000
--- a/server_installation/topics/operator/keycloak-cr.adoc
+++ /dev/null
@@ -1,249 +0,0 @@
-
-[[_keycloak_cr]]
-=== Installing {project_name} using a custom resource
-
-You can use the Operator to automate the installation of {project_name} by creating a Keycloak custom resource. When you use a custom resource to install {project_name}, you create the components and services that are described here and illustrated in the graphic that follows.
-
-* `keycloak-db-secret` - Stores properties such as the database username, password, and external address (if you connect to an external database)
-* `credential-<CR-name>` - Admin username and password to log into the {project_name} admin console (`<CR-name>` is based on the `Keycloak` custom resource name)
-* `keycloak` - Keycloak deployment specification that is implemented as a StatefulSet with high availability support
-* `keycloak-postgresql` - Starts a PostgreSQL database installation
-* `keycloak-discovery` Service - Performs `JDBC_PING` discovery
-* `keycloak` Service - Connects to {project_name} through HTTPS (HTTP is not supported)
-* `keycloak-postgresql` Service - Connects to the internal database instance (or to an external one, if used)
-* `keycloak` Route - The URL for accessing the {project_name} admin console from OpenShift
-ifeval::[{project_community}==true]
-* `keycloak` Ingress - The URL for accessing the {project_name} admin console from Kubernetes
-endif::[]
-
-.How Operator components and services interact
-image:{project_images}/operator-components.png[]
-
-==== The Keycloak custom resource
-
-The Keycloak custom resource is a YAML file that defines the parameters for installation. This file contains three properties.
-
-* `instances` - controls the number of instances running in high availability mode.
-* `externalAccess` - if `enabled` is `True`, the Operator creates a route for OpenShift
-ifeval::[{project_community}==true]
- or an Ingress for Kubernetes
-endif::[]
- for the {project_name} cluster. You can set `host` to override the automatically chosen host name for Route
-ifeval::[{project_community}==true]
- or default value `keycloak.local` set for Ingress.
-endif::[]
-* `externalDatabase` - connects to an externally hosted database, a topic covered in the xref:_external_database[external database] section of this guide. Setting it to `false` installs an embedded PostgreSQL database and should be used only for testing purposes. Be aware that `externalDatabase: false` is *NOT* supported in production environments.
-
-.Example YAML file for a Keycloak custom resource
-```yaml
-apiVersion: keycloak.org/v1alpha1
-kind: Keycloak
-metadata:
-ifeval::[{project_community}==true]
-  name: example-keycloak
-endif::[]
-ifeval::[{project_product}==true]
-  name: example-sso
-endif::[]
-  labels:
-ifeval::[{project_community}==true]
-    app: example-keycloak
-endif::[]
-ifeval::[{project_product}==true]
-    app: sso
-endif::[]
-spec:
-  instances: 1
-  externalAccess:
-    enabled: True
-```
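-
-If you rely on the `host` override described above, it is set under `externalAccess`; a minimal fragment, with `keycloak.example.com` as a placeholder host name:
-
-```yaml
-spec:
-  externalAccess:
-    enabled: True
-    host: keycloak.example.com
-```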
-
-[NOTE]
-====
-You can update the YAML file and the changes appear in the {project_name} admin console; however, changes made in the admin console do not update the custom resource.
-====
-
-==== Creating a Keycloak custom resource on OpenShift
-
-On OpenShift, you use the custom resource to create a route, which is the URL of the admin console, and find the secret, which holds the username and password for the admin console.
-
-.Prerequisites
-
-* You have a YAML file for this custom resource.
-
-* You have cluster-admin permission or an equivalent level of permissions granted by an administrator.
-
-ifeval::[{project_community}==true]
-* If you want to start tracking all Operator activities now, install the monitoring application before you create this custom resource. See xref:_monitoring-operator[The Application Monitoring Operator].
-endif::[]
-
-.Procedure
-
-. Create a route using your YAML file: `{create_cmd} -f <filename>.yaml -n <namespace>`. For example:
-+
-[source,bash,subs=+attributes]
-----
-ifeval::[{project_community}==true]
-$ {create_cmd} -f keycloak.yaml -n keycloak
-keycloak.keycloak.org/example-keycloak created
-endif::[]
-ifeval::[{project_product}==true]
-$ {create_cmd} -f sso.yaml -n sso
-keycloak.keycloak.org/example-sso created
-endif::[]
-----
-+
-A route is created in OpenShift.
-
-. Log into the OpenShift web console.
-
-. Select `Networking`, `Routes` and search for Keycloak.
-+
-.Routes screen in OpenShift web console
-image:images/route-ocp.png[]
-
-. On the screen with the Keycloak route, click the URL under `Location`.
-+
-The {project_name} admin console login screen appears.
-+
-.Admin console login screen
-image:images/login-empty.png[]
-
-. Locate the username and password for the admin console in the OpenShift web console; under `Workloads`, click `Secrets` and search for Keycloak.
-+
-.Secrets screen in OpenShift web console
-image:images/secrets-ocp.png[]
-
-. Enter the username and password into the admin console login screen.
-+
-.Admin console login screen
-image:images/login-complete.png[]
-+
-You are now logged into an instance of {project_name} that was installed by a Keycloak custom resource. You are ready to create custom resources for realms, clients, and users.
-+
-.{project_name} master realm
-image:images/new_install_cr.png[]
-
-. Check the status of the custom resource:
-+
-[source,bash,subs=+attributes]
-----
-$ {create_cmd_brief} describe keycloak
-----
-
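-As an alternative to browsing the web console, you can read the admin credentials directly from the Secret; a sketch, assuming the community custom resource name `example-keycloak` and the `ADMIN_USERNAME`/`ADMIN_PASSWORD` keys created by the Operator:
-
-[source,bash,subs=+attributes]
-----
-$ {create_cmd_brief} get secret credential-example-keycloak -o jsonpath='{.data.ADMIN_PASSWORD}' | base64 -d
-----
-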
-ifeval::[{project_community}==true]
-==== Creating a Keycloak custom resource on Kubernetes
-
-On Kubernetes, you use the custom resource to create an ingress, which exposes the admin console, and find the secret, which holds the username and password for that console.
-
-.Prerequisites
-
-* You have a YAML file for this custom resource.
-
-* You have cluster-admin permission or an equivalent level of permissions granted by an administrator.
-
-.Procedure
-
-. Create the ingress using your YAML file: `{create_cmd} -f <filename>.yaml -n <namespace>`. For example:
-+
-[source,bash,subs=+attributes]
-----
-$ {create_cmd} -f keycloak.yaml -n keycloak
-keycloak.keycloak.org/example-keycloak created
-----
-
-. Find the ingress: `{create_cmd_brief} get ingress -n <namespace>`. For example:
-+
-[source,bash,subs=+attributes]
-----
-$ {create_cmd_brief} get ingress -n example-keycloak
-NAME       HOSTS                 ADDRESS     PORTS   AGE
-keycloak   keycloak.redhat.com   192.0.2.0   80      3m
-----
-
-. Copy and paste the ADDRESS (the ingress) into a web browser.
-+
-The {project_name} admin console login screen appears.
-+
-.Admin console login screen
-image:images/login-empty.png[]
-
-. Locate the username and password.
-+
-[source,bash,subs=+attributes]
-----
-$ {create_cmd_brief} get secret credential-<CR-name> -o go-template='{{range $k,$v := .data}}{{printf "%s: " $k}}{{if not $v}}{{$v}}{{else}}{{$v | base64decode}}{{end}}{{"\n"}}{{end}}'
-----
-
-. Enter the username and password in the admin console login screen.
-+
-.Admin console login screen
-image:images/login-complete.png[]
-+
-You are now logged into an instance of {project_name} that was installed by a Keycloak custom resource. You are ready to create custom resources for realms, clients, and users.
-+
-.Admin console master realm
-image:images/new_install_cr.png[]
-endif::[]
-
-.Results
-
-After the Operator processes the custom resource, view the status with this command:
-
-[source,bash,subs=+attributes]
-----
-$ {create_cmd_brief} describe keycloak
-----
-
-.Keycloak custom resource Status
-```yaml
-Name:         example-keycloak
-Namespace:    keycloak
-ifeval::[{project_community}==true]
-Labels:       app=example-keycloak
-endif::[]
-ifeval::[{project_product}==true]
-Labels:       app=sso
-endif::[]
-Annotations:
-API Version:  keycloak.org/v1alpha1
-Kind:         Keycloak
-Spec:
-  External Access:
-    Enabled:  true
-  Instances:  1
-Status:
-  Credential Secret:  credential-example-keycloak
-  Internal URL:       https://
-  Message:
-  Phase:              reconciling
-  Ready:              true
-  Secondary Resources:
-    Deployment:
-      keycloak-postgresql
-    Persistent Volume Claim:
-      keycloak-postgresql-claim
-    Prometheus Rule:
-      keycloak
-    Route:
-      keycloak
-    Secret:
-      credential-example-keycloak
-      keycloak-db-secret
-    Service:
-      keycloak-postgresql
-      keycloak
-      keycloak-discovery
-    Service Monitor:
-      keycloak
-    Stateful Set:
-      keycloak
-  Version:
-Events:
-```
-
-.Additional resources
-
-* Once the installation of {project_name} completes, you are ready to xref:_realm-cr[create a realm custom resource].
-
-* An external database is the supported option and needs to be enabled in the Keycloak custom resource. You can disable this option only for testing and enable it when you switch to a production environment. See xref:_external_database[Connecting to an external database].
diff --git a/server_installation/topics/operator/keycloak-realm-cr.adoc b/server_installation/topics/operator/keycloak-realm-cr.adoc
deleted file mode 100644
index 7682fc273b..0000000000
--- a/server_installation/topics/operator/keycloak-realm-cr.adoc
+++ /dev/null
@@ -1,124 +0,0 @@
-
-[[_realm-cr]]
-=== Creating a realm custom resource
-
-You can use the Operator to create realms in {project_name} as defined by a custom resource. You define the properties of the realm custom resource in a YAML file.
-
-[NOTE]
-====
-You can only create or delete realms by creating or deleting the YAML file, and the changes appear in the {project_name} admin console. However, changes made in the admin console are not reflected back, and updating the custom resource after the realm is created is not supported.
-====
-
-.Example YAML file for a `Realm` custom resource
-```yaml
-apiVersion: keycloak.org/v1alpha1
-kind: KeycloakRealm
-metadata:
-  name: test
-  labels:
-ifeval::[{project_community}==true]
-    app: example-keycloak
-endif::[]
-ifeval::[{project_product}==true]
-    app: sso
-endif::[]
-spec:
-  realm:
-    id: "basic"
-    realm: "basic"
-    enabled: True
-    displayName: "Basic Realm"
-  instanceSelector:
-    matchLabels:
-ifeval::[{project_community}==true]
-      app: example-keycloak
-endif::[]
-ifeval::[{project_product}==true]
-      app: sso
-endif::[]
-
-```
-
-.Prerequisites
-
-* You have a YAML file for this custom resource.
-
-* In the YAML file, the `app` under `instanceSelector` matches the label of a Keycloak custom resource. Matching these values ensures that you create the realm in the right instance of {project_name}.
-
-* You have cluster-admin permission or an equivalent level of permissions granted by an administrator.
-
-.Procedure
-
-. Use this command on the YAML file that you created: `{create_cmd} -f <filename>.yaml`. For example:
-+
-[source,bash,subs=+attributes]
-----
-$ {create_cmd} -f initial_realm.yaml
-keycloakrealm.keycloak.org/test created
-----
-
-. Log into the admin console for the related instance of {project_name}.
-
-. Click Select Realm and locate the realm that you created.
-+
-The new realm opens.
-+
-.Admin console master realm
-image:images/test-realm-cr.png[]
-
-.Results
-
-After the Operator processes the custom resource, view the status with this command:
-
-[source,bash,subs=+attributes]
-----
-$ {create_cmd_brief} describe keycloak
-----
-
-.Realm custom resource status
-```yaml
-Name:         example-keycloakrealm
-Namespace:    keycloak
-ifeval::[{project_community}==true]
-Labels:       app=example-keycloak
-endif::[]
-ifeval::[{project_product}==true]
-Labels:       app=sso
-endif::[]
-Annotations:
-API Version:  keycloak.org/v1alpha1
-Kind:         KeycloakRealm
-Metadata:
-  Creation Timestamp:  2019-12-03T09:46:02Z
-  Finalizers:
-    realm.cleanup
-  Generation:        1
-  Resource Version:  804596
-  Self Link:         /apis/keycloak.org/v1alpha1/namespaces/keycloak/keycloakrealms/example-keycloakrealm
-  UID:               b7b2f883-15b1-11ea-91e6-02cb885627a6
-Spec:
-  Instance Selector:
-    Match Labels:
-ifeval::[{project_community}==true]
-      App:  example-keycloak
-endif::[]
-ifeval::[{project_product}==true]
-      App:  sso
-endif::[]
-  Realm:
-    Display Name:  Basic Realm
-    Enabled:       true
-    Id:            basic
-    Realm:         basic
-Status:
-  Login URL:
-  Message:
-  Phase:  reconciling
-  Ready:  true
-Events:
-
-```
-
-.Additional resources
-
-* When the realm creation completes, you are ready to xref:_client-cr[create a client custom resource].
diff --git a/server_installation/topics/operator/keycloak-user-cr.adoc b/server_installation/topics/operator/keycloak-user-cr.adoc
deleted file mode 100644
index a99a679ebe..0000000000
--- a/server_installation/topics/operator/keycloak-user-cr.adoc
+++ /dev/null
@@ -1,139 +0,0 @@
-
-[[_user-cr]]
-=== Creating a user custom resource
-
-You can use the Operator to create users in {project_name} as defined by a custom resource. You define the properties of the user custom resource in a YAML file.
-
-[NOTE]
-====
-You can update properties, except for the password, in the YAML file, and the changes appear in the {project_name} admin console; however, changes made in the admin console do not update the custom resource.
-====
-
-.Example YAML file for a user custom resource
-```yaml
-apiVersion: keycloak.org/v1alpha1
-kind: KeycloakUser
-metadata:
-  name: example-user
-spec:
-  user:
-    username: "realm_user"
-    firstName: "John"
-    lastName: "Doe"
-    email: "user@example.com"
-    enabled: True
-    emailVerified: False
-    credentials:
-      - type: "password"
-        value: "12345"
-    realmRoles:
-      - "offline_access"
-    clientRoles:
-      account:
-        - "manage-account"
-      realm-management:
-        - "manage-users"
-  realmSelector:
-    matchLabels:
-ifeval::[{project_community}==true]
-      app: example-keycloak
-endif::[]
-ifeval::[{project_product}==true]
-      app: sso
-endif::[]
-```
-
-.Prerequisites
-
-* You have a YAML file for this custom resource.
-
-* The `realmSelector` matches the labels of an existing realm custom resource.
-
-* You have cluster-admin permission or an equivalent level of permissions granted by an administrator.
-
-.Procedure
-
-. Use this command on the YAML file that you created: `{create_cmd} -f <filename>.yaml`. For example:
-+
-[source,bash,subs=+attributes]
-----
-$ {create_cmd} -f initial_user.yaml
-keycloakuser.keycloak.org/example-user created
-----
-
-. Log into the admin console for the related instance of {project_name}.
-
-. Click Users.
-
-. Search for the user that you defined in the YAML file.
-+
-You may need to switch to a different realm to find the user.
-+
-image:images/realm_user.png[]
-
-.Results
-
-After a user is created, the Operator creates a Secret using the
-following naming pattern: `credential-<realm name>-<username>-<namespace>`, containing the username and, if it has been specified in the CR `credentials` attribute, the password.
-
-Here's an example:
-
-.`KeycloakUser` Secret
-```yaml
-kind: Secret
-apiVersion: v1
-data:
-  password: <base64 encoded password>
-  username: <base64 encoded username>
-type: Opaque
-```
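-
-To inspect the generated values, list the credential Secrets and decode the password; a sketch, with `<secret name>` standing for the Secret that matches the naming pattern above:
-
-[source,bash,subs=+attributes]
-----
-$ {create_cmd_brief} get secrets | grep credential-
-$ {create_cmd_brief} get secret <secret name> -o jsonpath='{.data.password}' | base64 -d
-----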
-
-Once the Operator processes the custom resource, view the status with this command:
-
-[source,bash,subs=+attributes]
-----
-$ {create_cmd_brief} describe keycloak
-----
-
-.User custom resource Status
-```yaml
-Name:         example-realm-user
-Namespace:    keycloak
-ifeval::[{project_community}==true]
-Labels:       app=example-keycloak
-endif::[]
-ifeval::[{project_product}==true]
-Labels:       app=sso
-endif::[]
-API Version:  keycloak.org/v1alpha1
-Kind:         KeycloakUser
-Spec:
-  Realm Selector:
-    Match Labels:
-ifeval::[{project_community}==true]
-      App:  example-keycloak
-endif::[]
-ifeval::[{project_product}==true]
-      App:  sso
-endif::[]
-  User:
-    Email:           realm_user@redhat.com
-    Credentials:
-      Type:   password
-      Value:
-    Email Verified:  false
-    Enabled:         true
-    First Name:      John
-    Last Name:       Doe
-    Username:        realm_user
-Status:
-  Message:
-  Phase:  reconciled
-Events:
-```
-
-.Additional resources
-
-* If you have an external database, you can modify the Keycloak custom resource to support it. See xref:_external_database[Connecting to an external database].
-
-* To back up your database using custom resources, see xref:_backup-cr[schedule database backups].
diff --git a/server_installation/topics/operator/monitoring-stack.adoc b/server_installation/topics/operator/monitoring-stack.adoc
deleted file mode 100644
index 83de7d7628..0000000000
--- a/server_installation/topics/operator/monitoring-stack.adoc
+++ /dev/null
@@ -1,56 +0,0 @@
-
-[[_monitoring-operator]]
-=== The {application_monitoring_operator}
-
-Before using the Operator to install {project_name} or create components, we recommend that you install the {application_monitoring_operator}, which tracks Operator activity. To view metrics for the Operator, you can use the Grafana Dashboard and Prometheus Alerts from the {application_monitoring_operator}. For example, you can view metrics such as the number of controller runtime reconciliation loops, the reconcile loop time, and errors.
-
-The {project_operator} integration with the {application_monitoring_operator} requires no action. You only need to install the {application_monitoring_operator} in the cluster.
-
-==== Installing the {application_monitoring_operator}
-
-.Prerequisites
-
-* The {project_operator} is installed.
-
-* You have cluster-admin permission or an equivalent level of permissions granted by an administrator.
-
-.Procedure
-
-. Install the {application_monitoring_operator} by using the link:{application_monitoring_operator_installation_link}[documentation].
-
-. Label the namespace used for the {project_operator} installation. For example:
-+
-[source,bash,subs=+attributes]
-----
-{create_cmd_brief} label namespace <namespace> monitoring-key=middleware
-----
-
-. Log into the OpenShift web console.
-
-. Confirm that monitoring is working by searching for the Prometheus and Grafana routes in the `application-monitoring` namespace.
-+
-.Routes in OpenShift web console
-image:{project_images}/operator-application-monitoring-routes.png[]
-
-==== Viewing Operator Metrics
-
-Grafana and Prometheus each provide graphical information about Operator activities.
-
-* The Operator installs a pre-defined Grafana Dashboard as shown here:
-+
-.Grafana Dashboard
-image:{project_images}/operator-graphana-dashboard.png[]
-+
-[NOTE]
-====
-If you make customizations, we recommend that you clone the Grafana Dashboard so that your changes are not overwritten during an upgrade.
-====
-
-* The Operator installs a set of pre-defined Prometheus Alerts as shown here:
-+
-.Prometheus Alerts
-image:{project_images}/operator-prometheus-alerts.png[]
-
-.Additional resources
-
-For more information, see link:https://docs.openshift.com/container-platform/latest/monitoring/cluster_monitoring/prometheus-alertmanager-and-grafana.html[Accessing Prometheus, Alertmanager, and Grafana].
diff --git a/server_installation/topics/operator/upgrading.adoc b/server_installation/topics/operator/upgrading.adoc
deleted file mode 100644
index 9d36c37ccd..0000000000
--- a/server_installation/topics/operator/upgrading.adoc
+++ /dev/null
@@ -1,43 +0,0 @@
-
-[[_operator-upgrade-strategy]]
-=== Upgrade strategy
-
-You can configure how the operator performs {project_name} upgrades. You can choose from the following upgrade strategies.
-
-* `recreate`: This is the default strategy. The operator removes all {project_name} replicas, optionally creates a backup,
-  and then creates the replicas based on a newer {project_name} image. This strategy is suitable for major upgrades, as
-  a single {project_name} version is accessing the underlying database. The downside is that {project_name} needs to be shut
-  down during the upgrade.
-* `rolling`: The operator removes one replica at a time and creates it again based on a newer {project_name} image. This
-  ensures a zero-downtime upgrade, but it is more suitable for minor version upgrades that do not require database migration,
-  since the database is accessed by multiple {project_name} versions concurrently. Automatic backups are not supported
-  with this strategy.
-
-.Example YAML file for a Keycloak custom resource
-```yaml
-apiVersion: keycloak.org/v1alpha1
-kind: Keycloak
-metadata:
-  name: example-keycloak
-  labels:
-ifeval::[{project_community}==true]
-    app: keycloak
-endif::[]
-ifeval::[{project_product}==true]
-    app: sso
-endif::[]
-spec:
-  instances: 2
-  migration:
-    strategy: recreate
-    backups:
-      enabled: True
-  externalAccess:
-    enabled: True
-```
-
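-For comparison, a minimal fragment selecting the rolling strategy for a zero-downtime minor upgrade; note that automatic backups cannot be combined with it:
-
-```yaml
-spec:
-  instances: 2
-  migration:
-    strategy: rolling
-```
-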
-ifeval::[{project_community}==true]
-.Additional Resources
-
-* For more information on rolling updates, see the https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#rolling-update[Updating StatefulSets manual].
-endif::[]
\ No newline at end of file
diff --git a/server_installation/topics/overview.adoc b/server_installation/topics/overview.adoc
deleted file mode 100644
index 1eb70dc450..0000000000
--- a/server_installation/topics/overview.adoc
+++ /dev/null
@@ -1,15 +0,0 @@
-
-== Guide Overview
-
-The purpose of this guide is to walk through the steps that need to be completed prior to booting up the
-{project_name} server for the first time. If you just want to test drive {project_name}, it pretty much runs out of the box with its
-own embedded and local-only database. For actual deployments that are going to be run in production, you'll need to decide
-how you want to manage server configuration at runtime (standalone or domain mode), configure a shared database for
-{project_name} storage, set up encryption and HTTPS, and finally set up {project_name} to run in a cluster. This guide walks
-through each and every aspect of any pre-boot decisions and setup you must do prior to deploying the server.
-
-One thing to particularly note is that {project_name} is derived from the {appserver_name} Application Server.
-Many aspects of configuring {project_name} revolve around {appserver_name} configuration elements. Often
-this guide will direct you to documentation outside of the manual if you want to dive into more detail.
-
diff --git a/server_installation/topics/overview/recommended-reading.adoc b/server_installation/topics/overview/recommended-reading.adoc
deleted file mode 100644
index d95de9381e..0000000000
--- a/server_installation/topics/overview/recommended-reading.adoc
+++ /dev/null
@@ -1,8 +0,0 @@
-
-=== Recommended additional external documentation
-
-{project_name} is built on top of the {appserver_name} application server and its sub-projects like Infinispan (for caching) and Hibernate (for persistence).
-This guide only covers the basics of infrastructure-level configuration. It is highly recommended that you peruse the documentation
-for {appserver_name} and its sub-projects. Here is the link to the documentation:
-
-* link:{appserver_admindoc_link}[_{appserver_admindoc_name}_]
diff --git a/server_installation/topics/profiles.adoc b/server_installation/topics/profiles.adoc
deleted file mode 100644
index 2f2b13d8b3..0000000000
--- a/server_installation/topics/profiles.adoc
+++ /dev/null
@@ -1,155 +0,0 @@
-[[profiles]]
-
-== Profiles
-
-There are features in {project_name} that are not enabled by default; these include features that are not fully
-supported. In addition, there are some features that are enabled by default but that can be disabled.
-
-The features that can be enabled and disabled are:
-
-[cols="4*", options="header"]
-|===
-|Name
-|Description
-|Enabled by default
-|Support level
-
-|account2
-|New Account Management Console
-|Yes
-|Supported
-
-|account_api
-|Account Management REST API
-|Yes
-|Supported
-
-|admin_fine_grained_authz
-|Fine-Grained Admin Permissions
-|No
-|Preview
-
-|ciba
-|OpenID Connect Client Initiated Backchannel Authentication (CIBA)
-|Yes
-|Supported
-
-|client_policies
-|Add client configuration policies
-|Yes
-|Supported
-
-|client_secret_rotation
-|Enables client secret rotation for confidential clients
-|Yes
-|Preview
-
-|par
-|OAuth 2.0 Pushed Authorization Requests (PAR)
-|Yes
-|Supported
-
-|declarative_user_profile
-|Configure user profiles using a declarative style
-|No
-|Preview
-
-|docker
-|Docker Registry protocol
-|No
-|Supported
-
-|impersonation
-|Ability for admins to impersonate users
-|Yes
-|Supported
-
-|openshift_integration
-|Extension to enable securing OpenShift
-|No
-|Preview
-
-|recovery_codes
-|Recovery codes for authentication
-|No
-|Preview
-
-|scripts
-|Write custom authenticators using JavaScript
-|No
-|Preview
-
-|step_up_authentication
-|Step-up authentication
-|Yes
-|Supported
-
-|token_exchange
-|Token Exchange Service
-|No
-|Preview
-
-|upload_scripts
-|Upload scripts
-|No
-|Deprecated
-
-|web_authn
-|W3C Web Authentication (WebAuthn)
-|Yes
-|Supported
-
-|update_email
-|Update Email Workflow
-|No
-|Preview
-
-|===
-
-To enable all preview features start the server with:
-
-[source]
-----
-bin/standalone.sh|bat -Dkeycloak.profile=preview
-----
-
-You can set this permanently by creating the file `standalone/configuration/profile.properties`
-(or `domain/servers/server-one/configuration/profile.properties` for `server-one` in domain mode). Add the following to
-the file:
-
-[source]
-----
-profile=preview
-----
-
-To enable a specific feature start the server with:
-
-[source]
-----
-bin/standalone.sh|bat -Dkeycloak.profile.feature.<feature name>=enabled
-----
-
-For example, to enable Docker, use `-Dkeycloak.profile.feature.docker=enabled`.
-
-You can set this permanently in the `profile.properties` file by adding:
-
-[source]
-----
-feature.docker=enabled
-----
-
-To disable a specific feature start the server with:
-
-[source]
-----
-bin/standalone.sh|bat -Dkeycloak.profile.feature.<feature name>=disabled
-----
-
-For example, to disable Impersonation, use `-Dkeycloak.profile.feature.impersonation=disabled`.
-
-You can set this permanently in the `profile.properties` file by adding:
-
-[source]
-----
-feature.impersonation=disabled
-----
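-
-Feature settings can be combined in a single `profile.properties` file; for example, an illustrative file that enables Docker and disables Impersonation at the same time:
-
-[source]
-----
-feature.docker=enabled
-feature.impersonation=disabled
-----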
diff --git a/server_installation/topics/templates b/server_installation/topics/templates
deleted file mode 120000
index d191264115..0000000000
--- a/server_installation/topics/templates
+++ /dev/null
@@ -1 +0,0 @@
-../../topics/templates
\ No newline at end of file
diff --git a/tests/pom.xml b/tests/pom.xml
index bbc54db622..508d0017d0 100644
--- a/tests/pom.xml
+++ b/tests/pom.xml
@@ -80,12 +80,6 @@
             <version>${project.version}</version>
             <type>pom</type>
         </dependency>
-        <dependency>
-            <groupId>org.keycloak.documentation</groupId>
-            <artifactId>server-installation</artifactId>
-            <version>${project.version}</version>
-            <type>pom</type>
-        </dependency>
         <dependency>
             <groupId>org.junit.jupiter</groupId>
@@ -125,23 +119,4 @@
-    <profiles>
-        <profile>
-            <id>product</id>
-            <activation>
-                <property>
-                    <name>product</name>
-                </property>
-            </activation>
-            <dependencies>
-                <dependency>
-                    <groupId>org.keycloak.documentation</groupId>
-                    <artifactId>getting-started</artifactId>
-                    <version>${project.version}</version>
-                    <type>pom</type>
-                </dependency>
-            </dependencies>
-        </profile>
-    </profiles>
-
diff --git a/tests/src/test/java/org/keycloak/documentation/test/Guides.java b/tests/src/test/java/org/keycloak/documentation/test/Guides.java
index 72575b9dcf..8b7aa09f6a 100644
--- a/tests/src/test/java/org/keycloak/documentation/test/Guides.java
+++ b/tests/src/test/java/org/keycloak/documentation/test/Guides.java
@@ -15,14 +15,8 @@ public class Guides {
g.add("securing_apps");
g.add("server_admin");
g.add("server_development");
- g.add("server_installation");
g.add("upgrading");
- if (product) {
- g.add("getting_started");
- g.add("openshift");
- }
-
guides = g.toArray(new String[g.size()]);
}
diff --git a/topics/templates/deprecated.adoc b/topics/templates/deprecated.adoc
index 671eb73e37..526bef6c6f 100644
--- a/topics/templates/deprecated.adoc
+++ b/topics/templates/deprecated.adoc
@@ -7,6 +7,6 @@ To enable start the server with
ifdef::tech_feature_setting[]
`{tech_feature_setting}`
endif::[]
-. For more details see link:{installguide_profile_link}[{installguide_profile_name}].
+. For more details see the https://www.keycloak.org/server/features[Enabling and disabling features] guide.
====
endif::[]
diff --git a/topics/templates/document-attributes-community.adoc b/topics/templates/document-attributes-community.adoc
index 3b71bc0790..a085d6d5f1 100644
--- a/topics/templates/document-attributes-community.adoc
+++ b/topics/templates/document-attributes-community.adoc
@@ -31,7 +31,6 @@ endif::[]
:create_cmd: kubectl apply
:create_cmd_brief: kubectl
-:kc_dist: quarkus
:kc_realms_path: /realms
:kc_admins_path: /admin
:kc_js_path: /js
@@ -85,27 +84,6 @@ endif::[]
:releasenotes_link: {project_doc_base_url}/release_notes/
:releasenotes_link_latest: {project_doc_base_url_latest}/release_notes/
-:installguide_name: Server Installation and Configuration Guide
-:installguide_name_short: Server Installation
-:installguide_link: {project_doc_base_url}/server_installation/
-:installguide_link_latest: {project_doc_base_url_latest}/server_installation/
-:installguide_clustering_name: Clustering
-:installguide_clustering_link: {installguide_link}#_clustering
-:installguide_database_name: Database
-:installguide_database_link: {installguide_link}#_database
-:installguide_disablingcaching_name: Disabling caching
-:installguide_disablingcaching_link: {installguide_link}#disabling-caching
-:installguide_loadbalancer_name: Setting Up a Load Balancer or Proxy
-:installguide_loadbalancer_link: {installguide_link}#_setting-up-a-load-balancer-or-proxy
-:installguide_truststore_name: Keycloak truststore
-:installguide_truststore_link: {installguide_link}#_truststore
-:installguide_profile_name: Profiles
-:installguide_profile_link: {installguide_link}#profiles
-:installguide_stickysessions_name: Sticky sessions
-:installguide_stickysessions_link: {installguide_link}#sticky-sessions
-:installguide_troubleshooting_name: Troubleshooting
-:installguide_troubleshooting_link: {installguide_link}#troubleshooting
-
:apidocs_javadocs_name: JavaDocs Documentation
:apidocs_javadocs_link: https://www.keycloak.org/docs-api/{project_versionDoc}/javadocs/
:apidocs_adminrest_name: Administration REST API
diff --git a/topics/templates/document-attributes-product.adoc b/topics/templates/document-attributes-product.adoc
index cb20752e1b..f87e40e072 100644
--- a/topics/templates/document-attributes-product.adoc
+++ b/topics/templates/document-attributes-product.adoc
@@ -31,11 +31,10 @@
:create_cmd: oc create
:create_cmd_brief: oc
-:kc_dist: wildfly
-:kc_realms_path: /auth/realms
-:kc_admins_path: /auth/admin
-:kc_js_path: /auth/js
-:kc_base_path: /auth
+:kc_realms_path: /realms
+:kc_admins_path: /admin
+:kc_js_path: /js
+:kc_base_path:
:project_dirref: RHSSO_HOME
@@ -107,25 +106,6 @@
:ocp311docs_passthrough_route_link: {official_ocp_docs_link}/3.11/architecture/networking/routes.html#passthrough-termination
:ocp311docs_reencrypt_route_link: {official_ocp_docs_link}/3.11/architecture/networking/routes.html#re-encryption-termination
-:installguide_name: Server Installation and Configuration Guide
-:installguide_link: {project_doc_base_url}/server_installation_and_configuration_guide/
-:installguide_clustering_name: Clustering
-:installguide_clustering_link: {installguide_link}#_clustering
-:installguide_database_name: Database
-:installguide_database_link: {installguide_link}#_database
-:installguide_disablingcaching_name: Disabling caching
-:installguide_disablingcaching_link: {installguide_link}#disabling-caching
-:installguide_loadbalancer_name: Setting Up a Load Balancer or Proxy
-:installguide_loadbalancer_link: {installguide_link}#_setting-up-a-load-balancer-or-proxy
-:installguide_truststore_name: Keycloak truststore
-:installguide_truststore_link: {installguide_link}#_truststore
-:installguide_profile_name: Profiles
-:installguide_profile_link: {installguide_link}#profiles
-:installguide_stickysessions_name: Sticky sessions
-:installguide_stickysessions_link: {installguide_link}#sticky-sessions
-:installguide_troubleshooting_name: Troubleshooting
-:installguide_troubleshooting_link: {installguide_link}#troubleshooting
-
:apidocs_javadocs_name: JavaDocs Documentation
:apidocs_javadocs_link: https://access.redhat.com/webassets/avalon/d/red-hat-single-sign-on/version-{project_versionDoc}/javadocs/
:apidocs_adminrest_name: Administration REST API
diff --git a/topics/templates/release-header.adoc b/topics/templates/release-header.adoc
index a40d7ef2a9..b25a0eb590 100644
--- a/topics/templates/release-header.adoc
+++ b/topics/templates/release-header.adoc
@@ -5,9 +5,6 @@
ifeval::["{release_header_guide}" != "{gettingstarted_name_short}"]
* {gettingstarted_link}[{gettingstarted_name_short}]
endif::[]
-ifeval::["{release_header_guide}" != "{installguide_name_short}"]
-* {installguide_link}[{installguide_name_short}]
-endif::[]
ifeval::["{release_header_guide}" != "{adapterguide_name_short}"]
* {adapterguide_link}[{adapterguide_name_short}]
endif::[]
diff --git a/topics/templates/techpreview.adoc b/topics/templates/techpreview.adoc
index fa3f1b866a..f291ae556c 100644
--- a/topics/templates/techpreview.adoc
+++ b/topics/templates/techpreview.adoc
@@ -3,20 +3,11 @@ ifeval::[{tech_feature_disabled}!=false]
====
{tech_feature_name} is *Technology Preview* and is not fully supported. This feature is disabled by default.
-ifeval::["{kc_dist}" == "quarkus"]
To enable, start the server with `--features=preview`
ifdef::tech_feature_setting[]
or `--features={tech_feature_id}`
endif::[]
-endif::[]
-ifeval::["{kc_dist}" == "wildfly"]
-To enable start the server with `-Dkeycloak.profile=preview`
-ifdef::tech_feature_setting[]
-or `{tech_feature_setting}`
-endif::[]
-. For more details see link:{installguide_profile_link}[{installguide_profile_name}].
-endif::[]
====
endif::[]
ifeval::[{tech_feature_disabled}==false]
diff --git a/upgrading/topics/install_new_version-quarkus.adoc b/upgrading/topics/install_new_version-quarkus.adoc
deleted file mode 100644
index 5428412c9a..0000000000
--- a/upgrading/topics/install_new_version-quarkus.adoc
+++ /dev/null
@@ -1,15 +0,0 @@
-[[_install_new_version]]
-
-== Upgrading the {project_name} server
-
-It is important that you upgrade {project_name} server before upgrading the adapters.
-
-.Prerequisites
-* Handle any open transactions and delete the data/tx-object-store/ transaction directory.
-
-.Procedure
-. Download the new server archive
-. Move the downloaded archive to the desired location.
-. Extract the archive. This step installs a clean instance of the latest {project_name} release.
-. Copy `conf/`, `providers/` and `themes/` from the previous installation to the new installation.
-. Re-build the server with `bin/kc.[sh|bat] build` unless you are using auto-build.
\ No newline at end of file
diff --git a/upgrading/topics/install_new_version.adoc b/upgrading/topics/install_new_version.adoc
index 0e52a2ca92..de249c8f41 100644
--- a/upgrading/topics/install_new_version.adoc
+++ b/upgrading/topics/install_new_version.adoc
@@ -2,156 +2,13 @@
== Upgrading the {project_name} server
-Follow these guidelines to be sure the server upgrade is successful:
-
-* Test the upgrade in a non-production environment first to prevent any installation issues in production,
-* Upgrade the {project_name} server before upgrading the adapters. Also ensure the upgraded server is functional in production before upgrading adapters.
-
-[WARNING]
-====
-This upgrade procedure may require modification due to manual changes that are specific to your installation. For details on manual changes that might affect the upgrade, see xref:release_changes[Release-specific changes].
-====
-
-ifeval::[{project_product}==true]
-Upgrade the server from a xref:upgrade-zip[ZIP file] or an xref:rpm-upgrade[RPM] based on the method you had used for installation.
-
-[id="upgrade-zip"]
-=== Upgrading the server from a ZIP file
-endif::[]
+It is important that you upgrade {project_name} server before upgrading the adapters.
.Prerequisites
* Handle any open transactions and delete the data/tx-object-store/ transaction directory.
.Procedure
-. Download the new server archive.
+. Download the new server archive
. Move the downloaded archive to the desired location.
. Extract the archive. This step installs a clean instance of the latest {project_name} release.
-. For standalone installations, copy the `{project_dirref}/standalone/` directory from the previous installation over the
- directory in the new installation.
-+
-For domain installations, copy the `{project_dirref}/domain/` directory from the previous installation over the directory
-in the new installation.
-+
-For domain installations, create the empty directory `{project_dirref}/domain/deployments`.
-+
-NOTE: Files in the bin directory should not be overwritten by the files from previous versions. Changes should be made manually.
-
-. Copy any custom modules that have been added to the modules directory.
-. Continue with the section, xref:upgrade-script[Running the server upgrade script].
-
-ifeval::[{project_product}==true]
-
-[id="rpm-upgrade"]
-=== Upgrading the server from an RPM
-
-.Prerequisites
-* Handle any open transactions and delete the /var/opt/rh/rh-sso7/lib/keycloak/standalone/data/tx-object-store/ transaction directory.
-
-.Procedure
-
-. Subscribe to the proper repository containing {project_name}.
-+
-For Red Hat Enterprise Linux 7:
-+
- subscription-manager repos --enable=rh-sso-7.6-for-rhel-7-x86_64-rpms
-+
-For Red Hat Enterprise Linux 8:
-+
- subscription-manager repos --enable=rh-sso-7.6-for-rhel-8-x86_64-rpms
-+
-
-. Disable the older product repository for {project_name}:
-
- subscription-manager repos --disable=rh-sso-7.5-for-rhel-8-x86_64-rpms
-
-. Check the list of repositories:
-
- dnf repolist
-
-    Updating Subscription Management repositories.
-    repo id                              repo name
-    rh-sso-7.6-for-rhel-8-x86_64-rpms    Single Sign-On 7.6 for RHEL 8 x86_64 (RPMs)
-    rhel-8-for-x86_64-appstream-rpms     Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs)
-    rhel-8-for-x86_64-baseos-rpms        Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs)
-
-. Back up any modified configuration files and custom modules.
-
-. Use `dnf upgrade` to upgrade to the new Red Hat Single Sign-On version.
-+
-The RPM upgrade process does not replace any modified configuration files. Instead, this process creates .rpmnew files for the default configuration of the new Red Hat Single Sign-On version.
-
-. To activate any new features in the new release, such as new subsystems, manually merge each .rpmnew file into your existing configuration files.
-
-. Copy any custom modules that have been added to the modules directory.
-
-. Continue with the section, xref:upgrade-script[Running the server upgrade script].
-+
-[NOTE]
-====
-The {project_name} RPM server distribution uses
-
-`{project_dirref}=/opt/rh/rh-sso7/root/usr/share/keycloak`
-
-Use it when calling migration scripts below.
-====
-
-endif::[]
-
-[id="upgrade-script"]
-== Running the server upgrade script
-
-Based on your previous installation, run the appropriate upgrade script that applies to your situation:
-
-* xref:standalone-mode[Standalone mode]
-* xref:standalone-ha[Standalone high availability mode]
-* xref:domain-mode[Domain mode]
-* xref:domain-clustered[Domain-clustered mode]
-
-[id="standalone-mode"]
-=== Running the Standalone Mode upgrade script
-
-.Procedure
-
-. If you are using a different configuration file than the default one, edit the migration script to specify the new file name.
-. Stop the server.
-. Run the upgrade script:
-
- bin/jboss-cli.sh --file=bin/migrate-standalone.cli
-
-[id="standalone-ha"]
-=== Running the Standalone-High Availability Mode upgrade script
-For standalone-high availability (HA) mode, all instances must be upgraded at the same time.
-
-.Procedure
-. If you are using a different configuration file than the default one, edit the migration script to specify the new file name.
-. Stop the server.
-. Run the upgrade script:
-
- bin/jboss-cli.sh --file=bin/migrate-standalone-ha.cli
-
-[id="domain-mode"]
-=== Running the Domain Mode upgrade script
-For domain mode, all instances must be upgraded at the same time.
-
-.Procedure
-
-. If you have changed the profile name, you must edit the upgrade script to change a variable near the beginning of the script.
-. Edit the domain script to include the location of the keycloak-server.json file.
-. Stop the server.
-. Run the upgrade script on the domain controller
-
- bin/jboss-cli.sh --file=bin/migrate-domain.cli
-
-[id="domain-clustered"]
-=== Running the Domain-clustered Mode upgrade script
-For domain-clustered mode, all instances must be upgraded at the same time.
-
-.Procedure
-
-. If you have changed the profile name, you must edit the upgrade script to change a variable near the beginning of the script.
-. Edit the domain-clustered script to include the location of the keycloak-server.json file.
-. Stop the server.
-. Run the upgrade script on the domain controller only:
-
- bin/jboss-cli.sh --file=bin/migrate-domain-clustered.cli
+. Copy `conf/`, `providers/` and `themes/` from the previous installation to the new installation.
\ No newline at end of file
diff --git a/upgrading/topics/keycloak/changes-16_0_0.adoc b/upgrading/topics/keycloak/changes-16_0_0.adoc
index 773cbd8351..c120c820cf 100644
--- a/upgrading/topics/keycloak/changes-16_0_0.adoc
+++ b/upgrading/topics/keycloak/changes-16_0_0.adoc
@@ -21,13 +21,10 @@ HTTP requests. This change could lead to unexpected use of a proxy server if you
explicit proxy mappings specified in your SPI configuration. To prevent {project_name} from using those environment variables,
you can explicitly create a no proxy route for all requests as `.*;NO_PROXY`.
-For more details, see the link:{installguide_link}#_proxy_env_vars[related chapter in the {installguide_name}].
-
= Deprecated features in the {project_operator}
With this release, we have deprecated and/or marked as unsupported some features in the {project_operator}. This
-concerns the Backup CRD and the operator managed Postgres Database. For more details, please see the
-link:{installguide_link}#_operator_production_usage[related chapter in the {installguide_name}].
+concerns the Backup CRD and the operator managed Postgres Database.
= Keycloak Operator examples including unsupported Metrics extension
diff --git a/upgrading/topics/keycloak/changes-18_0_0.adoc b/upgrading/topics/keycloak/changes-18_0_0.adoc
index 8185af5f33..629ae5378f 100644
--- a/upgrading/topics/keycloak/changes-18_0_0.adoc
+++ b/upgrading/topics/keycloak/changes-18_0_0.adoc
@@ -9,7 +9,7 @@ the client scope is not automatically added to all existing clients during migra
the migration. Consider these possible actions:
- If you do not plan to use step-up authentication feature, but you rely on the `acr` claim in the token, you can disable `step_up_authentication`
- feature as described in the link:{installguide_link}#profiles[{installguide_name}]. The claim will be added with the value `1` in case of normal authentication and `0` in case of SSO authentication.
+ feature. The claim will be added with the value `1` in case of normal authentication and `0` in case of SSO authentication.
- Add `acr` client scope to your clients manually by admin REST API or admin console. This is needed especially if you want to use step-up authentication.
If you have a large number of clients in the realm and want to use `acr` claim for all of them, you can trigger some SQL similar to this against your DB.
However, remember to clear the cache or restart the server if {project_name} is already started:
@@ -43,28 +43,11 @@ The existing deployments are affected in the following ways:
There is a backwards compatibility option, which allows your application to still use the old format of the `redirect_uri` parameter.
-ifeval::["{kc_dist}" == "quarkus"]
You can enable this parameter when you start the server by entering the following command:
```
bin/kc.[sh|bat] --spi-login-protocol-openid-connect-legacy-logout-redirect-uri=true start
```
-endif::[]
-
-ifeval::["{kc_dist}" == "wildfly"]
-You can enable this parameter by including the following configuration in the `standalone-*.xml` file
-
-[source,bash,subs=+attributes]
-----
-<spi name="login-protocol">
-    <provider name="openid-connect" enabled="true">
-        <properties>
-            <property name="legacy-logout-redirect-uri" value="true"/>
-        </properties>
-    </provider>
-</spi>
-----
-endif::[]
With this configuration, you can still use the format with the `redirect_uri` parameter. Note the confirmation screen will be needed if the `id_token_hint` is omitted.
diff --git a/upgrading/topics/keycloak/changes-19_0_2.adoc b/upgrading/topics/keycloak/changes-19_0_2.adoc
index b54ad3bd60..698f79fdec 100644
--- a/upgrading/topics/keycloak/changes-19_0_2.adoc
+++ b/upgrading/topics/keycloak/changes-19_0_2.adoc
@@ -3,28 +3,11 @@ At Keycloak 18.0.0, the logout is now compatible with the new OIDC specification
While the URL parameters can now be configured to be compatible, there was still one incompatibility with Keycloak 17 and earlier releases. If the user does not provide a valid `idTokenHint`, a logout prompt appears instead of a successful logout redirect. Therefore, a new compatibility flag `suppress-logout-confirmation-screen` is introduced to suppress the logout screen.
-ifeval::["{kc_dist}" == "quarkus"]
You can enable this parameter when you start the server by entering the following command:
```
bin/kc.[sh|bat] --spi-login-protocol-openid-connect-suppress-logout-confirmation-screen=true start
```
-endif::[]
-
-ifeval::["{kc_dist}" == "wildfly"]
-You can enable this parameter by including the following configuration in the `standalone-*.xml` file
-
-[source,bash,subs=+attributes]
-----
-<spi name="login-protocol">
-    <provider name="openid-connect" enabled="true">
-        <properties>
-            <property name="suppress-logout-confirmation-screen" value="true"/>
-        </properties>
-    </provider>
-</spi>
-----
-endif::[]
With this configuration, you can still use the logout endpoint without a user prompt.
diff --git a/upgrading/topics/keycloak/upgrading.adoc b/upgrading/topics/keycloak/upgrading.adoc
index 925634d5a7..e839cab8cd 100644
--- a/upgrading/topics/keycloak/upgrading.adoc
+++ b/upgrading/topics/keycloak/upgrading.adoc
@@ -4,26 +4,12 @@
include::../prep_migration.adoc[leveloffset=1]
-ifeval::["{kc_dist}" == "quarkus"]
-include::../install_new_version-quarkus.adoc[leveloffset=1]
-endif::[]
-
-ifeval::["{kc_dist}" == "wildfly"]
include::../install_new_version.adoc[leveloffset=1]
-endif::[]
-ifeval::["{kc_dist}" == "quarkus"]
-include::../migrate_db-quarkus.adoc[leveloffset=1]
-endif::[]
-
-ifeval::["{kc_dist}" == "wildfly"]
include::../migrate_db.adoc[leveloffset=1]
-endif::[]
include::../migrate_themes.adoc[leveloffset=1]
-
include::../upgrade_adapters.adoc[leveloffset=1]
-
include::../upgrade_admin_client.adoc[leveloffset=1]
diff --git a/upgrading/topics/migrate_db-quarkus.adoc b/upgrading/topics/migrate_db-quarkus.adoc
deleted file mode 100644
index 056b480fdf..0000000000
--- a/upgrading/topics/migrate_db-quarkus.adoc
+++ /dev/null
@@ -1,49 +0,0 @@
-[[_migrate_db]]
-
-== Database migration
-
-{project_name} can automatically migrate the database schema, or you can choose to do it manually. By default the
-database is automatically migrated when you start the new installation for the first time.
-
-=== Automatic relational database migration
-
-To enable automatic upgrading of the database schema, set the migration-strategy property value to "update" for the
-default connections-jpa provider:
-
-[source,bash]
-----
-kc.[sh|bat] start --spi-connections-jpa-default-migration-strategy=update
-----
-
-When you start the server with this setting your database is automatically migrated if the database schema has changed
-in the new version.
-
-Creating an index on huge tables with millions of records can easily take a significant amount of time
-and potentially cause major service disruption on upgrades.
-For those cases, we added a threshold (the number of records) for automated index creation.
-By default, this threshold is `300000` records.
-When the number of records is higher than the threshold, the index is not created automatically,
-and a warning message appears in the server logs, including the SQL commands that can be applied manually later.
-
-To change the threshold, set the `index-creation-threshold` property value for the default `connections-liquibase` provider:
-
-[source,bash]
-----
-kc.[sh|bat] start --spi-connections-liquibase-default-index-creation-threshold=300000
-----
-
-=== Manual relational database migration
-
-To enable manual upgrading of the database schema, set the migration-strategy property value to "manual" for the default
-connections-jpa provider:
-
-[source,bash]
-----
-kc.[sh|bat] start --spi-connections-jpa-default-migration-strategy=manual
-----
-
-When you start the server with this configuration it checks if the database needs to be migrated. The required changes
-are written to an SQL file that you can review and manually run against the database. For further details on how to
-apply this file to the database, see the documentation for the relational database you're using. After the changes have
-been written to the file, the server exits.
-
diff --git a/upgrading/topics/migrate_db.adoc b/upgrading/topics/migrate_db.adoc
index e18820fd87..307b72f9ed 100644
--- a/upgrading/topics/migrate_db.adoc
+++ b/upgrading/topics/migrate_db.adoc
@@ -7,26 +7,12 @@ database is automatically migrated when you start the new installation for the f
=== Automatic relational database migration
-To enable automatic upgrading of the database schema, set the migrationStrategy property value to `update` for the
-default connectionsJpa provider:
-
-[source,xml]
-----
-<spi name="connectionsJpa">
-    <provider name="default" enabled="true">
-        <properties>
-            ...
-            <property name="migrationStrategy" value="update"/>
-        </properties>
-    </provider>
-</spi>
-----
-
-Or run this CLI command:
+To enable automatic upgrading of the database schema, set the migration-strategy property value to "update" for the
+default connections-jpa provider:
[source,bash]
----
-/subsystem=keycloak-server/spi=connectionsJpa/provider=default/:map-put(name=properties,key=migrationStrategy,value=update)
+kc.[sh|bat] start --spi-connections-jpa-legacy-migration-strategy=update
----
When you start the server with this setting your database is automatically migrated if the database schema has changed
@@ -39,49 +25,21 @@ By default, this threshold is `300000` records.
When the number of records is higher than the threshold, the index is not created automatically,
and a warning message appears in the server logs, including the SQL commands that can be applied manually later.
-To change the threshold, set the `indexCreationThreshold` property value for the default `connectionsLiquibase` provider:
-
-[source,xml]
-----
-<spi name="connectionsLiquibase">
-    <provider name="default" enabled="true">
-        <properties>
-            <property name="indexCreationThreshold" value="300000"/>
-        </properties>
-    </provider>
-</spi>
-----
-
-Or run this CLI command:
+To change the threshold, set the `index-creation-threshold` property value for the default `connections-liquibase` provider:
[source,bash]
----
-/subsystem=keycloak-server/spi=connectionsLiquibase/:add(default-provider=default)
-/subsystem=keycloak-server/spi=connectionsLiquibase/provider=default/:add(properties={indexCreationThreshold => "300000"},enabled=true)
+kc.[sh|bat] start --spi-connections-liquibase-default-index-creation-threshold=300000
----
=== Manual relational database migration
-To enable manual upgrading of the database schema, set the migrationStrategy property value to `manual` for the default
-connectionsJpa provider:
-
-[source,xml]
-----
-<spi name="connectionsJpa">
-    <provider name="default" enabled="true">
-        <properties>
-            ...
-            <property name="migrationStrategy" value="manual"/>
-        </properties>
-    </provider>
-</spi>
-----
-
-Or run this CLI command:
+To enable manual upgrading of the database schema, set the migration-strategy property value to "manual" for the default
+connections-jpa provider:
[source,bash]
----
-/subsystem=keycloak-server/spi=connectionsJpa/provider=default/:map-put(name=properties,key=migrationStrategy,value=manual)
+kc.[sh|bat] start --spi-connections-jpa-legacy-migration-strategy=manual
----
When you start the server with this configuration it checks if the database needs to be migrated. The required changes
diff --git a/upgrading/topics/rhsso/changes-76.adoc b/upgrading/topics/rhsso/changes-76.adoc
index 9d457c7ee8..4dc38b4b6b 100644
--- a/upgrading/topics/rhsso/changes-76.adoc
+++ b/upgrading/topics/rhsso/changes-76.adoc
@@ -13,7 +13,7 @@ the client scope is not automatically added to all existing clients during migra
the migration. Consider these possible actions:
- If you do not plan to use step-up authentication feature, but you rely on the `acr` claim in the token, you can disable `step_up_authentication`
- feature as described in the link:{installguide_link}#profiles[{installguide_name}]. The claim will be added with the value `1` in case of normal authentication and `0` in case of SSO authentication.
+ feature. The claim will be added with the value `1` in case of normal authentication and `0` in case of SSO authentication.
- Add `acr` client scope to your clients manually by admin REST API or admin console. This is needed especially if you want to use step-up authentication.
If you have a large number of clients in the realm and want to use `acr` claim for all of them, you can trigger some SQL similar to this against your DB.
However, remember to clear the cache or restart the server if {project_name} is already started: