Remove WildFly distribution from documentation (#1666)

* Remove WildFly distribution from documentation

Closes #1665

* Update server_admin/topics/authentication/webauthn.adoc

Co-authored-by: Pedro Igor <pigor.craveiro@gmail.com>

* Update upgrading/topics/install_new_version.adoc

Co-authored-by: Pedro Igor <pigor.craveiro@gmail.com>

* Update upgrading/topics/migrate_db.adoc

Co-authored-by: Pedro Igor <pigor.craveiro@gmail.com>

* Update upgrading/topics/migrate_db.adoc

Co-authored-by: Pedro Igor <pigor.craveiro@gmail.com>

* Update server_admin/topics/realms/ssl.adoc

* Update server_admin/topics/user-federation/ldap.adoc

Co-authored-by: Pedro Igor <pigor.craveiro@gmail.com>

* Update server_development/topics/providers.adoc

Co-authored-by: Pedro Igor <pigor.craveiro@gmail.com>

* Update server_development/topics/providers.adoc

Co-authored-by: Pedro Igor <pigor.craveiro@gmail.com>

* Remove section on client cert lookup in x509.adoc

* Update securing_apps/topics/oidc/fapi-support.adoc

Co-authored-by: Pedro Igor <pigor.craveiro@gmail.com>

* Add missing RH-SSO images by moving them to the shared images, as there will no longer be an RH-SSO-specific theme

Co-authored-by: Pedro Igor <pigor.craveiro@gmail.com>
Stian Thorgersen 2022-08-19 13:15:51 +02:00 committed by GitHub
parent 2ab7428867
commit 06bc4af50e
286 changed files with 42 additions and 15039 deletions
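The removed WildFly (`standalone.sh`) boot instructions are superseded by the Quarkus distribution's `kc.sh`/`kc.bat` scripts, which the retained docs use. A minimal sketch of booting a dev-mode server on a non-default port, matching the commands kept in the diff below (the `.../bin` path is a placeholder for your unpacked distribution directory):

```shell
# Linux/Unix: start Keycloak (Quarkus distribution) in dev mode on port 8180
.../bin/kc.sh start-dev --http-port 8180

# Windows: equivalent batch script
...\bin\kc.bat start-dev --http-port 8180
```

These are command-line fragments, not a runnable script; substitute the real path to your Keycloak installation.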


@@ -44,12 +44,6 @@
<version>${project.version}</version>
<type>pom</type>
</dependency>
<dependency>
<groupId>org.keycloak.documentation</groupId>
<artifactId>server-installation</artifactId>
<version>${project.version}</version>
<type>pom</type>
</dependency>
<dependency>
<groupId>org.keycloak.documentation</groupId>
<artifactId>release-notes</artifactId>
@@ -161,22 +155,6 @@
</resources>
</configuration>
</execution>
<execution>
<id>copy-server_installation</id>
<phase>process-resources</phase>
<goals>
<goal>copy-resources</goal>
</goals>
<configuration>
<outputDirectory>${project.build.outputDirectory}/server_installation/</outputDirectory>
<resources>
<resource>
<directory>../server_installation/target/generated-docs</directory>
<include>**/**</include>
</resource>
</resources>
</configuration>
</execution>
<execution>
<id>copy-release_notes</id>
<phase>process-resources</phase>


@@ -34,7 +34,6 @@ li a:hover {
<body>
<img src="keycloak_logo.png"/>
<ul>
<li><a href="server_installation/${masterFile}.html">Server Installation</a></li>
<li><a href="securing_apps/${masterFile}.html">Securing Apps</a></li>
<li><a href="server_admin/${masterFile}.html">Server Admin</a></li>
<li><a href="server_development/${masterFile}.html">Server Development</a></li>


@@ -6,7 +6,6 @@ There is one caveat to this. You have to run a separate {appserver_name} instanc
To boot {project_name} Server:
ifeval::["{kc_dist}" == "quarkus"]
.Linux/Unix
[source]
----
@@ -18,21 +17,6 @@ $ .../bin/kc.sh start-dev --http-port 8180
----
> ...\bin\kc.bat start-dev --http-port 8180
----
endif::[]
ifeval::["{kc_dist}" == "wildfly"]
.Linux/Unix
[source]
----
$ .../bin/standalone.sh -Djboss.socket.binding.port-offset=100
----
.Windows
[source]
----
> ...\bin\standalone.bat -Djboss.socket.binding.port-offset=100
----
endif::[]
After installing and booting both servers you should be able to access {project_name} Admin Console at http://localhost:8180/auth/admin/ and also the {appserver_name} instance at
http://localhost:8080.


@@ -1 +0,0 @@
:imagesdir: {asciidoctorconfigdir}


@@ -1 +0,0 @@
../aggregation/navbar.html


@@ -1 +0,0 @@
../aggregation/navbar-head.html

18 binary image files removed (not shown; sizes 6.7–60 KiB).


@@ -1,17 +0,0 @@
:toc: left
:toclevels: 3
:sectanchors:
:linkattrs:
:context:
include::topics/templates/document-attributes-community.adoc[]
:getting_started_guide:
= {gettingstarted_name}
:release_header_guide: {gettingstarted_name_short}
:release_header_latest_link: {gettingstarted_link_latest}
include::topics/templates/release-header.adoc[]
include::topics.adoc[]


@@ -1,26 +0,0 @@
<productname>{project_name_full}</productname>
<productnumber>{project_versionDoc}</productnumber>
<subtitle>For Use with {project_name_full} {project_versionDoc}</subtitle>
<title>{gettingstarted_name}</title>
<release>{project_versionDoc}</release>
<abstract>
<para>This guide helps you practice using {project_name} to evaluate it before you use it in a production environment. It includes instructions for installing the {project_name} server in standalone mode, creating accounts and realms for managing users and applications, and securing a {appserver_name} server application.
</para>
</abstract>
<authorgroup>
<orgname>Red Hat Customer Content Services</orgname>
</authorgroup>
<legalnotice lang="en-US" version="5.0" xmlns="http://docbook.org/ns/docbook">
<para> Copyright <trademark class="copyright"></trademark> 2021 Red Hat, Inc. </para>
<para>Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at</para>
<para>
<ulink url="http://www.apache.org/licenses/LICENSE-2.0"> http://www.apache.org/licenses/LICENSE-2.0</ulink>
</para>
<para>Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.</para>
</legalnotice>


@@ -1,13 +0,0 @@
:toc:
:toclevels: 3
:numbered:
:linkattrs:
:context:
include::topics/templates/document-attributes-product.adoc[]
:getting_started_guide:
= {gettingstarted_name}
include::topics.adoc[]


@@ -1,46 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.keycloak.documentation</groupId>
<artifactId>documentation-parent</artifactId>
<version>999-SNAPSHOT</version>
<relativePath>../</relativePath>
</parent>
<name>Getting Started</name>
<artifactId>getting-started</artifactId>
<packaging>pom</packaging>
<build>
<plugins>
<plugin>
<groupId>org.keycloak.documentation</groupId>
<artifactId>header-maven-plugin</artifactId>
<executions>
<execution>
<id>add-file-headers</id>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.asciidoctor</groupId>
<artifactId>asciidoctor-maven-plugin</artifactId>
<executions>
<execution>
<id>asciidoc-to-html</id>
</execution>
</executions>
</plugin>
<plugin>
<artifactId>maven-antrun-plugin</artifactId>
<executions>
<execution>
<id>echo-output</id>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>


@@ -1,7 +0,0 @@
ifeval::[{project_community}==true]
include::topics/introduction-keycloak.adoc[]
endif::[]
include::topics/templates/making-open-source-more-inclusive.adoc[]
include::topics/assembly-installing-standalone.adoc[]
include::topics/assembly-creating-first-realm.adoc[]
include::topics/assembly-securing-sample-app.adoc[]


@@ -1,22 +0,0 @@
// UserStory: As an RH SSO customer, I want to perform initial admin procedures
// This assembly is included in the following assemblies:
//
// <List assemblies here, each on a new line>
// Retains the context of the parent assembly if this assembly is nested within another assembly.
// See also the complementary step on the last line of this file.
ifdef::context[:parent-context: {context}]
[id="creating-first-realm_{context}"]
== Creating a realm and a user
The first use of the {project_name} admin console is to create a realm and create a user in that realm. You use that user to log in to your new realm and visit the built-in account console, to which all users have access.
include::first-realm/con-realms-apps.adoc[leveloffset=2]
include::first-realm/proc-create-realm.adoc[leveloffset=2]
include::first-realm/proc-create-user.adoc[leveloffset=2]
include::first-realm/proc-view-account.adoc[leveloffset=2]
// Restore the context to what it was before this assembly.
ifdef::parent-context[:context: {parent-context}]
ifndef::parent-context[:!context:]


@@ -1,36 +0,0 @@
// UserStory: As an RH SSO customer, I want to perform a quick setup of SSO.
// This assembly is included in the following assemblies:
//
// <List assemblies here, each on a new line>
// Retains the context of the parent assembly if this assembly is nested within another assembly.
// See also the complementary step on the last line of this file.
ifdef::context[:parent-context: {context}]
[id="installing-standalone_{context}"]
== Installing a sample instance of {project_name}
This section describes how to install and start a {project_name} server in standalone mode, set up the initial admin user, and log in to the {project_name} Admin Console.
ifeval::[{project_product}==true]
.Additional Resources
This installation is intended for practice use of {project_name}. For instructions on installation in a production environment and full details on all product features, see the other guides in the link:https://access.redhat.com/documentation/en-us/red_hat_single_sign-on/{project_versionDoc}[{project_name}] documentation.
endif::[]
ifeval::[{project_community}==true]
include::standalone/proc-installing-server-community.adoc[leveloffset=2]
endif::[]
ifeval::[{project_product}==true]
include::standalone/proc-installing-server-product.adoc[leveloffset=2]
endif::[]
include::standalone/proc-starting-server.adoc[leveloffset=2]
include::standalone/proc-creating-admin.adoc[leveloffset=2]
include::standalone/proc-logging-in-admin-console.adoc[leveloffset=2]
// Restore the context to what it was before this assembly.
ifdef::parent-context[:context: {parent-context}]
ifndef::parent-context[:!context:]


@@ -1,28 +0,0 @@
// UserStory: As an RH SSO customer, I want to complete the initial configuration of my standalone server
// This assembly is included in the following assemblies:
//
// <List assemblies here, each on a new line>
// Retains the context of the parent assembly if this assembly is nested within another assembly.
// See also the complementary step on the last line of this file.
ifdef::context[:parent-context: {context}]
[id="securing-sample-app_{context}"]
== Securing a sample application
Now that you have an admin account, a realm, and a user, you can use {project_name} to secure a sample {appserver_name} servlet application. You install a {appserver_name} client adapter, register the application in the admin console, modify the {appserver_name} instance to work with {project_name}, and use {project_name} with some sample code to secure the application.
.Prerequisites
* You need to adjust the port used by {project_name} to avoid port conflicts with {appserver_name}.
include::sample-app/proc-adjusting-ports.adoc[leveloffset=2]
include::sample-app/proc-installing-client-adapter.adoc[leveloffset=2]
include::sample-app/proc-registering-app.adoc[leveloffset=2]
include::sample-app/proc-modifying-app.adoc[leveloffset=2]
include::sample-app/proc-installing-sample-code.adoc[leveloffset=2]
// Restore the context to what it was before this assembly.
ifdef::parent-context[:context: {parent-context}]
ifndef::parent-context[:!context:]


@@ -1,12 +0,0 @@
// UserStory: As an RH SSO customer, I need to know what are the purposes of different realms
[id="realms-apps_{context}"]
= Realms and users
When you log in to the admin console, you work in a realm, which is a space where you manage objects. Two types of realms exist:
* `Master realm` - This realm was created for you when you first started {project_name}. It contains the admin account you created at the first login. You use this realm only to create other realms.
* `Other realms` - These realms are created by the admin in the master realm. In these realms, administrators create users and applications. The applications are owned by the users.
image:images/master_realm.png[Realms and applications]


@@ -1,33 +0,0 @@
// UserStory: As an RH SSO customer, I need to know how to create a realm that protects applications
[id="create-realm_{context}"]
= Creating a realm
As the admin in the master realm, you create the realms where administrators create users and applications.
.Prerequisites
* {project_name} is installed.
* You have the initial admin account for the admin console.
.Procedure
. Go to http://localhost:8080/auth/admin/ and log in to the {project_name} admin console using the admin account.
. From the *Master* menu, click *Add Realm*. When you are logged in to the master realm, this menu lists all other realms.
. Type `demo` in the *Name* field.
+
.A new realm
image:images/add-demo-realm.png[A new realm]
+
NOTE: The realm name is case-sensitive, so make note of the case that you use.
. Click *Create*.
+
The main admin console page opens with realm set to `demo`.
+
.Demo realm
image:images/demo-realm.png[Demo realm]
. Switch between managing the `master` realm and the realm you just created by clicking entries in the *Select realm* drop-down list.


@@ -1,37 +0,0 @@
// UserStory: As an RH SSO customer, I want to create a user in my first realm
[id="create-user_{context}"]
= Creating a user
In the `demo` realm, you create a new user and a temporary password for that new user.
.Procedure
. From the menu, click *Users* to open the user list page.
. On the right side of the empty user list, click *Add User* to open the Add user page.
. Enter a name in the `Username` field.
+
This is the only required field.
+
.Add user page
image:images/add-user.png[Add user page]
. Flip the *Email Verified* switch to *On* and click *Save*.
+
The management page for the new user opens.
. Click the *Credentials* tab to set a temporary password for the new user.
. Type a new password and confirm it.
. Click *Set Password* to set the user password to the new one you specified.
+
.Manage Credentials page
image:images/user-credentials.png[Manage Credentials page]
+
[NOTE]
====
This password is temporary and the user will be required to change it at the first login. If you prefer to create a password that is persistent, flip the *Temporary* switch to *Off* and click *Set Password*.
====


@@ -1,26 +0,0 @@
// UserStory: As an RH SSO customer, I want to test the login for the first user
[id="view-account_{context}"]
= Logging into the Account Console
Every user in a realm has access to the account console. You use this console to update your profile information and change your credentials. You can now test logging in with that user in the realm that you created.
.Procedure
. Log out of the admin console by opening the user menu and selecting *Sign Out*.
. Go to http://localhost:8080/auth/realms/demo/account and log in to your `demo` realm as the user that you just created.
. When you are asked to supply a new password, enter a password that you can remember.
+
.Update password
image:images/update-password.png[Update password]
+
The account console opens for this user.
+
.Account console
image:images/account-console.png[]
. Complete the required fields with any values to test using this page.
.Next steps
You are now ready for the final procedure, which is to secure a sample application that runs on {appserver_name}. See xref:securing-sample-app_{context}[Securing a sample application].


@@ -1,4 +0,0 @@
== Trying out Keycloak
This guide helps you practice using {project_name} to evaluate it before you use it in a production environment. It includes instructions for installing the {project_name} server in standalone mode, creating accounts and realms for managing users and applications, and securing a {appserver_name} server application.


@@ -1,59 +0,0 @@
[id="adjusting-ports_{context}"]
= Adjusting the port used by {project_name}
The instructions in this guide apply to running {appserver_name} on the same machine as the {project_name} server. In this situation, even though {appserver_name} is bundled with {project_name}, you cannot use {appserver_name} as an application container. You must run a separate {appserver_name} instance for your servlet application.
To avoid port conflicts, you need different ports to run {project_name} and {appserver_name}.
.Prerequisites
* You have an admin account for the admin console.
* You created a demo realm.
* You created a user in the demo realm.
.Procedure
ifeval::[{project_community}==true]
. Download WildFly from link:https://www.wildfly.org/[WildFly.org].
endif::[]
ifeval::[{project_product}==true]
. Download JBoss EAP 7.3 from the https://access.redhat.com/jbossnetwork/restricted/listSoftware.html?product=appplatform&downloadType=distributions[Red Hat customer portal].
endif::[]
. Unzip the downloaded {appserver_name}.
+
[source,bash,subs=+attributes]
----
$ unzip <filename>.zip
----
. Change to the {project_name} root directory.
. Start the {project_name} server by supplying a value for the `jboss.socket.binding.port-offset` system property. This value is added to the base value of every port opened by the {project_name} server. In this example, *100* is the value.
+
.Linux/Unix
[source,bash,subs=+attributes]
----
$ cd bin
$ ./standalone.sh -Djboss.socket.binding.port-offset=100
----
+
.Windows
[source,bash,subs=+attributes]
----
> ...\bin\standalone.bat -Djboss.socket.binding.port-offset=100
----
+
.Windows Powershell
[source,bash,subs=+attributes]
----
> ...\bin\standalone.bat -D"jboss.socket.binding.port-offset=100"
----
. Confirm that the {project_name} server is running. Go to http://localhost:8180/auth/admin/ .
+
If the admin console opens, you are ready to install a client adapter that enables {appserver_name} to work with {project_name}.


@@ -1,111 +0,0 @@
[id="installing-client-adapter_{context}"]
= Installing the {appserver_name} client adapter
When {appserver_name} and {project_name} are installed on the same machine, {appserver_name} requires some modification. To make this modification, you install a {project_name} client adapter.
.Prerequisites
* {appserver_name} is installed.
* You have a backup of the `../standalone/configuration/standalone.xml` file if you have customized this file.
.Procedure
ifeval::[{project_community}==true]
. Download the *WildFly OpenID Connect Client Adapter* distribution from link:https://www.keycloak.org/downloads.html[keycloak.org].
endif::[]
ifeval::[{project_product}==true]
. Download the *Client Adapter for EAP 7* from the https://access.redhat.com/jbossnetwork/restricted/listSoftware.html?downloadType=distributions&product=core.service.rhsso[Red Hat customer portal].
endif::[]
. Change to the root directory of {appserver_name}.
. Unzip the downloaded client adapter in this directory. For example:
+
[source,bash,subs=+attributes]
----
$ unzip <filename>.zip
----
. Change to the bin directory.
+
[source,bash,subs=+attributes]
----
$ cd bin
----
. Run the appropriate script for your platform.
+
[NOTE]
====
If you receive a `file not found`, make sure that you used `unzip` in the previous step. This method of extraction installs the files in the right place.
====
ifeval::[{project_product}==true]
+
.Linux/Unix
[source,bash,subs=+attributes]
----
$ ./jboss-cli.sh --file=adapter-elytron-install-offline.cli
----
+
.Windows
[source,bash,subs=+attributes]
----
> jboss-cli.bat --file=adapter-elytron-install-offline.cli
----
endif::[]
ifeval::[{project_community}==true]
+
.WildFly 10 on Linux/Unix
[source,bash,subs=+attributes]
----
$ ./jboss-cli.sh --file=adapter-install-offline.cli
----
+
.WildFly 10 on Windows
[source,bash,subs=+attributes]
----
> jboss-cli.bat --file=adapter-install-offline.cli
----
+
.Wildfly 11 on Linux/Unix
[source,bash,subs=+attributes]
----
$ ./jboss-cli.sh --file=adapter-elytron-install-offline.cli
----
+
.Wildfly 11 on Windows
[source,bash,subs=+attributes]
----
> jboss-cli.bat --file=adapter-elytron-install-offline.cli
----
endif::[]
+
[NOTE]
====
This script makes the necessary edits to the `.../standalone/configuration/standalone.xml` file.
====
. Start the application server.
+
.Linux/Unix
[source,bash,subs=+attributes]
----
$ ./standalone.sh
----
+
.Windows
[source,bash,subs=+attributes]
----
> ...\standalone.bat
----


@@ -1,54 +0,0 @@
[id="installing-sample-code_{context}"]
= Installing sample code to secure the application
The final procedure is to make this application secure by installing some sample code from the {quickstartRepo_link} repository. The quickstarts work with the most recent {project_name} release.
The sample code is the *app-profile-jee-vanilla* quickstart. It demonstrates how to change a Jakarta EE application that is secured with basic authentication without changing the WAR. The {project_name} client adapter subsystem changes the authentication method and injects the configuration.
.Prerequisites
You have the following installed on your machine and available in your PATH.
* Java JDK 8
* Apache Maven 3.1.1 or higher
* Git
You have a *keycloak.json* file.
.Procedure
. Make sure your {appserver_name} application server is started.
. Download the code and change directories using the following commands.
+
[source, subs="attributes"]
----
$ git clone {quickstartRepo_link}
$ cd {quickstartRepo_dir}/app-profile-jee-vanilla/config
----
. Copy the `keycloak.json` file to the current directory.
. Move one level up to the `app-profile-jee-vanilla` directory.
. Install the code using the following command.
+
[source, subs="attributes"]
----
$ mvn clean wildfly:deploy
----
. Confirm that the application installation succeeded. Go to http://localhost:8080/vanilla where a login page is displayed.
+
.Login page confirming success
image:images/vanilla.png[Login page confirming success]
. Log in using the account that you created in the demo realm.
+
.Login page to demo realm
image:images/demo-login.png[Login page to demo realm]
+
A message appears indicating you have completed a successful use of {project_name} to protect a sample {appserver_name} application. Congratulations!
+
.Complete success
image:images/success.png[Complete success]


@@ -1,57 +0,0 @@
[id="modifying-app_{context}"]
= Modifying the {appserver_name} instance
The {appserver_name} servlet application requires additional configuration before it is secured by {project_name}.
.Prerequisites
* You created a client named *vanilla* in the *demo* realm.
* You saved a template XML file for this client.
.Procedure
. Go to the `standalone/configuration` directory in your {appserver_name} root directory.
. Open the `standalone.xml` file and search for the following text:
+
[source,xml]
----
<subsystem xmlns="urn:jboss:domain:keycloak:1.1"/>
----
. Change the XML entry from self-closing to using a pair of opening and closing tags as shown here:
+
[source,xml]
----
<subsystem xmlns="urn:jboss:domain:keycloak:1.1">
</subsystem>
----
. Paste the contents of the XML template within the `<subsystem>` element, as shown in this example:
+
[source,xml]
----
<subsystem xmlns="urn:jboss:domain:keycloak:1.1">
<secure-deployment name="WAR MODULE NAME.war">
<realm>demo</realm>
<auth-server-url>http://localhost:8180/auth</auth-server-url>
<public-client>true</public-client>
<ssl-required>EXTERNAL</ssl-required>
<resource>vanilla</resource>
</secure-deployment>
</subsystem>
----
. Change `WAR MODULE NAME.war` to `vanilla.war`:
+
[source,xml]
----
<subsystem xmlns="urn:jboss:domain:keycloak:1.1">
<secure-deployment name="vanilla.war">
...
</subsystem>
----
. Reboot the application server.


@@ -1,46 +0,0 @@
[id="registering-app_{context}"]
= Registering the {appserver_name} application
You can now define and register the client in the {project_name} admin console.
.Prerequisites
* You installed a client adapter to work with {appserver_name}.
.Procedure
. Log in to the admin console with your admin account: http://localhost:8180/auth/admin/
. In the top left drop-down list, select the `Demo` realm.
. Click `Clients` in the left side menu to open the Clients page.
+
.Clients
image:images/clients.png[Clients]
. On the right side, click *Create*.
. On the Add Client dialog, create a client called *vanilla* by completing the fields as shown below:
+
.Add Client
image:images/add-client.png[Add Client]
. Click *Save*.
. On the *Vanilla* client page that appears, click the *Installation* tab.
. Select *Keycloak OIDC JSON* to generate a file that you need in a later procedure.
+
.Keycloak.json file
image:images/keycloak-json.png[Keycloak.json file]
. Click *Download* to save *Keycloak.json* in a location that you can find later.
. Select *Keycloak OIDC JBoss Subsystem XML* to generate an XML template.
+
.Template XML
image:images/client-install-selected.png[Template XML]
. Click *Download* to save a copy for use in the next procedure, which involves {appserver_name} configuration.


@@ -1,20 +0,0 @@
[id="create-admin_{context}"]
= Creating the admin account
Before you can use {project_name}, you need to create an admin account which you use to log in to the {project_name} admin console.
.Prerequisites
* You saw no errors when you started the {project_name} server.
.Procedure
. Open http://localhost:8080/auth in your web browser.
+
The welcome page opens, confirming that the server is running.
+
.Welcome page
image:images/welcome.png[Welcome page]
. Enter a username and password to create an initial admin user.


@@ -1,30 +0,0 @@
[id="installing-server-community_{context}"]
= Installing the Server
You can install the server on Linux or Windows. The server download ZIP file contains the scripts and binaries to run the {project_name} server.
.Procedure
. Download *keycloak-{project_version}.[zip|tar.gz]* from https://www.keycloak.org/downloads.html[Keycloak downloads].
. Place the file in a directory you choose.
. Unpack the ZIP file using the appropriate `unzip` utility, such as unzip, tar, or Expand-Archive.
+
.Linux/Unix
[source,bash,subs=+attributes]
----
$ unzip keycloak-{project_version}.zip
or
$ tar -xvzf keycloak-{project_version}.tar.gz
----
+
.Windows
[source,bash,subs=+attributes]
----
> Expand-Archive -Path 'C:Downloads\keycloak-{project_version}.zip' -DestinationPath 'C:\Downloads'
----


@@ -1,33 +0,0 @@
[id="installing-server-product_{context}"]
= Installing the {project_name} server
For this sample instance of {project_name}, this procedure involves installation in standalone mode. The server download ZIP file contains the scripts and binaries to run the {project_name} server. You can install the server on Linux or Windows.
.Procedure
. Go to the https://access.redhat.com/jbossnetwork/restricted/listSoftware.html?downloadType=distributions&product=core.service.rhsso[Red Hat customer portal].
. Download the {project_name} Server: *rh-sso-{project_version_base}.zip*
. Place the file in a directory you choose.
. Unpack the ZIP file using the appropriate `unzip` utility, such as unzip, tar, or Expand-Archive.
+
.Linux/Unix
[source,bash,subs=+attributes]
----
$ unzip rhsso-{project_version_base}.zip
or
$ tar -xvzf rh-sso-{project_version_base}.tar.gz
----
+
.Windows
[source,bash,subs=+attributes]
----
> Expand-Archive -Path 'C:Downloads\rhsso-{project_version_base}.zip' -DestinationPath 'C:\Downloads'
----


@@ -1,30 +0,0 @@
[id="logging-in-admin-console_{context}"]
= Logging into the admin console
After you create the initial admin account, you can log in to the admin console. In this console, you add users and register applications to be secured by {project_name}.
.Prerequisites
* You have an admin account for the admin console.
.Procedure
. Click the *Administration Console* link on the *Welcome* page or go directly to http://localhost:8080/auth/admin/ (the console URL).
+
[NOTE]
====
The Administration Console is generally referred to as the admin console for short in {project_name} documentation.
====
. Enter the username and password you created on the *Welcome* page to open the *admin console*.
+
.Admin console login screen
image:images/admin-login.png[Admin console login screen]
+
The initial screen for the admin console appears.
+
.Admin console
image:images/admin-console.png[Admin console]
.Next steps
Now that you can log into the admin console, you can begin creating realms where administrators can create users and give them access to applications. For more details, see xref:creating-first-realm_{context}[Creating a realm and a user].


@@ -1,26 +0,0 @@
[id="starting-server_{context}"]
= Starting the {project_name} server
You start the server on the system where you installed it.
.Prerequisites
* You saw no errors during the {project_name} server installation.
.Procedure
. Go to the `bin` directory of the server distribution.
. Run the `standalone` boot script.
+
.Linux/Unix
[source,bash,subs=+attributes]
----
$ cd bin
$ ./standalone.sh
----
+
.Windows
[source,bash,subs=+attributes]
----
> ...\bin\standalone.bat
----


@@ -1 +0,0 @@
../../topics/templates


@@ -1 +0,0 @@
../openshift/docinfo-footer.html


@@ -1 +0,0 @@
../openshift/docinfo.html


@@ -1 +0,0 @@
../openshift/images


@@ -1,19 +0,0 @@
:toc:
:toclevels: 3
:numbered:
:linkattrs:
include::topics/templates/document-attributes-product.adoc[]
:project_templates_url: {project_templates_base_url}/openj9
:project_templates_version: {openshift_openj9_project_templates_version}
:openshift_name: {openshift_openj9_name}
:openshift_link: {openshift_openj9_link}
:openshift_image_platforms: {openshift_openj9_platforms}
:openshift_name_other: {openshift_openjdk_name}
:openshift_link_other: {openshift_openjdk_link}
:openshift:
= {openshift_openj9_name}
include::topics.adoc[]


@@ -1,25 +0,0 @@
<productname>{project_name_full}</productname>
<productnumber>{project_versionDoc}</productnumber>
<subtitle>For use with {project_name_full} {project_versionDoc}</subtitle>
<title>{openshift_openj9_name}</title>
<release>{project_versionDoc}</release>
<abstract>
<para>This guide consists of basic information and instructions to get started with {project_name_full} {project_versionDoc} for OpenShift on OpenJ9</para>
</abstract>
<authorgroup>
<orgname>Red Hat Customer Content Services</orgname>
</authorgroup>
<legalnotice lang="en-US" version="5.0" xmlns="http://docbook.org/ns/docbook">
<para> Copyright <trademark class="copyright"></trademark> 2021 Red Hat, Inc. </para>
<para>Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at</para>
<para>
<ulink url="http://www.apache.org/licenses/LICENSE-2.0"> http://www.apache.org/licenses/LICENSE-2.0</ulink>
</para>
<para>Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.</para>
</legalnotice>

View file

@ -1,21 +0,0 @@
:toc:
:toclevels: 3
:numbered:
:linkattrs:
include::topics/templates/document-attributes-product.adoc[]
:project_templates_url: {project_templates_base_url}/openj9
:project_templates_version: {openshift_openj9_project_templates_version}
:openshift_name: {openshift_openj9_name}
:openshift_link: {openshift_openj9_link}
:openshift_image_repository: {openshift_openj9_image_repository}
:openshift_image_stream: {openshift_openj9_image_stream}
:openshift_image_platforms: {openshift_openj9_platforms}
:openshift_name_other: {openshift_openjdk_name}
:openshift_link_other: {openshift_openjdk_link}
:openshift:
= {openshift_openj9_name}
include::topics.adoc[]

View file

@ -1,46 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.keycloak.documentation</groupId>
<artifactId>documentation-parent</artifactId>
<version>999-SNAPSHOT</version>
<relativePath>../</relativePath>
</parent>
<name>Red Hat Single Sign-On for OpenShift on OpenJ9</name>
<artifactId>openshift-openj9</artifactId>
<packaging>pom</packaging>
<build>
<plugins>
<plugin>
<groupId>org.keycloak.documentation</groupId>
<artifactId>header-maven-plugin</artifactId>
<executions>
<execution>
<id>add-file-headers</id>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.asciidoctor</groupId>
<artifactId>asciidoctor-maven-plugin</artifactId>
<executions>
<execution>
<id>asciidoc-to-html</id>
</execution>
</executions>
</plugin>
<plugin>
<artifactId>maven-antrun-plugin</artifactId>
<executions>
<execution>
<id>echo-output</id>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>

View file

@ -1 +0,0 @@
../openshift/topics

View file

@ -1 +0,0 @@
../openshift/topics.adoc

View file

@ -1 +0,0 @@
:imagesdir: {asciidoctorconfigdir}

View file

@ -1 +0,0 @@
../aggregation/navbar.html

View file

@ -1 +0,0 @@
../aggregation/navbar-head.html

View file

@ -1,19 +0,0 @@
:toc:
:toclevels: 3
:numbered:
:linkattrs:
include::topics/templates/document-attributes-product.adoc[]
:project_templates_url: {project_templates_base_url}
:project_templates_version: {openshift_openjdk_project_templates_version}
:openshift_name: {openshift_openjdk_name}
:openshift_link: {openshift_openjdk_link}
:openshift_image_platforms: {openshift_openjdk_platforms}
:openshift_name_other: {openshift_openj9_name}
:openshift_link_other: {openshift_openj9_link}
:openshift:
= {openshift_name}
include::topics.adoc[]

View file

@ -1,25 +0,0 @@
<productname>{project_name_full}</productname>
<productnumber>{project_versionDoc}</productnumber>
<subtitle>For Use with {project_name_full} {project_versionDoc}</subtitle>
<title>{openshift_openjdk_name}</title>
<release>{project_versionDoc}</release>
<abstract>
<para>This guide consists of basic information and instructions to get started with {project_name_full} {project_versionDoc} for OpenShift</para>
</abstract>
<authorgroup>
<orgname>Red Hat Customer Content Services</orgname>
</authorgroup>
<legalnotice lang="en-US" version="5.0" xmlns="http://docbook.org/ns/docbook">
<para> Copyright <trademark class="copyright"></trademark> {cdate} Red Hat, Inc. </para>
<para>Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at</para>
<para>
<ulink url="http://www.apache.org/licenses/LICENSE-2.0"> http://www.apache.org/licenses/LICENSE-2.0</ulink>
</para>
<para>Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.</para>
</legalnotice>

View file

@ -1,23 +0,0 @@
:toc:
:toclevels: 3
:numbered:
:linkattrs:
include::topics/templates/document-attributes-product.adoc[]
:project_templates_url: {project_templates_base_url}
:project_templates_version: {openshift_openjdk_project_templates_version}
:openshift_name: {openshift_openjdk_name}
:openshift_link: {openshift_openjdk_link}
:openshift_image_platforms: {openshift_openjdk_platforms}
:openshift_image_repository: {openshift_openjdk_image_repository}
:openshift_image_stream: {openshift_openjdk_image_stream}
:openshift_name_other: {openshift_openj9_name}
:openshift_link_other: {openshift_openj9_link}
:openshift:
= {openshift_name}
include::topics/templates/making-open-source-more-inclusive.adoc[]
include::topics.adoc[]

View file

@ -1,46 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.keycloak.documentation</groupId>
<artifactId>documentation-parent</artifactId>
<version>999-SNAPSHOT</version>
<relativePath>../</relativePath>
</parent>
<name>Red Hat Single Sign-On for OpenShift</name>
<artifactId>openshift</artifactId>
<packaging>pom</packaging>
<build>
<plugins>
<plugin>
<groupId>org.keycloak.documentation</groupId>
<artifactId>header-maven-plugin</artifactId>
<executions>
<execution>
<id>add-file-headers</id>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.asciidoctor</groupId>
<artifactId>asciidoctor-maven-plugin</artifactId>
<executions>
<execution>
<id>asciidoc-to-html</id>
</execution>
</executions>
</plugin>
<plugin>
<artifactId>maven-antrun-plugin</artifactId>
<executions>
<execution>
<id>echo-output</id>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>

View file

@ -1,9 +0,0 @@
include::topics/introduction.adoc[leveloffset=+0]
include::topics/get_started.adoc[leveloffset=+0]
include::topics/advanced_concepts.adoc[leveloffset=+0]
include::topics/tutorials.adoc[leveloffset=+0]
include::topics/reference.adoc[leveloffset=+0]

View file

@ -1,757 +0,0 @@
== Performing advanced procedures
[role="_abstract"]
This chapter describes advanced procedures, such as setting up keystores and a truststore for the {project_name} server, creating an administrator account, as well as an overview of available {project_name} client registration methods, and guidance on configuring clustering.
=== Deploying passthrough TLS termination templates
You can deploy using these templates. They require HTTPS, JGroups keystores and the {project_name} server truststore to already exist, and therefore can be used to instantiate the {project_name} server pod using your custom HTTPS, JGroups keystores and {project_name} server truststore.
==== Preparing the deployment
.Procedure
. Log in to the OpenShift CLI with a user that holds the _cluster:admin_ role.
. Create a new project:
+
[source,bash,subs="attributes+,macros+"]
----
$ oc new-project sso-app-demo
----
. Add the `view` role to the link:{ocpdocs_default_service_accounts_link}[`default`] service account. This enables the service account to view all the resources in the sso-app-demo namespace, which is necessary for managing the cluster.
+
[source,bash,subs="attributes+,macros+"]
----
$ oc policy add-role-to-user view system:serviceaccount:$(oc project -q):default
----
[[Configuring-Keystores]]
==== Creating HTTPS and JGroups Keystores, and Truststore for the {project_name} Server
In this procedure, the *_openssl_* toolkit is used to generate a CA certificate to sign the HTTPS keystore, and create a truststore for the {project_name} server. The *_keytool_*, a package *included with the Java Development Kit*, is then used to generate self-signed certificates for these keystores.
[NOTE]
====
The {project_name} application templates using xref:../introduction/introduction.adoc#reencrypt-templates[re-encryption TLS termination] do not *require* or *expect* the HTTPS and JGroups keystores and the {project_name} server truststore to be prepared beforehand.

If you want to provision the {project_name} server using existing HTTPS / JGroups keystores, use one of the passthrough templates instead.
====
The re-encryption templates use OpenShift's internal link:{ocpdocs_serving_x509_secrets_link}[service serving x509 certificate secrets] to automatically create the HTTPS and JGroups keystores.
The {project_name} server truststore is also created automatically, containing the */var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt* CA certificate file, which is used to create these cluster certificates. Moreover, the truststore for the {project_name} server is pre-populated with the all known, trusted CA certificate files found in the Java system path.
.Prerequisites
The {project_name} application templates using passthrough TLS termination require the following to be deployed:
* An xref:create-https-keystore[HTTPS keystore] used for encryption of https traffic,
* The xref:create-jgroups-keystore[JGroups keystore] used for encryption of JGroups communications between nodes in the cluster, and
* xref:create-server-truststore[{project_name} server truststore] used for securing the {project_name} requests
[NOTE]
====
For production environments Red Hat recommends that you use your own SSL certificate purchased from a verified Certificate Authority (CA) for SSL-encrypted connections (HTTPS).
See the https://access.redhat.com/documentation/en-us/jboss_enterprise_application_platform/6.1/html-single/security_guide/index#Generate_a_SSL_Encryption_Key_and_Certificate[JBoss Enterprise Application Platform Security Guide] for more information on how to create a keystore with self-signed or purchased SSL certificates.
====
[[create-https-keystore]]
*_Create the HTTPS keystore:_*
[[generate-ca-certificate]]
.Procedure
. Generate a CA certificate. Pick and remember the password. Provide identical password, when xref:signing-csr-with-ca-certificate[signing the certificate sign request with the CA certificate] below:
+
[source,bash,subs="attributes+,macros+"]
----
$ openssl req -new -newkey rsa:4096 -x509 -keyout xpaas.key -out xpaas.crt -days 365 -subj "/CN=xpaas-sso-demo.ca"
----
. Generate a private key for the HTTPS keystore. Provide `mykeystorepass` as the keystore password:
+
[source,bash,subs="attributes+,macros+"]
----
$ keytool -genkeypair -keyalg RSA -keysize 2048 -dname "CN=secure-sso-sso-app-demo.openshift.example.com" -alias jboss -keystore keystore.jks
----
. Generate a certificate sign request for the HTTPS keystore. Provide `mykeystorepass` as the keystore password:
+
[source,bash,subs="attributes+,macros+"]
----
$ keytool -certreq -keyalg rsa -alias jboss -keystore keystore.jks -file sso.csr
----
[[signing-csr-with-ca-certificate]]
[start=4]
. Sign the certificate sign request with the CA certificate. Provide the same password that was used to xref:generate-ca-certificate[generate the CA certificate]:
+
[source,bash,subs="attributes+,macros+"]
----
$ openssl x509 -req <(printf "subjectAltName=DNS:secure-sso-sso-app-demo.openshift.example.com") -CA xpaas.crt -CAkey xpaas.key -in sso.csr -out sso.crt -days 365 -CAcreateserial
----
+
[NOTE]
====
To make the preceding command work on one line, the command includes the process substitution (`<()`) syntax. Be sure that your current shell environment supports this syntax. Otherwise, you can encounter a `syntax error near unexpected token '('` message.
====
. Import the CA certificate into the HTTPS keystore. Provide `mykeystorepass` as the keystore password. Reply `yes` to `Trust this certificate? [no]:` question:
+
[source,bash,subs="attributes+,macros+"]
----
$ keytool -import -file xpaas.crt -alias xpaas.ca -keystore keystore.jks
----
. Import the signed certificate sign request into the HTTPS keystore. Provide `mykeystorepass` as the keystore password:
+
[source,bash,subs="attributes+,macros+"]
----
$ keytool -import -file sso.crt -alias jboss -keystore keystore.jks
----
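The signing step above relies on the shell's `<()` process substitution. If you want to confirm that your shell supports it before running the full command, a minimal check that touches no certificate files is:

```shell
# <(cmd) exposes the output of cmd as a readable file path,
# so commands that expect a file argument can consume it.
cat <(printf 'process substitution works\n')
# prints: process substitution works
```

If this fails with a syntax error, run the commands from `bash` rather than a plain POSIX `sh`.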
[[create-jgroups-keystore]]
*_Generate a secure key for the JGroups keystore:_*
Provide `password` as the keystore password:
[source,bash,subs="attributes+,macros+"]
----
$ keytool -genseckey -alias secret-key -storetype JCEKS -keystore jgroups.jceks
----
[[create-server-truststore]]
*_Import the CA certificate into a new {project_name} server truststore:_*
Provide `mykeystorepass` as the truststore password. Reply `yes` to `Trust this certificate? [no]:` question:
[source,bash,subs="attributes+,macros+"]
----
$ keytool -import -file xpaas.crt -alias xpaas.ca -keystore truststore.jks
----
[[Configuring-Secrets]]
==== Creating secrets
.Procedure
You create objects called secrets that OpenShift uses to hold sensitive information, such as passwords or keystores.
. Create the secrets for the HTTPS and JGroups keystores, and {project_name} server truststore, generated in the xref:Configuring-Keystores[previous section].
+
[source,bash,subs="attributes+,macros+"]
----
$ oc create secret generic sso-app-secret --from-file=keystore.jks --from-file=jgroups.jceks --from-file=truststore.jks
----
. Link these secrets to the default service account, which is used to run {project_name} pods.
+
[source,bash,subs="attributes+,macros+"]
----
$ oc secrets link default sso-app-secret
----
[role="_additional-resources"]
.Additional resources
* link:{ocpdocs_secrets_link}[What is a secret?]
* link:{ocpdocs_default_service_accounts_link}[Default project service accounts and roles]
==== Deploying a Passthrough TLS template using the OpenShift CLI
After you create xref:Configuring-Keystores[keystores] and xref:Configuring-Secrets[secrets], deploy a passthrough TLS termination template by using the `oc` command.
===== `oc` command guidelines
In the following `oc` command, the values of *_SSO_ADMIN_USERNAME_*, *_SSO_ADMIN_PASSWORD_*, *_HTTPS_PASSWORD_*, *_JGROUPS_ENCRYPT_PASSWORD_*, and *_SSO_TRUSTSTORE_PASSWORD_* variables match the default values in the *_{project_templates_version}-https_* {project_name} application template.
For production environments, Red Hat recommends that you consult the on-site policy for your organization for guidance on generating a strong user name and password for the administrator user account of the {project_name} server, and passwords for the HTTPS and JGroups keystores, and the truststore of the {project_name} server.
Also, when you create the template, make the passwords match the passwords provided when you created the keystores. If you used a different username or password, modify the values of the parameters in your template to match your environment.
[NOTE]
====
You can determine the alias names associated with the certificate by using the following *_keytool_* commands. The *_keytool_* is a package included with the Java Development Kit.
[source,bash,subs="attributes+,macros+"]
----
$ keytool -v -list -keystore keystore.jks | grep Alias
Enter keystore password: mykeystorepass
Alias name: xpaas.ca
Alias name: jboss
----
[source,bash,subs="attributes+,macros+"]
----
$ keytool -v -list -keystore jgroups.jceks -storetype jceks | grep Alias
Enter keystore password: password
Alias name: secret-key
----
The *_SSO_ADMIN_USERNAME_*, *_SSO_ADMIN_PASSWORD_*, and the *_SSO_REALM_* template parameters in the following command are optional.
====
===== Sample `oc` command
[source,bash,subs="attributes+,macros+"]
----
$ oc new-app --template={project_templates_version}-https \
-p HTTPS_SECRET="sso-app-secret" \
-p HTTPS_KEYSTORE="keystore.jks" \
-p HTTPS_NAME="jboss" \
-p HTTPS_PASSWORD="mykeystorepass" \
-p JGROUPS_ENCRYPT_SECRET="sso-app-secret" \
-p JGROUPS_ENCRYPT_KEYSTORE="jgroups.jceks" \
-p JGROUPS_ENCRYPT_NAME="secret-key" \
-p JGROUPS_ENCRYPT_PASSWORD="password" \
-p SSO_ADMIN_USERNAME="admin" \
-p SSO_ADMIN_PASSWORD="redhat" \
-p SSO_REALM="demorealm" \
-p SSO_TRUSTSTORE="truststore.jks" \
-p SSO_TRUSTSTORE_PASSWORD="mykeystorepass" \
-p SSO_TRUSTSTORE_SECRET="sso-app-secret"
--> Deploying template "openshift/{project_templates_version}-https" to project sso-app-demo
{project_name} {project_version} (Ephemeral with passthrough TLS)
---------
An example {project_name} 7 application. For more information about using this template, see \https://github.com/jboss-openshift/application-templates.
A new {project_name} service has been created in your project. The admin username/password for accessing the master realm via the {project_name} console is admin/redhat. Please be sure to create the following secrets: "sso-app-secret" containing the keystore.jks file used for serving secure content; "sso-app-secret" containing the jgroups.jceks file used for securing JGroups communications; "sso-app-secret" containing the truststore.jks file used for securing {project_name} requests.
* With parameters:
* Application Name=sso
* Custom http Route Hostname=
* Custom https Route Hostname=
* Server Keystore Secret Name=sso-app-secret
* Server Keystore Filename=keystore.jks
* Server Keystore Type=
* Server Certificate Name=jboss
* Server Keystore Password=mykeystorepass
* Datasource Minimum Pool Size=
* Datasource Maximum Pool Size=
* Datasource Transaction Isolation=
* JGroups Secret Name=sso-app-secret
* JGroups Keystore Filename=jgroups.jceks
* JGroups Certificate Name=secret-key
* JGroups Keystore Password=password
* JGroups Cluster Password=yeSppLfp # generated
* ImageStream Namespace=openshift
* {project_name} Administrator Username=admin
* {project_name} Administrator Password=redhat
* {project_name} Realm=demorealm
* {project_name} Service Username=
* {project_name} Service Password=
* {project_name} Trust Store=truststore.jks
* {project_name} Trust Store Password=mykeystorepass
* {project_name} Trust Store Secret=sso-app-secret
* Container Memory Limit=1Gi
--> Creating resources ...
service "sso" created
service "secure-sso" created
service "sso-ping" created
route "sso" created
route "secure-sso" created
deploymentconfig "sso" created
--> Success
Run 'oc status' to view your app.
----
[role="_additional-resources"]
.Additional resources
* xref:../introduction/introduction.adoc#passthrough-templates[Passthrough TLS Termination]
[[advanced-concepts-sso-hostname-spi-setup]]
=== Customizing the Hostname for the {project_name} Server
The hostname SPI introduced a flexible way to configure the hostname for the {project_name} server.
The default hostname provider is `default`. This provider offers enhanced functionality over the original `request` provider, which is now deprecated. Without additional settings, it uses the request headers to determine the hostname, similarly to the original `request` provider.
For configuration options of the `default` provider, refer to the {installguide_link}#_hostname[{installguide_name}].
The `frontendUrl` option can be configured via `SSO_FRONTEND_URL` environment variable.
[NOTE]
For backward compatibility, the `SSO_FRONTEND_URL` setting is ignored if `SSO_HOSTNAME` is also set.
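As a sketch of setting the frontend URL on an existing deployment config (the name `dc/sso` and the URL are example values; adjust them to your environment):

```
oc set env dc/sso -e SSO_FRONTEND_URL="https://sso.example.com/auth"
```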
Another hostname provider is `fixed`, which supports configuring a fixed hostname. A fixed hostname ensures that only valid hostnames can be used and allows internal applications to invoke the {project_name} server through an alternative URL.
.Procedure
Run the following commands to set the `fixed` hostname SPI provider for the {project_name} server:
. Deploy the {project_openshift_product_name} image with *_SSO_HOSTNAME_* environment variable set to the desired hostname of the {project_name} server.
+
[source,yaml,subs="verbatim,macros,attributes"]
----
$ oc new-app --template={project_templates_version}-x509-https \
-p SSO_HOSTNAME="rh-sso-server.openshift.example.com"
----
. Identify the name of the route for the {project_name} service.
+
[source,yaml,subs="verbatim,macros,attributes"]
----
$ oc get routes
NAME HOST/PORT
sso sso-sso-app-demo.openshift.example.com
----
. Change the `host:` field to match the hostname specified as the value of the *_SSO_HOSTNAME_* environment variable above.
+
[NOTE]
====
Adjust the `rh-sso-server.openshift.example.com` value in the following command as necessary.
====
+
----
$ oc patch route/sso --type=json -p '[{"op": "replace", "path": "/spec/host", "value": "rh-sso-server.openshift.example.com"}]'
----
+
If successful, the previous command will return the following output:
+
----
route "sso" patched
----
[[sso-connecting-to-an-external-database]]
=== Connecting to an external database
{project_name} can be configured to connect to a database that is external to the OpenShift cluster. To achieve this, modify the `sso-{database name}` Endpoints object to point to the proper address. The procedure is described in the link:{ocpdocs_ingress_service_external_ip_link}[OpenShift manual].

The easiest way to get started is to deploy {project_name} from a template and then modify the Endpoints object. You might also need to update some of the datasource configuration variables in the DeploymentConfig. When you are done, roll out a new deployment.
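As a concrete sketch, assuming a template-deployed PostgreSQL service named `sso-postgresql` and an external database host at `10.0.0.5:5432` (both example values), the replacement Endpoints object could look like this:

```yaml
# Hypothetical example: point the sso-postgresql service at an
# external database host. The Endpoints name must match the
# service name, and the service must not define a selector,
# otherwise the endpoints controller overwrites manual entries.
apiVersion: v1
kind: Endpoints
metadata:
  name: sso-postgresql
subsets:
  - addresses:
      - ip: 10.0.0.5   # external database host (example value)
    ports:
      - port: 5432     # PostgreSQL port
```

Apply the modified object with `oc replace -f` and then roll out a new deployment.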
[[sso-using-custom-jdbc-driver]]
=== Using Custom JDBC Driver
To connect to any database, the JDBC driver for that database must be present and {project_name} must be configured
properly. Currently, the only JDBC driver available in the image is the PostgreSQL JDBC driver. For any other
database, you need to extend the {project_name} image with a custom JDBC driver and a CLI script to register
it and set up the connection properties. The following steps illustrate how to do that, taking the MariaDB driver
as an example. Update the example for other database drivers accordingly.
.Procedure
. Create an empty directory.
. Download the JDBC driver binaries into this directory.
. Create a new `Dockerfile` file in this directory with the following contents. For other databases, replace `mariadb-java-client-2.5.4.jar` with the filename of the respective driver:
+
[source,subs="attributes+,macros+,+quotes"]
----
*FROM* {openshift_image_repository}:latest
*COPY* sso-extensions.cli /opt/eap/extensions/
*COPY* mariadb-java-client-2.5.4.jar /opt/eap/extensions/jdbc-driver.jar
----
. Create a new `sso-extensions.cli` file in this directory with the following contents. Update the values of the variables in italics according to the deployment needs:
+
[source,bash,subs="attributes+,macros+,+quotes"]
----
batch
set DB_DRIVER_NAME=_mariadb_
set DB_USERNAME=_username_
set DB_PASSWORD=_password_
set DB_DRIVER=_org.mariadb.jdbc.Driver_
set DB_XA_DRIVER=_org.mariadb.jdbc.MariaDbDataSource_
set DB_JDBC_URL=_jdbc:mariadb://jdbc-host/keycloak_
set DB_EAP_MODULE=_org.mariadb_
set FILE=/opt/eap/extensions/jdbc-driver.jar
module add --name=$DB_EAP_MODULE --resources=$FILE --dependencies=javax.api,javax.resource.api
/subsystem=datasources/jdbc-driver=$DB_DRIVER_NAME:add( \
driver-name=$DB_DRIVER_NAME, \
driver-module-name=$DB_EAP_MODULE, \
driver-class-name=$DB_DRIVER, \
driver-xa-datasource-class-name=$DB_XA_DRIVER \
)
/subsystem=datasources/data-source=KeycloakDS:remove()
/subsystem=datasources/data-source=KeycloakDS:add( \
jndi-name=java:jboss/datasources/KeycloakDS, \
enabled=true, \
use-java-context=true, \
connection-url=$DB_JDBC_URL, \
driver-name=$DB_DRIVER_NAME, \
user-name=$DB_USERNAME, \
password=$DB_PASSWORD \
)
run-batch
----
. In this directory, build your image by typing the following command, replacing `project/name:tag` with an arbitrary name. `docker` can be used instead of `podman`.
+
[source,bash,subs="attributes+,macros+,+quotes"]
----
podman build -t docker-registry-default/project/name:tag .
----
. After the build finishes, push your image to the registry used by OpenShift to deploy your image. Refer to the link:{ocpdocs_cluster_local_registry_access_link}[OpenShift guide] for details.
[[sso-administrator-setup]]
=== Creating the Administrator Account for {project_name} Server
{project_name} does not provide any pre-configured management account out of the box. This administrator account is necessary for logging into the `master` realm's management console and performing server maintenance operations such as creating realms or users or registering applications intended to be secured by {project_name}.
The administrator account can be created:
* By providing values for the xref:sso-admin-template-parameters[*_SSO_ADMIN_USERNAME_* and *_SSO_ADMIN_PASSWORD_* parameters], when deploying the {project_name} application template, or
* By xref:sso-admin-remote-shell[a remote shell session to particular {project_name} pod], if the {project_openshift_product_name} image is deployed without an application template.
[NOTE]
====
{project_name} allows an initial administrator account to be created by the link:{project_doc_base_url}/getting_started_guide/index#create-admin_[Welcome Page] web form, but only if the Welcome Page is accessed from localhost; this method of administrator account creation is not applicable for the {project_openshift_product_name} image.
====
[[sso-admin-template-parameters]]
==== Creating the Administrator Account using template parameters
When deploying {project_name} application template, the *_SSO_ADMIN_USERNAME_* and *_SSO_ADMIN_PASSWORD_* parameters denote the username and password of the {project_name} server's administrator account to be created for the `master` realm.
*Both of these parameters are required.* If they are not specified, they are automatically generated and displayed as an OpenShift instructional message when the template is instantiated.
The lifespan of the {project_name} server's administrator account depends upon the storage type used to store the {project_name} server's database:
* For an in-memory database mode (*_{project_templates_version}-https_* and *_{project_templates_version}-x509-https_* templates), the account exists throughout the lifecycle of the particular {project_name} pod (stored account data is lost upon pod destruction),
* For an ephemeral database mode (*_{project_templates_version}-postgresql_* templates), the account exists throughout the lifecycle of the database pod. Even if the {project_name} pod is destroyed, the stored account data is preserved, under the assumption that the database pod is still running,
* For persistent database mode (*_{project_templates_version}-postgresql-persistent_* and *_{project_templates_version}-x509-postgresql-persistent_* templates), the account exists throughout the lifecycle of the persistent medium used to hold the database data. This means that the stored account data is preserved even when both the {project_name} and the database pods are destroyed.
It is a common practice to deploy an {project_name} application template to get the corresponding OpenShift deployment config for the application, and then reuse that deployment config multiple times (every time a new {project_name} application needs to be instantiated).
In the case of *ephemeral or persistent database mode*, after creating the {project_name} server's administrator account, remove the *_SSO_ADMIN_USERNAME_* and *_SSO_ADMIN_PASSWORD_* variables from the deployment config before deploying new {project_name} applications.
.Procedure
Run the following commands to prepare the previously created deployment config of the {project_name} application for reuse after the administrator account has been created:
. Identify the deployment config of the {project_name} application.
+
[source,bash,subs="attributes+,macros+"]
----
$ oc get dc -o name
deploymentconfig/sso
deploymentconfig/sso-postgresql
----
. Clear the *_SSO_ADMIN_USERNAME_* and *_SSO_ADMIN_PASSWORD_* variables setting.
+
[source,bash,subs="attributes+,macros+"]
----
$ oc set env dc/sso \
-e SSO_ADMIN_USERNAME="" \
-e SSO_ADMIN_PASSWORD=""
----
[[sso-admin-remote-shell]]
==== Creating the Administrator Account via a remote shell session to {project_name} Pod
You use the following commands to create an administrator account for the `master` realm of the {project_name} server, when deploying the {project_openshift_product_name} image directly from the image stream without using a template.
.Prerequisite
* {project_name} application pod has been started.
.Procedure
. Identify the {project_name} application pod.
+
[source,bash,subs="attributes+,macros+"]
----
$ oc get pods
NAME READY STATUS RESTARTS AGE
sso-12-pt93n 1/1 Running 0 1m
sso-postgresql-6-d97pf 1/1 Running 0 2m
----
. Open a remote shell session to the {project_openshift_product_name} container.
+
[source,bash,subs="attributes+,macros+"]
----
$ oc rsh sso-12-pt93n
sh-4.2$
----
. Create the {project_name} server administrator account for the `master` realm at the command line with the `add-user-keycloak.sh` script.
+
[source,bash,subs="attributes+,macros+"]
----
sh-4.2$ cd /opt/eap/bin/
sh-4.2$ ./add-user-keycloak.sh \
-r master \
-u sso_admin \
-p sso_password
Added 'sso_admin' to '/opt/eap/standalone/configuration/keycloak-add-user.json', restart server to load user
----
+
[NOTE]
====
The 'sso_admin' / 'sso_password' credentials in the example above are for demonstration purposes only. Refer to the password policy applicable within your organization for guidance on how to create a secure user name and password.
====
. Restart the underlying JBoss EAP server instance to load the newly added user account. Wait for the server to restart properly.
+
[source,bash,subs="attributes+,macros+"]
----
sh-4.2$ ./jboss-cli.sh --connect ':reload'
{
"outcome" => "success",
"result" => undefined
}
----
+
[WARNING]
====
When restarting the server it is important to restart just the JBoss EAP process within the running {project_name} container, and not the whole container. This is because restarting the whole container will recreate it from scratch, without the {project_name} server administration account for the `master` realm.
====
. Log in to the `master` realm's Admin Console of the {project_name} server using the credentials created in the steps above. In the browser, navigate to *\http://sso-<project-name>.<hostname>/auth/admin* for the {project_name} web server, or to *\https://secure-sso-<project-name>.<hostname>/auth/admin* for the encrypted {project_name} web server, and specify the user name and password used to create the administrator user.
[role="_additional-resources"]
.Additional resources
* xref:../introduction/introduction.adoc#sso-templates[Templates for use with this software]
=== Customizing the default behavior of the {project_name} image
You can change the default behavior of the {project_name} image such as enabling TechPreview features or enabling debugging. This section describes how to make this change by using the JAVA_OPTS_APPEND variable.
.Prerequisites
This procedure assumes that the {project_openshift_product_name} image has been previously xref:Example-Deploying-SSO[deployed using one of the following templates:]
* *_{project_templates_version}-postgresql_*
* *_{project_templates_version}-postgresql-persistent_*
* *_{project_templates_version}-x509-postgresql-persistent_*
.Procedure
You can use the OpenShift web console or the CLI to change the default behavior.
If you use the OpenShift web console, you add the JAVA_OPTS_APPEND variable to the sso deployment config. For example, to enable TechPreview features, you set the variable as follows:
[source,bash,subs=+attributes]
----
JAVA_OPTS_APPEND="-Dkeycloak.profile=preview"
----
If you use the CLI, use the following commands to enable TechPreview features if the {project_name} pod was deployed using one of the templates mentioned under Prerequisites.
. Scale down the {project_name} pod:
+
[source,bash,subs=+attributes]
----
$ oc get dc -o name
deploymentconfig/sso
deploymentconfig/sso-postgresql
$ oc scale --replicas=0 dc sso
deploymentconfig "sso" scaled
----
+
[NOTE]
====
In the preceding command, ``sso-postgresql`` appears because a PostgreSQL template was used to deploy the {project_openshift_product_name} image.
====
. Edit the deployment config to set the JAVA_OPTS_APPEND variable. For example, to enable TechPreview features, you set the variable as follows:
+
[source,bash,subs=+attributes]
----
oc env dc/sso -e "JAVA_OPTS_APPEND=-Dkeycloak.profile=preview"
----
. Scale up the {project_name} pod:
+
[source,bash,subs=+attributes]
----
$ oc scale --replicas=1 dc sso
deploymentconfig "sso" scaled
----
. Test a TechPreview feature of your choice.
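As a quick sanity check, you can also confirm that the variable was applied to the deployment config. The output below is an illustration of what to expect, not a guaranteed transcript:

[source,bash,subs=+attributes]
----
$ oc set env dc/sso --list
# deploymentconfigs/sso
JAVA_OPTS_APPEND=-Dkeycloak.profile=preview
----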
=== Deployment process
Once deployed, the *_{project_templates_version}-https_* and *_{project_templates_version}-x509-https_* templates create a single pod that contains both the database and the {project_name} servers. The *_{project_templates_version}-postgresql_*, *_{project_templates_version}-postgresql-persistent_*, and *_{project_templates_version}-x509-postgresql-persistent_* templates create two pods, one for the database server and one for the {project_name} web server.
After the {project_name} web server pod has started, it can be accessed at its custom configured hostnames, or at the default hostnames:
* *\http://sso-_<project-name>_._<hostname>_/auth/admin*: for the {project_name} web server, and
* *\https://secure-sso-_<project-name>_._<hostname>_/auth/admin*: for the encrypted {project_name} web server.
Use the xref:sso-administrator-setup[administrator user credentials] to log in into the `master` realm's Admin Console.
[[SSO-Clients]]
=== {project_name} clients
Clients are {project_name} entities that request user authentication. A client can be an application requesting {project_name} to provide user authentication, or it can be making requests for access tokens to start services on behalf of an authenticated user. See the link:{project_doc_base_url}/server_administration_guide/index#assembly-managing-clients_server_administration_guide[Managing Clients chapter of the {project_name} documentation] for more information.
{project_name} provides link:{project_doc_base_url}/server_administration_guide/clients#oidc_clients[OpenID-Connect] and link:{project_doc_base_url}/server_administration_guide/index#client-saml-configuration[SAML] client protocols.
OpenID-Connect is the preferred protocol and uses three different access types:
- *public*: Useful for JavaScript applications that run directly in the browser and require no server configuration.
- *confidential*: Useful for server-side clients, such as EAP web applications, that need to perform a browser login.
- *bearer-only*: Useful for back-end services that allow bearer token requests.
You must specify the client type in the *<auth-method>* element of the application *web.xml* file. This file is read by the image at deployment. Set the value of the *<auth-method>* element to:
* *KEYCLOAK* for the OpenID Connect client.
* *KEYCLOAK-SAML* for the SAML client.
The following is an example snippet for the application *web.xml* to configure an OIDC client:
[source,bash,subs="attributes+,macros+"]
----
...
<login-config>
<auth-method>KEYCLOAK</auth-method>
</login-config>
...
----
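For a SAML client, the corresponding snippet uses the *KEYCLOAK-SAML* value:

[source,bash,subs="attributes+,macros+"]
----
...
<login-config>
        <auth-method>KEYCLOAK-SAML</auth-method>
</login-config>
...
----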
[[Auto-Man-Client-Reg]]
==== Automatic and manual {project_name} client registration methods
A client application can be automatically registered to an {project_name} realm by using credentials passed in variables specific to the *_eap64-sso-s2i_*, *_eap71-sso-s2i_*, and *_datavirt63-secure-s2i_* templates.
Alternatively, you can manually register the client application by configuring and exporting the {project_name} client adapter and including it in the client application configuration.
===== Automatic {project_name} client registration
Automatic {project_name} client registration is determined by {project_name} environment variables specific to the *_eap64-sso-s2i_*, *_eap71-sso-s2i_*, and *_datavirt63-secure-s2i_* templates. The {project_name} credentials supplied in the template are then used to register the client to the {project_name} realm during deployment of the client application.
The {project_name} environment variables included in the *_eap64-sso-s2i_*, *_eap71-sso-s2i_*, and *_datavirt63-secure-s2i_* templates are:
[cols="2*", options="header"]
|===
|Variable
|Description
|*_HOSTNAME_HTTP_*
|Custom hostname for the http service route. Leave blank to use the default hostname, <application-name>.<project>.<default-domain-suffix>.
|*_HOSTNAME_HTTPS_*
|Custom hostname for the https service route. Leave blank to use the default hostname, <application-name>.<project>.<default-domain-suffix>.
|*_SSO_URL_*
|The {project_name} web server authentication address: $$https://secure-sso-$$_<project-name>_._<hostname>_/auth
|*_SSO_REALM_*
|The {project_name} realm created for this procedure.
|*_SSO_USERNAME_*
|The name of the _realm management user_.
|*_SSO_PASSWORD_*
| The password of the user.
|*_SSO_PUBLIC_KEY_*
|The public key generated by the realm. It is located in the *Keys* tab of the *Realm Settings* in the {project_name} console.
|*_SSO_BEARER_ONLY_*
|If set to *true*, the OpenID Connect client is registered as bearer-only.
|*_SSO_ENABLE_CORS_*
|If set to *true*, the {project_name} adapter enables Cross-Origin Resource Sharing (CORS).
|===
If the {project_name} client uses the SAML protocol, the following additional variables need to be configured:
[cols="2*", options="header"]
|===
|Variable
|Description
|*_SSO_SAML_KEYSTORE_SECRET_*
|Secret to use for access to SAML keystore. The default is _sso-app-secret_.
|*_SSO_SAML_KEYSTORE_*
|Keystore filename in the SAML keystore secret. The default is _keystore.jks_.
|*_SSO_SAML_KEYSTORE_PASSWORD_*
|Keystore password for SAML. The default is _mykeystorepass_.
|*_SSO_SAML_CERTIFICATE_NAME_*
|Alias for keys/certificate to use for SAML. The default is _jboss_.
|===
See xref:Example-EAP-Auto[Example Workflow: Automatically Registering EAP Application in {project_name} with OpenID-Connect Client] for an end-to-end example of the automatic client registration method using an OpenID-Connect client.
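For illustration, the environment variables above can be supplied as template parameters at deployment time. The command below is a sketch; the parameter values shown are placeholders, not defaults:

[source,bash,subs="attributes+,macros+"]
----
$ oc new-app --template=eap71-sso-s2i \
  -p SSO_URL=https://secure-sso-sso-app-demo.openshift.example.com/auth \
  -p SSO_REALM=demo \
  -p SSO_USERNAME=mgmtuser \
  -p SSO_PASSWORD=mgmtpassword
----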
===== Manual {project_name} client registration
Manual {project_name} client registration is determined by the presence of a deployment file in the client application's _../configuration/_ directory. These files are exported from the client adapter in the {project_name} web console. The name of this file is different for OpenID-Connect and SAML clients:
[horizontal]
*OpenID-Connect*:: _../configuration/secure-deployments_
*SAML*:: _../configuration/secure-saml-deployments_
These files are copied to the {project_name} adapter configuration section of the _standalone-openshift.xml_ file when the application is deployed.
There are two methods for passing the {project_name} adapter configuration to the client application:
* Modify the deployment file to contain the {project_name} adapter configuration so that it is included in the _standalone-openshift.xml_ file at deployment, or
* Manually include the OpenID-Connect _keycloak.json_ file, or the SAML _keycloak-saml.xml_ file in the client application's *../WEB-INF* directory.
See xref:Example-EAP-Manual[Example Workflow: Manually Configure an Application to Use {project_name} Authentication, Using SAML Client] for an end-to-end example of the manual {project_name} client registration method using a SAML client.
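For reference, a minimal _keycloak.json_ adapter configuration might look like the following. The realm, resource, URL, and secret values are illustrative only; use the file exported from your own {project_name} client:

[source,json,subs="attributes+,macros+"]
----
{
  "realm": "demo",
  "auth-server-url": "https://secure-sso-sso-app-demo.openshift.example.com/auth",
  "ssl-required": "external",
  "resource": "demo-client",
  "credentials": {
    "secret": "0bb8c399-2501-4fcd-a183-68ac5132868d"
  }
}
----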
=== Using {project_name} vault with OpenShift secrets
Several fields in the {project_name} administration support obtaining the value
of a secret from an external vault, see link:{adminguide_link}#_vault-administration[{adminguide_name}].
The following example shows how to set up the file-based plaintext vault in OpenShift
and set it up to be used for obtaining an SMTP password.
.Procedure
. Specify a directory for the vault using the *_SSO_VAULT_DIR_* environment variable.
You can introduce the *_SSO_VAULT_DIR_* environment variable directly in the environment in your deployment configuration. It can also be included in the template by adding the following snippets at the appropriate places in the template:
+
[source,json,subs="attributes+,macros+"]
----
"parameters": [
...
{
"displayName": "RH-SSO Vault Secret directory",
"description": "Path to the RH-SSO Vault directory.",
"name": "SSO_VAULT_DIR",
"value": "",
"required": false
}
...
]
env: [
...
{
"name": "SSO_VAULT_DIR",
"value": "${SSO_VAULT_DIR}"
}
...
]
----
+
[NOTE]
====
The file-based plaintext vault provider is configured only when the
*_SSO_VAULT_DIR_* environment variable is set.
====
. Create a secret in your OpenShift cluster:
+
[source,bash,subs="attributes+,macros+"]
----
$ oc create secret generic rhsso-vault-secrets --from-literal=master_smtp-password=mySMTPPsswd
----
. Mount a volume to your deployment config using the `${SSO_VAULT_DIR}` as the path.
For a deployment that is already running:
+
[source,bash,subs="attributes+,macros+"]
----
oc set volume dc/sso --add --mount-path=${SSO_VAULT_DIR} --secret-name=rhsso-vault-secrets
----
. After a pod is created, you can use a customized string within your {project_name}
configuration to refer to the secret. For example, to use the `mySMTPPsswd` secret
created in this procedure, specify `${vault.smtp-password}` as the SMTP password
within the `master` realm; it is replaced by `mySMTPPsswd` when used.
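To verify that the secret was mounted as expected, you can list the vault directory inside the running pod. The command below is a sketch; the resource name follows the deployment used in this procedure:

[source,bash,subs="attributes+,macros+"]
----
$ oc rsh dc/sso ls ${SSO_VAULT_DIR}
master_smtp-password
----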
=== Limitations
OpenShift does not currently accept OpenShift role mapping from external providers. If {project_name} is used as an authentication gateway for OpenShift, users created in {project_name} must have the roles added using the OpenShift Administrator `oc adm policy` command.
For example, to allow an {project_name}-created user to view a project namespace in OpenShift:
[source,bash,subs="attributes+,macros+"]
----
$ oc adm policy add-role-to-user view <pass:quotes[_user-name_]> -n <pass:quotes[_project-name_]>
----

== Configuring {project_openshift_product_name}
[id="image-streams-applications-templates"]
=== Using the {project_openshift_product_name} Image Streams and application templates
[role="_abstract"]
Red Hat JBoss Middleware for OpenShift images are pulled on demand from the secured Red Hat Registry: link:https://catalog.redhat.com/[registry.redhat.io], which requires authentication. To retrieve content, you need to log in to the registry using your Red Hat account.
To consume container images from *_registry.redhat.io_* in shared environments such as OpenShift, it is recommended for an administrator to use a Registry Service Account, also referred to as authentication tokens, in place of an individual person's Red Hat Customer Portal credentials.
.Procedure
. To create a Registry Service Account, navigate to the link:https://access.redhat.com/terms-based-registry/[Registry Service Account Management Application], and log in if necessary.
. From the *_Registry Service Accounts_* page, click *_Create Service Account_*.
. Provide a name for the Service Account, for example *_registry.redhat.io-sa_*. It will be prepended with a fixed, random string.
.. Enter a description for the Service Account, for example *_Service account to consume container images from registry.redhat.io._*.
.. Click *_Create_*.
. After the Service Account is created, click the *_registry.redhat.io-sa_* link in the *_Account name_* column of the table presented on the *_Registry Service Accounts_* page.
. Finally, click the *_OpenShift Secret_* tab, and perform all steps listed on that page.
See the link:https://access.redhat.com/RegistryAuthentication[Red Hat Container Registry Authentication] article for more information.
.Procedure
. Ensure that you are logged in as a cluster administrator or a user with project administrator access to the global `openshift` project:
. Choose a command based on your version of OpenShift Container Platform.
.. If you are running an OpenShift Container Platform v3 based cluster instance, perform the following on one of your master hosts:
+
[source,bash,subs="attributes+,macros+"]
----
$ oc login -u system:admin
----
.. If you are running an OpenShift Container Platform v4 based cluster instance, link:https://docs.openshift.com/container-platform/latest/cli_reference/openshift_cli/getting-started-cli.html#cli-logging-in_cli-developer-commands[log in to the CLI] as the link:https://docs.openshift.com/container-platform/latest/authentication/remove-kubeadmin.html#understanding-kubeadmin_removing-kubeadmin[kubeadmin] user:
+
[source,bash,subs="attributes+,macros+"]
----
$ oc login -u kubeadmin -p password \https://openshift.example.com:6443
----
. Run the following commands to update the core set of {project_name} {project_version} resources for OpenShift in the `openshift` project:
+
[source,bash,subs="attributes+,macros+"]
----
$ for resource in {project_templates_version}-image-stream.json \
{project_templates_version}-https.json \
{project_templates_version}-postgresql.json \
{project_templates_version}-postgresql-persistent.json \
{project_templates_version}-x509-https.json \
{project_templates_version}-x509-postgresql-persistent.json
do
oc replace -n openshift --force -f \
{project_templates_url}/$\{resource}
done
----
. Run the following command to install the {project_name} {project_version} OpenShift image streams in the `openshift` project:
+
[source,bash,subs="attributes+,macros+"]
----
$ oc -n openshift import-image {openshift_image_repository}:{project_latest_image_tag} --from=registry.redhat.io/{openshift_image_repository}:{project_latest_image_tag} --confirm
----
[[Example-Deploying-SSO]]
=== Deploying the {project_name} Image
[[Preparing-SSO-Authentication-for-OpenShift-Deployment]]
==== Preparing for the deployment
.Procedure
. Log in to the OpenShift CLI with a user that holds the _cluster:admin_ role.
. Create a new project:
+
[source,bash,subs="attributes+,macros+"]
----
$ oc new-project sso-app-demo
----
. Add the `view` role to the link:{ocpdocs_default_service_accounts_link}[`default`] service account. This enables the service account to view all the resources in the *sso-app-demo* namespace, which is necessary for managing the cluster.
+
[source,bash,subs="attributes+,macros+"]
----
$ oc policy add-role-to-user view system:serviceaccount:$(oc project -q):default
----
==== Deploying the {project_name} Image using the application template
You can deploy the template using one of these interfaces:
* xref:deploy_cli[OpenShift CLI]
* xref:deploy3x[OpenShift 3.x web console]
* xref:deploy4x[OpenShift 4.x web console]
[id="deploy_cli"]
===== Deploying the Template using OpenShift CLI
.Prerequisites
* Perform the steps described in xref:image-streams-applications-templates[Using the {project_openshift_product_name} Image Streams and application templates].
.Procedure
. List the available {project_name} application templates:
+
[source,bash,subs="attributes+,macros+"]
----
$ oc get templates -n openshift -o name | grep -o '{project_templates_version}.\+'
{project_templates_version}-https
{project_templates_version}-postgresql
{project_templates_version}-postgresql-persistent
{project_templates_version}-x509-https
{project_templates_version}-x509-postgresql-persistent
----
. Deploy the selected one:
+
[source,bash,subs="attributes+,macros+"]
----
$ oc new-app --template={project_templates_version}-x509-https
--> Deploying template "openshift/{project_templates_version}-x509-https" to project sso-app-demo
{project_name} {project_versionDoc} (Ephemeral)
---------
An example {project_name} 7 application. For more information about using this template, see https://github.com/jboss-openshift/application-templates.
A new {project_name} service has been created in your project. The admin username/password for accessing the master realm using the {project_name} console is IACfQO8v/nR7llVSVb4Dye3TNRbXoXhRpAKTmiCRc. The HTTPS keystore used for serving secure content, the JGroups keystore used for securing JGroups communications, and server truststore used for securing {project_name} requests were automatically created using OpenShift's service serving x509 certificate secrets.
* With parameters:
* Application Name=sso
* JGroups Cluster Password=jg0Rssom0gmHBnooDF3Ww7V4Mu5RymmB # generated
* Datasource Minimum Pool Size=
* Datasource Maximum Pool Size=
* Datasource Transaction Isolation=
* ImageStream Namespace=openshift
* {project_name} Administrator Username=IACfQO8v # generated
* {project_name} Administrator Password=nR7llVSVb4Dye3TNRbXoXhRpAKTmiCRc # generated
* {project_name} Realm=
* {project_name} Service Username=
* {project_name} Service Password=
* Container Memory Limit=1Gi
--> Creating resources ...
service "sso" created
service "secure-sso" created
service "sso-ping" created
route "sso" created
route "secure-sso" created
deploymentconfig "sso" created
--> Success
Run 'oc status' to view your app.
----
[id="deploy3x"]
===== Deploying the Template using the OpenShift 3.x Web Console
.Prerequisites
* Perform the steps described in xref:image-streams-applications-templates[Using the {project_openshift_product_name} Image Streams and application templates].
.Procedure
. Log in to the OpenShift web console and select the *sso-app-demo* project space.
. Click *Add to Project*, then *Browse Catalog* to list the default image streams and templates.
. Use the *Filter by Keyword* search bar to limit the list to those that match _sso_. You may need to click *Middleware*, then *Integration* to show the desired application template.
. Select an {project_name} application template. This example uses *_{project_name} {project_versionDoc} (Ephemeral)_*.
. Click *Next* in the *Information* step.
. From the *Add to Project* drop-down menu, select the _sso-app-demo_ project space. Then click *Next*.
. Select *Do not bind at this time* radio button in the *Binding* step. Click *Create* to continue.
. In the *Results* step, click the *Continue to the project overview* link to verify the status of the deployment.
[id="deploy4x"]
===== Deploying the Template using the OpenShift 4.x Web Console
.Prerequisites
* Perform the steps described in xref:image-streams-applications-templates[Using the {project_openshift_product_name} Image Streams and application templates].
.Procedure
. Log in to the OpenShift web console and select the _sso-app-demo_ project space.
. On the left sidebar, click the *Administrator* tab and then click *</> Developer*.
+
image:images/choose_developer_role.png[]
. Click *From Catalog*.
+
image:images/add_from_catalog.png[]
. Search for *sso*.
+
image:images/sso_keyword.png[]
. Choose a template such as *Red Hat Single Sign-On {project_version_base} on OpenJDK (Ephemeral)*.
+
image:images/choose_template.png[]
. Click *Instantiate Template*.
+
image:images/instantiate_template.png[]
. Adjust the template parameters if necessary and click *Create*.
. Verify the Red Hat Single Sign-On for OpenShift image was deployed.
+
image:images/verify_deployment.png[]
=== Accessing the Administrator Console of the {project_name} Pod
.Procedure
. After the template is deployed, identify the available routes.
+
[source,bash,subs="attributes+,macros+"]
----
$ oc get routes
NAME HOST/PORT
sso sso-sso-app-demo.openshift.example.com
----
. Access the {project_name} Admin Console.
+
[source,bash,subs="attributes+,macros+"]
----
\https://sso-sso-app-demo.openshift.example.com/auth/admin
----
. Provide the login credentials for the xref:sso-administrator-setup[administrator account].

== Introduction to {project_openshift_product_name}
=== What is {project_name}?
[role="_abstract"]
{project_name} is an integrated sign-on solution available as a Red Hat JBoss Middleware for OpenShift containerized image. The {project_openshift_product_name} image provides an authentication server for users to centrally log in, log out, register, and manage user accounts for web applications, mobile applications, and RESTful web services.
{openshift_name} is available on the following platforms: x86_64, IBM Z, and IBM Power Systems.
=== Comparison: {project_openshift_product_name} Image versus Red Hat Single Sign-On
The {project_openshift_product_name} image version number {project_version} is based on {project_name} {project_version}. There are some important differences in functionality between the {project_openshift_product_name} image and {project_name} that should be considered:
The {project_openshift_product_name} image includes all of the functionality of {project_name}. In addition, the {project_name}-enabled JBoss EAP image automatically handles OpenID Connect or SAML client registration and configuration for *_.war_* deployments that contain *<auth-method>KEYCLOAK</auth-method>* or *<auth-method>KEYCLOAK-SAML</auth-method>* in their respective *web.xml* files.
[[sso-templates]]
=== Templates for use with this software
Red Hat offers multiple OpenShift application templates using the {project_openshift_product_name} image version number {project_version}. These templates define the resources needed for a {project_name} {project_version} server-based deployment. The templates can mainly be split into two categories: passthrough templates and reencryption templates. Some other miscellaneous templates also exist.
[[passthrough-templates]]
==== Passthrough templates
These templates require that HTTPS, JGroups keystores, and a truststore for the {project_name} server exist beforehand. They secure the TLS communication using passthrough TLS termination.
* *_{project_templates_version}-https_*: {project_name} {project_version} backed by internal H2 database on the same pod.
* *_{project_templates_version}-postgresql_*: {project_name} {project_version} backed by ephemeral PostgreSQL database on a separate pod.
* *_{project_templates_version}-postgresql-persistent_*: {project_name} {project_version} backed by persistent PostgreSQL database on a separate pod.
[NOTE]
Templates for using {project_name} with MySQL / MariaDB databases have been removed and are not available since {project_name} version 7.4.
[[reencrypt-templates]]
==== Re-encryption templates
These templates use OpenShift's internal link:{ocpdocs_serving_x509_secrets_link}[service serving x509 certificate secrets] to automatically create the HTTPS keystore used for serving secure content. The JGroups cluster traffic is authenticated using the `AUTH` protocol and encrypted using the `ASYM_ENCRYPT` protocol. The {project_name} server truststore is also created automatically, containing the */var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt* CA certificate file, which is used to sign the certificate for HTTPS keystore.
Moreover, the truststore for the {project_name} server is pre-populated with all of the known, trusted CA certificate files found in the Java system path. These templates secure the TLS communication using re-encryption TLS termination.
* *_{project_templates_version}-x509-https_*: {project_name} {project_version} with auto-generated HTTPS keystore and {project_name} truststore, backed by internal H2 database. The `ASYM_ENCRYPT` JGroups protocol is used for encryption of cluster traffic.
* *_{project_templates_version}-x509-postgresql-persistent_*: {project_name} {project_version} with auto-generated HTTPS keystore and {project_name} truststore, backed by persistent PostgreSQL database. The `ASYM_ENCRYPT` JGroups protocol is used for encryption of cluster traffic.
==== Other templates
Other templates that integrate with {project_name} are also available:
* *_eap64-sso-s2i_*: {project_name}-enabled Red Hat JBoss Enterprise Application Platform 6.4.
* *_eap71-sso-s2i_*: {project_name}-enabled Red Hat JBoss Enterprise Application Platform 7.1.
* *_datavirt63-secure-s2i_*: {project_name}-enabled Red Hat JBoss Data Virtualization 6.3.
These templates contain environment variables specific to {project_name} that enable automatic {project_name} client registration when deployed.
[role="_additional-resources"]
.Additional resources
* xref:Auto-Man-Client-Reg[Automatic and Manual {project_name} Client Registration Methods]
* link:{ocp311docs_passthrough_route_link}[Passthrough TLS termination]
* link:{ocp311docs_reencrypt_route_link}[Re-encryption TLS termination]
=== Version compatibility and support
For details about OpenShift image version compatibility, see the https://access.redhat.com/articles/2342861[Supported Configurations] page.
NOTE: The {project_openshift_product_name} image versions between 7.0 and 7.4 are deprecated and no longer receive image or application template updates.
To deploy new applications, use version 7.5 or {project_version} of the {project_openshift_product_name} image along with the application templates specific to those image versions.

== Reference
[[sso-artifact-repository-mirrors-section]]
=== Artifact repository mirrors
// This page describes MAVEN_MIRROR_URL variable usage
// It requires 'bcname' attribute to be set to the name of the product
[role="_abstract"]
A repository in Maven holds build artifacts and dependencies of various types
(all the project jars, library jars, plugins, or any other project-specific
artifacts). It also specifies the locations from which to download artifacts
while performing the S2I build. Besides using central repositories, it is a
common practice for organizations to deploy a local custom repository (mirror).
Benefits of using a mirror are:
* Availability of a synchronized mirror, which is geographically closer and
faster.
* Ability to have greater control over the repository content.
* Possibility to share artifacts across different teams (developers, CI),
without the need to rely on public servers and repositories.
* Improved build times.
Often, a repository manager can serve as a local cache for a mirror. Assuming that
the repository manager is already deployed and reachable externally at
*_pass:[http://10.0.0.1:8080/repository/internal/]_*, the S2I build can then use this
manager by supplying the `MAVEN_MIRROR_URL` environment variable to the
build configuration of the application using the following procedure:
.Procedure
. Identify the name of the build configuration to apply `MAVEN_MIRROR_URL`
variable against.
+
[source,bash,subs="attributes+,macros+"]
----
$ oc get bc -o name
buildconfig/sso
----
. Update build configuration of `sso` with a `MAVEN_MIRROR_URL` environment variable.
+
[source,bash,subs="attributes+,macros+"]
----
$ oc set env bc/sso \
-e MAVEN_MIRROR_URL="http://10.0.0.1:8080/repository/internal/"
buildconfig "sso" updated
----
. Verify the setting.
+
[source,bash,subs="attributes+,macros+"]
----
$ oc set env bc/sso --list
# buildconfigs sso
MAVEN_MIRROR_URL=http://10.0.0.1:8080/repository/internal/
----
. Schedule a new build of the application.
NOTE: During application build, you will notice that Maven dependencies are
pulled from the repository manager, instead of the default public repositories.
Also, after the build is finished, you will see that the mirror is filled with
all the dependencies that were retrieved and used during the build.
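For comparison, the `MAVEN_MIRROR_URL` variable has the same effect as a mirror entry in a Maven _settings.xml_ file. The sketch below shows the equivalent configuration; the `id` and `mirrorOf` values are illustrative:

[source,xml]
----
<mirrors>
  <mirror>
    <id>internal-mirror</id>
    <!-- Route all external requests through the internal repository manager -->
    <mirrorOf>external:*</mirrorOf>
    <url>http://10.0.0.1:8080/repository/internal/</url>
  </mirror>
</mirrors>
----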
[[env_vars]]
=== Environment variables
==== Information environment variables
The following information environment variables are designed to convey
information about the image and should not be modified by the user:
.Information Environment Variables
[cols="3",options="header"]
|===
|Variable Name |Description |Example Value
|*_AB_JOLOKIA_AUTH_OPENSHIFT_*
|-
|*_true_*
|*_AB_JOLOKIA_HTTPS_*
|-
|*_true_*
|*_AB_JOLOKIA_PASSWORD_RANDOM_*
|-
|*_true_*
|*_JBOSS_IMAGE_NAME_*
|Image name, same as "name" label.
|*_{openshift_image_repository}_*
|*_JBOSS_IMAGE_VERSION_*
|Image version, same as "version" label.
|*_{project_versionDoc}_*
|*_JBOSS_MODULES_SYSTEM_PKGS_*
|-
|*_org.jboss.logmanager,jdk.nashorn.api_*
|===
==== Configuration environment variables
Configuration environment variables are designed to conveniently adjust the
image without requiring a rebuild, and should be set by the user as desired.
[[conf_env_vars]]
.Configuration Environment Variables
[cols="3",options="header"]
|===
|Variable Name |Description |Example Value
|*_AB_JOLOKIA_AUTH_OPENSHIFT_*
|Switch on client authentication for OpenShift TLS communication. The value of
this parameter can be a relative distinguished name which must be contained in
a presented client's certificate. Enabling this parameter will automatically
switch Jolokia into https communication mode. The default CA cert is set to
`/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`.
|*_true_*
|*_AB_JOLOKIA_CONFIG_*
|If set uses this file (including path) as Jolokia JVM agent properties (as
described in Jolokia's
link:https://jolokia.org/reference/html/agents.html#agents-jvm[reference
manual]). If not set, the `/opt/jolokia/etc/jolokia.properties` file will be
created using the settings as defined in this document, otherwise the rest of
the settings in this document are ignored.
|*_/opt/jolokia/custom.properties_*
|*_AB_JOLOKIA_DISCOVERY_ENABLED_*
|Enable Jolokia discovery. Defaults to *_false_*.
|*_true_*
|*_AB_JOLOKIA_HOST_*
|Host address to bind to. Defaults to *_0.0.0.0_*.
|*_127.0.0.1_*
|*_AB_JOLOKIA_HTTPS_*
|Switch on secure communication with https. By default self-signed server
certificates are generated if no serverCert configuration is given in
*_AB_JOLOKIA_OPTS_*. _NOTE: If the value is set to an empty string, https is
turned `off`. If the value is set to a non-empty string, https is turned `on`._
|*_true_*
|*_AB_JOLOKIA_ID_*
|Agent ID to use ($HOSTNAME by default, which is the container id).
|*_openjdk-app-1-xqlsj_*
|*_AB_JOLOKIA_OFF_*
|If set, disables activation of Jolokia (that is, echoes an empty value). By default,
Jolokia is enabled. _NOTE: If the value is set to an empty string, https is
turned `off`. If the value is set to a non-empty string, https is turned `on`._
|*_true_*
|*_AB_JOLOKIA_OPTS_*
|Additional options to be appended to the agent configuration. They should be
given in the format `"key=value,key=value,..."`
|*_backlog=20_*
|*_AB_JOLOKIA_PASSWORD_*
|Password for basic authentication. By default authentication is switched off.
|*_mypassword_*
|*_AB_JOLOKIA_PASSWORD_RANDOM_*
|If set, a random value is generated for *_AB_JOLOKIA_PASSWORD_*, and it is
saved in the *_/opt/jolokia/etc/jolokia.pw_* file.
|*_true_*
|*_AB_JOLOKIA_PORT_*
|Port to use (Default: *_8778_*).
|*_5432_*
|*_AB_JOLOKIA_USER_*
|User for basic authentication. Defaults to *_jolokia_*.
|*_myusername_*
|*_CONTAINER_CORE_LIMIT_*
|A calculated core limit as described in
link:https://www.kernel.org/doc/Documentation/scheduler/sched-bwc.txt[CFS
Bandwidth Control.]
|*_2_*
|*_GC_ADAPTIVE_SIZE_POLICY_WEIGHT_*
|The weighting given to the current Garbage Collection (GC) time versus previous
GC times.
|*_90_*
|*_GC_MAX_HEAP_FREE_RATIO_*
|Maximum percentage of heap free after GC to avoid shrinking.
|*_40_*
|*_GC_MAX_METASPACE_SIZE_*
|The maximum metaspace size.
|*_100_*
|*_GC_TIME_RATIO_MIN_HEAP_FREE_RATIO_*
|Minimum percentage of heap free after GC to avoid expansion.
|*_20_*
|*_GC_TIME_RATIO_*
|Specifies the ratio of the time spent outside the garbage collection (for
example, the time spent for application execution) to the time spent in the
garbage collection.
|*_4_*
|*_JAVA_DIAGNOSTICS_*
|Set this to get some diagnostics information to standard out when things are
happening.
|*_true_*
|*_JAVA_INITIAL_MEM_RATIO_*
|This is used to calculate a default initial heap memory based on the maximal
heap memory. The default is 100 which means 100% of the maximal heap is used
for the initial heap size. You can skip this mechanism by setting this value
to 0 in which case no `-Xms` option is added.
|*_100_*
|*_JAVA_MAX_MEM_RATIO_*
|It is used to calculate a default maximal heap memory based on the container's
memory restriction. If used in a Docker container without any memory constraints for
the container then this option has no effect. If there is a memory constraint
then `-Xmx` is set to a ratio of the container available memory as set here.
The default is 50 which means 50% of the available memory is used as an upper
boundary. You can skip this mechanism by setting this value to 0 in which case
no `-Xmx` option is added.
|*_40_*
|*_JAVA_OPTS_APPEND_*
|Server startup options.
|*_-Dkeycloak.migration.action=export -Dkeycloak.migration.provider=dir -Dkeycloak.migration.dir=/tmp_*
|*_MQ_SIMPLE_DEFAULT_PHYSICAL_DESTINATION_*
|For backwards compatibility, set to true to use `MyQueue` and `MyTopic` as
physical destination name defaults instead of `queue/MyQueue` and `topic/MyTopic`.
|*_false_*
|*_OPENSHIFT_KUBE_PING_LABELS_*
|Clustering labels selector.
|*_app=sso-app_*
|*_OPENSHIFT_KUBE_PING_NAMESPACE_*
|Clustering project namespace.
|*_myproject_*
|*_SCRIPT_DEBUG_*
|If set to `true`, ensures that the bash scripts are executed with the `-x`
option, printing the commands and their arguments as they are executed.
|*_true_*
|*_SSO_ADMIN_PASSWORD_*
|Password of the administrator account for the `master` realm of the {project_name}
server. *Required.* If no value is specified, it is auto-generated and
displayed as an OpenShift instructional message when the template is
instantiated.
|*_adm-password_*
|*_SSO_ADMIN_USERNAME_*
|Username of the administrator account for the `master` realm of the {project_name}
server. *Required.* If no value is specified, it is auto-generated and
displayed as an OpenShift instructional message when the template is
instantiated.
|*_admin_*
|*_SSO_HOSTNAME_*
|Custom hostname for the {project_name} server. *Not set by default*. If not
set, the `request` hostname SPI provider, which uses the request headers to
determine the hostname of the {project_name} server, is used. If set, the
`fixed` hostname SPI provider, with the hostname of the {project_name} server
set to the provided variable value, is used. See dedicated
xref:../content/advanced_concepts/advanced_concepts.adoc#advanced-concepts-sso-hostname-spi-setup[Customizing Hostname for the {project_name} Server]
section for additional steps to be performed, when *_SSO_HOSTNAME_* variable
is set.
|*_rh-sso-server.openshift.example.com_*
|*_SSO_REALM_*
|Name of the realm to be created in the {project_name} server if this environment variable
is provided.
|*_demo_*
|*_SSO_SERVICE_PASSWORD_*
|The password for the {project_name} service user.
|*_mgmt-password_*
|*_SSO_SERVICE_USERNAME_*
|The username used to access the {project_name} service. This is used by clients to create
the application client(s) within the specified {project_name} realm. This user is created
if this environment variable is provided.
|*_sso-mgmtuser_*
|*_SSO_TRUSTSTORE_*
|The name of the truststore file within the secret.
|*_truststore.jks_*
|*_SSO_TRUSTSTORE_DIR_*
|Truststore directory.
|*_/etc/sso-secret-volume_*
|*_SSO_TRUSTSTORE_PASSWORD_*
|The password for the truststore and certificate.
|*_mykeystorepass_*
|*_SSO_TRUSTSTORE_SECRET_*
|The name of the secret containing the truststore file. Used for
_sso-truststore-volume_ volume.
|*_truststore-secret_*
|===
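For reference, environment variables like the ones above can also be adjusted on a running deployment with the `oc` client. The following is a minimal sketch using example values from the table; the resource name `dc/sso` is an assumption, so adjust it to your actual deployment configuration:

```shell
# Set the custom hostname and disable Jolokia on the "sso" deployment config
# (the resource name "dc/sso" is an assumption; adjust to your deployment).
oc set env dc/sso \
  SSO_HOSTNAME=rh-sso-server.openshift.example.com \
  AB_JOLOKIA_OFF=true

# Verify the resulting environment.
oc set env dc/sso --list
```

Note that changing environment variables on a deployment configuration triggers a new rollout, so the pods restart with the updated values.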
Available link:{ocpdocs_templates_link}[application templates]
for {project_openshift_product_name} can combine the xref:conf_env_vars[aforementioned
configuration variables] with common OpenShift variables (for example
*_APPLICATION_NAME_* or *_SOURCE_REPOSITORY_URL_*), product-specific variables
(for example *_HORNETQ_CLUSTER_PASSWORD_*), or configuration variables typical
of database images (for example *_POSTGRESQL_MAX_CONNECTIONS_*). All of these
different types of configuration variables can be adjusted as desired so that the
deployed {project_name}-enabled application aligns with the intended use case as
closely as possible. The list of configuration variables available for each category
of application templates for {project_name}-enabled applications is described below.
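As a sketch, template parameters from the different categories can be combined in a single `oc new-app` invocation. The template name below is a placeholder, not an actual default; substitute the *_{project_templates_version}-*_* template available in your cluster:

```shell
# Instantiate a template, mixing a common OpenShift parameter (APPLICATION_NAME)
# with product-specific ones (SSO_ADMIN_USERNAME, SSO_ADMIN_PASSWORD, SSO_REALM).
# "<templates-version>-postgresql-persistent" is a placeholder template name.
oc new-app --template=<templates-version>-postgresql-persistent \
  -p APPLICATION_NAME=sso-app \
  -p SSO_ADMIN_USERNAME=admin \
  -p SSO_ADMIN_PASSWORD=adm-password \
  -p SSO_REALM=demo
```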
==== Template variables for all {project_name} images
.Configuration Variables Available For All {project_name} Images
[cols="2*", options="header"]
|===
|Variable
|Description
|*_APPLICATION_NAME_*
|The name for the application.
|*_DB_MAX_POOL_SIZE_*
|Sets xa-pool/max-pool-size for the configured datasource.
|*_DB_TX_ISOLATION_*
|Sets transaction-isolation for the configured datasource.
|*_DB_USERNAME_*
|Database user name.
|*_HOSTNAME_HTTP_*
|Custom hostname for http service route. Leave blank for default hostname,
e.g.: _<application-name>.<project>.<default-domain-suffix>_.
|*_HOSTNAME_HTTPS_*
|Custom hostname for https service route. Leave blank for default hostname,
e.g.: _<application-name>.<project>.<default-domain-suffix>_.
|*_HTTPS_KEYSTORE_*
|The name of the keystore file within the secret. If defined along with
*_HTTPS_PASSWORD_* and *_HTTPS_NAME_*, enable HTTPS and set the SSL certificate
key file to a relative path under _$JBOSS_HOME/standalone/configuration_.
|*_HTTPS_KEYSTORE_TYPE_*
|The type of the keystore file (JKS or JCEKS).
|*_HTTPS_NAME_*
|The name associated with the server certificate (e.g. _jboss_). If defined
along with *_HTTPS_PASSWORD_* and *_HTTPS_KEYSTORE_*, enable HTTPS and set the
SSL name.
|*_HTTPS_PASSWORD_*
|The password for the keystore and certificate (e.g. _mykeystorepass_). If
defined along with *_HTTPS_NAME_* and *_HTTPS_KEYSTORE_*, enable HTTPS and set
the SSL key password.
|*_HTTPS_SECRET_*
|The name of the secret containing the keystore file.
|*_IMAGE_STREAM_NAMESPACE_*
|Namespace in which the ImageStreams for Red Hat Middleware images are
installed. These ImageStreams are normally installed in the _openshift_
namespace. You should only need to modify this if you've installed the
ImageStreams in a different namespace/project.
|*_JGROUPS_CLUSTER_PASSWORD_*
|JGroups cluster password.
|*_JGROUPS_ENCRYPT_KEYSTORE_*
|The name of the keystore file within the secret.
|*_JGROUPS_ENCRYPT_NAME_*
|The name associated with the server certificate (e.g. _secret-key_).
|*_JGROUPS_ENCRYPT_PASSWORD_*
|The password for the keystore and certificate (e.g. _password_).
|*_JGROUPS_ENCRYPT_SECRET_*
|The name of the secret containing the keystore file.
|*_SSO_ADMIN_USERNAME_*
|Username of the administrator account for the `master` realm of the {project_name}
server. *Required.* If no value is specified, it is auto-generated and
displayed as an OpenShift instructional message when the template is
instantiated.
|*_SSO_ADMIN_PASSWORD_*
|Password of the administrator account for the `master` realm of the {project_name}
server. *Required.* If no value is specified, it is auto-generated and
displayed as an OpenShift instructional message when the template is
instantiated.
|*_SSO_REALM_*
|Name of the realm to be created in the {project_name} server if this environment variable
is provided.
|*_SSO_SERVICE_USERNAME_*
|The username used to access the {project_name} service. This is used by clients to create
the application client(s) within the specified {project_name} realm. This user is created
if this environment variable is provided.
|*_SSO_SERVICE_PASSWORD_*
|The password for the {project_name} service user.
|*_SSO_TRUSTSTORE_*
|The name of the truststore file within the secret.
|*_SSO_TRUSTSTORE_SECRET_*
|The name of the secret containing the truststore file. Used for
*_sso-truststore-volume_* volume.
|*_SSO_TRUSTSTORE_PASSWORD_*
|The password for the truststore and certificate.
|===
==== Template variables specific to *_{project_templates_version}-postgresql_*, *_{project_templates_version}-postgresql-persistent_*, and *_{project_templates_version}-x509-postgresql-persistent_*
.Configuration Variables Specific To {project_name}-enabled PostgreSQL Applications With Ephemeral Or Persistent Storage
[cols="2*", options="header"]
|===
|Variable
|Description
|*_DB_USERNAME_*
|Database user name.
|*_DB_PASSWORD_*
|Database user password.
|*_DB_JNDI_*
|Database JNDI name used by application to resolve the datasource,
e.g. _java:/jboss/datasources/postgresql_
|*_POSTGRESQL_MAX_CONNECTIONS_*
|The maximum number of client connections allowed. This also sets the maximum
number of prepared transactions.
|*_POSTGRESQL_SHARED_BUFFERS_*
|Configures how much memory is dedicated to PostgreSQL for caching data.
|===
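The database-specific parameters above can be supplied in the same way. A hedged example using `oc process`, where the template name and all parameter values are illustrative rather than defaults:

```shell
# Render the template locally with database tuning parameters, then apply it.
# The template name and parameter values below are illustrative only.
oc process <templates-version>-postgresql-persistent \
  -p DB_JNDI=java:/jboss/datasources/postgresql \
  -p POSTGRESQL_MAX_CONNECTIONS=100 \
  -p POSTGRESQL_SHARED_BUFFERS=32MB \
  | oc apply -f -
```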
==== Template variables for general *eap64* and *eap71* S2I images
.Configuration Variables For EAP 6.4 and EAP 7 Applications Built Via S2I
[cols="2*", options="header"]
|===
|Variable
|Description
|*_APPLICATION_NAME_*
|The name for the application.
|*_ARTIFACT_DIR_*
|Artifacts directory.
|*_AUTO_DEPLOY_EXPLODED_*
|Controls whether exploded deployment content should be automatically deployed.
|*_CONTEXT_DIR_*
|Path within Git project to build; empty for root project directory.
|*_GENERIC_WEBHOOK_SECRET_*
|Generic build trigger secret.
|*_GITHUB_WEBHOOK_SECRET_*
|GitHub trigger secret.
|*_HORNETQ_CLUSTER_PASSWORD_*
|HornetQ cluster administrator password.
|*_HORNETQ_QUEUES_*
|Queue names.
|*_HORNETQ_TOPICS_*
|Topic names.
|*_HOSTNAME_HTTP_*
|Custom host name for http service route. Leave blank for default host name,
e.g.: _<application-name>.<project>.<default-domain-suffix>_.
|*_HOSTNAME_HTTPS_*
|Custom host name for https service route. Leave blank for default host name,
e.g.: _<application-name>.<project>.<default-domain-suffix>_.
|*_HTTPS_KEYSTORE_TYPE_*
|The type of the keystore file (JKS or JCEKS).
|*_HTTPS_KEYSTORE_*
|The name of the keystore file within the secret. If defined along with
*_HTTPS_PASSWORD_* and *_HTTPS_NAME_*, enable HTTPS and set the SSL certificate
key file to a relative path under _$JBOSS_HOME/standalone/configuration_.
|*_HTTPS_NAME_*
|The name associated with the server certificate (e.g. _jboss_). If defined
along with *_HTTPS_PASSWORD_* and *_HTTPS_KEYSTORE_*, enable HTTPS and set the
SSL name.
|*_HTTPS_PASSWORD_*
|The password for the keystore and certificate (e.g. _mykeystorepass_). If
defined along with *_HTTPS_NAME_* and *_HTTPS_KEYSTORE_*, enable HTTPS and set
the SSL key password.
|*_HTTPS_SECRET_*
|The name of the secret containing the keystore file.
|*_IMAGE_STREAM_NAMESPACE_*
|Namespace in which the ImageStreams for Red Hat Middleware images are
installed. These ImageStreams are normally installed in the _openshift_
namespace. You should only need to modify this if you've installed the
ImageStreams in a different namespace/project.
|*_JGROUPS_CLUSTER_PASSWORD_*
|JGroups cluster password.
|*_JGROUPS_ENCRYPT_KEYSTORE_*
|The name of the keystore file within the secret.
|*_JGROUPS_ENCRYPT_NAME_*
|The name associated with the server certificate (e.g. _secret-key_).
|*_JGROUPS_ENCRYPT_PASSWORD_*
|The password for the keystore and certificate (e.g. _password_).
|*_JGROUPS_ENCRYPT_SECRET_*
|The name of the secret containing the keystore file.
|*_SOURCE_REPOSITORY_REF_*
|Git branch/tag reference.
|*_SOURCE_REPOSITORY_URL_*
|Git source URI for application.
|===
==== Template variables specific to *_eap64-sso-s2i_* and *_eap71-sso-s2i_* for automatic client registration
.Configuration Variables For EAP 6.4 and EAP 7 {project_name}-enabled Applications Built Via S2I
[cols="2*", options="header"]
|===
|Variable
|Description
|*_SSO_URL_*
|{project_name} server location.
|*_SSO_REALM_*
|Name of the realm to be created in the {project_name} server if this environment variable
is provided.
|*_SSO_USERNAME_*
|The username used to access the {project_name} service. This is used to create the
application client(s) within the specified {project_name} realm. This should match the
*_SSO_SERVICE_USERNAME_* specified through one of the *{project_templates_version}-* templates.
|*_SSO_PASSWORD_*
|The password for the {project_name} service user.
|*_SSO_PUBLIC_KEY_*
|{project_name} public key. Passing the public key into the template is recommended to
avoid man-in-the-middle attacks.
|*_SSO_SECRET_*
|The {project_name} client secret for confidential access.
|*_SSO_SERVICE_URL_*
|{project_name} service location.
|*_SSO_TRUSTSTORE_SECRET_*
|The name of the secret containing the truststore file. Used for
*_sso-truststore-volume_* volume.
|*_SSO_TRUSTSTORE_*
|The name of the truststore file within the secret.
|*_SSO_TRUSTSTORE_PASSWORD_*
|The password for the truststore and certificate.
|*_SSO_BEARER_ONLY_*
|{project_name} client access type.
|*_SSO_DISABLE_SSL_CERTIFICATE_VALIDATION_*
|If true, SSL communication between EAP and the {project_name} server is insecure
(that is, certificate validation is disabled with curl).
|*_SSO_ENABLE_CORS_*
|Enable CORS for {project_name} applications.
|===
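To illustrate how these variables fit together, the following sketch instantiates the *_eap71-sso-s2i_* template against an existing {project_name} deployment. The URL and credentials are example values only; *_SSO_USERNAME_* and *_SSO_PASSWORD_* must match the *_SSO_SERVICE_USERNAME_* and *_SSO_SERVICE_PASSWORD_* chosen when the server template was instantiated:

```shell
# Automatically register an EAP 7 application client in the "demo" realm.
# All parameter values below are examples, not defaults.
oc new-app --template=eap71-sso-s2i \
  -p SSO_URL=https://secure-sso.openshift.example.com/auth \
  -p SSO_REALM=demo \
  -p SSO_USERNAME=sso-mgmtuser \
  -p SSO_PASSWORD=mgmt-password \
  -p SSO_BEARER_ONLY=false
```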
==== Template variables specific to *_eap64-sso-s2i_* and *_eap71-sso-s2i_* for automatic client registration with SAML clients
.Configuration Variables For EAP 6.4 and EAP 7 {project_name}-enabled Applications Built Via S2I Using SAML Protocol
[cols="2*", options="header"]
|===
|Variable
|Description
|*_SSO_SAML_CERTIFICATE_NAME_*
|The name associated with the server certificate.
|*_SSO_SAML_KEYSTORE_PASSWORD_*
|The password for the keystore and certificate.
|*_SSO_SAML_KEYSTORE_*
|The name of the keystore file within the secret.
|*_SSO_SAML_KEYSTORE_SECRET_*
|The name of the secret containing the keystore file.
|*_SSO_SAML_LOGOUT_PAGE_*
|{project_name} logout page for SAML applications.
|===
=== Exposed ports
[cols="2",options="header"]
|===
|Port Number | Description
|*_8443_* | HTTPS
|*_8778_* | Jolokia monitoring
|===
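As a quick smoke test of the Jolokia port, you can query the agent's `version` endpoint from inside a pod. A sketch, assuming a pod named `sso-app-1-xqlsj` and basic authentication configured via *_AB_JOLOKIA_USER_* and *_AB_JOLOKIA_PASSWORD_* (both values below are examples):

```shell
# Query the Jolokia agent on port 8778; -k skips certificate validation
# because the agent may use a self-signed certificate.
oc exec sso-app-1-xqlsj -- \
  curl -s -k -u jolokia:mypassword https://localhost:8778/jolokia/version
```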
////
=== Labels
=== Datasources
=== Clustering
=== Security domains
=== HTTPS
=== Source-to-Image (S2I)
=== Known issues
* There is a known issue with the EAP6 Adapter _HttpServletRequest.logout()_ in which the adapter does not log out from the application, which can create a login loop. The workaround is to call _HttpSession.invalidate();_ after _request.logout()_ to clear the Keycloak token from the session. For more information, see https://issues.redhat.com/browse/KEYCLOAK-2665[KEYCLOAK-2665].
* The SSO logs throw a duplication error if the SSO pod is restarted while backed by a database pod. This error can be safely ignored.
* Setting _adminUrl_ to a "https://..." address in an OpenID Connect client will cause *javax.net.ssl.SSLHandshakeException* exceptions on the SSO server if the default secrets (*sso-app-secret* and *eap-app-secret*) are used. The application server must use either CA-signed certificates or configure the SSO trust store to trust the self-signed certificates.
* If the client route uses a different domain suffix from the SSO service, the client registration script will erroneously configure the client on the SSO side, causing bad redirection.
* The SSO-enabled JBoss EAP image does not properly set the *adminUrl* property during automatic client registration. As a workaround, log in to the SSO console after the application has started and manually modify the client registration *adminUrl* property to *http://_<application-name>_-_<project-name>_._<hostname>_/_<app-context>_*.
////

View file

@ -1 +0,0 @@
../../topics/templates

File diff suppressed because it is too large

View file

@ -36,7 +36,6 @@
<module>securing_apps</module>
<module>server_admin</module>
<module>server_development</module>
<module>server_installation</module>
<module>release_notes</module>
<module>upgrading</module>
<module>aggregation</module>
@ -55,11 +54,6 @@
<masterFile>master</masterFile>
<imagesDir>rhsso-images</imagesDir>
</properties>
<modules>
<module>getting_started</module>
<module>openshift</module>
<module>openshift-openj9</module>
</modules>
</profile>
<profile>
<id>tests</id>

View file

@ -41,5 +41,4 @@ Thanks to https://github.com/rhyamada[Ronaldo Yamada] for the contribution.
== Deprecated features in the {project_operator}
With this release, we have deprecated and/or marked as unsupported some features in the {project_operator}. This
concerns the Backup CRD and the operator managed Postgres Database. For more details, please see the
link:{installguide_link}#_operator_production_usage[related chapter in the {installguide_name}].
concerns the Backup CRD and the operator managed Postgres Database.

View file

@ -1,7 +1,7 @@
== Configuring a Docker registry to use {project_name}
NOTE: Docker authentication is disabled by default. To enable see link:{installguide_profile_link}[{installguide_profile_name}].
NOTE: Docker authentication is disabled by default. To enable it, see the https://www.keycloak.org/server/features[Enabling and disabling features] guide.
This section describes how you can configure a Docker registry to use {project_name} as its authentication server.
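For example, with the Quarkus distribution the `docker` feature can be switched on at startup. This is a sketch; see the linked features guide for the authoritative syntax:

```shell
# Enable the docker feature so the Docker v2 token authentication
# protocol endpoints become available.
bin/kc.sh start --features=docker
```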

View file

@ -45,5 +45,5 @@ the cipher suites and TLS protocol versions used. To match these requirements, y
key-manager="kcKeyManager" trust-manager="kcTrustManager" \
cipher-suite-filter="TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_DHE_RSA_WITH_AES_128_GCM_SHA256,TLS_DHE_RSA_WITH_AES_256_GCM_SHA384" protocols="TLSv1.2" />
The references to `kcKeyManager` and `kcTrustManager` refer to the corresponding Keystore and Truststore. See the documentation of the Wildfly Elytron subsystem for more details and also
other parts of {project_name} documentation such as link:{installguide_link}#_network[Network Setup Section] or link:{adminguide_link}#_x509[X.509 Authentication Section].
As confidential information is being exchanged, all interactions shall be encrypted with TLS (HTTPS). Moreover, there are some requirements in the FAPI specification for
the cipher suites and TLS protocol versions used. To match these requirements, you can consider configuring allowed ciphers. This configuration can be done by setting the `https-protocols` and `https-cipher-suites` options. For more details, see https://www.keycloak.org/server/enabletls[Configuring TLS] guide.
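For instance, restricting the server to TLSv1.2 and the cipher suites named in the example above could look like the following sketch:

```shell
# Limit protocol versions and cipher suites; see the Configuring TLS guide
# for the full set of related options.
bin/kc.sh start \
  --https-protocols=TLSv1.2 \
  --https-cipher-suites=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
```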

View file

@ -336,7 +336,7 @@ the browser is restrictive regarding cookies. The adapter tries to detect this s
===== Browsers with "SameSite=Lax by Default" Policy
All features are supported if SSL / TLS connection is configured on the {project_name} side as well as on the application
side. See link:{installguide_link}#_setting_up_ssl[configuring the SSL / TLS]. Affected is for example Chrome starting with
side. Chrome, for example, is affected starting with
version 84.
===== Browsers with Blocked Third-Party Cookies

View file

@ -9,7 +9,7 @@ include::../templates/techpreview.adoc[]
[NOTE]
====
In order to use token exchange you should also enable the `token_exchange` feature. Please, take a look at link:{installguide_profile_link}[{installguide_profile_name}].
To use token exchange, you also need to enable the `token_exchange` feature. For details, see the https://www.keycloak.org/server/features[Enabling and disabling features] guide.
====
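For example, with the Quarkus distribution the feature can be enabled at startup; note the flag is spelled `token-exchange` there:

```shell
# Enable the token exchange feature (disabled by default).
bin/kc.sh start --features=token-exchange
```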
=== How token exchange works

View file

@ -6,9 +6,7 @@ include::topics/assembly-creating-first-admin.adoc[]
include::topics/admin-console.adoc[]
include::topics/user-federation.adoc[]
include::topics/user-federation/ldap.adoc[]
ifeval::["{kc_dist}" == "wildfly"]
include::topics/user-federation/sssd.adoc[]
endif::[]
include::topics/user-federation/custom.adoc[]
include::topics/assembly-managing-users.adoc[]
include::topics/sessions.adoc[]
@ -22,20 +20,10 @@ include::topics/assembly-roles-groups.adoc[]
include::topics/authentication.adoc[]
include::topics/authentication/password-policies.adoc[]
include::topics/authentication/otp-policies.adoc[]
ifeval::[{project_community}==true]
include::topics/authentication/flows.adoc[]
endif::[]
ifeval::[{project_product}==true]
include::topics/authentication/flows-prod.adoc[]
endif::[]
include::topics/authentication/kerberos.adoc[]
include::topics/authentication/x509.adoc[]
ifeval::[{project_community}==true]
include::topics/authentication/webauthn.adoc[]
endif::[]
ifeval::[{project_product}==true]
include::topics/authentication/webauthn-prod.adoc[]
endif::[]
include::topics/authentication/recovery-codes.adoc[]
include::topics/authentication/conditions.adoc[]
include::topics/identity-broker.adoc[]
@ -71,14 +59,9 @@ include::topics/admin-console-permissions/fine-grain.adoc[]
include::topics/assembly-managing-clients.adoc[]
include::topics/vault.adoc[]
include::topics/events.adoc[]
ifeval::["{kc_dist}" == "wildfly"]
include::topics/export-import.adoc[]
endif::[]
include::topics/threat.adoc[]
include::topics/threat/host.adoc[]
ifeval::["{kc_dist}" == "wildfly"]
include::topics/threat/admin.adoc[]
endif::[]
include::topics/threat/brute-force.adoc[]
include::topics/threat/read-only-attributes.adoc[]
include::topics/threat/clickjacking.adoc[]

View file

@ -1,206 +0,0 @@
// MOVED TO SEPARATE UPGRADE GUIDE!
// Add Keycloak migration changes to upgrading/topics/keycloak/changes.adoc
== Migration from older versions
To upgrade to a new version of Keycloak first download and install the new version of Keycloak. Once the new version
is installed, migrate the config files, database, keycloak-server.json, providers, themes, and applications to the new installation.
This chapter contains some general migration details which are applicable to all versions. There are also instructions for
migrations that are only applicable to a specific release. If you are upgrading from a very old version, you need to go
through all intermediate version-specific migrations.
It's highly recommended that you backup your database prior to upgrading Keycloak.
Migration from a candidate release (CR) to a Final release is not supported. We do however recommend that you test
migration for a CR so we can resolve any potential issues before the Final is released.
=== Migration of config files using migration scripts
As part of your migration, you should use your old versions of config files `standalone.xml`, `standalone-ha.xml`, and/or `domain.xml`.
These files typically contain configuration that is unique to your own environment. So, the first thing to do as part
of a migration is to copy those files to the new Keycloak server installation, replacing the default versions.
If migrating from Keycloak version 2.1.0 or older, you should also copy `keycloak-server.json` to `standalone/configuration`
and/or `domain/configuration`.
There will be configuration in those config files that pertains to Keycloak, which will need to be upgraded. For that,
you should run one or more of the appropriate upgrade scripts. They are `migrate-standalone.cli`, `migrate-standalone-ha.cli`,
`migrate-domain-standalone.cli` and `migrate-domain-clustered.cli`.
The server should not be running when you execute a migration script.
.Example for running migrate-standalone.cli
[source]
----
$ .../bin/jboss-cli.sh --file=migrate-standalone.cli
----
Note that for upgrading domain.xml, there are two migration scripts, `migrate-domain-standalone.cli` and
`migrate-domain-clustered.cli`. These scripts migrate separate profiles within your domain.xml that were
originally shipped with Keycloak. If you have changed the name of the profile there is a variable near
the top of the script that you will need to change.
If you are migrating `keycloak-server.json`, this will also be migrated as needed. If you prefer,
you can migrate `keycloak-server.json` beforehand using the instructions in the next section.
One thing to note is that the migration scripts only work for Keycloak versions 1.8.1 forward. If migrating an older
version you will need to manually upgrade your config files to at least be 1.8.1 compliant.
Lastly, you may want to examine the contents of the scripts before running. They show exactly what will be changed
for each version. They also have values at the top of the script that you may need to change based on your
environment.
=== Migrate and convert keycloak-server.json
If you ran one of the migration scripts from the previous section then you have probably already migrated
your keycloak-server.json. This section is kept here for reference. If you prefer, you can follow the
instructions in this section before running the migration scripts above.
You should copy `standalone/configuration/keycloak-server.json` from the old version to make sure any configuration changes you've done are added to the new installation.
The version specific section below will list any changes done to this file that you have to do when upgrading from one version to another.
Keycloak is moving away from the use of keycloak-server.json. For this release, the server will still work
if this file is in `standalone/configuration/keycloak-server.json`, but it is highly recommended that
you convert to using standalone.xml, standalone-ha.xml, or domain.xml for configuration. We may soon remove
support for keycloak-server.json.
To convert your keycloak-server.json, you will use a jboss-cli operation called `migrate-json`.
It is recommended that you run this operation while the server is not running.
The `migrate-json` operation assumes you are migrating with an xml configuration file from an old version. For example,
you should not try to migrate your keycloak-server.json to a standalone.xml file that already contains <spi> and <theme>
tags under the keycloak subsystem. Before migration, your keycloak subsystem should look like the one below:
.standalone.xml
[source,xml]
----
<subsystem xmlns="urn:jboss:domain:keycloak-server:1.1">
<web-context>auth</web-context>
</subsystem>
----
The jboss-cli tool is discussed in detail in link:{installguide_link}[{installguide_name}].
==== migrate-json in Standalone Mode
For standalone, you will issue the `migrate-json` operation in `embed` mode without
the server running.
.Standalone keycloak-server.json migration
[source]
----
$ .../bin/jboss-cli.sh
[disconnected /] embed-server --server-config=standalone.xml
[standalone@embedded /] /subsystem=keycloak-server/:migrate-json
----
The `migrate-json` operation will look for your keycloak-server.json file in
the `standalone/configuration` directory. You also have the option of using
the `file` argument as shown in the domain mode example below.
==== migrate-json in Domain Mode
For a domain, you will stop the Keycloak server and
issue the `migrate-json` operation against the running domain controller.
If you choose not to stop the Keycloak server, the operation will still work,
but your changes will not take effect until the Keycloak server is restarted.
Domain mode migration requires that you use the `file` parameter to upload your
keycloak-server.json from a local directory. The example below shows connecting
to localhost. You will need to substitute the address of your domain controller.
.Domain mode keycloak-server.json migration
[source]
----
$ .../bin/jboss-cli.sh -c --controller=localhost:9990
[domain@localhost:9990 /] cd profile=auth-server-clustered
[domain@localhost:9990 profile=auth-server-clustered] cd subsystem=keycloak-server
[domain@localhost:9990 subsystem=keycloak-server] :migrate-json(file="./keycloak-server.json")
----
You will need to repeat the `migrate-json` operation for each profile containing a `keycloak-server` subsystem.
=== Migrate database
Keycloak can automatically migrate the database schema, or you can choose to do it manually.
==== Relational database
To enable automatic upgrading of the database schema set the `migrationStrategy` property to `update` for
the default `connectionsJpa` provider:
.Edit xml
[source,xml]
----
<spi name="connectionsJpa">
<provider name="default" enabled="true">
<properties>
...
<property name="migrationStrategy" value="update"/>
</properties>
</provider>
</spi>
----
.Equivalent CLI command for above
[source]
----
/subsystem=keycloak-server/spi=connectionsJpa/provider=default/:map-put(name=properties,key=migrationStrategy,value=update)
----
When you start the server with this setting your database will automatically be migrated if the database schema has
changed in the new version.
To manually migrate the database set the `migrationStrategy` to `manual`. When you start the server with this
configuration it will check if the database needs migration. If changes are needed the required changes are written
to an SQL file that you can review and manually run against the database.
There's also the option to disable migration by setting the `migrationStrategy` to `validate`. With this configuration
the database will be checked at startup and if it is not migrated the server will exit.
==== Mongo
Mongo doesn't have a schema, but there may still be things like collections and indexes that are added to new releases.
To enable automatic creation of these set the `migrationStrategy` property to `update` for the default `connectionsMongo`
provider:
.Edit xml
[source,xml]
----
<spi name="connectionsMongo">
<provider name="default" enabled="true">
<properties>
...
<property name="migrationStrategy" value="update"/>
</properties>
</provider>
</spi>
----
.Equivalent CLI command for above
[source]
----
/subsystem=keycloak-server/spi=connectionsMongo/provider=default/:map-put(name=properties,key=migrationStrategy,value=update)
----
The Mongo provider does not have the option to manually apply the required changes.
There's also the option to disable migration by setting the `migrationStrategy` to `validate`. With this configuration
the database will be checked at startup and if it is not migrated the server will exit.
=== Migrate providers
If you have implemented any SPI providers you need to copy them to the new server.
The version specific section below will mention if any of the SPI's have changed.
If they have you may have to update your code accordingly.
=== Migrate themes
If you have created a custom theme you need to copy them to the new server.
The version specific section below will mention if changes have been made to themes.
If there is you may have to update your themes accordingly.
=== Migrate application
If you deploy applications directly to the Keycloak server you should copy them to the new server.
For any applications including those not deployed directly to the Keycloak server you should upgrade the adapter.
The version specific section below will mention if any changes are required to applications.

View file

@ -19,7 +19,6 @@ image:{project_images}/initial-welcome-page.png[Welcome page]
=== Creating the account remotely
ifeval::["{kc_dist}" == "quarkus"]
If you cannot access the server from a `localhost` address or just want to start {project_name} from the command line, use the `KEYCLOAK_ADMIN` and `KEYCLOAK_ADMIN_PASSWORD` environment variables to create an initial admin account.
For example:
@ -30,39 +29,3 @@ export KEYCLOAK_ADMIN_PASSWORD=<password>
bin/kc.[sh|bat] start
----
endif::[]
ifeval::["{kc_dist}" == "wildfly"]
If you cannot access the server from a `localhost` address, or just want to start {project_name} from the command line, use the `.../bin/add-user-keycloak` script.
.Add-user-keycloak script
image:{project_images}/add-user-script.png[]
The parameters differ based on if you use the standalone operation mode or domain operation mode. For standalone mode, here is how you use the script.
.Linux/Unix
[source]
----
$ .../bin/add-user-keycloak.sh -r master -u <username> -p <password>
----
.Windows
[source]
----
> ...\bin\add-user-keycloak.bat -r master -u <username> -p <password>
----
For domain mode, you have to point the script to one of your server hosts using the `-sc` switch.
.Linux/Unix
[source]
----
$ .../bin/add-user-keycloak.sh --sc domain/servers/server-one/configuration -r master -u <username> -p <password>
----
.Windows
[source]
----
> ...\bin\add-user-keycloak.bat --sc domain/servers/server-one/configuration -r master -u <username> -p <password>
----
endif::[]
View file
@ -10,12 +10,7 @@ of client is one that is requesting an access token so that it can invoke other
This section discusses various aspects around configuring clients and various ways to do it.
include::clients/assembly-client-oidc.adoc[leveloffset=+2]
ifeval::[{project_community}==true]
include::clients/saml/proc-creating-saml-client.adoc[leveloffset=+2]
endif::[]
ifeval::[{project_product}==true]
include::clients/saml/proc-creating-saml-client-prod.adoc[leveloffset=+2]
endif::[]
include::clients/saml/idp-initiated-login.adoc[leveloffset=+3]
include::clients/saml/proc-using-an-entity-descriptor.adoc[leveloffset=+3]
include::clients/con-client-links.adoc[leveloffset=+2]
View file
@ -1,444 +0,0 @@
[[_authentication-flows]]
=== Authentication flows
An _authentication flow_ is a container of authentications, screens, and actions, during log in, registration, and other {project_name} workflows.
To view all the flows, and the actions and checks that each flow requires:
.Procedure
. Click *Authentication* in the menu.
. Click the *Flows* tab.
==== Built-in flows
{project_name} has several built-in flows. You cannot modify these flows, but you can alter the flow's requirements to suit your needs.
In the drop-down list, select *browser* to display the Browser Flow screen.
.Browser flow
image:{project_images}/browser-flow.png[Browser Flow]
Hover over the question-mark tooltip of the drop-down list to view a description of the flow. Two sections exist.
===== Auth type
The name of the authentication or the action to execute. If an authentication is indented, it is in a sub-flow. It may or may not be executed, depending on the behavior of its parent.
. Cookie
+
The first time a user logs in successfully, {project_name} sets a session cookie. If the cookie is already set, this authentication type is successful. Since the cookie provider returned success and each execution at this level of the flow is _alternative_, {project_name} does not perform any other execution. This results in a successful login.
. Kerberos
+
This authenticator is disabled by default and is skipped during the Browser Flow.
. Identity Provider Redirector
+
This action is configured through the *Actions* > *Config* link. It redirects to another IdP for <<_identity_broker, identity brokering>>.
. Forms
+
Since this sub-flow is marked as _alternative_, it will not be executed if the *Cookie* authentication type passed. This sub-flow contains an additional authentication type that needs to be executed. {project_name} loads the executions for this sub-flow and processes them.
The first execution is the *Username Password Form*, an authentication type that renders the username and password page. It is marked as _required_, so the user must enter a valid username and password.
The second execution is the *Browser - Conditional OTP* sub-flow. This sub-flow is _conditional_ and executes depending on the result of the *Condition - User Configured* execution. If the result is true, {project_name} loads the executions for this sub-flow and processes them.
The next execution is the *Condition - User Configured* authentication. This authentication checks if {project_name} has configured other executions in the flow for the user. The *Browser - Conditional OTP* sub-flow executes only when the user has a configured OTP credential.
The final execution is the *OTP Form*. {project_name} marks this execution as _required_ but it runs only when the user has an OTP credential set up because of the setup in the _conditional_ sub-flow. If not, the user does not see an OTP form.
===== Requirement
A set of radio buttons that control how an action executes.
[[_execution-requirements]]
====== Required
All _Required_ elements in the flow must be successfully sequentially executed. The flow terminates if a required element fails.
====== Alternative
Only a single element must successfully execute for the flow to evaluate as successful. Because the _Required_ flow elements are sufficient to mark a flow as successful, any _Alternative_ flow element within a flow containing _Required_ flow elements will not execute.
====== Disabled
The element does not count toward marking the flow as successful.
====== Conditional
This requirement type is only set on sub-flows.
* A _Conditional_ sub-flow contains executions. These executions must evaluate to logical statements.
* If all executions evaluate as _true_, the _Conditional_ sub-flow acts as _Required_.
* If all executions evaluate as _false_, the _Conditional_ sub-flow acts as _Disabled_.
* If you do not set an execution, the _Conditional_ sub-flow acts as _Disabled_.
* If a flow contains executions and the flow is not set to _Conditional_, {project_name} does not evaluate the executions, and the executions are considered functionally _Disabled_.
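The requirement rules above can be modeled as a short sketch. This is illustrative pseudologic, not {project_name}'s actual implementation; the `evaluate_subflow` helper and its names are assumptions made for this example.

```python
from enum import Enum

class Requirement(Enum):
    REQUIRED = "required"
    ALTERNATIVE = "alternative"
    DISABLED = "disabled"
    CONDITIONAL = "conditional"

def evaluate_subflow(requirement, condition_results, execution_results):
    """Evaluate a sub-flow following the rules above.

    condition_results: boolean outcomes of the condition executions
    execution_results: boolean outcomes of the remaining executions
    Returns True (success), False (failure), or None (skipped/disabled).
    """
    if requirement == Requirement.DISABLED:
        return None  # does not count toward the flow result
    if requirement == Requirement.CONDITIONAL:
        # No conditions set, or any condition false -> acts as Disabled
        if not condition_results or not all(condition_results):
            return None
        # All conditions true -> acts as Required
        return all(execution_results)
    if requirement == Requirement.REQUIRED:
        return all(execution_results)  # every element must succeed
    if requirement == Requirement.ALTERNATIVE:
        return any(execution_results)  # a single success suffices

# A conditional sub-flow whose condition fails is skipped entirely:
print(evaluate_subflow(Requirement.CONDITIONAL, [False], [True]))  # None
# With the condition met, it behaves as Required:
print(evaluate_subflow(Requirement.CONDITIONAL, [True], [True]))   # True
```

This also illustrates why an OTP form inside a *Conditional* sub-flow is only enforced for users who actually have an OTP credential configured.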
==== Creating flows
Important functionality and security considerations apply when you design a flow.
To create a flow, perform the following:
.Procedure
. Click *Authentication* in the menu.
. Click *New*.
[NOTE]
====
You can copy and then modify an existing flow. Select a flow, click *Copy*, and enter a name for the new flow.
====
When creating a new flow, you must create a top-level flow first with the following options:
Alias::
The name of the flow.
Description::
The description you can set to the flow.
Top-Level Flow Type::
The type of flow. The type *client* is used only for the authentication of clients (applications). For all other cases, choose *generic*.
.Create a top-level flow
image:{project_images}/Create-top-level-flow.png[Top Level Flow]
When {project_name} has created the flow, {project_name} displays the *Delete*, *Add execution*, and *Add flow* buttons.
.An empty new flow
image:{project_images}/New-flow.png[New Flow]
Three factors determine the behavior of flows and sub-flows.
* The structure of the flow and sub-flows.
* The executions within the flows.
* The requirements set within the sub-flows and the executions.
Executions have a wide variety of actions, from sending a reset email to validating an OTP. Add executions with the *Add execution* button. Hover over the question mark next to *Provider*, to see a description of the execution.
.Adding an authentication execution
image:{project_images}/Create-authentication-execution.png[Adding an Authentication Execution]
Two types of executions exist: _automatic executions_ and _interactive executions_. _Automatic executions_ are similar to the *Cookie* execution and automatically
perform their action in the flow. _Interactive executions_ halt the flow to get input. Executions that execute successfully set their status to _success_. For a flow to complete, it needs at least one execution with a status of _success_.
You can add sub-flows to top-level flows with the *Add flow* button. The *Add flow* button displays the *Create Execution Flow* page. This page is similar to the *Create Top Level Form* page. The difference is that the *Flow Type* can be *generic* (default) or *form*. The *form* type constructs a sub-flow that generates a form for the user, similar to the built-in *Registration* flow.
A sub-flow's success depends on how its executions evaluate, including its contained sub-flows. See the <<_execution-requirements, execution requirements section>> for an in-depth explanation of how sub-flows work.
[NOTE]
====
After adding an execution, check that the requirement has the correct value.
====
All elements in a flow have a *Delete* option in the *Actions* menu. This action removes the element from the flow.
Executions have a *Config* menu option to configure the execution. It is also possible to add executions and sub-flows to sub-flows with the *Add execution* and *Add flow* menu options.
Since the order of execution is important, you can move executions and sub-flows up and down within their flows using the up and down buttons beside their names.
[WARNING]
====
Make sure to properly test your configuration when you configure the authentication flow to confirm that no security holes exist in your setup. We recommend that you test various
corner cases. For example, consider testing the authentication behavior for a user when you remove various credentials from the user's account before authentication.
As an example, when second-factor authenticators, such as OTP Form or WebAuthn Authenticator, are configured in the flow as REQUIRED and the user does not have a credential of that
type, the user will be able to set up that credential during the authentication itself. This means the user does not really authenticate with this credential, because the credential
was set up during that same authentication. So for browser authentication, make sure to configure your authentication flow with some first-factor credentials such as Password or the WebAuthn
Passwordless Authenticator.
====
==== Creating a password-less browser login flow
To illustrate the creation of flows, this section describes creating an advanced browser login flow. The purpose of this flow is to allow a user a choice between logging in using a password-less manner with xref:webauthn_{context}[WebAuthn], or two-factor authentication with a password and OTP.
.Procedure
. Click *Authentication* in the menu.
. Click the *Flows* tab.
. Click *New*.
. Enter `Browser Password-less` as an alias.
. Click *Save*.
. Click *Add execution*.
. Select *Cookie* from the drop-down list.
. Click *Save*.
. Click *Alternative* for the *Cookie* authentication type to set its requirement to alternative.
. Click *Add execution*.
. Select *Kerberos* from the drop-down list.
. Click *Add execution*.
. Select *Identity Provider Redirector* from the drop-down list.
. Click *Save*.
. Click *Alternative* for the *Identity Provider Redirector* authentication type to set its requirement to alternative.
. Click *Add flow*.
. Enter *Forms* as an alias.
. Click *Save*.
. Click *Alternative* for the *Forms* authentication type to set its requirement to alternative.
+
.The common part with the browser flow
image:{project_images}/Passwordless-browser-login-common.png[]
+
. Click *Actions* for the *Forms* execution.
. Select *Add execution*.
. Select *Username Form* from the drop-down list.
. Click *Save*.
. Click *Required* for the *Username Form* authentication type to set its requirement to required.
At this stage, the form requires a username but no password. We must enable password authentication to avoid security risks.
. Click *Actions* for the *Forms* sub-flow.
. Click *Add flow*.
. Enter `Authentication` as an alias.
. Click *Save*.
. Click *Required* for the *Authentication* authentication type to set its requirement to required.
. Click *Actions* for the *Authentication* sub-flow.
. Click *Add execution*.
. Select *Webauthn Passwordless Authenticator* from the drop-down list.
. Click *Save*.
. Click *Alternative* for the *Webauthn Passwordless Authenticator* authentication type to set its requirement to alternative.
. Click *Actions* for the *Authentication* sub-flow.
. Click *Add flow*.
. Enter `Password with OTP` as an alias.
. Click *Save*.
. Click *Alternative* for the *Password with OTP* authentication type to set its requirement to alternative.
. Click *Actions* for the *Password with OTP* sub-flow.
. Click *Add execution*.
. Select *Password Form* from the drop-down list.
. Click *Save*.
. Click *Required* for the *Password Form* authentication type to set its requirement to required.
. Click *Actions* for the *Password with OTP* sub-flow.
. Click *Add execution*.
. Select *OTP Form* from the drop-down list.
. Click *Save*.
. Click *Required* for the *OTP Form* authentication type to set its requirement to required.
Finally, change the bindings.
. Click the *Bindings* tab.
. Click the *Browser Flow* drop-down list.
. Select *Browser Password-less* from the drop-down list.
. Click *Save*.
.A password-less browser login
image:{project_images}/Passwordless-browser-login.png[]
After entering the username, the flow works as follows:
If users have WebAuthn passwordless credentials recorded, they can use these credentials to log in directly. This is the password-less login. The user can also select *Password with OTP* because the `WebAuthn Passwordless` execution and the `Password with OTP` flow are set to *Alternative*. If they are set to *Required*, the user has to enter WebAuthn, password, and OTP.
If the user selects the *Try another way* link with `WebAuthn passwordless` authentication, the user can choose between `Password` and `Security Key` (WebAuthn passwordless). When selecting the password, the user will need to continue and log in with the assigned OTP. If the user has no WebAuthn credentials, the user must enter the password and then the OTP. If the user has no OTP credential, they will be asked to record one.
[NOTE]
====
Since the WebAuthn Passwordless execution is set to *Alternative* rather than *Required*, this flow will never ask the user to register a WebAuthn credential. For a user to have a Webauthn credential, an administrator must add a required action to the user. Do this by:
. Enabling the *Webauthn Register Passwordless* required action in the realm (see the xref:webauthn_{context}[WebAuthn] documentation).
. Setting the required action using the *Credential Reset* part of a user's xref:ref-user-credentials_{context}[Credentials] management menu.
Creating an advanced flow such as this can have side effects. For example, if you enable the ability to reset the password for users, this would be accessible from the password form. In the default `Reset Credentials` flow, users must enter their username. Since the user has already entered a username earlier in the `Browser Password-less` flow, this action is unnecessary for {project_name} and sub-optimal for user experience. To correct this problem, you can:
* Copy the `Reset Credentials` flow. Set its name to `Reset Credentials for password-less`, for example.
* Select *Delete* in the *Actions* menu of the *Choose user* execution.
* In the *Bindings* menu, change the reset credential flow from *Reset Credentials* to *Reset Credentials for password-less*
====
[[_step-up-flow]]
==== Creating a browser login flow with step-up mechanism
This section describes how to create advanced browser login flow using the step-up mechanism. The purpose of step-up authentication is to allow access to clients or resources based on a specific authentication level of a user.
.Procedure
. Click *Authentication* in the menu.
. Click the *Flows* tab.
. Click *New*.
. Enter `Browser Incl Step up Mechanism` as an alias.
. Click *Save*.
. Click *Add execution*.
. Select *Cookie* from the item list.
. Click *Save*.
. Click *Alternative* for the *Cookie* authentication type to set its requirement to alternative.
. Click *Add flow*.
. Enter *Auth Flow* as an alias.
. Click *Save*.
. Click *Alternative* for the *Auth Flow* authentication type to set its requirement to alternative.
Now you configure the flow for the first authentication level.
. Click *Actions* for the *Auth Flow*.
. Click *Add flow*.
. Enter `1st Condition Flow` as an alias.
. Click *Save*.
. Click *Conditional* for the *1st Condition Flow* authentication type to set its requirement to conditional.
. Click *Actions* for the *1st Condition Flow*.
. Click *Add execution*.
. Select *Conditional - Level Of Authentication* from the item list.
. Click *Save*.
. Click *Required* for the *Conditional - Level Of Authentication* authentication type to set its requirement to required.
. Click *Actions* for the *Conditional - Level Of Authentication*.
. Click *Config*.
. Enter `Level 1` as an alias.
. Enter `1` for the Level of Authentication (LoA).
. Set Max Age to *36000*. This value is in seconds and it is equivalent to 10 hours, which is the default `SSO Session Max` timeout set in the realm.
As a result, when a user authenticates with this level, subsequent SSO logins can re-use this level and the user does not need to authenticate
with this level until the end of the user session, which is 10 hours by default.
. Click *Save*
+
.Configure the condition for the first authentication level
image:{project_images}/authentication-step-up-condition-1.png[]
. Click *Actions* for the *1st Condition Flow*.
. Click *Add execution*.
. Select *Username Password Form* from the item list.
. Click *Save*.
. Click *Required* for the *Username Password Form* authentication type to set its requirement to required.
Now you configure the flow for the second authentication level.
. Click *Actions* for the *Auth Flow*.
. Click *Add flow*.
. Enter `2nd Condition Flow` as an alias.
. Click *Save*.
. Click *Conditional* for the *2nd Condition Flow* authentication type to set its requirement to conditional.
. Click *Actions* for the *2nd Condition Flow*.
. Click *Add execution*.
. Select *Conditional - Level Of Authentication* from the item list.
. Click *Save*.
. Click *Required* for the *Conditional - Level Of Authentication* authentication type to set its requirement to required.
. Click *Actions* for the *Conditional - Level Of Authentication*.
. Click *Config*.
. Enter `Level 2` as an alias.
. Enter `2` for the Level of Authentication (LoA).
. Set Max Age to *0*. As a result, when a user authenticates, this level is valid just for the current authentication, but not for any
subsequent SSO authentications. So the user will always need to authenticate again with this level when it is requested.
. Click *Save*
+
.Configure the condition for the second authentication level
image:{project_images}/authentication-step-up-condition-2.png[]
. Click *Actions* for the *2nd Condition Flow*.
. Click *Add execution*.
. Select *OTP Form* from the item list.
. Click *Save*.
. Click *Required* for the *OTP Form* authentication type to set its requirement to required.
Finally, change the bindings.
. Click the *Bindings* tab.
. Click the *Browser Flow* item list.
. Select *Browser Incl Step up Mechanism* from the item list.
. Click *Save*.
.Browser login with step-up mechanism
image:{project_images}/authentication-step-up-flow.png[]
.Request a certain authentication level
To use the step-up mechanism, you specify a requested level of authentication (LoA) in your authentication request. The `claims` parameter is used for this purpose:
[source,subs=+attributes]
----
https://{DOMAIN}{kc_realms_path}/{REALMNAME}/protocol/openid-connect/auth?client_id={CLIENT-ID}&redirect_uri={REDIRECT-URI}&scope=openid&response_type=code&response_mode=query&nonce=exg16fxdjcu&claims=%7B%22id_token%22%3A%7B%22acr%22%3A%7B%22essential%22%3Atrue%2C%22values%22%3A%5B%22gold%22%5D%7D%7D%7D
----
The `claims` parameter is specified in a JSON representation:
[source]
----
claims= {
"id_token": {
"acr": {
"essential": true,
"values": ["gold"]
}
}
}
----
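For a sketch of how such a request URL can be assembled, the following example URL-encodes the `claims` JSON. The host, realm, client ID, and redirect URI are placeholders for illustration, not values from this guide.

```python
import json
from urllib.parse import urlencode

# Hypothetical endpoint and client values, used only for this example.
base = "https://auth.example.com/realms/myrealm/protocol/openid-connect/auth"

# The claims JSON shown above, requesting acr "gold" as an essential claim.
claims = {"id_token": {"acr": {"essential": True, "values": ["gold"]}}}

params = {
    "client_id": "my-client",
    "redirect_uri": "https://app.example.com/callback",
    "scope": "openid",
    "response_type": "code",
    # Compact JSON; urlencode percent-encodes it for the query string.
    "claims": json.dumps(claims, separators=(",", ":")),
}
print(f"{base}?{urlencode(params)}")
```

The encoded `claims` value matches the `%7B%22id_token%22...` form shown in the example URL above.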
The {project_name} JavaScript adapter supports constructing this JSON and sending it in the login request.
See link:{adapterguide_link_js_adapter}[Javascript adapter documentation] for more details.
You can also use the simpler `acr_values` parameter instead of the `claims` parameter to request particular levels as non-essential. This is mentioned
in the OIDC specification.
You can also configure a default level for a particular client, which is used when neither the `acr_values` parameter nor the `claims` parameter with the `acr` claim is present.
For further details, see <<_mapping-acr-to-loa-client,Client ACR configuration>>.
NOTE: To request the acr_values as text (such as `gold`) instead of a numeric value, you configure the mapping between the ACR and the LoA.
It is possible to configure it at the realm level (recommended) or at the client level. For configuration see <<_mapping-acr-to-loa-realm,ACR to LoA Mapping>>.
For more details see the https://openid.net/specs/openid-connect-core-1_0.html#acrSemantics[official OIDC specification].
*Flow logic*
The logic for the previously configured authentication flow is as follows: +
If a client requests a high authentication level, meaning Level of Authentication 2 (LoA 2), the user has to perform full two-factor authentication: Username/Password + OTP.
However, if the user already has a session in Keycloak that was logged in with username and password (LoA 1), the user is only asked for the second authentication factor (OTP).
The option *Max Age* in the condition determines how long (in seconds) the subsequent authentication level remains valid. This setting helps to decide
whether the user will be asked to present the authentication factor again during a subsequent authentication. If a particular level X is requested
by the `claims` or `acr_values` parameter and the user already authenticated with level X, but that level has expired (for example, Max Age is configured to 300 and the user authenticated 310 seconds ago),
then the user will be asked to re-authenticate with that level. However, if the level has not yet expired, the user will automatically
be considered authenticated with that level.
Using *Max Age* with the value 0 means that the particular level is valid just for this single authentication. Hence every re-authentication requesting that level
will require the user to authenticate with that level again. This is useful for operations that require higher security in the application (for example, sending a payment) and always require authentication
with the specific level.
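The expiry logic described above can be sketched roughly as follows. This is a simplified model for illustration only; the function name and data shapes are assumptions, not {project_name} internals.

```python
import time

def effective_acr(authenticated_levels, requested_level, now=None):
    """Return the requested level if it is still valid for SSO re-use, else None.

    authenticated_levels: dict mapping level -> (auth_time, max_age_seconds).
    A max_age of 0 means the level was valid only for the authentication
    in which it was obtained, so it never carries over to SSO logins.
    """
    now = time.time() if now is None else now
    valid = [
        level
        for level, (auth_time, max_age) in authenticated_levels.items()
        if max_age > 0 and now - auth_time <= max_age
    ]
    if requested_level in valid:
        return requested_level  # SSO session re-used, no prompt needed
    return None  # user must re-authenticate with the requested level

# Level 1 obtained 100 seconds ago with Max Age 300 is still valid:
print(effective_acr({1: (1000.0, 300)}, 1, now=1100.0))  # 1
# After 301 seconds it has expired, so re-authentication is needed:
print(effective_acr({1: (1000.0, 300)}, 1, now=1301.0))  # None
```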
WARNING: Note that parameters such as `claims` or `acr_values` might be changed by the user in the URL when the login request is sent from the client to {project_name} via the user's browser.
This can be mitigated if the client uses PAR (Pushed Authorization Requests), a request object, or another mechanism that prevents the user from rewriting the parameters in the URL.
Hence, after the authentication, clients are encouraged to check the ID Token to double-check that the `acr` in the token corresponds to the expected level.
If no explicit level is requested by parameters, {project_name} will require authentication with the first LoA
condition found in the authentication flow, such as the Username/Password in the preceding example. When a user was already authenticated with that level
and that level has expired, the user is not required to re-authenticate, but the `acr` in the token will have the value 0. This result is considered authentication
based solely on a `long-lived browser cookie`, as mentioned in section 2 of the OIDC Core 1.0 specification.
NOTE: A conflict situation may arise when an admin specifies several flows, sets different LoA levels for each, and assigns the flows to different clients. However, the rule is always the same: if a user has a certain level, that level is sufficient to connect to a client. It is up to the admin to make sure that the LoA is coherent.
*Example scenario*
. Max Age is configured as 300 seconds for level 1 condition.
. Login request is sent without requesting any acr. Level 1 will be used and the user needs to authenticate with username and password. The token will have `acr=1`.
. Another login request is sent after 100 seconds. The user is automatically authenticated due to the SSO and the token will return `acr=1`.
. Another login request is sent after another 201 seconds (301 seconds since the authentication in step 2). The user is automatically authenticated due to the SSO, but the token will return `acr=0` because level 1 is considered expired.
. Another login request is sent, but now it explicitly requests ACR of level 1 in the `claims` parameter. The user will be asked to re-authenticate with username/password,
and `acr=1` will then be returned in the token.
*ACR claim in the token*
The ACR claim is added to the token by the `acr loa level` protocol mapper defined in the `acr` client scope. This client scope is a realm default client scope
and hence will be added to all newly created clients in the realm.
If you do not want the `acr` claim inside tokens, or you need some custom logic for adding it, you can remove the client scope from your client.
Note that when the login request initiates a request with the `claims` parameter requesting `acr` as an `essential` claim, {project_name} will always return
one of the specified levels. If it is not able to return one of the specified levels (for example, if the requested level is unknown or higher than the conditions configured
in the authentication flow), {project_name} will throw an error.
[[_user_session_limits]]
*User session limits*
You can configure limits on the number of sessions that a user can have. Sessions can be limited per realm or per client.
To add session limits to a flow, perform the following steps.
. Click *Add execution* for the flow.
. Select *User Session Count Limiter* from the item list.
. Click *Save*.
. Click *Required* for the *User Session Count Limiter* authentication type to set its requirement to required.
. Click *Actions* for the *User Session Count Limiter*.
. Click *Config*.
. Enter an alias for this config.
. Enter the required maximum number of sessions a user can have in this realm. If a value of 0 is used, this check is disabled.
. Enter the required maximum number of sessions a user can have for the client. If a value of 0 is used, this check is disabled.
. Select the behavior that is required when the user tries to create a session after the limit is reached. Available behaviors are:
* *Deny new session* - when a new session is requested and the session limit is reached, no new sessions can be created.
* *Terminate oldest session* - when a new session is requested and the session limit has been reached, the oldest session will be removed and the new session created.
. Optionally, add a custom error message to be displayed when the limit is reached.
Note that user session limits should be added to your bound *Browser Flow*, *Direct Grant Flow*, *Reset Credentials* flow, and also to any *Post Broker Login Flow*.
Currently, the administrator is responsible for maintaining consistency between the different configurations.
Note also that the user session limit feature is not available for CIBA.
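The two behaviors can be sketched with a small model. This is an illustrative approximation, not the actual `User Session Count Limiter` implementation; the names and data structures are assumptions.

```python
from collections import OrderedDict

DENY_NEW_SESSION = "deny"
TERMINATE_OLDEST = "terminate_oldest"

def add_session(sessions, session_id, limit, behavior):
    """Apply a per-user session limit (a limit of 0 disables the check).

    sessions: OrderedDict of session_id -> True, oldest session first.
    Returns True if the new session was created, False if it was denied.
    """
    if limit == 0 or len(sessions) < limit:
        sessions[session_id] = True
        return True
    if behavior == TERMINATE_OLDEST:
        sessions.popitem(last=False)  # evict the oldest session
        sessions[session_id] = True
        return True
    return False  # deny the new session, limit reached

sessions = OrderedDict([("s1", True), ("s2", True)])
# With the limit reached, "deny" refuses the new session:
print(add_session(sessions, "s3", limit=2, behavior=DENY_NEW_SESSION))   # False
# "Terminate oldest" evicts s1 and admits s3:
print(add_session(sessions, "s3", limit=2, behavior=TERMINATE_OLDEST))   # True
print(list(sessions))  # ['s2', 's3']
```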
ifeval::[{project_community}==true]
=== Script Authenticator
Ability to upload scripts through the Admin Console and REST endpoints is deprecated.
For more details see link:{developerguide_jsproviders_link}[{developerguide_jsproviders_name}].
endif::[]
View file
@ -1,284 +0,0 @@
[id="webauthn_{context}"]
=== W3C Web Authentication (WebAuthn)
{project_name} provides support for https://www.w3.org/TR/webauthn/[W3C Web Authentication (WebAuthn)]. {project_name} works as a WebAuthn's https://www.w3.org/TR/webauthn/#webauthn-relying-party[Relying Party (RP)].
[NOTE]
====
The success of WebAuthn operations depends on the user's WebAuthn-supporting authenticator, browser, and platform. Make sure your authenticator, browser, and platform support the WebAuthn specification.
====
==== Setup
The setup procedure for WebAuthn 2FA support is as follows:
[[_webauthn-register]]
===== Enable WebAuthn authenticator registration
. Click *Authentication* in the menu.
. Click the *Required Actions* tab.
. Click *Register*.
. Click the *Required Action* drop-down list.
. Click *Webauthn Register*.
. Click *Ok*.
Mark the *Default Action* checkbox if you want all new users to be required to register their WebAuthn credentials.
[[_webauthn-authenticator-setup]]
===== Adding WebAuthn authentication to a browser flow
. Click *Authentication* in the menu.
. Click the *Browser* flow.
. Click *Copy* to make a copy of the built-in *Browser* flow.
. Enter a name for the copy.
. Click *Ok*.
. Click the _Actions_ link for *WebAuthn Browser Browser - Conditional OTP* and click _Delete_.
. Click *Delete*.
If you require WebAuthn for all users:
. Click on the _Actions_ link for *WebAuthn Browser Forms*.
. Click *Add execution*.
. Click the *Provider* drop-down list.
. Click *WebAuthn Authenticator*.
. Click *Save*.
. Click _REQUIRED_ for *WebAuthn Authenticator*
+
image:images/webauthn-browser-flow-required.png[]
+
. Click the *Bindings* tab.
. Click the *Browser Flow* drop-down list.
. Click *WebAuthn Browser*.
. Click *Save*.
[NOTE]
====
If a user does not have WebAuthn credentials, the user must register WebAuthn credentials.
====
Users can log in with WebAuthn only if they have a registered WebAuthn credential. So instead of adding the *WebAuthn Authenticator* execution, you can:
.Procedure
. Click on the _Actions_ link for *WebAuthn Browser Forms*.
. Click *Add flow*.
. Enter "Conditional 2FA" for the _Alias_ field.
. Click *Save*.
. Click _CONDITIONAL_ for *Conditional 2FA*
. Click on the _Actions_ link for *Conditional 2FA*.
. Click *Add execution*.
. Click the *Provider* drop-down list.
. Click *Condition - User Configured*.
. Click *Save*.
. Click _REQUIRED_ for *Conditional 2FA*
. Click on the _Actions_ link for *Conditional 2FA*.
. Click *Add execution*.
. Click the *Provider* drop-down list.
. Click *WebAuthn Authenticator*.
. Click *Save*.
. Click _ALTERNATIVE_ for *Conditional 2FA*
+
image:images/webauthn-browser-flow-conditional.png[]
The user can choose between using WebAuthn and OTP for the second factor:
.Procedure
. Click the _Actions_ link for *Conditional 2FA*.
. Click *Add execution*.
. Click the *Provider* drop-down list.
. Click *OTP Form*.
. Click *Save*.
. Click _ALTERNATIVE_ for *Conditional 2FA*
+
image:images/webauthn-browser-flow-conditional-with-OTP.png[]
==== Authenticate with WebAuthn authenticator
After registering a WebAuthn authenticator, the user carries out the following operations:
* Open the login form. The user must authenticate with a username and password.
* The user's browser asks the user to authenticate by using their WebAuthn authenticator.
==== Managing WebAuthn as an administrator
===== Managing credentials
{project_name} manages WebAuthn credentials similarly to other credentials from xref:ref-user-credentials_{context}[User credential management]:
* Administrators can assign users a required action to create a WebAuthn credential by selecting *Webauthn Register* from the *Reset Actions* list.
* Administrators can delete a WebAuthn credential by clicking *Delete*.
* Administrators can view the credential's data, such as the AAGUID, by selecting *Show data...*.
* Administrators can set a label for the credential by setting a value in the *User Label* field and saving the data.
[[_webauthn-policy]]
===== Managing policy
Administrators can configure WebAuthn related operations as *WebAuthn Policy* per realm.
.Procedure
. Click *Authentication* in the menu.
. Click the *WebAuthn Policy* tab.
. Configure the items within the policy (see description below).
. Click *Save*.
The configurable items and their description are as follows:
|===
|Configuration|Description
|Relying Party Entity Name
|The readable server name as a WebAuthn Relying Party. This item is mandatory and applies to the registration of the WebAuthn authenticator. The default setting is "keycloak". For more details, see https://www.w3.org/TR/webauthn/#dictionary-pkcredentialentity[WebAuthn Specification].
|Signature Algorithms
|The algorithms telling the WebAuthn authenticator which signature algorithms to use for the https://www.w3.org/TR/webauthn/#iface-pkcredential[Public Key Credential]. {project_name} uses the Public Key Credential to sign and verify https://www.w3.org/TR/webauthn/#authentication-assertion[Authentication Assertions]. If no algorithms exist, the default https://datatracker.ietf.org/doc/html/rfc8152#section-8.1[ES256] is adapted. ES256 is an optional configuration item applying to the registration of WebAuthn authenticators. For more details, see https://www.w3.org/TR/webauthn/#dictdef-publickeycredentialparameters[WebAuthn Specification].
|Relying Party ID
|The ID of a WebAuthn Relying Party that determines the scope of https://www.w3.org/TR/webauthn/#public-key-credential[Public Key Credentials]. The ID must be the origin's effective domain. This ID is an optional configuration item applied to the registration of WebAuthn authenticators. If this entry is blank, {project_name} adapts the host part of {project_name}'s base URL. For more details, see https://www.w3.org/TR/webauthn/[WebAuthn Specification].
|Attestation Conveyance Preference
|The WebAuthn API implementation on the browser (https://www.w3.org/TR/webauthn/#webauthn-client[WebAuthn Client]) is the preferential method to generate Attestation statements. This preference is an optional configuration item applying to the registration of the WebAuthn authenticator. If no option exists, its behavior is the same as selecting "none". For more details, see https://www.w3.org/TR/webauthn/[WebAuthn Specification].
|Authenticator Attachment
|The acceptable attachment pattern of a WebAuthn authenticator for the WebAuthn Client. This pattern is an optional configuration item applying to the registration of the WebAuthn authenticator. For more details, see https://www.w3.org/TR/webauthn/#enumdef-authenticatorattachment[WebAuthn Specification].
|Require Resident Key
|The option requiring that the WebAuthn authenticator generates the Public Key Credential as https://www.w3.org/TR/webauthn/[Client-side-resident Public Key Credential Source]. This option applies to the registration of the WebAuthn authenticator. If left blank, its behavior is the same as selecting "No". For more details, see https://www.w3.org/TR/webauthn/#dom-authenticatorselectioncriteria-requireresidentkey[WebAuthn Specification].
|User Verification Requirement
|The option requiring that the WebAuthn authenticator confirms the verification of a user. This is an optional configuration item applying to the registration of a WebAuthn authenticator and the authentication of a user by a WebAuthn authenticator. If no option exists, its behavior is the same as selecting "preferred". For more details, see https://www.w3.org/TR/webauthn/#dom-authenticatorselectioncriteria-userverification[WebAuthn Specification for registering a WebAuthn authenticator] and https://www.w3.org/TR/webauthn/#dom-publickeycredentialrequestoptions-userverification[WebAuthn Specification for authenticating the user by a WebAuthn authenticator].
|Timeout
|The timeout value, in seconds, for registering a WebAuthn authenticator and authenticating the user by using a WebAuthn authenticator. If set to zero, its behavior depends on the WebAuthn authenticator's implementation. The default value is 0. For more details, see https://www.w3.org/TR/webauthn/#dom-publickeycredentialcreationoptions-timeout[WebAuthn Specification for registering a WebAuthn authenticator] and https://www.w3.org/TR/webauthn/#dom-publickeycredentialrequestoptions-timeout[WebAuthn Specification for authenticating the user by a WebAuthn authenticator].
|Avoid Same Authenticator Registration
|If enabled, {project_name} cannot re-register an already registered WebAuthn authenticator.
|Acceptable AAGUIDs
|The allow list of AAGUIDs; a WebAuthn authenticator can be registered only if its AAGUID is in this list.
|===
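For administrators who script realm configuration, the same policy can in principle be set through the Admin CLI by updating the realm representation. The following is a sketch, not a verified recipe: the `webAuthnPolicy*` attribute names, the realm name, and the server URL are assumptions taken from an exported realm JSON, so check them against your own realm export before use.

```shell
# Sketch: configure the realm-level WebAuthn policy with the Admin CLI.
# Assumptions: a server on localhost:8080, a realm named "myrealm", and the
# webAuthnPolicy* attribute names as they appear in an exported realm JSON.
./kcadm.sh config credentials --server http://localhost:8080 \
    --realm master --user admin

./kcadm.sh update realms/myrealm \
    -s webAuthnPolicyRpEntityName=keycloak \
    -s 'webAuthnPolicySignatureAlgorithms=["ES256"]' \
    -s webAuthnPolicyAttestationConveyancePreference=none \
    -s webAuthnPolicyUserVerificationRequirement=preferred \
    -s webAuthnPolicyAvoidSameAuthenticatorRegister=true
```

Exporting the realm before and after the update (`kcadm.sh get realms/myrealm`) is a convenient way to confirm the attribute names on your version.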
==== Attestation statement verification
When registering a WebAuthn authenticator, {project_name} verifies the trustworthiness of the attestation statement generated by the WebAuthn authenticator. {project_name} requires the trust anchor's certificates for this. {project_name} uses the link:{installguide_truststore_link}[{installguide_truststore_name}], so you must import these certificates into it in advance.
To omit this validation, disable this truststore or set the WebAuthn policy's configuration item "Attestation Conveyance Preference" to "none".
==== Managing WebAuthn credentials as a user
===== Register WebAuthn authenticator
The appropriate method to register a WebAuthn authenticator depends on whether the user has already registered an account on {project_name}.
===== New user
If the *WebAuthn Register* required action is *Default Action* in a realm, new users must set up the WebAuthn security key after their first login.
.Procedure
. Open the login form.
. Click *Register*.
. Fill in the items on the form.
. Click *Register*.
After successfully registering, the browser asks the user to enter the text of their WebAuthn authenticator's label.
===== Existing user
If `WebAuthn Authenticator` is set up as required as shown in the first example, then when existing users try to log in, they are required to register their WebAuthn authenticator automatically:
.Procedure
. Open the login form.
. Enter the items on the form.
. Click *Save*.
. Click *Login*.
After successful registration, the user's browser asks the user to enter the text of their WebAuthn authenticator's label.
[[_webauthn_passwordless]]
==== Passwordless WebAuthn together with Two-Factor
{project_name} uses WebAuthn for two-factor authentication, but you can use WebAuthn as the first-factor authentication. In this case, users with `passwordless` WebAuthn credentials can authenticate to {project_name} without a password. {project_name} can use WebAuthn as both the passwordless and two-factor authentication mechanism in the context of a realm and a single authentication flow.
An administrator typically requires that Security Keys registered by users for the WebAuthn passwordless authentication meet different requirements. For example, the security keys may require users to authenticate to the security key using a PIN, or the security key attests with a stronger certificate authority.
Because of this, {project_name} permits administrators to configure a separate `WebAuthn Passwordless Policy`. There is a separate required action of type `Webauthn Register Passwordless` and a separate authenticator of type `WebAuthn Passwordless Authenticator`.
===== Setup
Set up WebAuthn passwordless support as follows:

.Procedure
. Register a new required action for WebAuthn passwordless support. Use the steps described in <<_webauthn-register, Enable WebAuthn Authenticator Registration>>. Register the `Webauthn Register Passwordless` action.
. Configure the policy. You can use the steps and configuration options described in <<_webauthn-policy, Managing Policy>>. Perform the configuration in the Admin Console in the tab *WebAuthn Passwordless Policy*. Typically the requirements for the security key will be stronger than for the two-factor policy. For example, you can set the *User Verification Requirement* to *Required* when you configure the passwordless policy.
. Configure the authentication flow. Use the *WebAuthn Browser* flow described in <<_webauthn-authenticator-setup, Adding WebAuthn Authentication to a Browser Flow>>. Configure the flow as follows:
+
** The *WebAuthn Browser Forms* subflow contains *Username Form* as the first authenticator. Delete the default *Username Password Form* authenticator and add the *Username Form* authenticator. This action requires the user to provide a username as the first step.
+
** There will be a required subflow, which can be named *Passwordless Or Two-factor*, for example. This subflow indicates the user can authenticate with Passwordless WebAuthn credential or with Two-factor authentication.
+
** The flow contains *WebAuthn Passwordless Authenticator* as the first alternative.
+
** The second alternative will be a subflow named *Password And Two-factor Webauthn*, for example. This subflow contains a *Password Form* and a *WebAuthn Authenticator*.
The final configuration of the flow looks similar to this:
.PasswordLess flow
image:images/webauthn-passwordless-flow.png[PasswordLess flow]
You can now add *WebAuthn Register Passwordless* as the required action to a user, already known to {project_name}, to test this. During the first authentication, the user must use the password and second-factor WebAuthn credential. The user does not need to provide the password and second-factor WebAuthn credential if they use the WebAuthn Passwordless credential.
[[_webauthn_loginless]]
==== LoginLess WebAuthn
{project_name} uses WebAuthn for two-factor authentication, but you can use WebAuthn as the first-factor authentication. In this case, users with `passwordless` WebAuthn credentials can authenticate to {project_name} without submitting a login or a password. {project_name} can use WebAuthn as both the loginless/passwordless and two-factor authentication mechanism in the context of a realm.
An administrator typically requires that security keys registered by users for WebAuthn loginless authentication meet different requirements. Loginless authentication requires users to authenticate to the security key (for example, by using a PIN code or a fingerprint) and that the cryptographic keys associated with the loginless credential are stored physically on the security key. Not all security keys meet those requirements. Check with your security key vendor whether your device supports 'user verification' and 'resident key'. See <<_webauthn-supported-keys, Supported Security Keys>>.
{project_name} permits administrators to configure the `WebAuthn Passwordless Policy` in a way that allows loginless authentication. Note that loginless authentication can only be configured with `WebAuthn Passwordless Policy` and with `WebAuthn Passwordless` credentials. WebAuthn loginless authentication and WebAuthn passwordless authentication can be configured on the same realm but will share the same policy `WebAuthn Passwordless Policy`.
===== Setup
Set up WebAuthn Loginless support as follows:

.Procedure
. Register a new required action for WebAuthn passwordless support. Use the steps described in <<_webauthn-register, Enable WebAuthn Authenticator Registration>>. Register the `Webauthn Register Passwordless` action.
. Configure the `WebAuthn Passwordless Policy`. Perform the configuration in the Admin Console, `Authentication` section, in the tab `WebAuthn Passwordless Policy`. You must set *User Verification Requirement* to *required* and *Require Resident Key* to *Yes* when you configure the policy for the loginless scenario. Note that because there is no dedicated loginless policy, it is not possible to mix authentication scenarios with user verification=no/resident key=no and loginless scenarios (user verification=yes/resident key=yes). Also note that storage capacity is usually very limited on security keys, so you will not be able to store many resident keys on a security key.
. Configure the authentication flow. Create a new authentication flow, add the "WebAuthn Passwordless" execution, and set the Requirement setting of the execution to *Required*.
The final configuration of the flow looks similar to this:
.LoginLess flow
image:images/webauthn-loginless-flow.png[LoginLess flow]
You can now add the required action `WebAuthn Register Passwordless` to a user, already known to {project_name}, to test this. The user with the required action configured will have to authenticate (with a username/password for example) and will then be prompted to register a security key to be used for loginless authentication.
===== Vendor specific remarks
====== Compatibility check list
Loginless authentication with {project_name} requires the security key to support the following features:
** FIDO2 compliance: not to be confused with FIDO/U2F
** User verification: the ability of the security key to authenticate the user (prevents someone who finds your security key from authenticating loginless and passwordless)
** Resident key: the ability of the security key to store the login and the cryptographic keys associated with the client application
====== Windows Hello
To use Windows Hello based credentials to authenticate against {project_name}, configure the *Signature Algorithms* setting of the `WebAuthn Passwordless Policy` to include the *RS256* value. Note that some browsers do not allow access to platform security keys (such as Windows Hello) inside private windows.
[[_webauthn-supported-keys]]
====== Supported security keys
The following security keys have been successfully tested for loginless authentication with {project_name}:
* Windows Hello (Windows 10 21H1/21H2)
* Yubico Yubikey 5 NFC
* Feitian ePass FIDO-NFC


@ -150,7 +150,7 @@ The configurable items and their description are as follows:
==== Attestation statement verification
When registering a WebAuthn authenticator, {project_name} verifies the trustworthiness of the attestation statement generated by the WebAuthn authenticator. {project_name} requires the trust anchor's certificates to be imported into the link:https://www.keycloak.org/server/keycloak-truststore[truststore].
To omit this validation, disable this truststore or set the WebAuthn policy's configuration item "Attestation Conveyance Preference" to "none".


@ -72,94 +72,6 @@ The certificate identity mapping can map the extracted user identity to an exist
* Certificate KeyUsage validation.
* Certificate ExtendedKeyUsage validation.
ifeval::["{kc_dist}" == "wildfly"]
==== Enable X.509 client certificate user authentication
The following sections describe how to configure {appserver_name}/Undertow and the {project_name} Server to enable X.509 client certificate authentication.
[[_enable-mtls-wildfly]]
===== Enable mutual SSL in {appserver_name}
See link:https://docs.wildfly.org/24/Admin_Guide.html#enable-ssl[Enable SSL] for the instructions to enable SSL in {appserver_name}.
* Open {project_dirref}/standalone/configuration/standalone.xml and add a new realm:
```xml
<security-realms>
<security-realm name="ssl-realm">
<server-identities>
<ssl>
<keystore path="servercert.jks"
relative-to="jboss.server.config.dir"
keystore-password="servercert password"/>
</ssl>
</server-identities>
<authentication>
<truststore path="truststore.jks"
relative-to="jboss.server.config.dir"
keystore-password="truststore password"/>
</authentication>
</security-realm>
</security-realms>
```
`ssl/keystore`::
The `ssl` element contains the `keystore` element that contains the details to load the server public key pair from a JKS keystore.
`ssl/keystore/path`::
The path to the JKS keystore.
`ssl/keystore/relative-to`::
The path that the keystore path is relative to.
`ssl/keystore/keystore-password`::
The password to open the keystore.
`ssl/keystore/alias` (optional)::
The alias of the entry in the keystore. Set if the keystore contains multiple entries.
`ssl/keystore/key-password` (optional)::
The private key password, if different from the keystore password.
`authentication/truststore`::
Defines how to load a trust store to verify the certificate presented by the remote side of the inbound/outgoing connection. Typically, the truststore contains a collection of trusted CA certificates.
`authentication/truststore/path`::
The path to the JKS keystore containing the certificates of the trusted certificate authorities.
`authentication/truststore/relative-to`::
The path that the truststore path is relative to.
`authentication/truststore/keystore-password`::
The password to open the truststore.
===== Enable HTTPS listener
See link:https://docs.wildfly.org/24/Admin_Guide.html#https-listener[HTTPS Listener] for the instructions to enable HTTPS in WildFly.
* Add the <https-listener> element.
[source,xml,subs="attributes+"]
----
<subsystem xmlns="{subsystem_undertow_xml_urn}">
....
<server name="default-server">
<https-listener name="default"
socket-binding="https"
security-realm="ssl-realm"
verify-client="REQUESTED"/>
</server>
</subsystem>
----
`https-listener/security-realm`::
This value must match the name of the realm from the previous section.
`https-listener/verify-client`::
If set to *REQUESTED*, the server optionally asks for a client certificate.
If set to *REQUIRED*, the server refuses inbound connections if no client certificate has been provided.
endif::[]
[[_browser_flow]]
==== Adding X.509 client certificate authentication to browser flows
@ -317,158 +229,3 @@ image:{project_images}/x509-directgrant-execution.png[X509 Direct Grant Executio
+
.X509 direct grant flow bindings
image:{project_images}/x509-directgrant-flow-bindings.png[X509 Direct Grant Flow Bindings]
==== Client certificate lookup
When the {project_name} server receives a direct HTTP request, the {appserver_name} undertow subsystem establishes an SSL handshake and extracts the client certificate. The {appserver_name} saves the client certificate to the `javax.servlet.request.X509Certificate` attribute of the HTTP request, as specified in the servlet specification. The {project_name} X509 authenticator can look up the certificate from this attribute.
However, when the {project_name} server listens to HTTP requests behind a load balancer or reverse proxy, the proxy server may extract the client certificate and establish a mutual SSL connection. A reverse proxy generally puts the authenticated client certificate in the HTTP header of the underlying request. The proxy forwards the request to the back end {project_name} server. In this case, {project_name} must look up the X.509 certificate chain from the HTTP headers rather than the attribute of the HTTP request.
If {project_name} is behind a reverse proxy, you generally need to configure the alternative provider of the `x509cert-lookup` SPI in {project_dirref}/standalone/configuration/standalone.xml. With the `default` provider looking up the HTTP header certificate, two additional built-in providers exist: `haproxy` and `apache`.
===== HAProxy certificate lookup provider
You use this provider when your {project_name} server is behind an HAProxy reverse proxy. Use the following configuration for your server:
[source,xml]
----
<spi name="x509cert-lookup">
<default-provider>haproxy</default-provider>
<provider name="haproxy" enabled="true">
<properties>
<property name="sslClientCert" value="SSL_CLIENT_CERT"/>
<property name="sslCertChainPrefix" value="CERT_CHAIN"/>
<property name="certificateChainLength" value="10"/>
</properties>
</provider>
</spi>
----
In this example configuration, the client certificate is looked up from the HTTP header, `SSL_CLIENT_CERT`, and the other certificates from its chain are looked up from HTTP headers such as `CERT_CHAIN_0` through `CERT_CHAIN_9`. The attribute `certificateChainLength` is the maximum length of the chain so the last attribute is `CERT_CHAIN_9`.
Consult the HAProxy documentation for the details of configuring the HTTP Headers for the client certificate and client certificate chain.
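For orientation, an HAProxy frontend that terminates TLS and forwards the client certificate might look roughly like the following. This is an untested illustration: `ssl_c_der` is HAProxy's sample fetch for the DER-encoded client certificate, the file paths and backend name are placeholders, and forwarding the intermediate chain as `CERT_CHAIN_*` headers depends on your HAProxy version, so verify the header contents against what the provider expects.

```haproxy
frontend keycloak_front
    # Terminate TLS and request (but do not require) a client certificate.
    bind :443 ssl crt /etc/haproxy/server.pem ca-file /etc/haproxy/client-ca.pem verify optional
    # Forward the leaf client certificate in the header the provider reads.
    http-request set-header SSL_CLIENT_CERT %[ssl_c_der,base64]
    default_backend keycloak_back
```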
===== Apache certificate lookup provider
You can use this provider when your {project_name} server is behind an Apache reverse proxy. Use the following configuration for your server:
[source,xml]
----
<spi name="x509cert-lookup">
<default-provider>apache</default-provider>
<provider name="apache" enabled="true">
<properties>
<property name="sslClientCert" value="SSL_CLIENT_CERT"/>
<property name="sslCertChainPrefix" value="CERT_CHAIN"/>
<property name="certificateChainLength" value="10"/>
</properties>
</provider>
</spi>
----
This configuration is the same as the `haproxy` provider. Consult the Apache documentation on link:https://httpd.apache.org/docs/current/mod/mod_ssl.html[mod_ssl] and link:https://httpd.apache.org/docs/current/mod/mod_headers.html[mod_headers] for details on how the HTTP Headers for the client certificate and client certificate chain are configured.
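For illustration, with Apache's `mod_ssl` and `mod_headers` the expected headers can be populated roughly as follows. This is a sketch rather than a tested configuration: `SSLOptions +ExportCertData` exposes the `SSL_CLIENT_CERT` and `SSL_CLIENT_CERT_CHAIN_n` variables that the `RequestHeader` lines copy, and the backend URL is a placeholder.

```apache
<VirtualHost *:443>
    SSLEngine on
    SSLVerifyClient optional
    SSLOptions +ExportCertData +StdEnvVars
    # Copy the leaf certificate and the first chain certificate into the
    # headers that the "apache" lookup provider is configured to read.
    RequestHeader set SSL_CLIENT_CERT "%{SSL_CLIENT_CERT}s"
    RequestHeader set CERT_CHAIN_0 "%{SSL_CLIENT_CERT_CHAIN_0}s"
    ProxyPass        / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>
```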
===== NGINX certificate lookup provider
You can use this provider when your {project_name} server is behind an NGINX reverse proxy. Use the following configuration for your server:
[source,xml]
----
<spi name="x509cert-lookup">
<default-provider>nginx</default-provider>
<provider name="nginx" enabled="true">
<properties>
<property name="sslClientCert" value="ssl-client-cert"/>
<property name="sslCertChainPrefix" value="USELESS"/>
<property name="certificateChainLength" value="2"/>
</properties>
</provider>
</spi>
----
[NOTE]
====
The NGINX link:http://nginx.org/en/docs/http/ngx_http_ssl_module.html#variables[SSL/TLS module] does not expose the client certificate chain. {project_name}'s NGINX certificate lookup provider rebuilds it by using the link:{installguide_truststore_link}[{installguide_truststore_name}]. Populate the {project_name} truststore by using the keytool CLI with all root and intermediate CA's for rebuilding client certificate chain.
====
Consult the NGINX documentation for the details of configuring the HTTP Headers for the client certificate.
Example of NGINX configuration file :
[source,txt]
----
...
server {
...
ssl_client_certificate trusted-ca-list-for-client-auth.pem;
ssl_verify_client optional_no_ca;
ssl_verify_depth 2;
...
location / {
...
proxy_set_header ssl-client-cert $ssl_client_escaped_cert;
...
}
...
}
----
[NOTE]
====
All certificates in trusted-ca-list-for-client-auth.pem must be added to link:{installguide_truststore_link}[{installguide_truststore_name}].
====
===== Other reverse proxy implementations
{project_name} does not have built-in support for other reverse proxy implementations. However, you can make other reverse proxies behave in a similar way to `apache` or `haproxy`. If none of these work, create your implementation of the `org.keycloak.services.x509.X509ClientCertificateLookupFactory` and `org.keycloak.services.x509.X509ClientCertificateLookup` providers. See the link:{developerguide_link}[{developerguide_name}] for details on how to add your provider.
ifeval::["{kc_dist}" == "wildfly"]
==== Troubleshooting
Dumping HTTP headers::
To view what the reverse proxy sends to Keycloak, enable the `RequestDumpingHandler` Undertow filter and consult the `server.log` file.
Enable TRACE logging under the logging subsystem::
[source,xml]
----
...
<profile>
<subsystem xmlns="urn:jboss:domain:logging:8.0">
...
<logger category="org.keycloak.authentication.authenticators.x509">
<level name="TRACE"/>
</logger>
<logger category="org.keycloak.services.x509">
<level name="TRACE"/>
</logger>
----
[WARNING]
====
Do not use RequestDumpingHandler or TRACE logging in production.
====
Direct Grant authentication with X.509::
You can use the following template to request a token by using the Resource Owner Password Credentials Grant:
[source,subs=+attributes]
----
$ curl https://[host][:port]{kc_realms_path}/master/protocol/openid-connect/token \
      --insecure \
      --data "grant_type=password&scope=openid profile&username=&password=&client_id=CLIENT_ID&client_secret=CLIENT_SECRET" \
      -E /path/to/client_cert.crt \
      --key /path/to/client_cert.key
----
`[host][:port]`::
The host and the port number of the remote {project_name} server.
`CLIENT_ID`::
The client id.
`CLIENT_SECRET`::
For confidential clients, a client secret.
`client_cert.crt`::
A public key certificate to verify the identity of the client in mutual SSL authentication. The certificate must be in PEM format.
`client_cert.key`::
A private key in the public key pair. This key must be in PEM format.
endif::[]


@ -4,17 +4,9 @@
[role="_abstract"]
xref:con-oidc_{context}[OpenID Connect] is the recommended protocol to secure applications. It was designed from the ground up to be web friendly and it works best with HTML5/JavaScript applications.
ifeval::[{project_community}==true]
include::oidc/proc-creating-oidc-client.adoc[leveloffset=+1]
include::oidc/con-basic-settings.adoc[leveloffset=+1]
include::oidc/con-advanced-settings.adoc[leveloffset=+1]
endif::[]
ifeval::[{project_product}==true]
include::oidc/proc-creating-oidc-client-prod.adoc[leveloffset=+1]
include::oidc/con-basic-settings-prod.adoc[leveloffset=+1]
include::oidc/con-advanced-settings-prod.adoc[leveloffset=+1]
endif::[]
include::oidc/con-confidential-client-credentials.adoc[leveloffset=+1]
