KEYCLOAK-15291 Addressing review comments. (#1201)

Co-authored-by: Stian Thorgersen <stianst@gmail.com>
andymunro 2021-11-10 06:39:43 -05:00 committed by GitHub
parent ca1620abf7
commit cb730556a7
45 changed files with 332 additions and 251 deletions

View file

@ -3,7 +3,7 @@
There are multiple ways you can logout from a web application.
For Jakarta EE servlet containers, you can call `HttpServletRequest.logout()`. For any other browser application, you can point
the browser at any url of your web application that has a security constraint and pass in a query parameter GLO, i.e. `$$http://myapp?GLO=true$$`.
This will log you out if you have an SSO session with your browser.
[[_saml_logout_in_cluster]]
===== Logout in Clustered Environment
@ -49,9 +49,9 @@ Using distributed cache may lead to results where the SAML logout request would
to SAML session index to HTTP session mapping which would lead to unsuccessful logout.
[[_saml_logout_in_cross_dc]]
===== Logout in Cross DC Scenario
===== Logout in cross-site scenario
The cross DC scenario only applies to WildFly 10 and higher, and EAP 7 and higher.
The cross-site scenario only applies to WildFly 10 and higher, and EAP 7 and higher.
Special handling is needed for handling sessions that span multiple data centers. Imagine the following scenario:

View file

@ -1,6 +1,6 @@
[[cache-configuration]]
== Server Cache Configuration
== Server cache configuration
{project_name} has two types of caches. One type of cache sits in front of the database to decrease load on the DB
and to decrease overall response times by keeping data in memory. Realm, client, role, and user metadata is kept in this type of cache.

View file

@ -1,7 +1,16 @@
=== Clearing Caches at Runtime
=== Clearing cache at runtime
To clear the realm or user cache, go to the {project_name} admin console Realm Settings->Cache Config page.
On this page you can clear the realm cache, the user cache or cache of external public keys.
You can clear the realm cache, the user cache, or the cache of external public keys.
.Procedure
. Log into the Admin Console.
. Click *Realm Settings*.
. Click the *Cache* tab.
. Clear the realm cache, the user cache, or the cache of external public keys.
NOTE: The cache will be cleared for all realms!

View file

@ -1,12 +1,15 @@
=== Disabling Caching
=== Disabling caching
To disable the realm or user cache, you must edit the `standalone.xml`, `standalone-ha.xml`,
or `domain.xml` file in your distribution. The location of this file
depends on your <<_operating-mode, operating mode>>.
Here's what the config looks like initially.
You can disable the realm or user cache.
.Procedure
. Edit the `standalone.xml`, `standalone-ha.xml`, or `domain.xml` file in your distribution.
+
The location of this file depends on your <<_operating-mode, operating mode>>.
Here is a sample config file.
+
[source,xml]
----
@ -20,7 +23,8 @@ Here's what the config looks like initially.
----
To disable the cache set the `enabled` attribute to false for the cache you want to disable. You must reboot your
server for this change to take effect.
. Set the `enabled` attribute to false for the cache you want to disable.
. Reboot your server for this change to take effect.
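For reference, a minimal sketch of a disabled user cache entry in the `keycloak-server` subsystem; the surrounding elements and other SPI entries are omitted here:

[source,xml]
----
<spi name="userCache">
    <provider name="default" enabled="false"/>
</spi>
----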

View file

@ -1,6 +1,6 @@
[[_eviction]]
=== Eviction and Expiration
=== Eviction and expiration
There are multiple different caches configured for {project_name}.
There is a realm cache that holds information about secured applications, general security data, and configuration options.

View file

@ -1,6 +1,6 @@
[[_replication]]
=== Replication and Failover
=== Replication and failover
There are caches like `sessions`, `authenticationSessions`, `offlineSessions`, `loginFailures` and a few others (See <<_eviction>> for more details),
which are configured as distributed caches when using a clustered setup. Entries are

View file

@ -1,17 +1,14 @@
[[_clustering]]
== Clustering
== Configuring {project_name} to run in a cluster
This section covers configuring {project_name} to run in a cluster. There's a number
of things you have to do when setting up a cluster, specifically:
To configure {project_name} to run in a cluster, you perform these actions:
* <<_operating-mode,Pick an operation mode>>
* <<_database,Configure a shared external database>>
* Set up a load balancer
* Supply a private network that supports IP multicast
Picking an operation mode and configuring a shared database have been discussed earlier in this guide. In this chapter
we'll discuss setting up a load balancer and supplying a private network. We'll also discuss some issues that you need
to be aware of when booting up a host in the cluster.
Picking an operation mode and configuring a shared database have been discussed earlier in this guide. This chapter describes setting up a load balancer and supplying a private network as well as booting up a host in the cluster.
NOTE: It is possible to cluster {project_name} without IP Multicast, but this topic is beyond the scope of this guide. For more information, see the link:{appserver_jgroups_link}[JGroups] chapter of the _{appserver_jgroups_name}_.

View file

@ -1,5 +1,5 @@
=== Booting the Cluster
=== Booting the cluster
Booting {project_name} in a cluster depends on your <<_operating-mode, operating mode>>

View file

@ -1,5 +1,5 @@
=== Clustering Example
=== Clustering example
{project_name} comes with an out-of-the-box clustering demo that leverages domain mode. Review the
<<_clustered-domain-example, Clustered Domain Example>> chapter for more details.

View file

@ -1,5 +1,5 @@
[[_setting-up-a-load-balancer-or-proxy]]
=== Setting Up a Load Balancer or Proxy
=== Setting up a load balancer or proxy
This section discusses a number of things you need to configure before you can put a reverse proxy or load balancer
in front of your clustered {project_name} deployment. It also covers configuring the built-in load balancer that
@ -11,7 +11,7 @@ The following diagram illustrates the use of a load balancer. In this example, t
.Example Load Balancer Diagram
image:{project_images}/load_balancer.png[]
==== Identifying Client IP Addresses
==== Identifying client IP addresses
A few features in {project_name} rely on the fact that the remote
address of the HTTP client connecting to the authentication server is the real IP address of the client machine. Examples include:
@ -87,9 +87,9 @@ pull this information from the AJP packets.
</subsystem>
----
==== Enable HTTPS/SSL with a Reverse Proxy
==== Enabling HTTPS/SSL with a reverse proxy
Assuming that your reverse proxy doesn't use port 8443 for SSL you also need to configure what port HTTPS traffic is redirected to.
Assuming that your reverse proxy doesn't use port 8443 for SSL, you also need to configure to what port the HTTPS traffic is redirected.
[source,xml,subs="attributes+"]
----
<subsystem xmlns="{subsystem_undertow_xml_urn}">
@ -100,14 +100,15 @@ Assuming that your reverse proxy doesn't use port 8443 for SSL you also need to
</subsystem>
----
Add the `redirect-socket` attribute to the `http-listener` element. The value should be `proxy-https` which points to a
.Procedure
. Add the `redirect-socket` attribute to the `http-listener` element. The value should be `proxy-https`, which points to a
socket binding that you also need to define.
Then add a new `socket-binding` element to the `socket-binding-group` element:
. Add a new `socket-binding` element to the `socket-binding-group` element:
+
[source,xml]
----
<socket-binding-group name="standard-sockets" default-interface="public"
port-offset="${jboss.socket.binding.port-offset:0}">
...
@ -116,27 +117,36 @@ Then add a new `socket-binding` element to the `socket-binding-group` element:
</socket-binding-group>
----
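As a sketch, the new binding can look like the following, assuming your reverse proxy accepts HTTPS traffic on port 443:

[source,xml]
----
<socket-binding name="proxy-https" port="443"/>
----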
==== Verify Configuration
==== Verifying the configuration
You can verify the reverse proxy or load balancer configuration by opening the path `/auth/realms/master/.well-known/openid-configuration`
through the reverse proxy. For example if the reverse proxy address is `\https://acme.com/` then open the URL
You can verify the reverse proxy or load balancer configuration.
.Procedure
. Open the path `/auth/realms/master/.well-known/openid-configuration`
through the reverse proxy.
+
For example, if the reverse proxy address is `\https://acme.com/`, then open the URL
`\https://acme.com/auth/realms/master/.well-known/openid-configuration`. This will show a JSON document listing a number
of endpoints for {project_name}. Make sure the endpoints starts with the address (scheme, domain and port) of your
of endpoints for {project_name}.
. Make sure the endpoints start with the address (scheme, domain, and port) of your
reverse proxy or load balancer. By doing this, you make sure that {project_name} is using the correct endpoint.
You should also verify that {project_name} sees the correct source IP address for requests. To check this, you can
try to login to the admin console with an invalid username and/or password. This should show a warning in the server log
. Verify that {project_name} sees the correct source IP address for requests.
+
To check this, you can
try to log in to the Admin Console with an invalid username and/or password. This should show a warning in the server log
similar to this:
+
[source]
----
08:14:21,287 WARN XNIO-1 task-45 [org.keycloak.events] type=LOGIN_ERROR, realmId=master, clientId=security-admin-console, userId=8f20d7ba-4974-4811-a695-242c8fbd1bf8, ipAddress=X.X.X.X, error=invalid_user_credentials, auth_method=openid-connect, auth_type=code, redirect_uri=http://localhost:8080/auth/admin/master/console/?redirect_fragment=%2Frealms%2Fmaster%2Fevents-settings, code_id=a3d48b67-a439-4546-b992-e93311d6493e, username=admin
----
Check that the value of `ipAddress` is the IP address of the machine you tried to login with and not the IP address
of the reverse proxy or load balancer.
. Check that the value of `ipAddress` is the IP address of the machine from which you tried to log in and not the IP address of the reverse proxy or load balancer.
==== Using the Built-In Load Balancer
==== Using the built-in load balancer
This section covers configuring the built-in load balancer that is discussed in the
<<_clustered-domain-example, Clustered Domain Example>>.
@ -149,14 +159,12 @@ on one machine. To bring up a slave on another host, you'll need to
the _standalone/_ directory.
. Edit the _host-slave.xml_ file to change the bind addresses used or override them on the command line
.Procedure
. Open _domain.xml_ so you can register the new host slave with the load balancer configuration.
===== Register a New Host With Load Balancer
Let's look first at registering the new host slave with the load balancer configuration in _domain.xml_. Open this
file and go to the undertow configuration in the `load-balancer` profile. Add a new `host` definition called
`remote-host3` within the `reverse-proxy` XML block.
. Go to the undertow configuration in the `load-balancer` profile. Add a new `host` definition called `remote-host3` within the `reverse-proxy` XML block.
+
.domain.xml reverse-proxy config
[source,xml,subs="attributes+"]
----
@ -172,14 +180,15 @@ file and go to the undertow configuration in the `load-balancer` profile. Add a
...
</subsystem>
----
+
The `output-socket-binding` is a logical name pointing to a `socket-binding` configured later in the _domain.xml_ file.
The `instance-id` attribute must also be unique to the new host as this value is used by a cookie to enable sticky
sessions when load balancing.
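+
For illustration, the added `host` entry might look like the following sketch; `myroute3` is a hypothetical route name used for sticky sessions:
+
[source,xml]
----
<!-- "myroute3" is an example instance-id; choose a unique value per host -->
<host name="remote-host3" outbound-socket-binding="remote-host3" scheme="ajp" path="/" instance-id="myroute3"/>
----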
Next go down to the `load-balancer-sockets` `socket-binding-group` and add the `outbound-socket-binding` for `remote-host3`. This new
binding needs to point to the host and port of the new host.
. Go down to the `load-balancer-sockets` `socket-binding-group` and add the `outbound-socket-binding` for `remote-host3`.
+
This new binding needs to point to the host and port of the new host.
+
.domain.xml outbound-socket-binding
[source,xml]
----
@ -197,7 +206,7 @@ binding needs to point to the host and port of the new host.
</socket-binding-group>
----
===== Master Bind Addresses
===== Master bind addresses
The next thing you'll have to do is change the `public` and `management` bind addresses for the master host. Either
edit the _domain.xml_ file as discussed in the <<_bind-address, Bind Addresses>> chapter
@ -208,7 +217,7 @@ or specify these bind addresses on the command line as follows:
$ domain.sh --host-config=host-master.xml -Djboss.bind.address=192.168.0.2 -Djboss.bind.address.management=192.168.0.2
----
===== Host Slave Bind Addresses
===== Host slave bind addresses
Next you'll have to change the `public`, `management`, and domain controller bind addresses (`jboss.domain.master-address`). Either edit the
_host-slave.xml_ file or specify them on the command line as follows:
@ -224,6 +233,8 @@ $ domain.sh --host-config=host-slave.xml
The values of `jboss.bind.address` and `jboss.bind.address.management` pertain to the host slave's IP address.
The value of `jboss.domain.master.address` needs to be the IP address of the domain controller, which is the management address of the master host.
==== Configuring Other Load Balancers
[role="_additional-resources"]
.Additional resources
* See link:{appserver_loadbalancer_link}[the load balancing] section in the _{appserver_loadbalancer_name}_ for information on how to use other software-based load balancers.
See link:{appserver_loadbalancer_link}[the load balancing] section in the _{appserver_loadbalancer_name}_ for information how to use other software-based load balancers.

View file

@ -1,11 +1,14 @@
=== Multicast Network Setup
=== Setting up multicast networking
Out of the box clustering support needs IP Multicast. Multicast is a network broadcast protocol. This protocol is used at boot time to discover and join the cluster. It is also used to broadcast messages for the replication and invalidation of distributed caches used by {project_name}.
The default clustering support needs IP Multicast. Multicast is a network broadcast protocol. This protocol is used at boot time to discover and join the cluster. It is also used to broadcast messages for the replication and invalidation of distributed caches used by {project_name}.
The clustering subsystem for {project_name} runs on the JGroups stack. Out of the box, the bind addresses for clustering are bound to a private network interface with 127.0.0.1 as default IP address.
You have to edit your the _standalone-ha.xml_ or _domain.xml_ sections discussed in the <<_bind-address,Bind Address>> chapter.
.Procedure
. Edit the _standalone-ha.xml_ or _domain.xml_ sections discussed in the <<_bind-address,Bind Address>> chapter.
+
.private network config
[source,xml]
----
@ -27,7 +30,7 @@ You have to edit your the _standalone-ha.xml_ or _domain.xml_ sections discussed
</socket-binding-group>
----
Things you'll want to configure are the `jboss.bind.address.private` and `jboss.default.multicast.address` as well as the ports of the services on the clustering stack.
. Configure the `jboss.bind.address.private` and `jboss.default.multicast.address` as well as the ports of the services on the clustering stack.
+
NOTE: It is possible to cluster {project_name} without IP Multicast, but this topic is beyond the scope of this guide. For more information, see link:{appserver_jgroups_link}[JGroups] in the _{appserver_jgroups_name}_.
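For reference, a minimal sketch of the relevant pieces of _standalone-ha.xml_; the private address `192.168.1.2` and the multicast address `230.0.0.4` are example values:

[source,xml]
----
<!-- example values; substitute addresses that match your private network -->
<interface name="private">
    <inet-address value="${jboss.bind.address.private:192.168.1.2}"/>
</interface>

<socket-binding name="jgroups-mping" interface="private" port="0"
    multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45700"/>
----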

View file

@ -1,5 +1,5 @@
=== Recommended Network Architecture
=== Recommended network architecture
The recommended network architecture for deploying {project_name} is to set up an HTTP/HTTPS load balancer on
a public IP address that routes requests to {project_name} servers sitting on a private network. This
@ -7,4 +7,3 @@ isolates all clustering connections and provides a nice means of protecting the
NOTE: By default, there is nothing to prevent unauthorized nodes from joining the cluster and broadcasting multicast messages.
This is why cluster nodes should be in a private network, with a firewall protecting them from outside attacks.

View file

@ -1,5 +1,5 @@
=== Securing Cluster Communication
=== Secure cluster communication
When cluster nodes are isolated on a private network, access to that private network is required to join the cluster or to view communication in the cluster. In addition, you can also enable authentication and encryption for cluster communication. As long as your private network is secure, it is not necessary to enable authentication and encryption. {project_name} does not send very sensitive information on the cluster in either case.

View file

@ -1,6 +1,6 @@
[[_clustering_db_lock]]
=== Serialized Cluster Startup
=== Serialized cluster startup
{project_name} cluster nodes are allowed to boot concurrently.
When a {project_name} server instance boots up, it may perform some database migration, importing, or first-time initializations.

View file

@ -1,6 +1,6 @@
[[_manage_config]]
== Manage Subsystem Configuration
== Managing the subsystem configuration
Low-level configuration of {project_name} is done by editing the
`standalone.xml`, `standalone-ha.xml`, or `domain.xml` file

View file

@ -1,6 +1,6 @@
[[_cli_recipes]]
=== CLI Recipes
=== CLI recipes
Here are some configuration tasks and how to perform them with CLI commands.
Note that in all but the first example, we use the wildcard path `**` to mean
you should substitute the path to the keycloak-server subsystem.
@ -13,39 +13,39 @@ For domain mode, this would mean something like:
`**` = `/profile=auth-server-clustered/subsystem=keycloak-server`
==== Change the web context of the server
==== Changing the web context of the server
[source]
----
/subsystem=keycloak-server/:write-attribute(name=web-context,value=myContext)
----
==== Set the global default theme
==== Setting the global default theme
[source]
----
**/theme=defaults/:write-attribute(name=default,value=myTheme)
----
==== Add a new SPI and a provider
==== Adding a new SPI and a provider
[source]
----
**/spi=mySPI/:add
**/spi=mySPI/provider=myProvider/:add(enabled=true)
----
==== Disable a provider
==== Disabling a provider
[source]
----
**/spi=mySPI/provider=myProvider/:write-attribute(name=enabled,value=false)
----
==== Change the default provider for an SPI
==== Changing the default provider for an SPI
[source]
----
**/spi=mySPI/:write-attribute(name=default-provider,value=myProvider)
----
==== Configure the dblock SPI
==== Configuring the dblock SPI
[source]
----
**/spi=dblock/:add(default-provider=jpa)
**/spi=dblock/provider=jpa/:add(properties={lockWaitTimeout => "900"},enabled=true)
----
==== Add or change a single property value for a provider
==== Adding or changing a single property value for a provider
[source]
----
**/spi=dblock/provider=jpa/:map-put(name=properties,key=lockWaitTimeout,value=3)
@ -55,7 +55,7 @@ For domain mode, this would mean something like:
----
**/spi=dblock/provider=jpa/:map-remove(name=properties,key=lockRecheckTime)
----
==== Set values on a provider property of type `List`
==== Setting values on a provider property of type `List`
[source]
----
**/spi=eventsStore/provider=jpa/:map-put(name=properties,key=exclude-events,value=[EVENT1,EVENT2])

View file

@ -1,6 +1,6 @@
[[_config_spi_providers]]
=== Configure SPI Providers
=== Configure SPI providers
The specifics of each configuration setting are discussed elsewhere in
context with that setting.

View file

@ -1,6 +1,6 @@
[[_start_cli]]
=== Start the {appserver_name} CLI
=== Starting the {appserver_name} CLI
Besides editing the configuration by hand, you also have the option of changing
the configuration by issuing commands via the _jboss-cli_ tool. CLI allows
you to configure servers locally or remotely. And it is especially useful when
@ -44,7 +44,7 @@ as your running standalone server or domain controller and your account has appr
to setup or enter in an admin username and password. See the link:{appserver_admindoc_link}[_{appserver_admindoc_name}_]
for more details on how to make things more secure if you are uncomfortable with that setup.
=== CLI Embedded Mode
=== CLI embedded mode
If you do happen to be on the same machine as your standalone server and you want to
issue commands while the server is not active, you can embed the server into CLI and make
@ -58,7 +58,7 @@ execute the `embed-server` command with the config file you wish to change.
[standalone@embedded /]
----
=== CLI GUI Mode
=== Using CLI GUI mode
The CLI can also run in GUI mode. GUI mode launches a Swing application that
allows you to graphically view and edit the entire management model of a _running_ server.
@ -66,22 +66,28 @@ GUI mode is especially useful when you need help formatting your CLI commands an
about the options available. The GUI can also retrieve server logs from a local or
remote server.
.Start in GUI mode
.Procedure
. Start the CLI in GUI mode
+
[source]
----
$ .../bin/jboss-cli.sh --gui
----
_Note: to connect to a remote server, you pass the `--connect` option as well.
Use the --help option for more details._
+
Note: To connect to a remote server, pass the `--connect` option as well.
Use the `--help` option for more details.
After launching GUI mode, you will probably want to scroll down to find the node,
`subsystem=keycloak-server`. If you right-click on the node and click
`Explore subsystem=keycloak-server`, you will get a new tab that shows only
the keycloak-server subsystem.
. Scroll down to find the node `subsystem=keycloak-server`.
image:images/cli-gui.png[]
. Right-click the node and select `Explore subsystem=keycloak-server`.
+
A new tab displays only the keycloak-server subsystem.
+
.keycloak-server subsystem
image:images/cli-gui.png[keycloak-server subsystem]
=== CLI Scripting
=== CLI scripting
The CLI has extensive scripting capabilities. A script is just a text
file with CLI commands in it. Consider a simple script that turns off theme
@ -93,7 +99,7 @@ and template caching.
/subsystem=keycloak-server/theme=defaults/:write-attribute(name=cacheThemes,value=false)
/subsystem=keycloak-server/theme=defaults/:write-attribute(name=cacheTemplates,value=false)
----
To execute the script, I can follow the `Scripts` menu in CLI GUI, or execute the
To execute the script, you can use the `Scripts` menu in CLI GUI, or execute the
script from the command line as follows:
[source]
----

View file

@ -1,6 +1,6 @@
[[_database]]
== Relational Database Setup
== Setting up the relational database
{project_name} comes with its own embedded Java-based relational database called H2.
This is the default database that {project_name} will use to persist data and really only exists so that you can run the authentication
server out of the box. We highly recommend that you replace it with a more production-ready external database. The H2 database

View file

@ -1,7 +1,7 @@
[[_rdbms-setup-checklist]]
=== RDBMS Setup Checklist
=== Database setup checklist
These are the steps you will need to perform to get an RDBMS configured for {project_name}.
Following are the steps you perform to get an RDBMS configured for {project_name}.
. Locate and download a JDBC driver for your database
. Package the driver JAR into a module and install this module into the server

View file

@ -1,7 +1,7 @@
=== Modify the {project_name} Datasource
=== Modifying the {project_name} datasource
After declaring your JDBC driver, you have to modify the existing datasource configuration that {project_name} uses
You modify the existing datasource configuration that {project_name} uses
to connect it to your new external database. You'll do
this within the same configuration file and XML block that you registered your JDBC driver in. Here's an example
that sets up the connection to your new database:
@ -28,18 +28,26 @@ that sets up the connection to your new database:
</subsystem>
----
Search for the `datasource` definition for `KeycloakDS`. You'll first need to modify the `connection-url`. The
.Prerequisites
* You have already declared your JDBC driver.
.Procedure
. Search for the `datasource` definition for `KeycloakDS`.
+
You'll first need to modify the `connection-url`. The
documentation for your vendor's JDBC implementation should specify the format for this connection URL value.
Next define the `driver` you will use. This is the logical name of the JDBC driver you declared in the previous section of this
. Define the `driver` you will use.
+
This is the logical name of the JDBC driver you declared in the previous section of this
chapter.
+
It is expensive to open a new connection to a database every time you want to perform a transaction. To compensate, the datasource
implementation maintains a pool of open connections. The `max-pool-size` specifies the maximum number of connections it will pool.
You may want to change the value of this depending on the load of your system.
Finally, with PostgreSQL at least, you need to define the database username and password that is needed to connect to the database. You
may be worried that this is in clear text in the example. There are methods to obfuscate this, but this is beyond the
scope of this guide.
. Define the database username and password that is needed to connect to the database. This step is necessary for at least PostgreSQL. You may be concerned that these credentials are in clear text in the example. Methods exist to obfuscate these credentials, but these methods are beyond the scope of this guide.
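Putting these steps together, a sketch of a modified `KeycloakDS` datasource for PostgreSQL could look like the following; the connection URL, credentials, and pool size are example values only:

[source,xml]
----
<datasource jndi-name="java:jboss/datasources/KeycloakDS" pool-name="KeycloakDS" enabled="true" use-java-context="true">
    <!-- example values; adjust for your database host, credentials, and load -->
    <connection-url>jdbc:postgresql://localhost/keycloak</connection-url>
    <driver>postgresql</driver>
    <pool>
        <max-pool-size>20</max-pool-size>
    </pool>
    <security>
        <user-name>keycloak</user-name>
        <password>change_me</password>
    </security>
</datasource>
----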
NOTE: For more information about datasource features, see link:{appserver_datasource_link}[the datasource configuration chapter] in the _{appserver_admindoc_name}_.

View file

@ -1,15 +1,21 @@
=== Package the JDBC Driver
=== Packaging the JDBC driver
Find and download the JDBC driver JAR for your RDBMS. Before you can use this driver, you must package it up into a module and install it into the server. Modules define JARs that are loaded into the {project_name} classpath and the dependencies those JARs have on other modules. They are pretty simple to set up.
Find and download the JDBC driver JAR for your RDBMS. Before you can use this driver, you must package it up into a module and install it into the server. Modules define JARs that are loaded into the {project_name} classpath and the dependencies those JARs have on other modules.
Within the _.../modules/_ directory of your {project_name} distribution, you need to create a directory structure to hold your module definition. The convention is use the Java package name of the JDBC driver for the name of the directory structure. For PostgreSQL, create the directory _org/postgresql/main_. Copy your database driver JAR into this directory and create an empty _module.xml_ file within it too.
.Procedure
. Create a directory structure to hold your module definition within the _.../modules/_ directory of your {project_name} distribution.
+
The convention is to use the Java package name of the JDBC driver for the name of the directory structure. For PostgreSQL, create the directory _org/postgresql/main_.
. Copy your database driver JAR into this directory and create an empty _module.xml_ file within it too.
+
.Module Directory
image:{project_images}/db-module.png[]
After you have done this, open up the _module.xml_ file and create the following XML:
image:{project_images}/db-module.png[Module Directory]
. Open up the _module.xml_ file and create the following XML:
+
.Module XML
[source,xml]
----
@ -26,15 +32,33 @@ After you have done this, open up the _module.xml_ file and create the following
</dependencies>
</module>
----
+
* The module name should match the directory structure of your module. So, _org/postgresql_ maps to `org.postgresql`.
* The `resource-root path` attribute should specify the JAR filename of the driver.
* The rest are just the normal dependencies that any JDBC driver JAR would have.
The module name should match the directory structure of your module. So, _org/postgresql_ maps to `org.postgresql`. The `resource-root path` attribute should specify the JAR filename of the driver. The rest are just the normal dependencies that any JDBC driver JAR would have.
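As a concrete sketch for PostgreSQL, assuming a driver JAR named `postgresql-42.2.5.jar` (use your actual JAR filename), the complete _module.xml_ could look like this:

[source,xml]
----
<?xml version="1.0" ?>
<module xmlns="urn:jboss:module:1.3" name="org.postgresql">
    <resources>
        <!-- the JAR filename is an example; it must match the file you copied -->
        <resource-root path="postgresql-42.2.5.jar"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
        <module name="javax.transaction.api"/>
    </dependencies>
</module>
----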
=== Declaring and loading the JDBC driver
=== Declare and Load JDBC Driver
You declare your JDBC driver in your deployment profile so that it loads and becomes available when the server boots up.
The next thing you have to do is declare your newly packaged JDBC driver into your deployment profile so that it loads and becomes available when the server boots up. Where you perform this action depends on your <<_operating-mode, operating mode>>. If you're deploying in standard mode, edit _.../standalone/configuration/standalone.xml_. If you're deploying in standard clustering mode, edit _.../standalone/configuration/standalone-ha.xml_. If you're deploying in domain mode, edit _.../domain/configuration/domain.xml_. In domain mode, you'll need to make sure you edit the profile you are using: either `auth-server-standalone` or `auth-server-clustered`
.Prerequisites
Within the profile, search for the `drivers` XML block within the `datasources` subsystem. You should see a pre-defined driver declared for the H2 JDBC driver. This is where you'll declare the JDBC driver for your external database.
* You have packaged the JDBC driver.
.Procedure
. Declare your JDBC driver by editing one of these files based on your deployment mode:
* For standard mode, edit _.../standalone/configuration/standalone.xml_.
* For standard clustering mode, edit _.../standalone/configuration/standalone-ha.xml_.
* For domain mode, edit _.../domain/configuration/domain.xml_.
+
In domain mode, make sure you edit the profile you are using: either `auth-server-standalone` or `auth-server-clustered`.
. Within the profile, search for the `drivers` XML block within the `datasources` subsystem.
+
You should see a pre-defined driver declared for the H2 JDBC driver. This is where you'll declare the JDBC driver for your external database.
+
.JDBC Drivers
[source,xml,subs="attributes+"]
----
@ -50,9 +74,14 @@ Within the profile, search for the `drivers` XML block within the `datasources`
</subsystem>
----
Within the `drivers` XML block you'll need to declare an additional JDBC driver. It needs to have a `name` which you can choose to be anything you want. You specify the `module` attribute which points to the `module` package you created earlier for the driver JAR. Finally you have to specify the driver's Java class. Here's an example of installing PostgreSQL driver that lives in the module example defined earlier in this chapter.
. Within the `drivers` XML block, declare an additional JDBC driver.
* Assign any `name` to this driver.
* Specify the `module` attribute which points to the `module` package that you created earlier for the driver JAR.
* Specify the driver's Java class.
+
Here's an example of installing a PostgreSQL driver that lives in the module example defined earlier in this chapter.
+
.Declare Your JDBC Drivers
[source,xml,subs="attributes+"]
----

View file

@ -1,5 +1,5 @@
=== Unicode Considerations for Databases
=== Unicode considerations for databases
Database schema in {project_name} only accounts for Unicode strings in the following special fields:
@ -23,7 +23,7 @@ character set for `VARCHAR` and `CHAR` fields. If yes, there is a high chance th
the expense of field length. If it only supports Unicode in `NVARCHAR` and `NCHAR` fields, Unicode support for all text
fields is unlikely as Keycloak schema uses `VARCHAR` and `CHAR` fields extensively.
==== Oracle Database
==== Oracle database
Unicode characters are properly handled provided the database was created with Unicode support in `VARCHAR` and `CHAR`
fields (e.g. by using the `AL32UTF8` character set as the database character set). No special settings are needed for JDBC
@ -35,12 +35,12 @@ strictly necessary, to also set the `oracle.jdbc.convertNcharLiterals` connectio
can be set either as system properties or as connection properties. Please note that setting `oracle.jdbc.defaultNChar`
may have a negative impact on performance. For details, please refer to the Oracle JDBC driver configuration documentation.
==== Microsoft SQL Server Database
==== Microsoft SQL Server database
Unicode characters are properly handled only for the special fields. No special settings of the JDBC driver or database are
necessary.
==== MySQL Database
==== MySQL database
Unicode characters are properly handled provided the database was created with Unicode support in `VARCHAR` and `CHAR`
fields in the `CREATE DATABASE` command (e.g. by using `utf8` character set as the default database character set in
@ -53,7 +53,7 @@ Unicode values.
On the JDBC driver side, it is necessary to add the connection property `characterEncoding=UTF-8` to the JDBC
connection settings.
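For example, a sketch of the `characterEncoding` property added to the `KeycloakDS` datasource definition; the connection URL is illustrative:

[source,xml]
----
<datasource jndi-name="java:jboss/datasources/KeycloakDS" pool-name="KeycloakDS" enabled="true">
    <connection-url>jdbc:mysql://localhost/keycloak</connection-url>
    <connection-property name="characterEncoding">UTF-8</connection-property>
    ...
</datasource>
----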
==== PostgreSQL Database
==== PostgreSQL database
Unicode is supported when the database character set is `UTF8`. In that case, Unicode characters can be used in any
field, and there is no reduction of field length for non-special fields. No special settings of the JDBC driver are necessary.

View file

@ -1,6 +1,6 @@
[[_hostname]]
== Hostname
== Use of the public hostname
{project_name} uses the public hostname for a number of things. For example, in the token issuer fields and URLs sent in
password reset emails.

View file

@ -1,5 +1,5 @@
== Installation
== Installing the software
ifeval::[{project_community}==true]
Installing {project_name} is as simple as downloading it and unzipping it. This chapter reviews system requirements
as well as the directory structure of the distribution.

View file

@ -1,9 +1,7 @@
=== Distribution Directory Structure
=== Important directories
This chapter walks you through the directory structure of the server distribution.
Let's examine the purpose of some of the directories:
The following are some important directories in the server distribution.
_bin/_::
This contains various scripts to either boot the server or perform some other management action on the server.

View file

@ -1,20 +1,28 @@
=== Installing Distribution Files
=== Installing the Keycloak server
The Keycloak Server has two downloadable distributions:
* 'keycloak-{project_version}.[zip|tar.gz]'
* 'keycloak-overlay-{project_version}.[zip|tar.gz]'
* `keycloak-{project_version}.[zip|tar.gz]`
+
This file is the server-only distribution. It contains nothing other than the scripts and binaries
to run the Keycloak Server.
The 'keycloak-{project_version}.[zip|tar.gz]' file is the server only distribution. It contains nothing other than the scripts and binaries
to run the Keycloak Server. To unpack this file just run your operating system's `unzip` or `gunzip` and `tar` utilities.
* `keycloak-overlay-{project_version}.[zip|tar.gz]`
+
This file is a WildFly add-on that you can use to install the Keycloak Server on top of an existing
WildFly distribution. We do not support users who want to run their applications and Keycloak on the same server instance.
The 'keycloak-overlay-{project_version}.[zip|tar.gz]' file is a WildFly add-on that allows you to install Keycloak Server on top of an existing
WildFly distribution. We do not support users that want to run their applications and Keycloak on the same server instance. To install the Keycloak Service Pack, just unzip it in the root directory
of your WildFly distribution, open the bin directory in a shell and run `./jboss-cli.[sh|bat] --file=keycloak-install.cli`.
To unpack of these files run the `unzip` or `gunzip` and `tar` utilities.
.Procedure
. To install the Keycloak server, run your operating system's `unzip` or `gunzip` and `tar` utilities on the `keycloak-{project_version}.[zip|tar.gz]` file.
. To install the Keycloak Service Pack, which must be installed on a different server instance, perform these steps:
.. Change to the root directory of your WildFly distribution.
.. Unzip the `keycloak-overlay-{project_version}.[zip|tar.gz]` file.
.. Open the bin directory in a shell.
.. Run `./jboss-cli.[sh|bat] --file=keycloak-install.cli`.

View file

@ -1,5 +1,5 @@
=== Installing RH-SSO from a ZIP File
=== Installing RH-SSO from a ZIP file
The {project_name} server download ZIP file contains the scripts and binaries to run the {project_name} server. You install the {project_version_base} server first, then the {project_version} server patch.

View file

@ -9,7 +9,7 @@ You must subscribe to both the {appserver_name} {appserver_version} and RH-SSO {
NOTE: You cannot continue to receive upgrades to EAP RPMs but stop receiving updates for RH-SSO.
[[subscribing_EAP_repo]]
==== Subscribing to the {appserver_name} {appserver_version} Repository
==== Subscribing to the {appserver_name} {appserver_version} repository
.Prerequisites
@ -31,14 +31,14 @@ For Red Hat Enterprise Linux 8: Using Red Hat Subscription Manager, subscribe to
subscription-manager repos --enable=jb-eap-{appserver_version}-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms
----
==== Subscribing to the RH-SSO {project_versionDoc} Repository and Installing RH-SSO {project_versionDoc}
==== Subscribing to the RH-SSO {project_versionDoc} repository and installing RH-SSO {project_versionDoc}
.Prerequisites
. Ensure that your Red Hat Enterprise Linux system is registered to your account using Red Hat Subscription Manager. For more information see the link:https://access.redhat.com/documentation/en-us/red_hat_subscription_management/1/html-single/quick_registration_for_rhel/index[Red Hat Subscription Management documentation].
. Ensure that you have already subscribed to the {appserver_name} {appserver_version} repository. For more information see xref:subscribing_EAP_repo[Subscribing to the {appserver_name} {appserver_version} repository].
To subscribe to the RH-SSO {project_versionDoc} repository and install RH-SSO {project_versionDoc}, complete the following steps:
.Procedure
. For Red Hat Enterprise Linux 6, 7: Using Red Hat Subscription Manager, subscribe to the RH-SSO {project_versionDoc} repository using the following command. Replace <RHEL_VERSION> with either 6 or 7 depending on your Red Hat Enterprise Linux version.
+

View file

@ -1,7 +1,7 @@
=== System Requirements
=== Installation prerequisites
These are the requirements to run the {project_name} authentication server:
These prerequisites exist for installing the {project_name} server:
* Can run on any operating system that runs Java
* Java 8 JRE or Java 11 JRE

View file

@ -1,9 +1,9 @@
[[_network]]
== Network Setup
== Setting up the network
{project_name} can run out of the box with some networking limitations. For one, all network endpoints bind to `localhost`
The default installation of {project_name} can run with some networking limitations. For one, all network endpoints bind to `localhost`
so the auth server is really only usable on one local machine. For HTTP based connections, it does not use default ports
like 80 and 443. HTTPS/SSL is not configured out of the box and without it, {project_name} has many security
vulnerabilities.

View file

@ -1,7 +1,7 @@
[[_bind-address]]
=== Bind Addresses
=== Bind addresses
By default {project_name} binds to the localhost loopback address `127.0.0.1`. That's not a very useful default if
you want the authentication server available on your network. Generally, what we recommend is that you deploy a reverse proxy

View file

@ -19,7 +19,7 @@ all requests::
The SSL mode for each realm can be configured in the {project_name} admin console.
==== Enabling SSL/HTTPS for the {project_name} Server
==== Enabling SSL/HTTPS for the {project_name} server
If you are not using a reverse proxy or load balancer to handle HTTPS traffic for you, you'll need to enable HTTPS
for the {project_name} server. This involves
@ -29,7 +29,7 @@ for the {project_name} server. This involves
===== Creating the Certificate and Java Keystore
In order to allow HTTPS connections, you need to obtain a self signed or third-party signed certificate and import it into a Java keystore before you can enable HTTPS in the web container you are deploying the {project_name} Server to.
In order to allow HTTPS connections, you need to obtain a self-signed or third-party signed certificate and import it into a Java keystore before you can enable HTTPS in the web container where you are deploying the {project_name} Server.
====== Self Signed Certificate
@ -59,27 +59,25 @@ $ keytool -genkey -alias localhost -keyalg RSA -keystore keycloak.jks -validity
[no]: yes
----
You should answer `What is your first and last name ?` question with the DNS name of the machine you're installing the server on.
For testing purposes, `localhost` should be used.
When you see the question `What is your first and last name ?`, supply the DNS name of the machine where you are installing the server. For testing purposes, `localhost` should be used.
After executing this command, the `keycloak.jks` file is generated in the same directory where you executed the `keytool` command.
If you want a third-party signed certificate, but don't have one, you can obtain one for free at http://www.cacert.org[cacert.org].
You'll have to do a little set up first before doing this though.
If you want a third-party signed certificate, but don't have one, you can obtain one for free at http://www.cacert.org[cacert.org]. However, you first need to use the following procedure.
The first thing to do is generate a Certificate Request:
.Procedure
. Generate a Certificate Request:
+
[source]
----
$ keytool -certreq -alias yourdomain -keystore keycloak.jks > keycloak.careq
----
Where `yourdomain` is a DNS name for which this certificate is generated for.
+
Where `yourdomain` is a DNS name for which this certificate is generated.
Keytool generates the request:
+
[source]
----
-----BEGIN NEW CERTIFICATE REQUEST-----
MIIC2jCCAcICAQAwZTELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAk1BMREwDwYDVQQHEwhXZXN0Zm9y
ZDEQMA4GA1UEChMHUmVkIEhhdDEQMA4GA1UECxMHUmVkIEhhdDESMBAGA1UEAxMJbG9jYWxob3N0
@ -97,52 +95,58 @@ vqIFQeuLL3BaHwpl3t7j2lMWcK1p80laAxEASib/fAwrRHpLHBXRcq6uALUOZl4Alt8=
-----END NEW CERTIFICATE REQUEST-----
----
Send this ca request to your CA.
. Send this CA request to your Certificate Authority (CA).
+
The CA will issue you a signed certificate and send it to you.
Before you import your new cert, you must obtain and import the root certificate of the CA.
You can download the cert from CA (ie.: root.crt) and import as follows:
. Obtain and import the root certificate of the CA.
+
You can download the cert from the CA (for example, root.crt) and import it as follows:
+
[source]
----
$ keytool -import -keystore keycloak.jks -file root.crt -alias root
----
Last step is to import your new CA generated certificate to your keystore:
. Import your new CA-generated certificate into your keystore:
+
[source]
----
$ keytool -import -alias yourdomain -keystore keycloak.jks -file your-certificate.cer
----
===== Configure {project_name} to Use the Keystore
Now that you have a Java keystore with the appropriate certificates, you need to configure your {project_name} installation to use it.
First, you must edit the _standalone.xml_, _standalone-ha.xml_, or _host.xml_ file to use the keystore and enable HTTPS. You may then either move the keystore file to the _configuration/_ directory of your deployment or the file in a location you choose and provide an absolute path to it. If you are using absolute paths, remove the optional `relative-to` parameter from your configuration (See <<_operating-mode, operating mode>>).
Add the new `security-realm` element using the CLI:
.Procedure
. Edit the _standalone.xml_, _standalone-ha.xml_, or _host.xml_ file to use the keystore and enable HTTPS.
. Either move the keystore file to the _configuration/_ directory of your deployment, or put the file in a location you choose and provide an absolute path to it.
+
If you are using absolute paths, remove the optional `relative-to` parameter from your configuration (See <<_operating-mode, operating mode>>).
. Add the new `security-realm` element using the CLI:
+
[source]
----
$ /core-service=management/security-realm=UndertowRealm:add()
$ /core-service=management/security-realm=UndertowRealm/server-identity=ssl:add(keystore-path=keycloak.jks, keystore-relative-to=jboss.server.config.dir, keystore-password=secret)
----
If using domain mode, the commands should be executed in every host using the `/host=<host_name>/` prefix (in order to create the `security-realm` in all of them), like this, which you would repeat for each host:
+
If using domain mode, the commands should be executed on every host using the `/host=<host_name>/` prefix (in order to create the `security-realm` on all of them). Here is an example, which you would repeat for each host:
+
[source]
----
$ /host=<host_name>/core-service=management/security-realm=UndertowRealm/server-identity=ssl:add(keystore-path=keycloak.jks, keystore-relative-to=jboss.domain.config.dir, keystore-password=secret)
----
+
In the standalone or host configuration file, the `security-realms` element should look like this:
+
[source,xml]
----
<security-realm name="UndertowRealm">
<server-identities>
<ssl>
@ -152,17 +156,19 @@ In the standalone or host configuration file, the `security-realms` element shou
</security-realm>
----
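+
A sketch of the complete element, assuming the keystore values from the CLI command above (`keycloak.jks`, relative to `jboss.server.config.dir`, password `secret`):
+
[source,xml]
----
<security-realm name="UndertowRealm">
    <server-identities>
        <ssl>
            <keystore path="keycloak.jks" relative-to="jboss.server.config.dir" keystore-password="secret"/>
        </ssl>
    </server-identities>
</security-realm>
----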
Next, in the standalone or each domain configuration file, search for any instances of `security-realm`. Modify the `https-listener` to use the created realm:
. In the standalone or each domain configuration file, search for any instances of `security-realm`.
. Modify the `https-listener` to use the created realm:
+
[source]
----
$ /subsystem=undertow/server=default-server/https-listener=https:write-attribute(name=security-realm, value=UndertowRealm)
----
+
If using domain mode, prefix the command with the profile that is being used: `/profile=<profile_name>/`.
+
The resulting element, `server name="default-server"`, which is a child element of `subsystem xmlns="{subsystem_undertow_xml_urn}"`, should contain the following stanza:
+
[source,xml,subs="attributes+"]
----
<subsystem xmlns="{subsystem_undertow_xml_urn}">

View file

@ -1,5 +1,5 @@
=== Outgoing HTTP Requests
=== Outgoing HTTP requests
The {project_name} server often needs to make non-browser HTTP requests to the applications and services it secures.
The auth server manages these outgoing connections by maintaining an HTTP client connection pool. There are some things
@ -66,7 +66,7 @@ disable-trust-manager::
The default value is `false`.
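These client settings are typically configured as properties of the default provider of the `connectionsHttpClient` SPI in the `keycloak-server` subsystem; a minimal sketch with example values:

[source,xml]
----
<spi name="connectionsHttpClient">
    <provider name="default" enabled="true">
        <properties>
            <!-- example values only; tune for your deployment -->
            <property name="connection-pool-size" value="128"/>
            <property name="disable-trust-manager" value="false"/>
        </properties>
    </provider>
</spi>
----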
[[_proxymappings]]
==== Proxy Mappings for Outgoing HTTP Requests
==== Proxy mappings for outgoing HTTP requests
Outgoing HTTP requests sent by {project_name} can optionally use a proxy server based on a comma-delimited list of proxy-mappings.
A proxy-mapping denotes the combination of a regex-based hostname pattern and a proxy-uri in the form of `hostnamePattern;proxyUri`,
@ -176,7 +176,7 @@ environment variable defined. To do so, you can specify a generic no proxy route
----
[[_truststore]]
==== Outgoing HTTPS Request Truststore
==== Outgoing HTTPS request truststore
When {project_name} invokes remote HTTPS endpoints, it has to validate the remote server's certificate in order to ensure it is connecting to a trusted server.
This is necessary in order to prevent man-in-the-middle attacks. The certificates of these remote servers or the CA that signed these
View file

@ -1,7 +1,7 @@
[[_ports]]
=== Socket Port Bindings
=== Socket port bindings
The ports opened for each socket have a pre-defined default that can be overridden at the command line or within configuration.
To illustrate this configuration, let's pretend you are running in <<_standalone-mode,standalone mode>> and

View file

@ -1,11 +1,14 @@
[[_operating-mode]]
== Choosing an Operating Mode
== Using operating modes
Before deploying {project_name} in a production environment you need to decide which type of operating mode
you are going to use. Will you run {project_name} within a cluster? Do you want a centralized way to manage
your server configurations? Your choice of operating mode affects how you configure databases, configure caching and even how you boot the server.
Before deploying {project_name} in a production environment you need to decide which type of operating mode you are going to use.
* Will you run {project_name} within a cluster?
* Do you want a centralized way to manage your server configurations?
Your choice of operating mode affects how you configure databases, configure caching and even how you boot the server.
TIP: {project_name} is built on top of the {appserver_name} Application Server. This guide will only
go over the basics for deployment within a specific mode. If you want specific information on this, a better place

View file

@ -1,14 +1,14 @@
[[crossdc-mode]]
=== Cross-Datacenter Replication Mode
=== Using cross-site replication mode
:tech_feature_name: Cross-Datacenter Replication Mode
:tech_feature_name: cross-site replication mode
:tech_feature_disabled: false
include::../templates/techpreview.adoc[]
Cross-Datacenter Replication mode lets you run {project_name} in a cluster across multiple data centers, most typically using data center sites that are in different geographic regions. When using this mode, each data center will have its own cluster of {project_name} servers.
Use cross-site replication mode to run {project_name} in a cluster across multiple data centers. Typically you use data center sites that are in different geographic regions. When using this mode, each data center will have its own cluster of {project_name} servers.
This documentation will refer to the following example architecture diagram to illustrate and describe a simple Cross-Datacenter Replication use case.
This documentation will refer to the following example architecture diagram to illustrate and describe a simple cross-site replication use case.
[[archdiagram]]
.Example Architecture Diagram
@ -21,10 +21,10 @@ image:{project_images}/cross-dc-architecture.png[]
As this is an advanced topic, we recommend you first read the following, which provide valuable background knowledge:
* link:{installguide_clustering_link}[Clustering with {project_name}]
When setting up for Cross-Datacenter Replication, you will use more independent {project_name} clusters, so you must understand how a cluster works and the basic concepts and requirements such as load balancing, shared databases, and multicasting.
When setting up for cross-site replication, you will use more independent {project_name} clusters, so you must understand how a cluster works and the basic concepts and requirements such as load balancing, shared databases, and multicasting.
ifeval::[{project_product}==true]
* link:{jdgserver_crossdcdocs_link}[Red Hat Data Grid Cross-Datacenter Replication]
* link:{jdgserver_crossdcdocs_link}[Red Hat Data Grid Cross-Site Replication]
{project_name} uses Red Hat Data Grid (RHDG) for the replication of data between the data centers.
endif::[]
@ -36,7 +36,7 @@ endif::[]
[[technicaldetails]]
==== Technical details
This section provides an introduction to the concepts and details of how {project_name} Cross-Datacenter Replication is accomplished.
This section provides an introduction to the concepts and details of how {project_name} cross-site replication is accomplished.
.Data
@ -46,7 +46,7 @@ This section provides an introduction to the concepts and details of how {projec
* An Infinispan cache is used to cache persistent data from the database and also to save some short-lived and frequently-changing metadata, such as for user sessions.
Infinispan is usually much faster than a database, however the data saved using Infinispan are not permanent and is not expected to persist across cluster restarts.
In our example architecture, there are two data centers called `site1` and `site2`. For Cross-Datacenter Replication, we must make sure that both sources of data work reliably and that {project_name}
In our example architecture, there are two data centers called `site1` and `site2`. For cross-site replication, we must make sure that both sources of data work reliably and that {project_name}
servers from `site1` are eventually able to read the data saved by {project_name} servers on `site2` .
Based on the environment, you have the option to decide if you prefer:
@ -72,7 +72,7 @@ These are not seen by an end user's browser and therefore can not be part of a s
[[modes]]
==== Modes
According your requirements, there are two basic operating modes for Cross-Datacenter Replication:
According to your requirements, there are two basic operating modes for cross-site replication:
* Active/Passive - Here the users and client applications send the requests just to the {project_name} nodes in just a single data center.
The second data center is used just as a `backup` for saving the data. In case of the failure in the main data center, the data can be usually restored from the second data center.
@ -86,7 +86,7 @@ The active/passive mode is better for performance. For more information about ho
[[database]]
==== Database
{project_name} uses a relational database management system (RDBMS) to persist some metadata about realms, clients, users, and so on. See link:{installguide_database_link}[this chapter] of the server installation guide for more details. In a Cross-Datacenter Replication setup, we assume that either both data centers talk to the same database or that every data center has its own database node and both database nodes are synchronously replicated across the data centers. In both cases, it is required that when a {project_name} server on `site1` persists some data and commits the transaction, those data are immediately visible by subsequent DB transactions on `site2`.
{project_name} uses a relational database management system (RDBMS) to persist some metadata about realms, clients, users, and so on. See link:{installguide_database_link}[this chapter] of the server installation guide for more details. In a cross-site replication setup, we assume that either both data centers talk to the same database or that every data center has its own database node and both database nodes are synchronously replicated across the data centers. In both cases, it is required that when a {project_name} server on `site1` persists some data and commits the transaction, those data are immediately visible by subsequent DB transactions on `site2`.
Details of DB setup are out of scope for {project_name}; however, many RDBMS vendors like MariaDB and Oracle offer replicated databases and synchronous replication. We test {project_name} with these vendors:
@ -148,7 +148,7 @@ See the <<archdiagram>> for more details.
include::crossdc/assembly-setting-up-crossdc.adoc[leveloffset=3]
[[setup]]
==== Setting up Cross DC with {jdgserver_name} {jdgserver_version}
==== Setting up cross-site replication with {jdgserver_name} {jdgserver_version}
This example for {jdgserver_name} {jdgserver_version} involves two data centers, `site1` and `site2`. Each data center consists of 1 {jdgserver_name} server and 2 {project_name} servers. We will end up with 2 {jdgserver_name} servers and 4 {project_name} servers in total.
@ -634,10 +634,10 @@ Event 'CLIENT_CACHE_ENTRY_REMOVED', key '193489e7-e2bc-4069-afe8-f1dfa73084ea',
```
[[administration]]
==== Administration of Cross DC deployment
==== Administration of cross-site deployment
This section contains some tips and options related to Cross-Datacenter Replication.
This section contains some tips and options related to cross-site replication.
* When you run the {project_name} server inside a data center, it is required that the database referenced in `KeycloakDS` datasource is already running and available in that data center. It is also necessary that the {jdgserver_name} server referenced by the `outbound-socket-binding`, which is referenced from the Infinispan cache `remote-store` element, is already running. Otherwise the {project_name} server will fail to start.
@ -771,7 +771,7 @@ The only (non-functional) problem is the exception in the {jdgserver_name} serve
[[backups]]
==== SYNC or ASYNC backups
An important part of the `backup` element is the `strategy` attribute. You must decide whether it needs to be `SYNC` or `ASYNC`. We have 7 caches which might be Cross-Datacenter Replication aware, and these can be configured in 3 different modes regarding cross-dc:
An important part of the `backup` element is the `strategy` attribute. You must decide whether it needs to be `SYNC` or `ASYNC`. We have 7 caches which might be cross-site replication aware, and these can be configured in 3 different modes regarding cross-site:
. SYNC backup
. ASYNC backup

View file

@ -1,9 +1,9 @@
[id="assembly-setting-up-crossdc"]
:context: xsite
= Setting Up Cross DC with {jdgserver_name} {jdgserver_version_latest}
= Setting up cross-site replication with {jdgserver_name} {jdgserver_version_latest}
[role="_abstract"]
Use the following procedures for {jdgserver_name} {jdgserver_version_latest} to perform a basic setup of Cross-Datacenter replication.
Use the following procedures for {jdgserver_name} {jdgserver_version_latest} to perform a basic setup of cross-site replication.
This example for {jdgserver_name} {jdgserver_version_latest} involves two data centers, `site1` and `site2`. Each data center consists of one {jdgserver_name} server and two {project_name} servers. We will end up with two {jdgserver_name} servers and four {project_name} servers in total.

View file

@ -24,7 +24,7 @@ By default, {jdgserver_name} Server uses `server/conf/infinispan.xml` for static
----
<1> Lists the host names for `server1` and `server2`.
+
. Configure the {jdgserver_name} cluster transport to perform Cross-Datacenter replication.
. Configure the {jdgserver_name} cluster transport to perform cross-site replication.
.. Add the RELAY2 protocol to a JGroups stack.
+
[source,xml,options="nowrap",subs=attributes+]

View file

@ -1,6 +1,6 @@
[id='setting-up-infinispan-{context}']
= Setting Up {jdgserver_name} Servers
For Cross-Datacenter replication, you start by creating remote {jdgserver_name} clusters that can back up {project_name} data.
For cross-site replication, you start by creating remote {jdgserver_name} clusters that can back up {project_name} data.
.Prerequisites

View file

@ -1,11 +1,11 @@
[[_domain-mode]]
=== Domain Clustered Mode
=== Using domain clustered mode
Domain mode is a way to centrally manage and publish the configuration for your servers.
Running a cluster in standard mode can quickly become aggravating as the cluster grows in size. Every time you need
to make a configuration change, you have to perform it on each node in the cluster. Domain mode solves this problem by providing
to make a configuration change, you perform it on each node in the cluster. Domain mode solves this problem by providing
a central place to store and publish configurations. It can be quite complex to set up, but it is worth it in the end.
This capability is built into the {appserver_name} Application Server, from which {project_name} is derived.
@ -40,7 +40,7 @@ from the domain controller.
NOTE: In some environments, such as Microsoft Azure, the domain mode is not applicable. Please consult the {appserver_name} documentation.
==== Domain Configuration
==== Domain configuration
Various other chapters in this guide walk you through configuring various aspects like databases,
HTTP network connections, caches, and other infrastructure related things. While standalone mode uses the _standalone.xml_ file to configure these things,
@ -125,9 +125,7 @@ binds a `socket-binding-group` to the server group.
</server-groups>
----
==== Host Controller Configuration
==== Host controller configuration
{project_name} comes with two host controller configuration files that reside in the _.../domain/configuration/_ directory:
_host-master.xml_ and _host-slave.xml_. _host-master.xml_ is configured to boot up a domain controller, a load balancer, and
@ -154,7 +152,7 @@ To disable the load balancer server instance, edit _host-master.xml_ and comment
Another interesting thing to note about this file is the declaration of the authentication server instance. It has
a `port-offset` setting. Any network port defined in the _domain.xml_ `socket-binding-group` or the server group
will have the value of `port-offset` added to it. For this example domain setup we do this so that ports opened by
will have the value of `port-offset` added to it. For this sample domain setup, we do this so that ports opened by
the load balancer server don't conflict with the authentication server instance that is started.
[source,xml]
@ -167,7 +165,7 @@ the load balancer server don't conflict with the authentication server instance
</servers>
----
==== Server Instance Working Directories
==== Server instance working directories
Each {project_name} server instance defined in your host files creates a working directory under _.../domain/servers/{SERVER NAME}_.
Additional configuration can be put there, and any temporary, log, or data files the server instance needs or creates go there too.
@ -176,7 +174,7 @@ The structure of these per server directories ends up looking like any other {ap
.Working Directories
image:{project_images}/domain-server-dir.png[]
==== Domain Boot Script
==== Booting in domain clustered mode
When running the server in domain mode, there is a specific script you need to run to boot the server depending on your
operating system. These scripts live in the _bin/_ directory of the server distribution.
@ -202,31 +200,35 @@ When running the boot script you will need to pass in the host controlling confi
`--host-config` switch.
[[_clustered-domain-example]]
==== Clustered Domain Example
==== Testing with a sample clustered domain
You can test drive clustering using the out-of-the-box _domain.xml_ configuration. This example
domain is meant to run on one machine and boots up:
You can test drive clustering using the sample _domain.xml_ configuration. This sample domain is meant to run on one machine and boots up:
* a domain controller
* an HTTP load balancer
* 2 {project_name} server instances
* two {project_name} server instances
To simulate running a cluster on two machines, you'll need to run the `domain.sh` script twice to start two separate
host controllers. The first will be the master host controller which will start a domain controller, an HTTP load balancer, and one
{project_name} authentication server instance. The second will be a slave host controller that only starts
up an authentication server instance.
.Procedure
===== Setup Slave Connection to Domain Controller
. Run the `domain.sh` script twice to start two separate host controllers.
+
The first one is the master host controller that starts a domain controller, an HTTP load balancer, and one
{project_name} authentication server instance. The second one is a slave host controller that starts
up only an authentication server instance.
Before you can boot things up though, you have to configure the slave host controller so that it can talk securely to the domain
controller. If you do not do this, then the slave host will not be able to obtain the centralized configuration from the domain controller.
To set up a secure connection, you have to create a server admin user and a secret that
will be shared between the master and the slave. You do this by running the `.../bin/add-user.sh` script.
. Configure the slave host controller so that it can talk securely to the domain controller. Perform these steps:
+
If you omit these steps, the slave host cannot obtain the centralized configuration from the domain controller.
When you run the script select `Management User` and answer `yes` when it asks you if the new user is going to be used
for one AS process to connect to another. This will generate a secret that you'll need to cut and paste into the
.. Set up a secure connection by creating a server admin user and a secret that are shared between the master and the slave.
+
Run the `.../bin/add-user.sh` script.
.. Select `Management User` when the script asks about the type of user to add.
+
This choice generates a secret that you cut and paste into the
_.../domain/configuration/host-slave.xml_ file.
+
.Add App Server Admin
[source]
----
@ -256,11 +258,11 @@ $ add-user.sh
yes/no? yes
To represent the user add the following to the server-identities definition <secret value="bWdtdDEyMyE=" />
----
+
NOTE: The add-user.sh script does not add the user to the {project_name} server but to the underlying JBoss Enterprise Application Platform. The credentials used and generated in this script are only for demonstration purposes. Please use the ones generated on your system.
NOTE: The add-user.sh does not add user to {project_name} server but to the underlying JBoss Enterprise Application Platform. The credentials used and generated in the above script are only for example purpose. Please use the ones generated on your system.
Next, cut and paste the secret value into the _.../domain/configuration/host-slave.xml_ file as follows:
. Cut and paste the secret value into the _.../domain/configuration/host-slave.xml_ file as follows:
+
[source,xml]
----
<management>
@ -271,27 +273,25 @@ Next, cut and paste the secret value into the _.../domain/configuration/host-sla
</server-identities>
----
You will also need to add the _username_ of the created user in the _.../domain/configuration/host-slave.xml_ file:
. Add the _username_ of the created user in the _.../domain/configuration/host-slave.xml_ file:
+
[source,xml]
----
<remote security-realm="ManagementRealm" username="admin">
----
===== Run the Boot Scripts
Since we're simulating a two node cluster on one development machine, you'll run the boot script twice:
+
. Run the boot script twice to simulate a two-node cluster on one development machine.
+
.Boot up master
[source,shell]
----
$ domain.sh --host-config=host-master.xml
----
+
.Boot up slave
[source,shell]
----
$ domain.sh --host-config=host-slave.xml
----
To try it out, open your browser and go to http://localhost:8080/auth.
. Open your browser and go to http://localhost:8080/auth to try it out.

View file

@ -1,14 +1,14 @@
[[_standalone-ha-mode]]
=== Standalone Clustered Mode
=== Using standalone clustered mode
Standalone clustered operation mode is for when you want to run {project_name} within a cluster. This mode
requires that you have a copy of the {project_name} distribution on each machine you want to run a server instance.
This mode can be very easy to deploy initially, but can become quite cumbersome. To make a configuration change
you'll have to modify each distribution on each machine. For a large cluster this can become time consuming and error prone.
Standalone clustered operation mode applies when you want to run {project_name} within a cluster. This mode
requires that you have a copy of the {project_name} distribution on each machine where you want to run a server instance.
This mode can be very easy to deploy initially, but can become quite cumbersome. To make a configuration change,
you modify each distribution on each machine. For a large cluster, this mode can become time-consuming and error-prone.
==== Standalone Clustered Configuration
==== Standalone clustered configuration
The distribution has a mostly pre-configured app server configuration file for running within a cluster. It has all the specific
infrastructure settings for networking, databases, caches, and discovery. This file resides
@ -24,7 +24,7 @@ WARNING: Any changes you make to this file while the server is running will not
by the server. Instead use the command line scripting or the web console of {appserver_name}. See
the link:{appserver_admindoc_link}[_{appserver_admindoc_name}_] for more information.
==== Standalone Clustered Boot Script
==== Booting in standalone clustered mode
You use the same boot scripts to start {project_name} as you do in standalone mode. The difference is that
you pass in an additional flag to point to the HA config file.

View file

@ -1,15 +1,15 @@
[[_standalone-mode]]
=== Standalone Mode
=== Using standalone mode
Standalone operating mode is only useful when you want to run one, and only one {project_name} server instance.
It is not usable for clustered deployments and all caches are non-distributed and local-only. It is not recommended that
you use standalone mode in production as you will have a single point of failure. If your standalone mode server goes down,
users will not be able to log in. This mode is really only useful to test drive and play with the features of {project_name}.
==== Standalone Boot Script
==== Booting in standalone mode
When running the server in standalone mode, there is a specific script you need to run to boot the server depending on your
When running the server in standalone mode, the script you use to boot the server depends on your
operating system. These scripts live in the _bin/_ directory of the server distribution.
.Standalone Boot Scripts
@ -29,7 +29,7 @@ $ .../bin/standalone.sh
> ...\bin\standalone.bat
----
==== Standalone Configuration
==== Standalone configuration
The bulk of this guide walks you through how to configure infrastructure level aspects of {project_name}. These
aspects are configured in a configuration file that is specific to the application server that {project_name} is a

View file

@ -1,6 +1,6 @@
[[_keycloak_cr]]
=== {project_name} installation using a custom resource
=== Installing {project_name} using a custom resource
You can use the Operator to automate the installation of {project_name} by creating a Keycloak custom resource. When you use a custom resource to install {project_name}, you create the components and services that are described here and illustrated in the graphic that follows.

View file

@ -1,5 +1,5 @@
=== Recommended Additional External Documentation
=== Recommended additional external documentation
{project_name} is built on top of the {appserver_name} application server and its sub-projects like Infinispan (for caching) and Hibernate (for persistence).
This guide only covers basics for infrastructure-level configuration. It is highly recommended that you peruse the documentation