KEYCLOAK-16628 addressed additional review comments

This commit is contained in:
Andy Munro 2021-01-12 17:41:37 -05:00 committed by Marek Posolda
parent 1c083346cd
commit 9c961d5e1e
8 changed files with 397 additions and 64 deletions

View file

@ -6,6 +6,7 @@
include::topics/templates/document-attributes-community.adoc[]
:server_installation_and_configuration_guide:
:context: server_installation_and_configuration_guide
= {installguide_name}
@ -13,4 +14,6 @@ include::topics/templates/document-attributes-community.adoc[]
:release_header_latest_link: {installguide_link_latest}
include::topics/templates/release-header.adoc[]
include::topics.adoc[]
:context:

View file

@ -6,7 +6,11 @@
include::topics/templates/document-attributes-product.adoc[]
:server_installation_and_configuration_guide:
:context: server_installation_and_configuration_guide
= {installguide_name}
include::topics.adoc[]
:context:

View file

@ -23,9 +23,14 @@ As this is an advanced topic, we recommend you first read the following, which p
* link:{installguide_clustering_link}[Clustering with {project_name}]
When setting up for Cross-Datacenter Replication, you will use multiple independent {project_name} clusters, so you must understand how a cluster works and the basic concepts and requirements such as load balancing, shared databases, and multicasting.
ifeval::[{project_product}==true]
* link:{jdgserver_crossdcdocs_link}[Red Hat Data Grid Cross-Datacenter Replication]
{project_name} uses Red Hat Data Grid (RHDG) for the replication of data between the data centers.
endif::[]
ifeval::[{project_community}==true]
* link:https://infinispan.org/docs/11.0.x/titles/xsite/xsite.html#xsite_replication[Infinispan Cross-Site Replication] replicates data across clusters in separate geographic locations.
endif::[]
[[technicaldetails]]
@ -132,42 +137,41 @@ For more detail about how caches can be configured see <<tuningcache>>.
[[communication]]
==== Communication details
{project_name} uses multiple, separate clusters of Infinispan caches. Every {project_name} node is in the cluster with the other {project_name} nodes in the same data center, but not with the {project_name} nodes in different data centers. A {project_name} node does not communicate directly with the {project_name} nodes from different data centers. {project_name} nodes use external {jdgserver_name} servers for communication across data centers. This is done using the link:https://infinispan.org/docs/stable/titles/hotrod_java/hotrod_java.html[Infinispan Hot Rod protocol].
The Infinispan caches on the {project_name} side must be configured with the link:https://infinispan.org/docs/stable/titles/configuring/configuring.html#remote_cache_store[remoteStore] to ensure that data is saved to the remote cache. There is a separate Infinispan cluster between the {jdgserver_name} servers, so the data saved on SERVER1 on `site1` is replicated to SERVER2 on `site2`.
Finally, the receiving {jdgserver_name} server notifies the {project_name} servers in its cluster through the Client Listeners, which are a feature of the Hot Rod protocol. {project_name} nodes on `site2` then update their Infinispan caches and the particular user session is also visible on {project_name} nodes on `site2`.
See the <<archdiagram>> for more details.
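To make the pattern concrete, here is a condensed sketch of a {project_name}-side cache that uses a `remote-store`; it assumes a socket binding named `remote-cache` and mirrors the full configuration shown later in this chapter:
```xml
<distributed-cache name="sessions" owners="1">
    <!-- Every update is sent over Hot Rod to the external cluster through the remote-cache socket binding -->
    <remote-store cache="sessions" remote-servers="remote-cache" passivation="false"
                  fetch-state="false" purge="false" preload="false" shared="true">
        <property name="rawValues">true</property>
        <property name="marshaller">org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory</property>
    </remote-store>
</distributed-cache>
```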
[[setup]]
==== Setting up Cross DC with {jdgserver_name} {jdgserver_version}
This example for {jdgserver_name} {jdgserver_version} involves two data centers, `site1` and `site2`. Each data center consists of 1 {jdgserver_name} server and 2 {project_name} servers. We will end up with 2 {jdgserver_name} servers and 4 {project_name} servers in total.
* `Site1` consists of {jdgserver_name} server, `server1`, and 2 {project_name} servers, `node11` and `node12` .
* `Site2` consists of {jdgserver_name} server, `server2`, and 2 {project_name} servers, `node21` and `node22` .
* {jdgserver_name} servers `server1` and `server2` are connected to each other through the RELAY2 protocol and `backup` based {jdgserver_name}
caches in a similar way as described in the link:{jdgserver_crossdcdocs_link}[{jdgserver_name} documentation].
* {project_name} servers `node11` and `node12` form a cluster with each other, but they do not communicate directly with any server in `site2`.
They communicate with the Infinispan server `server1` using the Hot Rod protocol (Remote cache). See <<communication>> for the details.
* The same details apply for `node21` and `node22`. They cluster with each other and communicate only with `server2` server using the Hot Rod protocol.
Our example setup assumes that all 4 {project_name} servers talk to the same database. In production, it is recommended to use separate synchronously replicated databases across data centers as described in <<database>>.
[[jdgsetup]]
===== Setting up the {jdgserver_name} server
Follow these steps to set up the {jdgserver_name} server:
. Download the {jdgserver_name} {jdgserver_version} server and unzip it to a directory of your choice. This location will be referred to in later steps as `SERVER1_HOME`.
. Make the following changes to the JGroups subsystem configuration in `SERVER1_HOME/server/conf/infinispan-xsite.xml`:
.. Add the `xsite` channel, which will use `tcp` stack, under `channels` element:
+
```xml
@ -189,13 +193,13 @@ Follow these steps to set up the {jdgserver_name} server:
</stack>
```
+
.. Configure the `tcp` stack to use the `TCPPING` protocol instead of `MPING`. Remove the `MPING` element and replace it with `TCPPING`. The `initial_hosts` element points to the hosts `server1` and `server2`:
+
```xml
<stack name="tcp">
<transport type="TCP" socket-binding="jgroups-tcp"/>
<protocol type="TCPPING">
<property name="initial_hosts">jdg1[7600],jdg2[7600]</property>
<property name="initial_hosts">server1[7600],server2[7600]</property>
<property name="ergonomics">false</property>
</protocol>
<protocol type="MERGE3"/>
@ -207,7 +211,7 @@ Details of this more-detailed setup are out-of-scope of the {project_name} docum
+
. Add this into `SERVER1_HOME/standalone/configuration/clustered.xml` under cache-container named `clustered`:
+
```xml
<cache-container name="clustered" default-cache="default" statistics="true">
@ -300,9 +304,9 @@ Issues related to authorization may exist just for some other versions of {jdgse
```
+
. Copy the server to the second location, which will be referred to later as `SERVER2_HOME`.
. In `SERVER2_HOME/standalone/configuration/clustered.xml`, exchange `site1` with `site2` and vice versa, both in the configuration of `relay` in the JGroups subsystem and in the configuration of `backups` in the cache subsystem. For example:
.. The `relay` element should look like this:
+
```xml
@ -321,60 +325,58 @@ Issues related to authorization may exist just for some other versions of {jdgse
...
```
+
It is currently required to have different configuration files for the {jdgserver_name} servers on both sites, as the Infinispan subsystem does not support replacing site names with expressions. See link:https://issues.redhat.com/browse/WFLY-9458[this issue] for more details.
NOTE: The _PUBLIC_IP_ADDRESS_ below refers to the IP address or hostname that your server can bind to. Note that
every {jdgserver_name} server and {project_name} server needs to use a different address. During an example setup with all the servers running on the same host,
you may need to add the option `-Djboss.bind.address.management=_PUBLIC_IP_ADDRESS_`, as every server also needs to use a different management interface.
This option should usually be omitted in production environments to avoid exposing remote access to your server. For more information,
see the link:{appserver_socket_link}[_{appserver_socket_name}_].
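For example, if you experiment with both {jdgserver_name} servers on a single host, you might give each one its own bind address and management address; the loopback aliases below are illustrative assumptions only, and the complete startup commands follow in the next steps:

[source,subs="+quotes"]
----
# First server bound to one address (illustrative)
./standalone.sh -c clustered.xml -b 127.0.0.1 \
   -Djboss.bind.address.management=127.0.0.1

# Second server on the same host, bound to a different address (illustrative)
./standalone.sh -c clustered.xml -b 127.0.0.2 \
   -Djboss.bind.address.management=127.0.0.2
----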
. Start server `server1`:
+
[source,subs="+quotes"]
----
cd SERVER1_HOME/bin
./standalone.sh -c clustered.xml -Djava.net.preferIPv4Stack=true \
-Djboss.default.multicast.address=234.56.78.99 \
-Djboss.node.name=server1 -b _PUBLIC_IP_ADDRESS_
----
+
. Start server `server2`. There is a different multicast address, so the `server1` and `server2` servers are not directly clustered with each other; rather, they are just connected through the RELAY2 protocol, and the TCP JGroups stack is used for communication between them. The startup command looks like this:
+
[source,subs="+quotes"]
----
cd SERVER2_HOME/bin
./standalone.sh -c clustered.xml -Djava.net.preferIPv4Stack=true \
-Djboss.default.multicast.address=234.56.78.100 \
-Djboss.node.name=server2 -b _PUBLIC_IP_ADDRESS_
----
+
. To verify that the channel works at this point, you can use JConsole and connect to either the running `SERVER1` or `SERVER2` server. When you use the MBean `jgroups:type=protocol,cluster="cluster",protocol=RELAY2` and the operation `printRoutes`, you should see output like this:
+
```
site1 --> _server1:site1
site2 --> _server2:site2
```
When you use the MBean `jgroups:type=protocol,cluster="cluster",protocol=GMS`, you should see that the `member` attribute contains just a single member:
.. On `SERVER1` it should be like this:
+
```
(1) server1
```
+
.. And on SERVER2 like this:
+
```
(1) server2
```
+
NOTE: In production, you can have more {jdgserver_name} servers in every data center. You just need to ensure that the {jdgserver_name} servers in the same data center use the same multicast address (in other words, the same `jboss.default.multicast.address` during startup). Then, in the jconsole `GMS` protocol view, you will see all the members of the current cluster.
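For example, an additional {jdgserver_name} server in `site1` (called `server1b` here purely for illustration) would be started with the same multicast address as `server1`, so that both servers form a single cluster in that data center:

[source,subs="+quotes"]
----
cd SERVER1B_HOME/bin
./standalone.sh -c clustered.xml -Djava.net.preferIPv4Stack=true \
   -Djboss.default.multicast.address=234.56.78.99 \
   -Djboss.node.name=server1b -b _PUBLIC_IP_ADDRESS_
----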
[[serversetup]]
===== Setting up {project_name} servers
. Unzip the {project_name} server distribution to a location of your choice. It will be referred to later as `NODE11`.
@ -551,7 +553,7 @@ for the {jdgserver_name} servers as described above) to the `connectionsInfinisp
----
cd NODE11/bin
./standalone.sh -c standalone-ha.xml -Djboss.node.name=node11 -Djboss.site.name=site1 \
-Djboss.default.multicast.address=234.56.78.1 -Dremote.cache.host=server1 \
-Djava.net.preferIPv4Stack=true -b _PUBLIC_IP_ADDRESS_
----
@ -563,7 +565,7 @@ cd NODE11/bin
----
cd NODE12/bin
./standalone.sh -c standalone-ha.xml -Djboss.node.name=node12 -Djboss.site.name=site1 \
-Djboss.default.multicast.address=234.56.78.1 -Dremote.cache.host=server1 \
-Djava.net.preferIPv4Stack=true -b _PUBLIC_IP_ADDRESS_
----
+
@ -580,7 +582,7 @@ NOTE: The channel name in the log might be different.
----
cd NODE21/bin
./standalone.sh -c standalone-ha.xml -Djboss.node.name=node21 -Djboss.site.name=site2 \
-Djboss.default.multicast.address=234.56.78.2 -Dremote.cache.host=server2 \
-Djava.net.preferIPv4Stack=true -b _PUBLIC_IP_ADDRESS_
----
+
@ -597,7 +599,7 @@ Received new cluster view for channel keycloak: [node21|0] (1) [node21]
----
cd NODE22/bin
./standalone.sh -c standalone-ha.xml -Djboss.node.name=node22 -Djboss.site.name=site2 \
-Djboss.default.multicast.address=234.56.78.2 -Dremote.cache.host=server2 \
-Djava.net.preferIPv4Stack=true -b _PUBLIC_IP_ADDRESS_
----
+
@ -628,21 +630,22 @@ should be immediately visible on any of 4 nodes as caches should be properly inv
2017-08-25 17:35:17,737 DEBUG [org.keycloak.models.sessions.infinispan.remotestore.RemoteCacheSessionListener] (Client-Listener-sessions-30012a77422542f5) Received event from remote store.
Event 'CLIENT_CACHE_ENTRY_REMOVED', key '193489e7-e2bc-4069-afe8-f1dfa73084ea', skip 'false'
```
+
include::crossdc/assembly-setting-up-crossdc.adoc[leveloffset=3]
[[administration]]
==== Administration of Cross DC deployment
This section contains some tips and options related to Cross-Datacenter Replication.
* When you run the {project_name} server inside a data center, it is required that the database referenced in `KeycloakDS` datasource is already running and available in that data center. It is also necessary that the {jdgserver_name} server referenced by the `outbound-socket-binding`, which is referenced from the Infinispan cache `remote-store` element, is already running. Otherwise the {project_name} server will fail to start.
* Every data center can have multiple database nodes if you want to support database failover and better reliability. Refer to the documentation of your database and JDBC driver for details on how to set this up on the database side and how the `KeycloakDS` datasource on the {project_name} side needs to be configured.
* Every data center can have multiple {jdgserver_name} servers running in the cluster. This is useful if you want some failover and better fault tolerance. The Hot Rod protocol used for communication between {jdgserver_name} servers and {project_name} servers has a feature whereby the {jdgserver_name} servers automatically send the new topology to the {project_name} servers when the {jdgserver_name} cluster changes, so the remote store on the {project_name} side knows which {jdgserver_name} servers it can connect to. Read the {jdgserver_name} and WildFly documentation for more details.
* It is highly recommended that a master {jdgserver_name} server is running in every site before the {project_name} servers in **any** site are started. As in our example, we started both `server1` and `server2` first, before all {project_name} servers. If you still need to run the {project_name} server and the backup site is offline, it is recommended to manually switch the backup site offline on the {jdgserver_name} servers on your site, as described in <<onoffline>>. If you do not manually switch the unavailable site offline, the first startup may fail or there may be some exceptions during startup until the backup site is taken offline automatically due to the configured count of failed operations.
[[onoffline]]
@ -651,22 +654,28 @@ This section contains some tips and options related to Cross-Datacenter Replicat
For example, assume this scenario:
. Site `site2` is entirely offline from the `site1` perspective. This means that all {jdgserver_name} servers on `site2` are off *or* the network between `site1` and `site2` is broken.
. You run {project_name} servers and {jdgserver_name} server `server1` in site `site1`.
. Someone logs in on a {project_name} server on `site1`.
. The {project_name} server from `site1` will try to write the session to the remote cache on the `server1` server, which is supposed to back up data to the `server2` server in `site2`. See <<communication>> for more information.
. Server `server2` is offline or unreachable from `server1`, so the backup from `server1` to `server2` will fail.
. An exception is thrown in the `server1` log, and the failure is propagated from the `server1` server to the {project_name} servers as well, because the default `FAIL` backup failure policy is configured. See <<backupfailure>> for details about the backup policies.
. The error will happen on the {project_name} side too, and the user may not be able to finish the login.
Depending on your environment, it may be more or less probable that the network between sites becomes unavailable or is temporarily broken (split-brain). If this happens, it is good that the {jdgserver_name} servers on `site1` are aware that the {jdgserver_name} servers on `site2` are unavailable, so they stop trying to reach the servers in the `server2` site and the backup failures do not happen. This is called `Take site offline`.
.Take site offline
There are 2 ways to take the site offline.
**Manually by admin** - An admin can use `jconsole` or another tool and run JMX operations to manually take a particular site offline.
This is especially useful if the outage is planned. With `jconsole` or the CLI, you can connect to the `server1` server and take `site2` offline.
More details about this are available in the
ifeval::[{project_product}==true]
link:{jdgserver_crossdcdocs_link}#taking_a_site_offline[{jdgserver_name} documentation].
endif::[]
ifeval::[{project_community}==true]
link:{jdgserver_crossdcdocs_link}[{jdgserver_name} documentation].
endif::[]
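As a sketch only, the following shows how this might look with the command-line interface shipped with {jdgserver_name} {jdgserver_version_latest}; the exact command names and options are an assumption here and differ for older servers, which expose equivalent JMX operations instead:

```
# Check the cross-site status of the sessions cache (run inside a CLI session)
site status --cache=sessions
# Take site2 offline for the sessions cache
site take-offline --cache=sessions --site=site2
# Bring site2 back online once the network is restored
site bring-online --cache=sessions --site=site2
```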
WARNING: These steps usually need to be done for all the {project_name} caches mentioned in <<backups>>.
@ -719,14 +728,14 @@ When the network is back, it is sufficient to clear the cache just on one {proje
[[tuningcache]]
==== Tuning the {jdgserver_name} cache configuration
This section contains tips and options for configuring your {jdgserver_name} cache.
[[backupfailure]]
.Backup failure policy
By default, the configuration of backup `failure-policy` in the Infinispan cache configuration in the {jdgserver_name} `clustered.xml` file is configured as `FAIL`. You may change it to `WARN` or `IGNORE`, as you prefer.
The difference between `FAIL` and `WARN` is that when `FAIL` is used and the {jdgserver_name} server tries to back up data to the other site and the backup fails, the failure is propagated back to the caller (the {project_name} server). The backup might fail because the second site is temporarily unreachable or because a concurrent transaction is trying to update the same entity. In this case, the {project_name} server will retry the operation a few times. However, if the retry fails, the user might see an error after a longer timeout.
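For example, switching to the `WARN` policy is a single-attribute change on the `backup` element of the cache configuration (a sketch based on the `backups` configuration used in this chapter):

```xml
<backups>
    <!-- Log a warning instead of propagating backup failures to the caller -->
    <backup site="site2" strategy="SYNC" failure-policy="WARN"/>
</backups>
```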
@ -743,7 +752,7 @@ You only need to manually restore the site when it is back online as mentioned i
In summary, if you expect frequent, longer outages between sites and it is acceptable for you to have some data inconsistencies and a single-use cache that is not 100% accurate, but you never want end users to see errors and long timeouts, then switch to `WARN`.
The difference between `WARN` and `IGNORE` is that with `IGNORE`, warnings are not written to the {jdgserver_name} log. See the Infinispan documentation for more details.
.Lock acquisition timeout
@ -856,7 +865,7 @@ then check the log of corresponding {jdgserver_name} server of your site and che
```
(HotRodServerHandler-6-35) ISPN000136: Error executing command ReplaceCommand,
writing keys [[B0x033E243034396234..[39]]: org.infinispan.util.concurrent.TimeoutException: ISPN000299: Unable to acquire lock after
0 milliseconds for key [B0x033E243034396234..[39] and requestor GlobalTx:server1:4353. Lock is held by GlobalTx:server1:4352
```
+
Those exceptions are not necessarily an issue. They may happen any time a concurrent edit of the same entity is triggered on both DCs, which is common in a deployment. Usually the {project_name} server is notified about the failed operation and will retry it, so from the user's point of view, there is usually no issue.
@ -929,8 +938,8 @@ other issues caused by the bug https://issues.redhat.com/browse/ISPN-9323. So fo
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248)
```
+
and you see some similar errors in the {project_name} log, it can indicate that incompatible versions of the Hot Rod protocol are being used.
This is likely to happen when you try to use {project_name} with an old version of the Infinispan server. It
will help if you add the `protocolVersion` property as an additional property to the `remote-store` element in the {project_name}
configuration file. For example:
+
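(The value must match the Hot Rod protocol version supported by your {jdgserver_name} server; the `2.9` shown below is only an illustrative assumption.)
+
```xml
<remote-store cache="sessions" remote-servers="remote-cache" ... >
    <!-- Pin the Hot Rod protocol version expected by the remote server (example value) -->
    <property name="protocolVersion">2.9</property>
    ...
</remote-store>
```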

View file

@ -0,0 +1,11 @@
[id="assembly-setting-up-crossdc_{context}"]
= Setting up Cross DC with {jdgserver_name} {jdgserver_version_latest}
[role="_abstract"]
Use the following procedures for {jdgserver_name} {jdgserver_version_latest} to perform a basic setup of Cross-Datacenter replication.
include::proc-setting-up-infinispan.adoc[leveloffset=4]
include::proc-configuring-remote-cache.adoc[leveloffset=4]

View file

@ -0,0 +1,130 @@
[id="proc-configuring-remote-cache-{context}"]
= Configuring Remote Cache Stores on {project_name}
After you set up remote {jdgserver_name} clusters to back up {project_name} data, you can configure the Infinispan subsystem to use those clusters as remote stores.
.Prerequisites
* Set up remote {jdgserver_name} clusters that can back up {project_name} data.
* Create a truststore that contains the SSL certificate with the {jdgserver_name} Server identity.
.Procedure
. Add the truststore to the {project_name} deployment.
. Create a socket binding that points to your {jdgserver_name} cluster.
+
[source,xml,options="nowrap",subs=attributes+]
----
<outbound-socket-binding name="remote-cache"> <1>
<remote-destination host="${remote.cache.host:<server_hostname>}"> <2>
<port="${remote.cache.port:11222}"/> <3>
</outbound-socket-binding>
----
<1> Names the socket binding as `remote-cache`.
<2> Specifies one or more hostnames for the {jdgserver_name} cluster.
<3> Defines the port of `11222` where the Hot Rod endpoint listens.
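Because the binding uses the `${remote.cache.host:...}` and `${remote.cache.port:...}` expressions, you can override the values per node at startup with system properties, as the startup commands elsewhere in this guide do. For example:

[source,bash,options="nowrap",subs=attributes+]
----
# Point this node at a specific {jdgserver_name} server and port at startup
./standalone.sh -c standalone-ha.xml ... -Dremote.cache.host=server1 -Dremote.cache.port=11222
----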
+
. Add the `org.keycloak.keycloak-model-infinispan` module to the `keycloak` cache container in the Infinispan subsystem.
+
[source,xml,options="nowrap",subs=attributes+]
----
<subsystem xmlns="urn:jboss:domain:infinispan:11.0">
<cache-container name="keycloak" module="org.keycloak.keycloak-model-infinispan">
----
+
. Create a `hotrod-client.properties` file with the following content:
+
[source,xml,options="nowrap",subs=attributes+]
----
infinispan.client.hotrod.server_list = server1:11222
infinispan.client.hotrod.auth_username = myuser
infinispan.client.hotrod.auth_password = qwer1234!
infinispan.client.hotrod.auth_realm = default
infinispan.client.hotrod.auth_server_name = infinispan
infinispan.client.hotrod.sasl_mechanism = SCRAM-SHA-512
infinispan.client.hotrod.trust_store_file_name = /path/to/truststore.jks
infinispan.client.hotrod.trust_store_type = JKS
infinispan.client.hotrod.trust_store_password = password
----
+
. Add `hotrod-client.properties` to the classpath of your {project_name} server. For example:
+
[source,bash,options="nowrap",subs=attributes+]
----
./standalone.sh ... -P <path_to_properties_file>
----
. Update a replicated cache named `work` that is in the Infinispan subsystem with the following configuration:
+
[source,xml,options="nowrap",subs=attributes+]
----
<replicated-cache name="work"> <1>
<remote-store cache="work" <2>
remote-servers="remote-cache" <3>
passivation="false"
fetch-state="false"
purge="false"
preload="false"
shared="true">
<property name="rawValues">true</property>
<property name="marshaller">org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory</property>
</remote-store>
</replicated-cache>
----
<1> Names the cache in the {project_name} configuration.
<2> Names the corresponding cache on the remote {jdgserver_name} cluster.
<3> Specifies the `remote-cache` socket binding.
+
The preceding cache configuration includes recommended settings for {jdgserver_name} caches.
Hot Rod client configuration properties specify the {jdgserver_name} user credentials and SSL keystore and truststore details.
+
Refer to the
ifeval::[{project_community}==true]
https://infinispan.org/docs/11.0.x/titles/xsite/xsite.html#configure_clients-xsite[{jdgserver_name} documentation]
endif::[]
ifeval::[{project_product}==true]
https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_guide_to_cross-site_replication/index#configure_clients-xsite[{jdgserver_name} documentation]
endif::[]
for descriptions of each property.
. Add distributed caches to the Infinispan subsystem for each of the following caches:
+
* sessions
* clientSessions
* offlineSessions
* offlineClientSessions
* actionTokens
* loginFailures
+
For example, add a cache named `sessions` with the following configuration:
+
[source,xml,options="nowrap",subs=attributes+]
----
<distributed-cache name="sessions" <1>
owners="1"> <2>
<remote-store cache="sessions" <3>
remote-servers="remote-cache" <4>
passivation="false"
fetch-state="false"
purge="false"
preload="false"
shared="true">
<property name="rawValues">true</property>
<property name="marshaller">org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory</property>
</remote-store>
</distributed-cache>
----
<1> Names the cache in the {project_name} configuration.
<2> Configures one replica of each cache entry across the {project_name} cluster.
<3> Names the corresponding cache on the remote {jdgserver_name} cluster.
<4> Specifies the `remote-cache` socket binding.
+
. Start {project_name}.
ifeval::[{project_product}==true]
[role="_additional-resources"]
.Additional resources
link:https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/configuring_data_grid/index[Data Grid Configuration Guide] +
link:https://access.redhat.com/webassets/avalon/d/red-hat-data-grid/8.1/api/org/infinispan/client/hotrod/configuration/package-summary.html[Hot Rod Client Configuration API] +
link:https://access.redhat.com/webassets/avalon/d/red-hat-data-grid/8.1/configdocs/[Data Grid Configuration Schema Reference]
endif::[]

View file

@ -0,0 +1,174 @@
[id='setting-up-infinispan-{context}']
= Setting Up {jdgserver_name} Clusters
For Cross-Datacenter replication, you start by creating remote {jdgserver_name} clusters that can back up {project_name} data.
.Prerequisites
* Download and install {jdgserver_name} Server {jdgserver_version_latest}.
[NOTE]
====
{jdgserver_name} Server {jdgserver_version_latest} requires Java 11.
====
* Create a user to authenticate client connections from {project_name}, for example:
+
[source,bash,options="nowrap",subs=attributes+]
----
$ bin/cli.sh user create myuser -p "qwer1234!"
----
+
You specify these credentials in the Hot Rod client configuration when you set up {project_name}.
* Create an SSL keystore and truststore to secure connections between {jdgserver_name} and {project_name}.
For example, you can use keytool as follows:
[source,bash,options="nowrap",subs=attributes+]
----
# Create a keystore to provide an SSL identity to your {jdgserver_name} cluster
keytool -genkey -alias server -keyalg RSA -keystore server.jks -keysize 2048
# Export an SSL certificate from the keystore
keytool -exportcert -keystore server.jks -alias server -file server.crt
# Import the SSL certificate into a truststore that {project_name} can use to verify the SSL identity of {jdgserver_name}
keytool -importcert -keystore truststore.jks -alias server -file server.crt
rm server.crt
----
.Procedure
. Open `infinispan.xml` for editing.
+
By default, {jdgserver_name} Server uses `server/conf/infinispan.xml` for static configuration such as cluster transport and security mechanisms.
. Create a stack that uses TCPPING. The default `tcp` stack finds nodes by using IP multicast, which is typically not available across sites.
+
[source,xml,options="nowrap",subs=attributes+]
----
<stack name="global-cluster" extends="tcp">
<!-- Remove MPING protocol from the stack and add TCPPING -->
<TCPPING initial_hosts="server1[7800],server2[7800]" stack.combine="REPLACE" stack.position="MPING"/>
</stack>
----
+
NOTE: The `initial_hosts` attribute contains the list of all {jdgserver_name} servers at both sites.
. Configure the {jdgserver_name} cluster transport to perform Cross-Datacenter replication.
.. Add the RELAY2 protocol to a JGroups stack.
+
[source,xml,options="nowrap",subs=attributes+]
----
<jgroups>
<stack name="xsite" extends="udp"> <1>
<relay.RELAY2 site="site1" <2>
max_site_masters="1000"/> <3>
<remote-sites default-stack="global-cluster"> <4>
<remote-site name="site1"/>
<remote-site name="site2"/>
</remote-sites>
</stack>
</jgroups>
----
<1> Creates a stack named `xsite` that extends the default UDP cluster transport.
<2> Adds the RELAY2 protocol and names the cluster you are configuring as `site1`. +
The site name must be unique to each {jdgserver_name} cluster.
<3> Sets 1000 as the number of relay nodes for the cluster. You should set a value that is equal to or greater than the maximum number of nodes in your {jdgserver_name} cluster.
<4> Names all {jdgserver_name} clusters that back up caches with {project_name} data and uses the TCP-based `global-cluster` stack for inter-cluster transport.
+
.. Configure the {jdgserver_name} cluster transport to use the stack.
+
[source,xml,options="nowrap",subs=attributes+]
----
<cache-container name="default" statistics="true">
<transport cluster="${infinispan.cluster.name:cluster}"
stack="xsite"/> <1>
</cache-container>
----
<1> Uses the `xsite` stack for the cluster.
+
. Configure the keystore as an SSL identity in the server security realm.
+
[source,xml,options="nowrap",subs=attributes+]
----
<server-identities>
<ssl>
<keystore path="server.jks" <1>
relative-to="infinispan.server.config.path"
keystore-password="password" <2>
alias="server" /> <3>
</ssl>
</server-identities>
----
<1> Specifies the path of the keystore that contains the SSL identity.
<2> Specifies the password to access the keystore.
<3> Names the alias of the certificate in the keystore.
+
. Configure the authentication mechanism for the Hot Rod endpoint.
+
[source,xml,options="nowrap",subs=attributes+]
----
<endpoints socket-binding="default">
<hotrod-connector name="hotrod">
<authentication>
<sasl mechanisms="SCRAM-SHA-512" <1>
server-name="infinispan" /> <2>
</authentication>
</hotrod-connector>
<rest-connector name="rest"/>
</endpoints>
----
<1> Configures the SASL authentication mechanism for the Hot Rod endpoint. SCRAM-SHA-512 is the default SASL mechanism for Hot Rod. However, you can use whatever mechanism is appropriate for your environment, such as GSSAPI.
<2> Defines the name that {jdgserver_name} servers present to clients. You specify this name in the Hot Rod client configuration when you set up {project_name}.
+
. Create a cache template.
+
[source,xml,options="nowrap",subs=attributes+]
----
<cache-container ... >
<replicated-cache-configuration name="sessions-cfg" <1>
mode="SYNC"> <2>
<locking acquire-timeout="0" /> <3>
<backups>
<backup site="site2" strategy="SYNC" /> <4>
</backups>
</replicated-cache-configuration>
</cache-container>
----
<1> Creates a cache template named `sessions-cfg`.
<2> Defines a cache that synchronously replicates data across the cluster.
<3> Disables timeout for lock acquisition.
<4> Names the backup site for the {jdgserver_name} cluster you are configuring.
+
. Repeat the preceding steps to modify `infinispan.xml` on each node in the {jdgserver_name} cluster.
. Start the cluster and open {jdgserver_name} Console in any browser.
. Authenticate with the {jdgserver_name} user you created.
. Add caches for {project_name} data using the `sessions-cfg` cache template.
.. Navigate to the *Data Container* tab and then select *Create Cache*.
.. Specify a name for the cache.
.. Select `sessions-cfg` from the *Template* drop-down menu.
.. Select *Create*.
+
Use the preceding steps to create each of the following caches:
+
* work
* sessions
* clientSessions
* offlineSessions
* offlineClientSessions
* actionTokens
* loginFailures
NOTE: We recommend that you create caches on {jdgserver_name} clusters at runtime through the CLI, Console, or Hot Rod and REST endpoints rather than adding caches to `infinispan.xml`. This strategy ensures that your caches are automatically synchronized across the cluster and permanently stored. Be sure to create the caches at each site. See the {jdgserver_name} documentation for more information.
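For example, assuming the CLI syntax of {jdgserver_name} Server {jdgserver_version_latest} (verify the exact commands and options for your version), the caches can be created from the template as follows:

[source,bash,options="nowrap",subs=attributes+]
----
# Connect to the cluster with the {jdgserver_name} CLI
# (the create commands below are entered at the CLI prompt)
bin/cli.sh -c https://server1:11222
create cache --template=sessions-cfg work
create cache --template=sessions-cfg sessions
create cache --template=sessions-cfg clientSessions
create cache --template=sessions-cfg offlineSessions
create cache --template=sessions-cfg offlineClientSessions
create cache --template=sessions-cfg actionTokens
create cache --template=sessions-cfg loginFailures
----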
ifeval::[{project_product}==true]
[role="_additional-resources"]
.Additional resources
link:https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_server_guide/index#start_server[Getting Started with Data Grid Server] +
link:https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_guide_to_cross-site_replication/index#configure_relay-xsite[Configuring Data Grid Clusters for Cross-Site Replication] +
link:https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_server_guide/index#ssl_identity-server[Setting Up SSL Identities for Data Grid Server] +
link:https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_server_guide/index#configuring_endpoints[Configuring Data Grid Endpoints] +
link:https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_server_guide/index#configure_hotrod_authentication-server[Configuring Hot Rod Authentication Mechanisms] +
link:https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_server_guide/index#create_remote_cache[Remotely Creating Caches on Data Grid Clusters]
endif::[]

View file

@ -124,8 +124,9 @@ endif::[]
:appserver_loadbalancer_name: {appserver_name} {appserver_version} Documentation
:jdgserver_name: Infinispan
:jdgserver_version: 9.4.19
:jdgserver_version_latest: 11.0.3
:jdgserver_crossdcdocs_link: https://infinispan.org/docs/11.0.x/titles/xsite/xsite.html
:fuseVersion: JBoss Fuse 6.3.0 Rollup 12
:fuseHawtioEAPVersion: JBoss EAP 6.4

View file

@ -139,8 +139,9 @@
:appserver_managementcli_link: https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.0/html-single/configuration_guide/#management_cli_overview
:appserver_managementconsole_link: https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.0/html-single/configuration_guide/#management_console_overview
:jdgserver_name: RHDG
:jdgserver_version: 7.3.8
:jdgserver_version_latest: 8.1.0
:jdgserver_crossdcdocs_link: https://access.redhat.com/documentation/en-us/red_hat_data_grid/7.3/html/red_hat_data_grid_user_guide/x_site_replication
:fuseVersion: JBoss Fuse 6.3.0 Rollup 12