Update cross-site instructions

Closes #1564
This commit is contained in:
AndyMunro 2022-06-13 16:54:41 -04:00 committed by Bruno Oliveira da Silva
parent f8403d68bb
commit 783fb9a357
4 changed files with 12 additions and 523 deletions


@ -10,7 +10,7 @@
<orgname>Red Hat Customer Content Services</orgname>
</authorgroup>
<legalnotice lang="en-US" version="5.0" xmlns="http://docbook.org/ns/docbook">
<para> Copyright <trademark class="copyright"></trademark> 2021 Red Hat, Inc. </para>
<para> Copyright <trademark class="copyright"></trademark> {cdate} Red Hat, Inc. </para>
<para>Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at</para>


@ -147,491 +147,6 @@ See the <<archdiagram>> for more details.
include::crossdc/assembly-setting-up-crossdc.adoc[leveloffset=3] include::crossdc/assembly-setting-up-crossdc.adoc[leveloffset=3]
[[setup]]
==== Setting up cross-site replication with {jdgserver_name} {jdgserver_version}
This example for {jdgserver_name} {jdgserver_version} involves two data centers, `site1` and `site2`. Each data center consists of 1 {jdgserver_name} server and 2 {project_name} servers. We will end up with 2 {jdgserver_name} servers and 4 {project_name} servers in total.
* `Site1` consists of a {jdgserver_name} server, `server1`, and 2 {project_name} servers, `node11` and `node12`.
* `Site2` consists of a {jdgserver_name} server, `server2`, and 2 {project_name} servers, `node21` and `node22`.
* {jdgserver_name} servers `server1` and `server2` are connected to each other through the RELAY2 protocol and `backup`-based {jdgserver_name}
caches in a similar way as described in the link:{jdgserver_crossdcdocs_link}[{jdgserver_name} documentation].
* {project_name} servers `node11` and `node12` form a cluster with each other, but they do not communicate directly with any server in `site2`.
They communicate with the Infinispan server `server1` using the Hot Rod protocol (Remote cache). See <<communication>> for the details.
* The same details apply for `node21` and `node22`. They cluster with each other and communicate only with `server2` server using the Hot Rod protocol.
Our example setup assumes that all 4 {project_name} servers talk to the same database. In production, it is recommended to use separate synchronously replicated databases across data centers as described in <<database>>.
[[jdgsetup]]
===== Setting up the {jdgserver_name} server
Follow these steps to set up the {jdgserver_name} server:
. Download the {jdgserver_name} {jdgserver_version} server and unzip it to a directory of your choice. This location will be referred to in later steps as `SERVER1_HOME`.
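+
For example, assuming a hypothetical archive name and install location (adjust both to match the distribution you actually downloaded):
+
[source]
----
# Unzip the distribution and note its location; it is used as SERVER1_HOME in the following steps
unzip jboss-datagrid-7.3.0-server.zip -d /opt
export SERVER1_HOME=/opt/jboss-datagrid-7.3.0-server
----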
. Make the following changes to the configuration of the JGroups subsystem in `SERVER1_HOME/server/conf/infinispan-xsite.xml`:
.. Add the `xsite` channel, which will use the `tcp` stack, under the `channels` element:
+
```xml
<channels default="cluster">
<channel name="cluster"/>
<channel name="xsite" stack="tcp"/>
</channels>
```
+
.. Add a `relay` element to the end of the `udp` stack. We will configure it so that our site is `site1` and the other site, to which we back up, is `site2`:
+
```xml
<stack name="udp">
...
<relay site="site1">
<remote-site name="site2" channel="xsite"/>
<property name="relay_multicasts">false</property>
</relay>
</stack>
```
+
.. Configure the `tcp` stack to use the `TCPPING` protocol instead of `MPING`. Remove the `MPING` element and replace it with `TCPPING`. The `initial_hosts` element points to the hosts `server1` and `server2`:
+
```xml
<stack name="tcp">
<transport type="TCP" socket-binding="jgroups-tcp"/>
<protocol type="TCPPING">
<property name="initial_hosts">server1[7600],server2[7600]</property>
<property name="ergonomics">false</property>
</protocol>
<protocol type="MERGE3"/>
...
</stack>
```
NOTE: This is just an example setup to get things running quickly. In production, you are not required to use the `tcp` stack for the JGroups `RELAY2`; you can configure any other stack. For example, you could use the default `udp` stack if the network between your data centers is able to support multicast. Just make sure that the {jdgserver_name} and {project_name} clusters are mutually indiscoverable. Similarly, you are not required to use `TCPPING` as the discovery protocol, and in production you probably won't use `TCPPING` due to its static nature. Finally, site names are also configurable.
Details of such a setup are out of scope of the {project_name} documentation. See the {jdgserver_name} documentation and JGroups documentation for more details.
+
. Add this into `SERVER1_HOME/standalone/configuration/clustered.xml` under cache-container named `clustered`:
+
```xml
<cache-container name="clustered" default-cache="default" statistics="true">
...
<replicated-cache-configuration name="sessions-cfg" mode="SYNC" start="EAGER" batching="false">
ifeval::[{project_product}==true]
<transaction mode="NON_DURABLE_XA" locking="PESSIMISTIC"/>
endif::[]
<locking acquire-timeout="0" />
<backups>
<backup site="site2" failure-policy="FAIL" strategy="SYNC" enabled="true">
<take-offline min-wait="60000" after-failures="3" />
</backup>
</backups>
</replicated-cache-configuration>
<replicated-cache name="work" configuration="sessions-cfg"/>
<replicated-cache name="sessions" configuration="sessions-cfg"/>
<replicated-cache name="clientSessions" configuration="sessions-cfg"/>
<replicated-cache name="offlineSessions" configuration="sessions-cfg"/>
<replicated-cache name="offlineClientSessions" configuration="sessions-cfg"/>
<replicated-cache name="actionTokens" configuration="sessions-cfg"/>
<replicated-cache name="loginFailures" configuration="sessions-cfg"/>
</cache-container>
```
+
NOTE: Details about the configuration options inside `replicated-cache-configuration` are explained in <<tuningcache>>, which includes information about tweaking some of those options.
+
ifeval::[{project_community}==true]
WARNING: Unlike in previous versions, the {jdgserver_name} server `replicated-cache-configuration` needs to be configured without the `transaction`
element. See <<troubleshooting>> for more details.
endif::[]
+
. Some {jdgserver_name} server releases require authorization before accessing protected caches over the network.
+
NOTE: You should not see any issue if you use the recommended {jdgserver_name} {jdgserver_version} server, and this step can (and should) be skipped.
Issues related to authorization may exist only for some other versions of the {jdgserver_name} server.
+
{project_name} requires updates to
the `___script_cache` cache, which contains scripts. If you get errors accessing this cache, you will need to set up authorization in
the `clustered.xml` configuration as described below:
+
.. In the `<management>` section, add a security realm:
+
```xml
<management>
<security-realms>
...
<security-realm name="AllowScriptManager">
<authentication>
<users>
<user username="___script_manager">
<password>not-so-secret-password</password>
</user>
</users>
</authentication>
</security-realm>
</security-realms>
```
.. In the server core subsystem, add `<security>` as below:
+
```xml
<subsystem xmlns="urn:infinispan:server:core:8.4">
<cache-container name="clustered" default-cache="default" statistics="true">
<security>
<authorization>
<identity-role-mapper/>
<role name="___script_manager" permissions="ALL"/>
</authorization>
</security>
...
```
.. In the endpoint subsystem, add authentication configuration to Hot Rod connector:
+
```xml
<subsystem xmlns="urn:infinispan:server:endpoint:8.1">
<hotrod-connector cache-container="clustered" socket-binding="hotrod">
...
<authentication security-realm="AllowScriptManager">
<sasl mechanisms="DIGEST-MD5" qop="auth" server-name="keycloak-jdg-server">
<policy>
<no-anonymous value="false" />
</policy>
</sasl>
</authentication>
```
+
. Copy the server to the second location, which will be referred to later as `SERVER2_HOME`.
. In `SERVER2_HOME/standalone/configuration/clustered.xml`, exchange `site1` with `site2` and vice versa, both in the configuration of the `relay` element in the JGroups subsystem and in the configuration of `backups` in the cache subsystem. For example:
.. The `relay` element should look like this:
+
```xml
<relay site="site2">
<remote-site name="site1" channel="xsite"/>
<property name="relay_multicasts">false</property>
</relay>
```
+
.. The `backups` element should look like this:
+
```xml
<backups>
<backup site="site1" ....
...
```
+
NOTE: The _PUBLIC_IP_ADDRESS_ below refers to the IP address or hostname that your server can bind to. Note that
every {jdgserver_name} server and every {project_name} server needs to use a different address. In the example setup, with all the servers running on the same host,
you may need to add the option `-Djboss.bind.address.management=_PUBLIC_IP_ADDRESS_`, as every server also needs to use a different management interface.
This option should usually be omitted in production environments to avoid allowing remote access to your server. For more information,
see the link:{appserver_socket_link}[_{appserver_socket_name}_].
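+
For example, when testing with every server on the same host, the startup command for `server2` might look like the following sketch, where `127.0.0.2` is a placeholder secondary loopback address used for both the public and the management interface:
+
[source,subs="+quotes"]
----
cd SERVER2_HOME/bin
./standalone.sh -c clustered.xml -Djava.net.preferIPv4Stack=true \
   -Djboss.default.multicast.address=234.56.78.100 \
   -Djboss.node.name=server2 -b 127.0.0.2 \
   -Djboss.bind.address.management=127.0.0.2
----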
. Start server `server1`:
+
[source,subs="+quotes"]
----
cd SERVER1_HOME/bin
./standalone.sh -c clustered.xml -Djava.net.preferIPv4Stack=true \
-Djboss.default.multicast.address=234.56.78.99 \
-Djboss.node.name=server1 -b _PUBLIC_IP_ADDRESS_
----
+
. Start server `server2`. There is a different multicast address, so the `server1` and `server2` servers are not directly clustered with each other; rather, they are connected only through the RELAY2 protocol, and the TCP JGroups stack is used for communication between them. The startup command looks like this:
+
[source,subs="+quotes"]
----
cd SERVER2_HOME/bin
./standalone.sh -c clustered.xml -Djava.net.preferIPv4Stack=true \
-Djboss.default.multicast.address=234.56.78.100 \
-Djboss.node.name=server2 -b _PUBLIC_IP_ADDRESS_
----
+
. To verify that the channel works at this point, you can use JConsole and connect to either the running `SERVER1` or `SERVER2` server. When you use the MBean `jgroups:type=protocol,cluster="cluster",protocol=RELAY2` and the operation `printRoutes`, you should see output like this:
+
```
site1 --> _server1:site1
site2 --> _server2:site2
```
When you use the MBean `jgroups:type=protocol,cluster="cluster",protocol=GMS`, you should see that the attribute `member` contains just a single member:
.. On `SERVER1` it should be like this:
+
```
(1) server1
```
+
.. And on `SERVER2` like this:
+
```
(1) server2
```
+
NOTE: In production, you can have more {jdgserver_name} servers in every data center. You just need to ensure that the {jdgserver_name} servers in the same data center are using the same multicast address (in other words, the same `jboss.default.multicast.address` during startup). Then, in the JConsole `GMS` protocol view, you will see all the members of the current cluster.
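+
For example, a second {jdgserver_name} server in `site1` might be started as in the following sketch. The directory `SERVER1B_HOME` and the node name `server1b` are illustrative only; if the server shares a host with `server1`, also add a port offset such as `-Djboss.socket.binding.port-offset=100`:
+
[source,subs="+quotes"]
----
cd SERVER1B_HOME/bin
./standalone.sh -c clustered.xml -Djava.net.preferIPv4Stack=true \
   -Djboss.default.multicast.address=234.56.78.99 \
   -Djboss.node.name=server1b -b _PUBLIC_IP_ADDRESS_
----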
[[serversetup]]
===== Setting up {project_name} servers
. Unzip the {project_name} server distribution to a location you choose. It will be referred to later as `NODE11`.
. Configure a shared database for the KeycloakDS datasource. It is recommended to use MySQL or MariaDB for testing purposes. See <<database>> for more details.
+
In production, you will likely need to have a separate database server in every data center, and the two database servers should be synchronously replicated to each other. In the example setup, we just use a single database and connect all 4 {project_name} servers to it.
+
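For example, here is a minimal sketch of a `KeycloakDS` datasource definition in `standalone-ha.xml` pointing at a shared MariaDB instance. The host name, credentials, and the presence of a registered `mariadb` JDBC driver are assumptions for this example:
+
```xml
<datasource jndi-name="java:jboss/datasources/KeycloakDS" pool-name="KeycloakDS" enabled="true">
    <!-- single database shared by all 4 Keycloak servers in this example setup -->
    <connection-url>jdbc:mariadb://db.example.com:3306/keycloak</connection-url>
    <driver>mariadb</driver>
    <security>
        <user-name>keycloak</user-name>
        <password>change-me</password>
    </security>
</datasource>
```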
. Edit `NODE11/standalone/configuration/standalone-ha.xml`:
.. Add the attribute `site` to the JGroups UDP protocol:
+
```xml
<stack name="udp">
<transport type="UDP" socket-binding="jgroups-udp" site="${jboss.site.name}"/>
```
+
.. Add the `remote-store` under `work` cache:
+
```xml
<replicated-cache name="work">
<remote-store cache="work" remote-servers="remote-cache" passivation="false" fetch-state="false" purge="false" preload="false" shared="true">
<property name="rawValues">true</property>
<property name="marshaller">org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory</property>
ifeval::[{project_product}==true]
<property name="protocolVersion">2.6</property>
endif::[]
ifeval::[{project_community}==true]
<property name="protocolVersion">2.9</property>
endif::[]
</remote-store>
</replicated-cache>
```
+
.. Add the `remote-store` like this under `sessions` cache:
+
```xml
<distributed-cache name="sessions" owners="1">
<remote-store cache="sessions" remote-servers="remote-cache" passivation="false" fetch-state="false" purge="false" preload="false" shared="true">
<property name="rawValues">true</property>
<property name="marshaller">org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory</property>
ifeval::[{project_product}==true]
<property name="protocolVersion">2.6</property>
endif::[]
ifeval::[{project_community}==true]
<property name="protocolVersion">2.9</property>
endif::[]
</remote-store>
</distributed-cache>
```
+
.. Do the same for the `offlineSessions`, `clientSessions`, `offlineClientSessions`, `loginFailures`, and `actionTokens` caches (the only difference from the `sessions` cache is the value of the `cache` attribute):
+
```xml
<distributed-cache name="offlineSessions" owners="1">
<remote-store cache="offlineSessions" remote-servers="remote-cache" passivation="false" fetch-state="false" purge="false" preload="false" shared="true">
<property name="rawValues">true</property>
<property name="marshaller">org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory</property>
ifeval::[{project_product}==true]
<property name="protocolVersion">2.6</property>
endif::[]
ifeval::[{project_community}==true]
<property name="protocolVersion">2.9</property>
endif::[]
</remote-store>
</distributed-cache>
<distributed-cache name="clientSessions" owners="1">
<remote-store cache="clientSessions" remote-servers="remote-cache" passivation="false" fetch-state="false" purge="false" preload="false" shared="true">
<property name="rawValues">true</property>
<property name="marshaller">org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory</property>
ifeval::[{project_product}==true]
<property name="protocolVersion">2.6</property>
endif::[]
ifeval::[{project_community}==true]
<property name="protocolVersion">2.9</property>
endif::[]
</remote-store>
</distributed-cache>
<distributed-cache name="offlineClientSessions" owners="1">
<remote-store cache="offlineClientSessions" remote-servers="remote-cache" passivation="false" fetch-state="false" purge="false" preload="false" shared="true">
<property name="rawValues">true</property>
<property name="marshaller">org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory</property>
ifeval::[{project_product}==true]
<property name="protocolVersion">2.6</property>
endif::[]
ifeval::[{project_community}==true]
<property name="protocolVersion">2.9</property>
endif::[]
</remote-store>
</distributed-cache>
<distributed-cache name="loginFailures" owners="1">
<remote-store cache="loginFailures" remote-servers="remote-cache" passivation="false" fetch-state="false" purge="false" preload="false" shared="true">
<property name="rawValues">true</property>
<property name="marshaller">org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory</property>
ifeval::[{project_product}==true]
<property name="protocolVersion">2.6</property>
endif::[]
ifeval::[{project_community}==true]
<property name="protocolVersion">2.9</property>
endif::[]
</remote-store>
</distributed-cache>
<distributed-cache name="actionTokens" owners="2">
<object-memory size="-1"/>
<expiration max-idle="-1" interval="300000"/>
<remote-store cache="actionTokens" remote-servers="remote-cache" passivation="false" fetch-state="false" purge="false" preload="true" shared="true">
<property name="rawValues">true</property>
<property name="marshaller">org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory</property>
ifeval::[{project_product}==true]
<property name="protocolVersion">2.6</property>
endif::[]
ifeval::[{project_community}==true]
<property name="protocolVersion">2.9</property>
endif::[]
</remote-store>
</distributed-cache>
```
+
.. Add an outbound socket binding for the remote store into the `socket-binding-group` element configuration:
+
```xml
<outbound-socket-binding name="remote-cache">
<remote-destination host="${remote.cache.host:localhost}" port="${remote.cache.port:11222}"/>
</outbound-socket-binding>
```
+
.. The configuration of the distributed cache `authenticationSessions` and the other caches is left unchanged.
.. It is recommended to add the `remoteStoreSecurityEnabled` property with the value of `false` (or `true` if you enabled security
for the {jdgserver_name} servers as described above) to the `connectionsInfinispan` SPI in the `keycloak-server` subsystem:
+
```xml
<spi name="connectionsInfinispan">
...
<provider ...>
<properties>
...
<property name="remoteStoreSecurityEnabled" value="false"/>
</properties>
...
```
.. Optionally enable DEBUG logging under the `logging` subsystem:
+
```xml
<logger category="org.keycloak.cluster.infinispan">
<level name="DEBUG"/>
</logger>
<logger category="org.keycloak.connections.infinispan">
<level name="DEBUG"/>
</logger>
<logger category="org.keycloak.models.cache.infinispan">
<level name="DEBUG"/>
</logger>
<logger category="org.keycloak.models.sessions.infinispan">
<level name="DEBUG"/>
</logger>
```
+
. Copy `NODE11` to 3 other directories, referred to later as `NODE12`, `NODE21`, and `NODE22`.
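+
For example, a plain copy of the directory tree is all that is needed; the target directory names below are simply the ones used in this walkthrough:
+
[source]
----
cp -r NODE11 NODE12
cp -r NODE11 NODE21
cp -r NODE11 NODE22
----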
. Start `NODE11`:
+
[source,subs="+quotes"]
----
cd NODE11/bin
./standalone.sh -c standalone-ha.xml -Djboss.node.name=node11 -Djboss.site.name=site1 \
-Djboss.default.multicast.address=234.56.78.1 -Dremote.cache.host=server1 \
-Djava.net.preferIPv4Stack=true -b _PUBLIC_IP_ADDRESS_
----
+
. Start `NODE12`:
+
[source,subs="+quotes"]
----
cd NODE12/bin
./standalone.sh -c standalone-ha.xml -Djboss.node.name=node12 -Djboss.site.name=site1 \
-Djboss.default.multicast.address=234.56.78.1 -Dremote.cache.host=server1 \
-Djava.net.preferIPv4Stack=true -b _PUBLIC_IP_ADDRESS_
----
+
The cluster nodes should be connected. Something like this should be in the log of both `NODE11` and `NODE12`:
+
```
Received new cluster view for channel keycloak: [node11|1] (2) [node11, node12]
```
NOTE: The channel name in the log might be different.
. Start `NODE21`:
+
[source,subs="+quotes"]
----
cd NODE21/bin
./standalone.sh -c standalone-ha.xml -Djboss.node.name=node21 -Djboss.site.name=site2 \
-Djboss.default.multicast.address=234.56.78.2 -Dremote.cache.host=server2 \
-Djava.net.preferIPv4Stack=true -b _PUBLIC_IP_ADDRESS_
----
+
It shouldn't be connected to the cluster with `NODE11` and `NODE12`, but to a separate one:
+
```
Received new cluster view for channel keycloak: [node21|0] (1) [node21]
```
+
. Start `NODE22`:
+
[source,subs="+quotes"]
----
cd NODE22/bin
./standalone.sh -c standalone-ha.xml -Djboss.node.name=node22 -Djboss.site.name=site2 \
-Djboss.default.multicast.address=234.56.78.2 -Dremote.cache.host=server2 \
-Djava.net.preferIPv4Stack=true -b _PUBLIC_IP_ADDRESS_
----
+
It should be in a cluster with `NODE21`:
+
```
Received new cluster view for channel keycloak: [node21|1] (2) [node21, node22]
```
+
NOTE: The channel name in the log might be different.
. Test:
.. Go to `http://node11:8080/auth/` and create the initial admin user.
.. Go to `http://node11:8080/auth/admin` and log in as admin to the admin console.
.. Open a second browser and go to any of the nodes `http://node12:8080/auth/admin`, `http://node21:8080/auth/admin`, or `http://node22:8080/auth/admin`. After login, you should be able to see
the same sessions in the `Sessions` tab of a particular user, client, or realm on all 4 servers.
.. After making any change in the {project_name} admin console (for example, updating a user or a realm), the update
should be immediately visible on any of the 4 nodes, as caches should be properly invalidated everywhere.
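+
For example, you can make such a change from the command line with the admin CLI and then confirm that it is visible on a node in the other site. This is only an illustrative sketch; the realm display name and the admin credentials are placeholders:
+
[source]
----
cd NODE11/bin
./kcadm.sh config credentials --server http://node11:8080/auth --realm master --user admin --password <admin-password>
./kcadm.sh update realms/master -s displayName="Updated from site1"
# the new display name should now be visible in the admin console on node21 or node22
----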
.. Check the server logs if needed. After login or logout, a message like this should appear in `NODEXY/standalone/log/server.log` on all the nodes:
+
```
2017-08-25 17:35:17,737 DEBUG [org.keycloak.models.sessions.infinispan.remotestore.RemoteCacheSessionListener] (Client-Listener-sessions-30012a77422542f5) Received event from remote store.
Event 'CLIENT_CACHE_ENTRY_REMOVED', key '193489e7-e2bc-4069-afe8-f1dfa73084ea', skip 'false'
```
[[administration]]
==== Administration of cross-site deployment
@ -679,7 +194,7 @@ endif::[]
WARNING: These steps usually need to be done for all the {project_name} caches mentioned in <<backups>>.
**Automatically** - After some amount of failed backups, the `site2` will usually be taken offline automatically. This is done due the configuration of `take-offline` element inside the cache configuration as configured in <<jdgsetup>>.
**Automatically** - After some number of failed backups, the `site2` will usually be taken offline automatically. This is done due the configuration of `take-offline` element inside the cache configuration.
```xml
<take-offline min-wait="60000" after-failures="3" />
@ -817,11 +332,9 @@ Note the `mode` attribute of cache-configuration element.
The following tips are intended to assist you should you need to troubleshoot:
* It is recommended to go through the <<setup>> and have this one working first, so that you have some understanding of how things work. It is also wise to read this entire document to have some understanding of things.
* We recommend that you go through the procedure for xref:assembly-setting-up-crossdc[Setting up cross-site replication] and have this one working first, so that you have some understanding of how things work. It is also wise to read this entire document to have some understanding of things.
* Check in jconsole cluster status (GMS) and the JGroups status (RELAY) of {jdgserver_name} as described in <<jdgsetup>>. If things do not look as expected, then the issue is likely in the setup of {jdgserver_name} servers.
* For the {project_name} servers, you should see a message such as this during the server startup:
* For the {project_name} servers, you should see a message like this during the server startup:
+
```
18:09:30,156 INFO [org.keycloak.connections.infinispan.DefaultInfinispanConnectionProviderFactory] (ServerService Thread Pool -- 54)
@ -831,7 +344,7 @@ Node name: node11, Site name: site1
Check that the site name and the node name looks as expected during the startup of {project_name} server.
* Check that {project_name} servers are in cluster as expected, including that only the {project_name} servers from the same data center are in cluster with each other.
This can be also checked in JConsole through the GMS view. See link:{installguide_troubleshooting_link}[cluster troubleshooting] for additional details.
This can be also checked in JConsole through the GMS view.
* If there are exceptions during startup of {project_name} server like this:
+
@ -856,8 +369,6 @@ then check the log of corresponding {jdgserver_name} server of your site and che
* Check the Infinispan statistics, which are available through JMX. For example, try to login and then see if the new session was successfully written to both {jdgserver_name} servers and is available in the `sessions` cache there. This can be done indirectly by checking the count of elements in the `sessions` cache for the MBean `jboss.datagrid-infinispan:type=Cache,name="sessions(repl_sync)",manager="clustered",component=Statistics` and attribute `numberOfEntries`. After login, there should be one more entry for `numberOfEntries` on both {jdgserver_name} servers on both sites.
* Enable DEBUG logging as described <<serversetup>>. For example, if you log in and you think that the new session is not available on the second site, it's good to check the {project_name} server logs and check that listeners were triggered as described in the <<serversetup>>. If you do not know and want to ask on keycloak-user mailing list, it is helpful to send the log files from {project_name} servers on both datacenters in the email. Either add the log snippets to the mails or put the logs somewhere and reference them in the email.
* If you updated the entity, such as `user`, on {project_name} server on `site1` and you do not see that entity updated on the {project_name} server on `site2`, then the issue can be either in the replication of the synchronous database itself or that {project_name} caches are not properly invalidated. You may try to temporarily disable the {project_name} caches as described link:{installguide_disablingcaching_link}[here] to nail down if the issue is at the database replication level. Also it may help to manually connect to the database and check if data are updated as expected. This is specific to every database, so you will need to consult the documentation for your database.
* Sometimes you may see the exceptions related to locks like this in {jdgserver_name} server log:
@ -870,33 +381,6 @@ writing keys [[B0x033E243034396234..[39]]: org.infinispan.util.concurrent.Timeou
+
Those exceptions are not necessarily an issue. They may happen anytime when a concurrent edit of the same entity is triggered on both DCs. This is common in a deployment. Usually the {project_name} server is notified about the failed operation and will retry it, so from the user's point of view, there is usually not any issue.
* If there are exceptions during startup of {project_name} server, like this:
+
```
16:44:18,321 WARN [org.infinispan.client.hotrod.impl.protocol.Codec21] (ServerService Thread Pool -- 55) ISPN004005: Error received from the server: java.lang.SecurityException: ISPN000287: Unauthorized access: subject 'Subject with principal(s): []' lacks 'READ' permission
...
```
+
These log entries are the result of {project_name} automatically detecting whether authentication is required on {jdgserver_name} and
mean that authentication is necessary. At this point you will notice that either the server starts successfully and you can safely
ignore these or that the server fails to start. If the server fails to start, ensure that {jdgserver_name} has been configured
properly for authentication as described in <<jdgsetup>>. To prevent this log entry from being included, you can force authentication
by setting `remoteStoreSecurityEnabled` property to `true` in `spi=connectionsInfinispan/provider=default` configuration:
+
```xml
<subsystem xmlns="urn:jboss:domain:keycloak-server:1.1">
...
<spi name="connectionsInfinispan">
...
<provider name="default" enabled="true">
<properties>
...
<property name="remoteStoreSecurityEnabled" value="true"/>
</properties>
</provider>
</spi>
```
* If you try to authenticate with {project_name} to your application, but authentication fails with an infinite number
of redirects in your browser and you see the errors like this in the {project_name} server log:
+


@ -1,10 +1,15 @@
[id="assembly-setting-up-crossdc"] [id="assembly-setting-up-crossdc"]
:context: xsite :context: xsite
= Setting up cross-site with {jdgserver_name} {jdgserver_version_latest} = Setting up cross-site replication
[role="_abstract"] [role="_abstract"]
Use the following procedures for {jdgserver_name} {jdgserver_version_latest} to perform a basic setup of cross-site replication. Use the following procedures for {jdgserver_name} {jdgserver_version_latest} to perform a basic setup of cross-site replication.
[NOTE]
====
The exact instructions for your version of {jdgserver_name} may be slightly different from these instructions, which apply to {jdgserver_name} 8.1. Check the {jdgserver_name} documentation for any steps that do not work as described.
====
This example for {jdgserver_name} {jdgserver_version_latest} involves two data centers, `site1` and `site2`. Each data center consists of 1 {jdgserver_name} server and 2 {project_name} servers. We will end up with 2 {jdgserver_name} servers and 4 {project_name} servers in total.
* `Site1` consists of {jdgserver_name} server, `server1`, and 2 {project_name} servers, `node11` and `node12` .


@ -157,7 +157,7 @@
:jdgserver_name: RHDG
:jdgserver_version: 7.3
:jdgserver_version_latest: 8.1
:jdgserver_version_latest: 8.x
:jdgserver_crossdcdocs_link: https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html/data_grid_guide_to_cross-site_replication/index
:subsystem_undertow_xml_urn: urn:jboss:domain:undertow:12.0