diff --git a/server_installation/master-docinfo.xml b/server_installation/master-docinfo.xml
index f1d800155b..b0cc742088 100644
--- a/server_installation/master-docinfo.xml
+++ b/server_installation/master-docinfo.xml
@@ -10,7 +10,7 @@ Red Hat Customer Content Services
 
-Copyright 2021 Red Hat, Inc.
+Copyright {cdate} Red Hat, Inc.
 
 Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
diff --git a/server_installation/topics/operating-mode/crossdc.adoc b/server_installation/topics/operating-mode/crossdc.adoc
index fd057ecfd4..cadc0bcc92 100644
--- a/server_installation/topics/operating-mode/crossdc.adoc
+++ b/server_installation/topics/operating-mode/crossdc.adoc
@@ -147,491 +147,6 @@ See the <> for more details.
 
 include::crossdc/assembly-setting-up-crossdc.adoc[leveloffset=3]
 
-[[setup]]
-==== Setting up cross-site replication with {jdgserver_name} {jdgserver_version}
-
-This example for {jdgserver_name} {jdgserver_version} involves two data centers, `site1` and `site2`. Each data center consists of 1 {jdgserver_name} server and 2 {project_name} servers. We will end up with 2 {jdgserver_name} servers and 4 {project_name} servers in total.
-
-* `Site1` consists of {jdgserver_name} server `server1` and 2 {project_name} servers, `node11` and `node12`.
-
-* `Site2` consists of {jdgserver_name} server `server2` and 2 {project_name} servers, `node21` and `node22`.
-
-* {jdgserver_name} servers `server1` and `server2` are connected to each other through the RELAY2 protocol and `backup`-based {jdgserver_name}
-caches, in a similar way as described in the link:{jdgserver_crossdcdocs_link}[{jdgserver_name} documentation].
-
-* {project_name} servers `node11` and `node12` form a cluster with each other, but they do not communicate directly with any server in `site2`.
-They communicate with the Infinispan server `server1` using the Hot Rod protocol (Remote cache). See <> for the details.
-
-* The same details apply for `node21` and `node22`. They cluster with each other and communicate only with the `server2` server using the Hot Rod protocol.
-
-Our example setup assumes that all 4 {project_name} servers talk to the same database. In production, it is recommended to use separate synchronously replicated databases across data centers as described in <>.
-
-
-[[jdgsetup]]
-===== Setting up the {jdgserver_name} server
-Follow these steps to set up the {jdgserver_name} server:
-
-. Download the {jdgserver_name} {jdgserver_version} server and unzip it to a directory of your choice. This location will be referred to in later steps as `SERVER1_HOME`.
-
-. Make the following changes to the JGroups subsystem configuration in `SERVER1_HOME/server/conf/infinispan-xsite.xml`:
-.. Add the `xsite` channel, which will use the `tcp` stack, under the `channels` element:
-+
-```xml
-<channels default="cluster">
-    <channel name="cluster"/>
-    <channel name="xsite" stack="tcp"/>
-</channels>
-```
-+
-.. Add a `relay` element to the end of the `udp` stack. We will configure it so that our site is `site1` and the other site, which we back up to, is `site2`:
-+
-```xml
-<stack name="udp">
-    ...
-    <relay site="site1">
-        <remote-site name="site2" channel="xsite"/>
-        <property name="relay_multicasts">false</property>
-    </relay>
-</stack>
-```
-+
-.. Configure the `tcp` stack to use the `TCPPING` protocol instead of `MPING`. Remove the `MPING` element and replace it with `TCPPING`. The `initial_hosts` property points to the hosts `server1` and `server2`:
-+
-```xml
-<stack name="tcp">
-    <transport type="TCP" socket-binding="jgroups-tcp"/>
-    <protocol type="TCPPING">
-        <property name="initial_hosts">server1[7600],server2[7600]</property>
-        <property name="ergonomics">false</property>
-    </protocol>
-    ...
-</stack>
-```
-NOTE: This is just an example setup to get things running quickly.
-In production, you are not required to use the `tcp` stack for JGroups `RELAY2`; you can configure any other stack. For example, you could use the default `udp` stack if the network between your data centers supports multicast. Just make sure that the {jdgserver_name} and {project_name} clusters cannot discover each other. Similarly, you are not required to use `TCPPING` as the discovery protocol, and in production you probably won't use `TCPPING` due to its static nature. Finally, site names are also configurable.
-The details of such a setup are out of scope of the {project_name} documentation. See the {jdgserver_name} documentation and JGroups documentation for more details.
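For illustration, the `relay` element extends naturally to more than two sites. The following sketch is hypothetical (`site3` and `server3` are illustrative names, not part of this example), and each site's `initial_hosts` list would also need to grow accordingly, for example by adding `server3[7600]`:

```xml
<!-- Hypothetical sketch of a three-site relay on server1.
     site3/server3 are illustrative names, not part of this example setup. -->
<relay site="site1">
    <remote-site name="site2" channel="xsite"/>
    <remote-site name="site3" channel="xsite"/>
    <property name="relay_multicasts">false</property>
</relay>
```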
-
-+
-
-. Add this into `SERVER1_HOME/standalone/configuration/clustered.xml`, under the cache-container named `clustered`:
-+
-```xml
-<cache-container name="clustered" default-cache="default" statistics="true">
-    ...
-ifeval::[{project_product}==true]
-    ...
-endif::[]
-    <replicated-cache-configuration name="sessions-cfg" mode="SYNC" start="EAGER" batching="false">
-        <locking acquire-timeout="0" />
-        <backups>
-            <backup site="site2" failure-policy="FAIL" strategy="SYNC" enabled="true">
-                <take-offline min-wait="60000" after-failures="3" />
-            </backup>
-        </backups>
-    </replicated-cache-configuration>
-
-    <replicated-cache name="work" configuration="sessions-cfg"/>
-    <replicated-cache name="sessions" configuration="sessions-cfg"/>
-    <replicated-cache name="clientSessions" configuration="sessions-cfg"/>
-    <replicated-cache name="offlineSessions" configuration="sessions-cfg"/>
-    <replicated-cache name="offlineClientSessions" configuration="sessions-cfg"/>
-    <replicated-cache name="actionTokens" configuration="sessions-cfg"/>
-    <replicated-cache name="loginFailures" configuration="sessions-cfg"/>
-</cache-container>
-```
-+
-NOTE: Details about the configuration options inside `replicated-cache-configuration` are explained in <>, which includes information about tweaking some of those options.
-+
-ifeval::[{project_community}==true]
-WARNING: Unlike in previous versions, the {jdgserver_name} server `replicated-cache-configuration` needs to be configured without the `transaction`
-element. See <> for more details.
-endif::[]
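As one illustration of the kind of tweaking that the note above refers to, the backup to the other site could be switched to an asynchronous strategy, so that {project_name} writes do not wait for `site2` to confirm. This is a sketch only, reusing the example thresholds from above; evaluate it against the tuning guidance before changing production settings:

```xml
<!-- Sketch: asynchronous backup to site2. Writes complete locally without
     waiting for the remote site, at the cost of immediate cross-site visibility. -->
<replicated-cache-configuration name="sessions-cfg" mode="SYNC" start="EAGER" batching="false">
    <locking acquire-timeout="0" />
    <backups>
        <backup site="site2" failure-policy="FAIL" strategy="ASYNC" enabled="true">
            <take-offline min-wait="60000" after-failures="3" />
        </backup>
    </backups>
</replicated-cache-configuration>
```

With `strategy="ASYNC"`, a write completes locally before the backup to the other site finishes, trading immediate cross-site visibility for lower latency.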
-+
-. Some {jdgserver_name} server releases require authorization before accessing protected caches over the network.
-+
-NOTE: You should not see any issues if you use the recommended {jdgserver_name} {jdgserver_version} server, so this step can (and should) be skipped.
-Issues related to authorization may exist only for some other versions of the {jdgserver_name} server.
-+
-{project_name} requires updates to the
-`___script_cache` cache, which contains scripts. If you get errors accessing this cache, you will need to set up authorization in the
-`clustered.xml` configuration as described below:
-+
-.. In the `<management>` section, add a security realm:
-+
-```xml
-<management>
-    <security-realms>
-        ...
-        <security-realm name="AllowScriptManager">
-            <authentication>
-                <users>
-                    <user username="___script_manager">
-                        <password>not-so-secret-password</password>
-                    </user>
-                </users>
-            </authentication>
-        </security-realm>
-    </security-realms>
-</management>
-```
-
-.. In the server core subsystem, add `<security>` as below:
-+
-```xml
-<cache-container name="clustered" default-cache="default" statistics="true">
-    <security>
-        <authorization>
-            <identity-role-mapper/>
-            <role name="___script_manager" permissions="ALL"/>
-        </authorization>
-    </security>
-    ...
-```
-
-.. In the endpoint subsystem, add the authentication configuration to the Hot Rod connector:
-+
-```xml
-<hotrod-connector cache-container="clustered" socket-binding="hotrod">
-    ...
-    <authentication security-realm="AllowScriptManager">
-        <sasl mechanisms="DIGEST-MD5" qop="auth" server-name="keycloak-jdg-server">
-            <policy>
-                <no-anonymous value="false" />
-            </policy>
-        </sasl>
-    </authentication>
-</hotrod-connector>
-```
-
-+
-. Copy the server to the second location, which will be referred to later as `SERVER2_HOME`.
-
-. In `SERVER2_HOME/standalone/configuration/clustered.xml`, exchange `site1` with `site2` and vice versa, both in the configuration of `relay` in the JGroups subsystem and in the configuration of `backups` in the cache subsystem. For example:
-.. The `relay` element should look like this:
-+
-```xml
-<relay site="site2">
-    <remote-site name="site1" channel="xsite"/>
-    <property name="relay_multicasts">false</property>
-</relay>
-```
-+
-.. The `backups` element like this:
-+
-```xml
-<backups>
-    <backup site="site1" failure-policy="FAIL" strategy="SYNC" enabled="true">
-        <take-offline min-wait="60000" after-failures="3" />
-    </backup>
-</backups>
-```
-+
-After both servers are started, they should connect to each other through RELAY2. In jconsole, the RELAY2 protocol view of the `cluster` channel should show routes to both sites:
-+
-```
-site1 --> _server1:site1
-site2 --> _server2:site2
-```
-+
-When you use the MBean `jgroups:type=protocol,cluster="cluster",protocol=GMS`, you should see that the attribute `member` contains just a single member:
-.. On `SERVER1` it should be like this:
-+
-```
-(1) server1
-```
-+
-.. And on `SERVER2` like this:
-+
-```
-(1) server2
-```
-+
-NOTE: In production, you can have more {jdgserver_name} servers in every data center. You just need to ensure that the {jdgserver_name} servers in the same data center are using the same multicast address (in other words, the same `jboss.default.multicast.address` during startup). Then, in the jconsole `GMS` protocol view, you will see all the members of the current cluster.
-
-
-[[serversetup]]
-===== Setting up {project_name} servers
-
-. Unzip the {project_name} server distribution to a location you choose. It will be referred to later as `NODE11`.
-
-. Configure a shared database for the KeycloakDS datasource. It is recommended to use MySQL or MariaDB for testing purposes. See <> for more details.
-+
-In production, you will likely need to have a separate database server in every data center, and both database servers should be synchronously replicated to each other. In the example setup, we just use a single database and connect all 4 {project_name} servers to it.
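For local testing with a single shared database, the KeycloakDS datasource in `standalone-ha.xml` might look roughly like the sketch below. The host `dbhost`, the database name `keycloak`, and the credentials are hypothetical placeholders, and the `mariadb` JDBC driver is assumed to be registered already:

```xml
<!-- Hypothetical shared database reachable from all 4 Keycloak servers.
     dbhost, keycloak, and the credentials are placeholders for illustration. -->
<datasource jndi-name="java:jboss/datasources/KeycloakDS" pool-name="KeycloakDS" enabled="true" use-java-context="true">
    <connection-url>jdbc:mariadb://dbhost:3306/keycloak</connection-url>
    <driver>mariadb</driver>
    <security>
        <user-name>keycloak</user-name>
        <password>change-me</password>
    </security>
</datasource>
```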
-+
-. Edit `NODE11/standalone/configuration/standalone-ha.xml`:
-
-.. Add the attribute `site` to the JGroups UDP protocol:
-+
-```xml
-<stack name="udp">
-    <transport type="UDP" socket-binding="jgroups-udp" site="${jboss.site.name}"/>
-```
-+
-
-.. Add the `remote-store` under the `work` cache:
-+
-```xml
-<replicated-cache name="work">
-    <remote-store cache="work" remote-servers="remote-cache" passivation="false" fetch-state="false" purge="false" preload="false" shared="true">
-        <property name="rawValues">true</property>
-        <property name="marshaller">org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory</property>
-ifeval::[{project_product}==true]
-        <property name="protocolVersion">2.6</property>
-endif::[]
-ifeval::[{project_community}==true]
-        <property name="protocolVersion">2.9</property>
-endif::[]
-    </remote-store>
-</replicated-cache>
-```
-+
-
-.. Add the `remote-store` like this under the `sessions` cache:
-+
-```xml
-<distributed-cache name="sessions" owners="1">
-    <remote-store cache="sessions" remote-servers="remote-cache" passivation="false" fetch-state="false" purge="false" preload="false" shared="true">
-        <property name="rawValues">true</property>
-        <property name="marshaller">org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory</property>
-ifeval::[{project_product}==true]
-        <property name="protocolVersion">2.6</property>
-endif::[]
-ifeval::[{project_community}==true]
-        <property name="protocolVersion">2.9</property>
-endif::[]
-    </remote-store>
-</distributed-cache>
-```
-+
-
-.. Do the same for the `offlineSessions`, `clientSessions`, `offlineClientSessions`, `loginFailures`, and `actionTokens` caches (the only difference from the `sessions` cache is the value of the `cache` property):
-+
-```xml
-<distributed-cache name="offlineSessions" owners="1">
-    <remote-store cache="offlineSessions" remote-servers="remote-cache" passivation="false" fetch-state="false" purge="false" preload="false" shared="true">
-        <property name="rawValues">true</property>
-        <property name="marshaller">org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory</property>
-ifeval::[{project_product}==true]
-        <property name="protocolVersion">2.6</property>
-endif::[]
-ifeval::[{project_community}==true]
-        <property name="protocolVersion">2.9</property>
-endif::[]
-    </remote-store>
-</distributed-cache>
-
-<distributed-cache name="clientSessions" owners="1">
-    <remote-store cache="clientSessions" remote-servers="remote-cache" passivation="false" fetch-state="false" purge="false" preload="false" shared="true">
-        <property name="rawValues">true</property>
-        <property name="marshaller">org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory</property>
-ifeval::[{project_product}==true]
-        <property name="protocolVersion">2.6</property>
-endif::[]
-ifeval::[{project_community}==true]
-        <property name="protocolVersion">2.9</property>
-endif::[]
-    </remote-store>
-</distributed-cache>
-
-<distributed-cache name="offlineClientSessions" owners="1">
-    <remote-store cache="offlineClientSessions" remote-servers="remote-cache" passivation="false" fetch-state="false" purge="false" preload="false" shared="true">
-        <property name="rawValues">true</property>
-        <property name="marshaller">org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory</property>
-ifeval::[{project_product}==true]
-        <property name="protocolVersion">2.6</property>
-endif::[]
-ifeval::[{project_community}==true]
-        <property name="protocolVersion">2.9</property>
-endif::[]
-    </remote-store>
-</distributed-cache>
-
-<distributed-cache name="loginFailures" owners="1">
-    <remote-store cache="loginFailures" remote-servers="remote-cache" passivation="false" fetch-state="false" purge="false" preload="false" shared="true">
-        <property name="rawValues">true</property>
-        <property name="marshaller">org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory</property>
-ifeval::[{project_product}==true]
-        <property name="protocolVersion">2.6</property>
-endif::[]
-ifeval::[{project_community}==true]
-        <property name="protocolVersion">2.9</property>
-endif::[]
-    </remote-store>
-</distributed-cache>
-
-<distributed-cache name="actionTokens" owners="2">
-    <remote-store cache="actionTokens" remote-servers="remote-cache" passivation="false" fetch-state="false" purge="false" preload="false" shared="true">
-        <property name="rawValues">true</property>
-        <property name="marshaller">org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory</property>
-ifeval::[{project_product}==true]
-        <property name="protocolVersion">2.6</property>
-endif::[]
-ifeval::[{project_community}==true]
-        <property name="protocolVersion">2.9</property>
-endif::[]
-    </remote-store>
-</distributed-cache>
-```
-+
-
-.. Add the outbound socket binding for the remote store into the `socket-binding-group` element configuration:
-+
-```xml
-<outbound-socket-binding name="remote-cache">
-    <remote-destination host="${remote.cache.host:localhost}" port="${remote.cache.port:11222}"/>
-</outbound-socket-binding>
-```
-+
-
-.. The configuration of the distributed cache `authenticationSessions` and other caches is left unchanged.
-
-.. It is recommended to add the `remoteStoreSecurityEnabled` property with the value `false` (or `true` if you enabled security
-for the {jdgserver_name} servers as described above) to the `connectionsInfinispan` SPI in the `keycloak-server` subsystem:
-+
-```xml
-<subsystem xmlns="urn:jboss:domain:keycloak-server:1.1">
-    ...
-    <spi name="connectionsInfinispan">
-        <provider name="default" enabled="true">
-            <properties>
-                <property name="remoteStoreSecurityEnabled" value="false"/>
-            </properties>
-        </provider>
-    </spi>
-    ...
-</subsystem>
-```
-
-.. Optionally enable DEBUG logging under the `logging` subsystem:
-+
-```xml
-<logger category="org.keycloak.cluster.infinispan">
-    <level name="DEBUG"/>
-</logger>
-<logger category="org.keycloak.connections.infinispan">
-    <level name="DEBUG"/>
-</logger>
-<logger category="org.keycloak.models.cache.infinispan.remote">
-    <level name="DEBUG"/>
-</logger>
-<logger category="org.keycloak.models.sessions.infinispan.remotestore">
-    <level name="DEBUG"/>
-</logger>
-```
-+
-
-. Copy `NODE11` to three other directories, referred to later as `NODE12`, `NODE21` and `NODE22`.
-
-. Start `NODE11`:
-+
-[source,subs="+quotes"]
-----
-cd NODE11/bin
-./standalone.sh -c standalone-ha.xml -Djboss.node.name=node11 -Djboss.site.name=site1 \
-  -Djboss.default.multicast.address=234.56.78.1 -Dremote.cache.host=server1 \
-  -Djava.net.preferIPv4Stack=true -b _PUBLIC_IP_ADDRESS_
-----
-+
-
-. Start `NODE12`:
-+
-[source,subs="+quotes"]
-----
-cd NODE12/bin
-./standalone.sh -c standalone-ha.xml -Djboss.node.name=node12 -Djboss.site.name=site1 \
-  -Djboss.default.multicast.address=234.56.78.1 -Dremote.cache.host=server1 \
-  -Djava.net.preferIPv4Stack=true -b _PUBLIC_IP_ADDRESS_
-----
-+
-The cluster nodes should be connected. Something like this should be in the log of both NODE11 and NODE12:
-+
-```
-Received new cluster view for channel keycloak: [node11|1] (2) [node11, node12]
-```
-NOTE: The channel name in the log might be different.
-
-. Start `NODE21`:
-+
-[source,subs="+quotes"]
-----
-cd NODE21/bin
-./standalone.sh -c standalone-ha.xml -Djboss.node.name=node21 -Djboss.site.name=site2 \
-  -Djboss.default.multicast.address=234.56.78.2 -Dremote.cache.host=server2 \
-  -Djava.net.preferIPv4Stack=true -b _PUBLIC_IP_ADDRESS_
-----
-+
-It shouldn't be connected to the cluster with `NODE11` and `NODE12`, but to a separate one:
-+
-```
-Received new cluster view for channel keycloak: [node21|0] (1) [node21]
-```
-+
-
-. Start `NODE22`:
-+
-[source,subs="+quotes"]
-----
-cd NODE22/bin
-./standalone.sh -c standalone-ha.xml -Djboss.node.name=node22 -Djboss.site.name=site2 \
-  -Djboss.default.multicast.address=234.56.78.2 -Dremote.cache.host=server2 \
-  -Djava.net.preferIPv4Stack=true -b _PUBLIC_IP_ADDRESS_
-----
-+
-It should be in a cluster with `NODE21`:
-+
-```
-Received new cluster view for channel keycloak: [node21|1] (2) [node21, node22]
-```
-+
-
-NOTE: The channel name in the log might be different.
-
-. Test:
-
-.. Go to `http://node11:8080/auth/` and create the initial admin user.
-
-.. Go to `http://node11:8080/auth/admin` and log in as admin to the admin console.
-
-.. Open a second browser and go to any of the nodes `http://node12:8080/auth/admin`, `http://node21:8080/auth/admin` or `http://node22:8080/auth/admin`. After login, you should be able to see
-the same sessions in the `Sessions` tab of a particular user, client or realm on all 4 servers.
-
-.. After making any change in the {project_name} admin console (for example, updating a user or a realm), the update
-should be immediately visible on any of the 4 nodes, as the caches are properly invalidated everywhere.
-
-.. Check the server logs if needed. After login or logout, a message like this should appear in `NODEXY/standalone/log/server.log` on all the nodes:
-+
-```
-2017-08-25 17:35:17,737 DEBUG [org.keycloak.models.sessions.infinispan.remotestore.RemoteCacheSessionListener] (Client-Listener-sessions-30012a77422542f5) Received event from remote store.
-Event 'CLIENT_CACHE_ENTRY_REMOVED', key '193489e7-e2bc-4069-afe8-f1dfa73084ea', skip 'false'
-```
 
 [[administration]]
 ==== Administration of cross-site deployment
 
@@ -679,7 +194,7 @@ endif::[]
 
 WARNING: These steps usually need to be done for all the {project_name} caches mentioned in <>.
 
-**Automatically** - After some amount of failed backups, the `site2` will usually be taken offline automatically. This is done due the configuration of `take-offline` element inside the cache configuration as configured in <>.
+**Automatically** - After some number of failed backups, `site2` will usually be taken offline automatically. This is done due to the configuration of the `take-offline` element inside the cache configuration.
 
 ```xml
 <take-offline min-wait="60000" after-failures="3" />
 ```
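The two attributes of `take-offline` control when the backup site is cut off: `after-failures` is the number of failed backup requests after which the site is taken offline, and `min-wait` is the minimum time in milliseconds to wait before doing so, even if the failure threshold has been reached. As an illustration only (the values below are made up, not a recommendation), a more tolerant policy might look like this:

```xml
<!-- Illustrative values only: take site2 offline after 10 failed backups,
     but never sooner than 30 seconds after the failures started. -->
<take-offline min-wait="30000" after-failures="10" />
```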
@@ -817,11 +332,9 @@ Note the `mode` attribute of cache-configuration element.
 
 The following tips are intended to assist you should you need to troubleshoot:
 
-* It is recommended to go through the <> and have this one working first, so that you have some understanding of how things work. It is also wise to read this entire document to have some understanding of things.
+* We recommend that you go through the procedure for xref:assembly-setting-up-crossdc[Setting up cross-site replication] and have it working first, so that you understand how things work. It is also wise to read this entire document.
 
-* Check in jconsole cluster status (GMS) and the JGroups status (RELAY) of {jdgserver_name} as described in <>. If things do not look as expected, then the issue is likely in the setup of {jdgserver_name} servers.
-
-* For the {project_name} servers, you should see a message like this during the server startup:
+* For the {project_name} servers, you should see a message such as this during the server startup:
 +
 ```
 18:09:30,156 INFO  [org.keycloak.connections.infinispan.DefaultInfinispanConnectionProviderFactory] (ServerService Thread Pool -- 54)
@@ -831,7 +344,7 @@ Node name: node11, Site name: site1
 ```
 
 Check that the site name and the node name look as expected during the startup of the {project_name} server.
 
 * Check that the {project_name} servers are in cluster as expected, including that only the {project_name} servers from the same data center are in cluster with each other.
-This can be also checked in JConsole through the GMS view. See link:{installguide_troubleshooting_link}[cluster troubleshooting] for additional details.
+This can also be checked in JConsole through the GMS view.
 
 * If there are exceptions during startup of the {project_name} server like this:
 +
@@ -856,8 +369,6 @@ then check the log of corresponding {jdgserver_name} server of your site and che
 
 * Check the Infinispan statistics, which are available through JMX. For example, try to log in and then see if the new session was successfully written to both {jdgserver_name} servers and is available in the `sessions` cache there. This can be done indirectly by checking the count of elements in the `sessions` cache for the MBean `jboss.datagrid-infinispan:type=Cache,name="sessions(repl_sync)",manager="clustered",component=Statistics` and attribute `numberOfEntries`. After login, there should be one more entry for `numberOfEntries` on both {jdgserver_name} servers on both sites.
 
-* Enable DEBUG logging as described <>. For example, if you log in and you think that the new session is not available on the second site, it's good to check the {project_name} server logs and check that listeners were triggered as described in the <>. If you do not know and want to ask on keycloak-user mailing list, it is helpful to send the log files from {project_name} servers on both datacenters in the email. Either add the log snippets to the mails or put the logs somewhere and reference them in the email.
-
 * If you updated the entity, such as `user`, on the {project_name} server on `site1` and you do not see that entity updated on the {project_name} server on `site2`, then the issue can be either in the replication of the synchronous database itself or that the {project_name} caches are not properly invalidated. You may try to temporarily disable the {project_name} caches as described link:{installguide_disablingcaching_link}[here] to nail down whether the issue is at the database replication level.
 Also it may help to manually connect to the database and check whether the data is updated as expected. This is specific to every database, so you will need to consult the documentation for your database.
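If you suspect stale caches rather than database replication, a quick test is to disable the {project_name} caches temporarily. Below is a minimal sketch of what this could look like in the `keycloak-server` subsystem; the `userCache` and `realmCache` SPI names are assumptions used for illustration, so follow the linked guide for the authoritative steps:

```xml
<!-- Assumed SPI names for illustration; see the linked disabling-caching
     guide for the exact configuration for your version. -->
<spi name="userCache">
    <provider name="default" enabled="false"/>
</spi>
<spi name="realmCache">
    <provider name="default" enabled="false"/>
</spi>
```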
 
 * Sometimes you may see exceptions related to locks, like this, in the {jdgserver_name} server log:
 +
@@ -870,33 +381,6 @@ writing keys [[B0x033E243034396234..[39]]: org.infinispan.util.concurrent.Timeou
 +
 Those exceptions are not necessarily an issue. They may happen at any time when a concurrent edit of the same entity is triggered on both DCs. This is common in a deployment. Usually the {project_name} server is notified about the failed operation and will retry it, so from the user's point of view, there is usually not any issue.
 
-* If there are exceptions during startup of the {project_name} server, like this:
-+
-```
-16:44:18,321 WARN  [org.infinispan.client.hotrod.impl.protocol.Codec21] (ServerService Thread Pool -- 55) ISPN004005: Error received from the server: java.lang.SecurityException: ISPN000287: Unauthorized access: subject 'Subject with principal(s): []' lacks 'READ' permission
-    ...
-```
-+
-These log entries are the result of {project_name} automatically detecting whether authentication is required on {jdgserver_name}, and they mean that authentication is necessary. At this point you will notice that either the server starts successfully, in which case you can safely ignore these entries, or the server fails to start. If the server fails to start, ensure that {jdgserver_name} has been configured properly for authentication as described in <>. To prevent this log entry from being included, you can force authentication by setting the `remoteStoreSecurityEnabled` property to `true` in the `spi=connectionsInfinispan/provider=default` configuration:
-+
-```xml
-<subsystem xmlns="urn:jboss:domain:keycloak-server:1.1">
-    ...
-    <spi name="connectionsInfinispan">
-        ...
-        <provider name="default" enabled="true">
-            <properties>
-                ...
-                <property name="remoteStoreSecurityEnabled" value="true"/>
-            </properties>
-        </provider>
-    </spi>
-    ...
-</subsystem>
-```
-
 * If you try to authenticate with {project_name} to your application, but authentication fails with an infinite number of redirects in your browser and you see errors like this in the {project_name} server log:
 +
diff --git a/server_installation/topics/operating-mode/crossdc/assembly-setting-up-crossdc.adoc b/server_installation/topics/operating-mode/crossdc/assembly-setting-up-crossdc.adoc
index 8a61deb5da..fc2ea57b8a 100644
--- a/server_installation/topics/operating-mode/crossdc/assembly-setting-up-crossdc.adoc
+++ b/server_installation/topics/operating-mode/crossdc/assembly-setting-up-crossdc.adoc
@@ -1,10 +1,15 @@
 [id="assembly-setting-up-crossdc"]
 :context: xsite
 
-= Setting up cross-site with {jdgserver_name} {jdgserver_version_latest}
+= Setting up cross-site replication
 
 [role="_abstract"]
 Use the following procedures for {jdgserver_name} {jdgserver_version_latest} to perform a basic setup of cross-site replication.
 
+[NOTE]
+====
+The exact instructions for your version of {jdgserver_name} may be slightly different from these instructions, which apply to {jdgserver_name} 8.1. Check the {jdgserver_name} documentation for any steps that do not work as described.
+====
+
 This example for {jdgserver_name} {jdgserver_version_latest} involves two data centers, `site1` and `site2`. Each data center consists of 1 {jdgserver_name} server and 2 {project_name} servers. We will end up with 2 {jdgserver_name} servers and 4 {project_name} servers in total.
 
 * `Site1` consists of {jdgserver_name} server, `server1`, and 2 {project_name} servers, `node11` and `node12` .
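For orientation only: in {jdgserver_name} 8.x, cross-site backups are declared directly in the server's `infinispan.xml` rather than in a WildFly-style `clustered.xml`. A rough sketch of the 8.1-era shape follows; element names and attributes may differ across 8.x releases, so verify against the linked cross-site replication guide:

```xml
<!-- Rough sketch based on the RHDG 8.1-era schema; verify against the
     cross-site replication guide for your exact release. -->
<infinispan>
    <cache-container name="default">
        <!-- the cross-site relay is configured on the JGroups stack referenced here -->
        <transport cluster="${infinispan.cluster.name}" stack="xsite"/>
        <replicated-cache name="sessions">
            <backups>
                <!-- back this cache up to the other site -->
                <backup site="site2" strategy="SYNC"/>
            </backups>
        </replicated-cache>
    </cache-container>
</infinispan>
```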
diff --git a/topics/templates/document-attributes-product.adoc b/topics/templates/document-attributes-product.adoc
index 3d74fb98d9..e23ec61551 100644
--- a/topics/templates/document-attributes-product.adoc
+++ b/topics/templates/document-attributes-product.adoc
@@ -157,7 +157,7 @@
 :jdgserver_name: RHDG
 :jdgserver_version: 7.3
-:jdgserver_version_latest: 8.1
+:jdgserver_version_latest: 8.x
 :jdgserver_crossdcdocs_link: https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html/data_grid_guide_to_cross-site_replication/index
 
 :subsystem_undertow_xml_urn: urn:jboss:domain:undertow:12.0