From 47b8591bbf7cff5bf33c2c5f342c7b1d907392db Mon Sep 17 00:00:00 2001
From: Don Naro
Date: Fri, 29 Jan 2021 15:05:00 +0000
Subject: [PATCH] doc updates to keycloak-16628 (#52)

---
 .../topics/operating-mode/crossdc.adoc           |  13 +-
 .../crossdc/assembly-setting-up-crossdc.adoc     |  10 +-
 .../crossdc/proc-configuring-infinispan.adoc     | 137 ++++++++++++++
 .../proc-configuring-remote-cache.adoc           |  29 +--
 .../proc-creating-infinispan-caches.adoc         |  48 +++++
 .../crossdc/proc-setting-up-infinispan.adoc      | 169 +++---------------
 .../document-attributes-product.adoc             |   4 +-
 7 files changed, 237 insertions(+), 173 deletions(-)
 create mode 100644 server_installation/topics/operating-mode/crossdc/proc-configuring-infinispan.adoc
 create mode 100644 server_installation/topics/operating-mode/crossdc/proc-creating-infinispan-caches.adoc

diff --git a/server_installation/topics/operating-mode/crossdc.adoc b/server_installation/topics/operating-mode/crossdc.adoc
index f8dd066d47..0f00d2ee0c 100644
--- a/server_installation/topics/operating-mode/crossdc.adoc
+++ b/server_installation/topics/operating-mode/crossdc.adoc
@@ -6,7 +6,7 @@
 :tech_feature_disabled: false
 include::../templates/techpreview.adoc[]
 
-Cross-Datacenter Replication mode is for when you want to run {project_name} in a cluster across multiple data centers, most typically using data center sites that are in different geographic regions. When using this mode, each data center will have its own cluster of {project_name} servers.
+Cross-Datacenter Replication mode lets you run {project_name} in a cluster across multiple data centers, most typically using data center sites that are in different geographic regions. When using this mode, each data center will have its own cluster of {project_name} servers.
 
 This documentation will refer to the following example architecture diagram to illustrate and describe a simple Cross-Datacenter Replication use case.
 
@@ -137,14 +137,17 @@ For more detail about how caches can be configured see <>.
 [[communication]]
 ==== Communication details
 
-{project_name} uses multiple, separate clusters of Infinispan caches. Every {project_name} node is in the cluster with the other {project_name} nodes in same data center, but not with the {project_name} nodes in different data centers. A {project_name} node does not communicate directly with the {project_name} nodes from different data centers. {project_name} nodes use external {jdgserver_name} servers for communication across data centers. This is done using the link:[Infinispan Hot Rod protocol].
+{project_name} uses multiple, separate clusters of Infinispan caches. Every {project_name} node is in the cluster with the other {project_name} nodes in the same data center, but not with the {project_name} nodes in different data centers. A {project_name} node does not communicate directly with the {project_name} nodes from different data centers. {project_name} nodes use external {jdgserver_name} servers for communication across data centers. This is done using the link:https://infinispan.org/docs/stable/titles/server/server.html#hot_rod[Infinispan Hot Rod protocol].
 
-The Infinispan caches on the {project_name} side must be configured with the link:https://infinispan.org/docs/stable/titles/configuring/configuring.html#remote_cache_store[remoteStore] to ensure that data are saved to the remote cache. There is separate Infinispan cluster between {jdgserver_name} servers, so the data saved on SERVER1 on `site1` are replicated to SERVER2 on `site2` .
+The Infinispan caches on the {project_name} side use link:https://infinispan.org/docs/stable/titles/configuring/configuring.html#remote_cache_store[`remoteStore`] configuration to offload data to a remote {jdgserver_name} cluster.
+{jdgserver_name} clusters in separate data centers then replicate that data to ensure it is backed up.
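+For illustration, a remote store declaration on a {project_name} cache takes the following shape; this is a minimal sketch that mirrors the `sessions` cache configuration shown in the setup procedures below:
+
+[source,xml,options="nowrap",subs=attributes+]
+----
+<replicated-cache name="sessions">
+    <remote-store cache="sessions" remote-servers="remote-cache"
+                  passivation="false" fetch-state="false"
+                  purge="false" preload="false" shared="true">
+        <property name="rawValues">true</property>
+        <property name="marshaller">org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory</property>
+    </remote-store>
+</replicated-cache>
+----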
 
-Finally, the receiving {jdgserver_name} server notifies the {project_name} servers in its cluster through the Client Listeners, which are a feature of the Hot Rod protocol. {project_name} nodes on `site2` then update their Infinispan caches and the particular user session is also visible on {project_name} nodes on `site2`.
+The receiving {jdgserver_name} server notifies the {project_name} servers in its cluster through Client Listeners, which are a feature of the Hot Rod protocol. {project_name} nodes on `site2` then update their Infinispan caches and the particular user session is also visible on {project_name} nodes on `site2`.
 
 See the <<archdiagram,example architecture diagram>> for more details.
 
+include::crossdc/assembly-setting-up-crossdc.adoc[leveloffset=3]
+
 [[setup]]
 ==== Setting up Cross DC with {jdgserver_name} {jdgserver_version}
@@ -631,8 +634,6 @@ should be immediately visible on any of 4 nodes as caches should be properly invalidated everywhere.
 Event 'CLIENT_CACHE_ENTRY_REMOVED', key '193489e7-e2bc-4069-afe8-f1dfa73084ea', skip 'false'
 ```
 
-include::crossdc/assembly-setting-up-crossdc.adoc[leveloffset=3]
-
 [[administration]]
 ==== Administration of Cross DC deployment
diff --git a/server_installation/topics/operating-mode/crossdc/assembly-setting-up-crossdc.adoc b/server_installation/topics/operating-mode/crossdc/assembly-setting-up-crossdc.adoc
index 4730fad262..60635f91e5 100644
--- a/server_installation/topics/operating-mode/crossdc/assembly-setting-up-crossdc.adoc
+++ b/server_installation/topics/operating-mode/crossdc/assembly-setting-up-crossdc.adoc
@@ -1,11 +1,11 @@
-
-[id="assembly-setting-up-crossdc_{context}"]
-= Setting up Cross DC with {jdgserver_name} {jdgserver_version_latest}
+[id="assembly-setting-up-crossdc"]
+:context: xsite
+= Setting Up Cross DC with {jdgserver_name} {jdgserver_version_latest}
 
 [role="_abstract"]
 Use the following procedures for {jdgserver_name} {jdgserver_version_latest} to perform a basic setup of Cross-Datacenter replication.
 
 include::proc-setting-up-infinispan.adoc[leveloffset=4]
+include::proc-configuring-infinispan.adoc[leveloffset=4]
+include::proc-creating-infinispan-caches.adoc[leveloffset=4]
 include::proc-configuring-remote-cache.adoc[leveloffset=4]
-
-
diff --git a/server_installation/topics/operating-mode/crossdc/proc-configuring-infinispan.adoc b/server_installation/topics/operating-mode/crossdc/proc-configuring-infinispan.adoc
new file mode 100644
index 0000000000..c6c57485a6
--- /dev/null
+++ b/server_installation/topics/operating-mode/crossdc/proc-configuring-infinispan.adoc
@@ -0,0 +1,137 @@
+[id='configuring-infinispan-{context}']
+= Configuring {jdgserver_name} Clusters
+Configure {jdgserver_name} clusters to replicate {project_name} data across data centers.
+
+.Prerequisites
+
+* Install and set up {jdgserver_name} Server.
+
+.Procedure
+
+. Open `infinispan.xml` for editing.
++
+By default, {jdgserver_name} Server uses `server/conf/infinispan.xml` for static configuration such as cluster transport and security mechanisms.
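++
+If you maintain cross-site settings in a separate configuration file, you can specify that file when you start the server; a sketch with an illustrative file name:
++
+[source,bash,options="nowrap",subs=attributes+]
+----
+$ bin/server.sh -c infinispan-xsite.xml
+----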
+
+. Create a stack that uses TCPPING as the cluster discovery protocol.
++
+[source,xml,options="nowrap",subs=attributes+]
+----
+<jgroups>
+    <stack name="global-cluster" extends="tcp">
+        <TCPPING initial_hosts="server1[7800],server2[7800]" <1>
+                 stack.combine="REPLACE" stack.position="MPING"/>
+    </stack>
+</jgroups>
+----
+<1> Lists the host names for `server1` and `server2`.
++
+. Configure the {jdgserver_name} cluster transport to perform Cross-Datacenter replication.
+.. Add the RELAY2 protocol to a JGroups stack.
++
+[source,xml,options="nowrap",subs=attributes+]
+----
+<jgroups>
+    <stack name="xsite" extends="udp"> <1>
+        <relay.RELAY2 site="site1" <2>
+                      max_site_masters="1000"/> <3>
+        <remote-sites default-stack="global-cluster"> <4>
+            <remote-site name="site1"/>
+            <remote-site name="site2"/>
+        </remote-sites>
+    </stack>
+</jgroups>
+----
+<1> Creates a stack named `xsite` that extends the default UDP cluster transport.
+<2> Adds the RELAY2 protocol and names the cluster you are configuring as `site1`. The site name must be unique to each {jdgserver_name} cluster.
+<3> Sets 1000 as the number of relay nodes for the cluster. You should set a value that is equal to or greater than the maximum number of nodes in your {jdgserver_name} cluster.
+<4> Names all {jdgserver_name} clusters that back up caches with {jdgserver_name} data and uses the default TCP stack for inter-cluster transport.
++
+.. Configure the {jdgserver_name} cluster transport to use the stack.
++
+[source,xml,options="nowrap",subs=attributes+]
+----
+<cache-container name="default" statistics="true">
+    <transport cluster="${infinispan.cluster.name:cluster}" stack="xsite"/> <1>
+</cache-container>
+----
+<1> Uses the `xsite` stack for the cluster.
++
+. Configure the keystore as an SSL identity in the server security realm.
++
+[source,xml,options="nowrap",subs=attributes+]
+----
+<security-realm name="default">
+    <server-identities>
+        <ssl>
+            <keystore path="server.p12" <1>
+                      relative-to="infinispan.server.config.path"
+                      keystore-password="password" <2>
+                      alias="server" /> <3>
+        </ssl>
+    </server-identities>
+</security-realm>
+----
+<1> Specifies the path of the keystore that contains the SSL identity.
+<2> Specifies the password to access the keystore.
+<3> Names the alias of the certificate in the keystore.
++
+. Configure the authentication mechanism for the Hot Rod endpoint.
++
+[source,xml,options="nowrap",subs=attributes+]
+----
+<endpoints socket-binding="default" security-realm="default">
+    <hotrod-connector name="hotrod">
+        <authentication>
+            <sasl mechanisms="SCRAM-SHA-512" <1>
+                  server-name="infinispan"/> <2>
+        </authentication>
+    </hotrod-connector>
+    <rest-connector name="rest"/>
+</endpoints>
+----
+<1> Configures the SASL authentication mechanism for the Hot Rod endpoint. SCRAM-SHA-512 is the default SASL mechanism for Hot Rod. However, you can use whatever is appropriate for your environment, such as GSSAPI.
+<2> Defines the name that {jdgserver_name} servers present to clients. You specify this name in the Hot Rod client configuration when you set up {project_name}.
++
+. Create a cache template.
++
+NOTE: Add the cache template to `infinispan.xml` on each node in the {jdgserver_name} cluster.
++
+[source,xml,options="nowrap",subs=attributes+]
+----
+<cache-container name="default" statistics="true">
+    <replicated-cache-configuration name="sessions-cfg" <1>
+                                    mode="SYNC"> <2>
+        <locking acquire-timeout="0" /> <3>
+        <backups>
+            <backup site="site2" strategy="SYNC" /> <4>
+        </backups>
+    </replicated-cache-configuration>
+</cache-container>
+----
+<1> Creates a cache template named `sessions-cfg`.
+<2> Defines a cache that synchronously replicates data across the cluster.
+<3> Disables timeout for lock acquisition.
+<4> Names the backup site for the {jdgserver_name} cluster you are configuring.
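++
+For example, on the `site2` cluster the corresponding template is the mirror image and names `site1` as the backup site:
++
+[source,xml,options="nowrap",subs=attributes+]
+----
+<replicated-cache-configuration name="sessions-cfg"
+                                mode="SYNC">
+    <locking acquire-timeout="0" />
+    <backups>
+        <backup site="site1" strategy="SYNC" />
+    </backups>
+</replicated-cache-configuration>
+----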
++
+. Start {jdgserver_name} at each site.
++
+[source,bash,options="nowrap",subs=attributes+]
+----
+$ bin/server.sh
+----
++
+. Check {jdgserver_name} server logs to verify the clusters form cross-site views.
++
+[source,options="nowrap",subs=attributes+]
+----
+INFO  [org.infinispan.XSITE] (jgroups-5,${server.hostname}) ISPN000439: Received new x-site view: [NYC]
+INFO  [org.infinispan.XSITE] (jgroups-7,${server.hostname}) ISPN000439: Received new x-site view: [NYC, LON]
+----
+
+ifeval::[{project_product}==true]
+[role="_additional-resources"]
+.Additional resources
+link:https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_server_guide/index#start_server[Getting Started with Data Grid Server]
+
+link:https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_guide_to_cross-site_replication/index#configure_relay-xsite[Configuring Data Grid Clusters for Cross-Site Replication]
+
+link:https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_server_guide/index#ssl_identity-server[Setting Up SSL Identities for Data Grid Server]
+
+link:https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_server_guide/index#configuring_endpoints[Configuring Data Grid Endpoints]
+
+link:https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_server_guide/index#configure_hotrod_authentication-server[Configuring Hot Rod Authentication Mechanisms]
+endif::[]
diff --git a/server_installation/topics/operating-mode/crossdc/proc-configuring-remote-cache.adoc b/server_installation/topics/operating-mode/crossdc/proc-configuring-remote-cache.adoc
index 191969426e..80d83b7241 100644
--- a/server_installation/topics/operating-mode/crossdc/proc-configuring-remote-cache.adoc
+++ b/server_installation/topics/operating-mode/crossdc/proc-configuring-remote-cache.adoc
@@ -1,4 +1,3 @@
-
 [id="proc-configuring-remote-cache-{context}"]
 = Configuring Remote Cache Stores on {project_name}
 After you set up remote {jdgserver_name} clusters to back up {jdgserver_name} data, you can configure the Infinispan subsystem to use those clusters as remote stores.
@@ -16,7 +15,7 @@
 [source,xml,options="nowrap",subs=attributes+]
 ----
 <1>
- <2>
+ <2>
 <3>
 ----
@@ -46,13 +45,6 @@
 infinispan.client.hotrod.trust_store_file_name = /path/to/truststore.jks
 infinispan.client.hotrod.trust_store_type = JKS
 infinispan.client.hotrod.trust_store_password = password
 ----
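++
+You can create the truststore that these properties reference with `keytool`; a sketch, assuming the {jdgserver_name} Server certificate has been exported to `server.crt`:
++
+[source,bash,options="nowrap",subs=attributes+]
+----
+$ keytool -importcert -noprompt -alias server -storetype JKS \
+    -keystore /path/to/truststore.jks -storepass password \
+    -file server.crt
+----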
-+
-. Add `hotrod-client.properties` to the classpath of your {project_name} server. For example:
-+
-[source,xml,options="nowrap",subs=attributes+]
-----
-./standalone.sh ... -P
-----
 
 . Update a replicated cache named `work` that is in the Infinispan subsystem with the following configuration:
 +
@@ -83,7 +75,7 @@ ifeval::[{project_community}==true]
 https://infinispan.org/docs/11.0.x/titles/xsite/xsite.html#configure_clients-xsite[{jdgserver_name} documentation]
 endif::[]
 ifeval::[{project_product}==true]
-https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_guide_to_cross-site_replication/index#configure_clients-xsite[{jdgserver_name} documentation]
+https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_guide_to_cross-site_replication/index#configure_clients-xsite[{jdgserver_name} documentation]
 endif::[]
 for descriptions of each property.
@@ -119,7 +111,22 @@ For example, add a cache named `sessions` with the following configuration:
 <3> Names the corresponding cache on the remote {jdgserver_name} cluster.
 <4> Specifies the `remote-cache` socket binding.
 +
-. Start {project_name}.
+. Start each {project_name} server with `hotrod-client.properties` on the classpath, for example:
++
+[source,bash,options="nowrap",subs=attributes+]
+----
+./standalone.sh -c standalone-ha.xml -Djboss.node.name=node11 -Djboss.site.name=site1 \
+  -Djboss.default.multicast.address=234.56.78.1 -Dremote.cache.host=server1 \
+  -Djava.net.preferIPv4Stack=true -b _PUBLIC_IP_ADDRESS_ \
+  -P path/to/hotrod-client.properties
+----
++
+. Check server logs for the following messages:
++
+[source,options="nowrap",subs=attributes+]
+----
+Received new cluster view for channel keycloak: [node11|1] (2) [node11, node12]
+----
 
 ifeval::[{project_product}==true]
 [role="_additional-resources"]
diff --git a/server_installation/topics/operating-mode/crossdc/proc-creating-infinispan-caches.adoc b/server_installation/topics/operating-mode/crossdc/proc-creating-infinispan-caches.adoc
new file mode 100644
index 0000000000..a4a584dc8e
--- /dev/null
+++ b/server_installation/topics/operating-mode/crossdc/proc-creating-infinispan-caches.adoc
@@ -0,0 +1,48 @@
+[id='creating-infinispan-caches-{context}']
+= Creating Infinispan Caches
+Create the Infinispan caches that {project_name} requires.
+
+We recommend that you create caches on {jdgserver_name} clusters at runtime rather than adding caches to `infinispan.xml`.
+This strategy ensures that your caches are automatically synchronized across the cluster and permanently stored.
+
+The following procedure uses the {jdgserver_name} Command Line Interface (CLI) to create all the required caches in a single batch command.
+
+.Prerequisites
+
+* Configure your {jdgserver_name} clusters.
+
+.Procedure
+
+. Create a batch file that contains caches, for example:
++
+[source,bash,options="nowrap",subs=attributes+]
+----
+cat > /tmp/caches.batch<<EOF
+create cache work --template=sessions-cfg
+create cache sessions --template=sessions-cfg
+create cache clientSessions --template=sessions-cfg
+create cache offlineSessions --template=sessions-cfg
+create cache offlineClientSessions --template=sessions-cfg
+create cache actionTokens --template=sessions-cfg
+create cache loginFailures --template=sessions-cfg
+EOF
+----
++
+. Create the caches with the {jdgserver_name} CLI, for example:
++
+[source,bash,options="nowrap",subs=attributes+]
+----
+$ bin/cli.sh -c https://server1:11222 --trustall -f /tmp/caches.batch
+----
++
+NOTE: Be sure to create the caches at each site.
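++
+As an alternative to the CLI, you can create each cache from the template with the {jdgserver_name} REST API; a sketch, assuming the default port 11222 and illustrative credentials:
++
+[source,bash,options="nowrap",subs=attributes+]
+----
+$ curl -k -u admin:password -X POST \
+    "https://server1:11222/rest/v2/caches/sessions?template=sessions-cfg"
+----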
+
+ifeval::[{project_product}==true]
+[role="_additional-resources"]
+.Additional resources
+link:https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_server_guide/index#create_remote_cache[Remotely Creating Caches on Data Grid Clusters]
+endif::[]
diff --git a/server_installation/topics/operating-mode/crossdc/proc-setting-up-infinispan.adoc b/server_installation/topics/operating-mode/crossdc/proc-setting-up-infinispan.adoc
--- a/server_installation/topics/operating-mode/crossdc/proc-setting-up-infinispan.adoc
+++ b/server_installation/topics/operating-mode/crossdc/proc-setting-up-infinispan.adoc
-. Create a stack that uses TCPPING as the cluster discovery protocol.
-+
-[source,xml,options="nowrap",subs=attributes+]
-----
-<jgroups>
-    <stack name="global-cluster" extends="tcp">
-        <TCPPING initial_hosts="server1[7800],server2[7800]"
-                 stack.combine="REPLACE" stack.position="MPING"/>
-    </stack>
-</jgroups>
-----
-+
-NOTE: The `initial_hosts` file contains the list of all Infinispan servers at both sites.
-
-. Configure the {jdgserver_name} cluster transport to perform Cross-Datacenter replication.
-.. Add the RELAY2 protocol to a JGroups stack.
-+
-[source,xml,options="nowrap",subs=attributes+]
-----
-<jgroups>
-    <stack name="xsite" extends="udp"> <1>
-        <relay.RELAY2 site="site1" <2>
-                      max_site_masters="1000"/> <3>
-        <remote-sites default-stack="global-cluster"> <4>
-            <remote-site name="site1"/>
-            <remote-site name="site2"/>
-        </remote-sites>
-    </stack>
-</jgroups>
-----
-<1> Creates a stack named `xsite` that extends the default UDP cluster transport.
-<2> Adds the RELAY2 protocol and names the cluster you are configuring as `site1`. +
-The site name must be unique to each {jdgserver_name} cluster.
-<3> Sets 1000 as the number of relay nodes for the cluster. You should set a value that is equal to or greater than the maximum number of nodes in your {jdgserver_name} cluster.
-<4> Names all {jdgserver_name} clusters that backup caches with {jdgserver_name} data and uses the default TCP stack for inter-cluster transport.
-+
-.. Configure the {jdgserver_name} cluster transport to use the stack.
-+
-[source,xml,options="nowrap",subs=attributes+]
-----
-<cache-container name="default" statistics="true">
-    <transport cluster="${infinispan.cluster.name:cluster}" stack="xsite"/> <1>
-</cache-container>
-----
-<1> Uses the `xsite` stack for the cluster.
-+
-. Configure the keystore as an SSL identity in the server security realm.
-+
-[source,xml,options="nowrap",subs=attributes+]
-----
-<security-realm name="default">
-    <server-identities>
-        <ssl>
-            <keystore path="server.p12" <1>
-                      relative-to="infinispan.server.config.path"
-                      keystore-password="password" <2>
-                      alias="server" /> <3>
-        </ssl>
-    </server-identities>
-</security-realm>
-----
-<1> Specifies the path of the keystore that contains the SSL identity.
-<2> Specifies the password to access the keystore.
-<3> Names the alias of the certificate in the keystore.
-+
-. Configure the authentication mechanism for the Hot Rod endpoint.
-+
-[source,xml,options="nowrap",subs=attributes+]
-----
-<endpoints socket-binding="default" security-realm="default">
-    <hotrod-connector name="hotrod">
-        <authentication>
-            <sasl mechanisms="SCRAM-SHA-512" <1>
-                  server-name="infinispan"/> <2>
-        </authentication>
-    </hotrod-connector>
-    <rest-connector name="rest"/>
-</endpoints>
-----
-<1> Configures the SASL authentication mechanism for the Hot Rod endpoint. SCRAM-SHA-512 is the default SASL mechanism for Hot Rod. However you can use whatever is appropriate for your environment, such as GSSAPI.
-<2> Defines the name that {jdgserver_name} servers present to clients. You specify this name in the Hot Rod client configuration when you set up {project_name}.
-+
-. Create a cache template.
-+
-[source,xml,options="nowrap",subs=attributes+]
-----
-<cache-container name="default" statistics="true">
-    <replicated-cache-configuration name="sessions-cfg" <1>
-                                    mode="SYNC"> <2>
-        <locking acquire-timeout="0" /> <3>
-        <backups>
-            <backup site="site2" strategy="SYNC" /> <4>
-        </backups>
-    </replicated-cache-configuration>
-</cache-container>
-----
-<1> Creates a cache template named `sessions-cfg`.
-<2> Defines a cache that synchronously replicates data across the cluster.
-<3> Disables timeout for lock acquisition.
-<4> Names the backup site for the {jdgserver_name} cluster you are configuring.
-+
-. Repeat the preceding steps to modify `infinispan.xml` on each node in the {jdgserver_name} cluster.
-. Start the cluster and open {jdgserver_name} Console in any browser.
-. Authenticate with the {jdgserver_name} user you created.
-. Add caches for {jdgserver_name} data using the `sessions-cfg` cache template.
-.. Navigate to the *Data Container* tab and then select *Create Cache*.
-.. Specify a name for the cache.
-.. Select `sessions-cfg` from the *Template* drop-down menu.
-.. Select *Create*.
-+
-Use preceding steps to create each of the following caches:
-+
-* work
-* sessions
-* clientSessions
-* offlineSessions
-* offlineClientSessions
-* actionTokens
-* loginFailures
-
-NOTE: We recommend that you create caches on {jdgserver_name} clusters at runtime through the CLI, Console, or Hot Rod and REST endpoints rather than adding caches to infinispan.xml. This strategy ensures that your caches are automatically synchronized across the cluster and permanently stored. Be sure to create the caches at each site. See the {jdgserver_name} documentation for more information.
- -ifeval::[{project_product}==true] -[role="_additional-resources"] -.Additional resources -link:https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_server_guide/index#start_server[Getting Started with Data Grid Server] + -link:https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_guide_to_cross-site_replication/index#configure_relay-xsite[Configuring Data Grid Clusters for Cross-Site Replication] + -link:https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_server_guide/index#ssl_identity-server[Setting Up SSL Identities for Data Grid Server] + -link:https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_server_guide/index#configuring_endpoints[Configuring Data Grid Endpoints] + -link:https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_server_guide/index#configure_hotrod_authentication-server[Configuring Hot Rod Authentication Mechanisms] + -link:https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_server_guide/index#create_remote_cache[Remotely Creating Caches on Data Grid Clusters] -endif::[] diff --git a/topics/templates/document-attributes-product.adoc b/topics/templates/document-attributes-product.adoc index bc1cbabb3f..76a7c155f5 100644 --- a/topics/templates/document-attributes-product.adoc +++ b/topics/templates/document-attributes-product.adoc @@ -140,8 +140,8 @@ :appserver_managementconsole_link: https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.0/html-single/configuration_guide/#management_console_overview :jdgserver_name: RHDG -:jdgserver_version: 7.3.8 -:jdgserver_version_latest: 8.1.0 +:jdgserver_version: 7.3 +:jdgserver_version_latest: 8.1 :jdgserver_crossdcdocs_link: https://access.redhat.com/documentation/en-us/red_hat_data_grid/7.3/html/red_hat_data_grid_user_guide/x_site_replication :fuseVersion: JBoss Fuse 6.3.0 Rollup 12