doc updates to keycloak-16628 (#52)

Don Naro 2021-01-29 15:05:00 +00:00 committed by Marek Posolda
parent 9c961d5e1e
commit 47b8591bbf
7 changed files with 237 additions and 173 deletions


@@ -6,7 +6,7 @@
:tech_feature_disabled: false
include::../templates/techpreview.adoc[]
Cross-Datacenter Replication mode lets you run {project_name} in a cluster across multiple data centers, most typically using data center sites that are in different geographic regions. When using this mode, each data center will have its own cluster of {project_name} servers.
This documentation will refer to the following example architecture diagram to illustrate and describe a simple Cross-Datacenter Replication use case.
@@ -137,14 +137,17 @@ For more detail about how caches can be configured see <<tuningcache>>.
[[communication]]
==== Communication details
{project_name} uses multiple, separate clusters of Infinispan caches. Every {project_name} node is in the cluster with the other {project_name} nodes in the same data center, but not with the {project_name} nodes in different data centers. A {project_name} node does not communicate directly with the {project_name} nodes from different data centers. {project_name} nodes use external {jdgserver_name} servers for communication across data centers. This is done using the link:https://infinispan.org/docs/stable/titles/server/server.html#hot_rod[Infinispan Hot Rod protocol].
The Infinispan caches on the {project_name} side use the link:https://infinispan.org/docs/stable/titles/configuring/configuring.html#remote_cache_store[`remoteStore`] configuration to offload data to a remote {jdgserver_name} cluster.
{jdgserver_name} clusters in separate data centers then replicate that data to ensure it is backed up.
The receiving {jdgserver_name} server notifies the {project_name} servers in its cluster through Client Listeners, which are a feature of the Hot Rod protocol. {project_name} nodes on `site2` then update their Infinispan caches and the particular user session is also visible on {project_name} nodes on `site2`.
See the <<archdiagram>> for more details.
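To make this relationship concrete, the following fragment is a minimal, illustrative sketch of how a {project_name} cache can declare a remote store that points at the external {jdgserver_name} cluster. The cache name, socket binding name, and attribute values here are examples only; the procedures later in this chapter describe the complete configuration for each cache.

[source,xml,options="nowrap",subs=attributes+]
----
<!-- Illustrative sketch: a replicated cache in the {project_name}
     Infinispan subsystem that offloads entries to an external
     {jdgserver_name} cluster over the Hot Rod protocol. -->
<replicated-cache name="sessions">
    <remote-store cache="sessions"
                  remote-servers="remote-cache"
                  passivation="false"
                  fetch-state="false"
                  purge="false"
                  preload="false"
                  shared="true"/>
</replicated-cache>
----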
include::crossdc/assembly-setting-up-crossdc.adoc[leveloffset=3]
[[setup]]
==== Setting up Cross DC with {jdgserver_name} {jdgserver_version}
@@ -631,8 +634,6 @@ should be immediately visible on any of 4 nodes as caches should be properly inv
Event 'CLIENT_CACHE_ENTRY_REMOVED', key '193489e7-e2bc-4069-afe8-f1dfa73084ea', skip 'false'
```
[[administration]]
==== Administration of Cross DC deployment


@@ -1,11 +1,11 @@
[id="assembly-setting-up-crossdc"]
:context: xsite
= Setting Up Cross DC with {jdgserver_name} {jdgserver_version_latest}
[role="_abstract"]
Use the following procedures for {jdgserver_name} {jdgserver_version_latest} to perform a basic setup of Cross-Datacenter replication.
include::proc-setting-up-infinispan.adoc[leveloffset=4]
include::proc-configuring-infinispan.adoc[leveloffset=4]
include::proc-creating-infinispan-caches.adoc[leveloffset=4]
include::proc-configuring-remote-cache.adoc[leveloffset=4]


@@ -0,0 +1,137 @@
[id='configuring-infinispan-{context}']
= Configuring {jdgserver_name} Clusters
Configure {jdgserver_name} clusters to replicate {project_name} data across data centers.
.Prerequisites
* Install and set up {jdgserver_name} Server.
.Procedure
. Open `infinispan.xml` for editing.
+
By default, {jdgserver_name} Server uses `server/conf/infinispan.xml` for static configuration such as cluster transport and security mechanisms.
. Create a stack that uses TCPPING as the cluster discovery protocol. The default `tcp` stack uses MPING, which relies on IP multicast and is typically not available between data centers.
+
[source,xml,options="nowrap",subs=attributes+]
----
<stack name="global-cluster" extends="tcp">
<!-- Remove MPING protocol from the stack and add TCPPING -->
<TCPPING initial_hosts="server1[7800],server2[7800]" <1>
stack.combine="REPLACE" stack.position="MPING"/>
</stack>
----
<1> Lists the host names and ports of the {jdgserver_name} servers at both sites, `server1` and `server2` in this example.
+
. Configure the {jdgserver_name} cluster transport to perform Cross-Datacenter replication.
.. Add the RELAY2 protocol to a JGroups stack.
+
[source,xml,options="nowrap",subs=attributes+]
----
<jgroups>
<stack name="xsite" extends="udp"> <1>
<relay.RELAY2 site="site1" <2>
max_site_masters="1000"/> <3>
<remote-sites default-stack="global-cluster"> <4>
<remote-site name="site1"/>
<remote-site name="site2"/>
</remote-sites>
</stack>
</jgroups>
----
<1> Creates a stack named `xsite` that extends the default UDP cluster transport.
<2> Adds the RELAY2 protocol and names the cluster you are configuring as `site1`. The site name must be unique to each {jdgserver_name} cluster.
<3> Sets 1000 as the number of relay nodes for the cluster. You should set a value that is equal to or greater than the maximum number of nodes in your {jdgserver_name} cluster.
<4> Names all {jdgserver_name} clusters that back up caches with {jdgserver_name} data and uses the default TCP stack for inter-cluster transport.
+
.. Configure the {jdgserver_name} cluster transport to use the stack.
+
[source,xml,options="nowrap",subs=attributes+]
----
<cache-container name="default" statistics="true">
<transport cluster="${infinispan.cluster.name:cluster}"
stack="xsite"/> <1>
</cache-container>
----
<1> Uses the `xsite` stack for the cluster.
+
. Configure the keystore as an SSL identity in the server security realm.
+
[source,xml,options="nowrap",subs=attributes+]
----
<server-identities>
<ssl>
<keystore path="server.jks" <1>
relative-to="infinispan.server.config.path"
keystore-password="password" <2>
alias="server" /> <3>
</ssl>
</server-identities>
----
<1> Specifies the path of the keystore that contains the SSL identity.
<2> Specifies the password to access the keystore.
<3> Names the alias of the certificate in the keystore.
+
. Configure the authentication mechanism for the Hot Rod endpoint.
+
[source,xml,options="nowrap",subs=attributes+]
----
<endpoints socket-binding="default">
<hotrod-connector name="hotrod">
<authentication>
<sasl mechanisms="SCRAM-SHA-512" <1>
server-name="infinispan" /> <2>
</authentication>
</hotrod-connector>
<rest-connector name="rest"/>
</endpoints>
----
<1> Configures the SASL authentication mechanism for the Hot Rod endpoint. SCRAM-SHA-512 is the default SASL mechanism for Hot Rod. However, you can use any mechanism that is appropriate for your environment, such as GSSAPI.
<2> Defines the name that {jdgserver_name} servers present to clients. You specify this name in the Hot Rod client configuration when you set up {project_name}.
+
. Create a cache template.
+
NOTE: Add the cache template to `infinispan.xml` on each node in the {jdgserver_name} cluster.
+
[source,xml,options="nowrap",subs=attributes+]
----
<cache-container ... >
<replicated-cache-configuration name="sessions-cfg" <1>
mode="SYNC"> <2>
<locking acquire-timeout="0" /> <3>
<backups>
<backup site="site2" strategy="SYNC" /> <4>
</backups>
</replicated-cache-configuration>
</cache-container>
----
<1> Creates a cache template named `sessions-cfg`.
<2> Defines a cache that synchronously replicates data across the cluster.
<3> Disables timeout for lock acquisition.
<4> Names the backup site for the {jdgserver_name} cluster you are configuring. A sketch of the corresponding configuration for the second site follows this procedure.
+
. Start {jdgserver_name} at each site.
+
[source,bash,options="nowrap",subs=attributes+]
----
$ bin/server.sh
----
+
. Check {jdgserver_name} server logs to verify the clusters form cross-site views.
+
[source,options="nowrap",subs=attributes+]
----
INFO [org.infinispan.XSITE] (jgroups-5,${server.hostname}) ISPN000439: Received new x-site view: [NYC]
INFO [org.infinispan.XSITE] (jgroups-7,${server.hostname}) ISPN000439: Received new x-site view: [NYC, LON]
----
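The examples in this procedure configure the cluster at `site1`. As a minimal sketch, and assuming the two sites back up to each other, the corresponding settings on the `site2` cluster differ only in the local site name and the backup target:

[source,xml,options="nowrap",subs=attributes+]
----
<!-- Sketch of the site2 counterpart: the local site name and backup target are swapped -->
<stack name="xsite" extends="udp">
    <relay.RELAY2 site="site2" max_site_masters="1000"/>
    <remote-sites default-stack="global-cluster">
        <remote-site name="site1"/>
        <remote-site name="site2"/>
    </remote-sites>
</stack>

<replicated-cache-configuration name="sessions-cfg" mode="SYNC">
    <locking acquire-timeout="0" />
    <backups>
        <backup site="site1" strategy="SYNC" />
    </backups>
</replicated-cache-configuration>
----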
ifeval::[{project_product}==true]
[role="_additional-resources"]
.Additional resources
link:https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_server_guide/index#start_server[Getting Started with Data Grid Server] +
link:https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_guide_to_cross-site_replication/index#configure_relay-xsite[Configuring Data Grid Clusters for Cross-Site Replication] +
link:https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_server_guide/index#ssl_identity-server[Setting Up SSL Identities for Data Grid Server] +
link:https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_server_guide/index#configuring_endpoints[Configuring Data Grid Endpoints] +
link:https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_server_guide/index#configure_hotrod_authentication-server[Configuring Hot Rod Authentication Mechanisms]
endif::[]


@@ -1,4 +1,3 @@
[id="proc-configuring-remote-cache-{context}"]
= Configuring Remote Cache Stores on {project_name}
After you set up remote {jdgserver_name} clusters to back up {jdgserver_name} data, you can configure the Infinispan subsystem to use those clusters as remote stores.
@@ -16,7 +15,7 @@ After you set up remote {jdgserver_name} clusters to back up {jdgserver_name} da
[source,xml,options="nowrap",subs=attributes+]
----
<outbound-socket-binding name="remote-cache"> <1>
    <remote-destination host="${remote.cache.host:server_hostname}" <2>
                        port="${remote.cache.port:11222}"/> <3>
</outbound-socket-binding>
----
@@ -46,13 +45,6 @@ infinispan.client.hotrod.trust_store_file_name = /path/to/truststore.jks
infinispan.client.hotrod.trust_store_type = JKS
infinispan.client.hotrod.trust_store_password = password
----
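+
The client configuration also needs authentication settings that match the Hot Rod endpoint on the {jdgserver_name} cluster. The following lines are an illustrative sketch that assumes the example user, the `SCRAM-SHA-512` mechanism, and the `infinispan` server name used elsewhere in this chapter, plus the default {jdgserver_name} security realm; adjust the values for your environment.
+
[source,options="nowrap",subs=attributes+]
----
# Illustrative sketch: Hot Rod client authentication settings
infinispan.client.hotrod.auth_username = myuser
infinispan.client.hotrod.auth_password = qwer1234!
infinispan.client.hotrod.auth_realm = default
infinispan.client.hotrod.auth_server_name = infinispan
infinispan.client.hotrod.sasl_mechanism = SCRAM-SHA-512
----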
. Update a replicated cache named `work` that is in the Infinispan subsystem with the following configuration:
+
@@ -83,7 +75,7 @@ ifeval::[{project_community}==true]
https://infinispan.org/docs/11.0.x/titles/xsite/xsite.html#configure_clients-xsite[{jdgserver_name} documentation]
endif::[]
ifeval::[{project_product}==true]
https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_guide_to_cross-site_replication/index#configure_clients-xsite[{jdgserver_name} documentation]
endif::[]
for descriptions of each property.
@@ -119,7 +111,22 @@ For example, add a cache named `sessions` with the following configuration:
<3> Names the corresponding cache on the remote {jdgserver_name} cluster.
<4> Specifies the `remote-cache` socket binding.
+
. Start each {project_name} server with `hotrod-client.properties` on the classpath, for example:
+
[source,bash,options="nowrap",subs=attributes+]
----
./standalone.sh -c standalone-ha.xml -Djboss.node.name=node11 -Djboss.site.name=site1 \
-Djboss.default.multicast.address=234.56.78.1 -Dremote.cache.host=server1 \
-Djava.net.preferIPv4Stack=true -b _PUBLIC_IP_ADDRESS_ \
-P path/to/hotrod-client.properties
----
+
. Check server logs for the following messages:
+
[source,options="nowrap",subs=attributes+]
----
Received new cluster view for channel keycloak: [node11|1] (2) [node11, node12]
----
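To confirm from the {jdgserver_name} side that a cache is being backed up to the other site, you can query the cross-site status through the {jdgserver_name} REST API. The path, credentials, and use of digest authentication below are assumptions based on the examples in this chapter; adjust them for your deployment.
+
[source,bash,options="nowrap",subs=attributes+]
----
# Illustrative sketch: query backup-site status for the sessions cache
curl --digest -u myuser:'qwer1234!' \
  http://server1:11222/rest/v2/caches/sessions/x-site/backups/
----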
ifeval::[{project_product}==true]
[role="_additional-resources"]


@@ -0,0 +1,48 @@
[id='creating-infinispan-caches-{context}']
= Creating Infinispan Caches
Create the Infinispan caches that {project_name} requires.
We recommend that you create caches on {jdgserver_name} clusters at runtime rather than adding caches to `infinispan.xml`.
This strategy ensures that your caches are automatically synchronized across the cluster and permanently stored.
The following procedure uses the {jdgserver_name} Command Line Interface (CLI) to create all the required caches in a single batch command.
.Prerequisites
* Configure your {jdgserver_name} clusters.
.Procedure
. Create a batch file that contains the cache creation commands, for example:
+
[source,bash,options="nowrap",subs=attributes+]
-----
cat > /tmp/caches.batch<<EOF
echo "creating caches..."
create cache work --template=sessions-cfg
create cache sessions --template=sessions-cfg
create cache clientSessions --template=sessions-cfg
create cache offlineSessions --template=sessions-cfg
create cache offlineClientSessions --template=sessions-cfg
create cache actionTokens --template=sessions-cfg
create cache loginFailures --template=sessions-cfg
echo "verifying caches"
ls caches
EOF
-----
+
. Create the caches with the CLI.
+
[source,bash,options="nowrap",subs=attributes+]
-----
$ bin/cli.sh -c http://localhost:11222 -f /tmp/caches.batch
-----
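Optionally, you can also list the caches over the {jdgserver_name} REST API as an extra check. This sketch assumes the REST endpoint runs on the default port, uses digest authentication, and accepts the example credentials created earlier in this chapter.

[source,bash,options="nowrap",subs=attributes+]
----
# Illustrative sketch: list cache names on the local {jdgserver_name} cluster
curl --digest -u myuser:'qwer1234!' http://localhost:11222/rest/v2/caches/
----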
ifeval::[{project_product}==true]
[role="_additional-resources"]
.Additional resources
link:https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_server_guide/index#start_server[Getting Started with Data Grid Server] +
link:https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_server_guide/index#create_remote_cache[Remotely Creating Caches on Data Grid Clusters] +
link:https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_command_line_interface/index#batch_operations[Performing Batch Operations with the CLI] +
link:https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html/data_grid_rest_api/rest_v2_api#rest_v2_cache_exists[Verifying Caches]
endif::[]


@@ -1,5 +1,5 @@
[id='setting-up-infinispan-{context}']
= Setting Up {jdgserver_name} Servers
For Cross-Datacenter replication, you start by creating remote {jdgserver_name} clusters that can back up {project_name} data.
.Prerequisites
@@ -11,164 +11,35 @@ For Cross-Datacenter replication, you start by creating remote {jdgserver_name}
{jdgserver_name} Server {jdgserver_version_latest} requires Java 11.
====
.Procedure
. Create a user to authenticate client connections from {project_name}, for example:
+
[source,bash,options="nowrap",subs=attributes+]
----
$ bin/cli.sh user create myuser -p "qwer1234!"
----
+
NOTE: You specify these credentials in the Hot Rod client configuration when you create remote caches on {project_name}.
. Create an SSL keystore and truststore to secure connections between {jdgserver_name} and {project_name}, for example:
.. Create a keystore to provide an SSL identity to your {jdgserver_name} cluster.
+
[source,bash,options="nowrap",subs=attributes+]
----
keytool -genkey -alias server -keyalg RSA -keystore server.jks -keysize 2048
----
+
.. Export an SSL certificate from the keystore.
+
[source,bash,options="nowrap",subs=attributes+]
----
keytool -exportcert -keystore server.jks -alias server -file server.crt
----
+
.. Import the SSL certificate into a truststore that {jdgserver_name} can use to verify the SSL identity for {jdgserver_name}.
+
[source,bash,options="nowrap",subs=attributes+]
----
keytool -importcert -keystore truststore.jks -alias server -file server.crt
rm server.crt
----
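Optionally, verify that the certificate was imported before you continue. This check uses only standard `keytool` options and the example file names from this procedure.

[source,bash,options="nowrap",subs=attributes+]
----
# List the imported certificate in the truststore
keytool -list -keystore truststore.jks -alias server
----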


@@ -140,8 +140,8 @@
:appserver_managementconsole_link: https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.0/html-single/configuration_guide/#management_console_overview
:jdgserver_name: RHDG
:jdgserver_version: 7.3
:jdgserver_version_latest: 8.1
:jdgserver_crossdcdocs_link: https://access.redhat.com/documentation/en-us/red_hat_data_grid/7.3/html/red_hat_data_grid_user_guide/x_site_replication
:fuseVersion: JBoss Fuse 6.3.0 Rollup 12