KEYCLOAK-16628 Make SCRAM-SHA-512 appear conditionally in upstream and DIGEST-MD5 in downstream, in all locations, for the new release of Data Grid and Infinispan

Don Naro 2021-01-29 20:47:47 +00:00 committed by Marek Posolda
parent e112a2db67
commit 2c1413abb5
7 changed files with 176 additions and 46 deletions

@@ -146,6 +146,8 @@ The receiving {jdgserver_name} server notifies the {project_name} servers in its
See the <<archdiagram>> for more details.
include::crossdc/assembly-setting-up-crossdc.adoc[leveloffset=3]
[[setup]]
==== Setting up Cross DC with {jdgserver_name} {jdgserver_version}
@@ -321,9 +323,9 @@ Issues related to authorization may exist just for some other versions of {jdgse
.. The `backups` element like this:
+
```xml
<backups>
  <backup site="site1" ....
  ...
```
+
NOTE: The _PUBLIC_IP_ADDRESS_ below refers to the IP address or hostname that your server can bind to. Note that
@@ -390,8 +392,8 @@ In production you will likely need to have a separate database server in every d
.. Add the attribute `site` to the JGroups UDP protocol:
+
```xml
<stack name="udp">
  <transport type="UDP" socket-binding="jgroups-udp" site="${jboss.site.name}"/>
```
+
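The `${jboss.site.name}` expression resolves to a system property that you pass when starting each {project_name} server. As a minimal illustration only (the full per-node startup commands are shown later in this chapter), the property is supplied like this:
+
[source,bash,options="nowrap",subs=attributes+]
----
# illustrative; see the node-specific commands later in this chapter
./standalone.sh -c standalone-ha.xml -Djboss.site.name=site1 -b PUBLIC_IP_ADDRESS
----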
@@ -632,8 +634,6 @@ should be immediately visible on any of 4 nodes as caches should be properly inv
Event 'CLIENT_CACHE_ENTRY_REMOVED', key '193489e7-e2bc-4069-afe8-f1dfa73084ea', skip 'false'
```
[[administration]]
==== Administration of Cross DC deployment
@@ -672,7 +672,7 @@ There are 2 ways to take the site offline.
This is useful especially if the outage is planned. With `jconsole` or CLI, you can connect to the `server1` server and take `site2` offline.
More details about this are available in the
ifeval::[{project_product}==true]
link:{jdgserver_crossdcdocs_link}#xsite_auto_offline-xsite[{jdgserver_name} documentation].
endif::[]
ifeval::[{project_community}==true]
link:{jdgserver_crossdcdocs_link}[{jdgserver_name} documentation].
@@ -715,7 +715,7 @@ A bidirectional state transfer will ensure that entities which were created *aft
Unfortunately, there is no universal solution to this. Split-brains and network outages are simply situations that are usually impossible to handle 100% correctly with 100% consistent data between sites. In the case of {project_name}, it typically is not a critical issue. In the worst case, users will need to log in again to their clients, or have an improper count of loginFailures tracked for brute force protection. See the {jdgserver_name}/JGroups documentation for more tips on how to deal with split-brain.
The state transfer can also be done on the {jdgserver_name} server side through JMX. The operation name is `pushState`. There are a few other operations to monitor status, cancel push state, and so on.
More info about state transfer is available in the link:{jdgserver_crossdcdocs_link}#cli_xsite_push-cli-ops[{jdgserver_name} docs].
[[clearcache]]

@@ -5,6 +5,22 @@
[role="_abstract"]
Use the following procedures for {jdgserver_name} {jdgserver_version_latest} to perform a basic setup of Cross-Datacenter replication.
This example for {jdgserver_name} {jdgserver_version_latest} involves two data centers, `site1` and `site2`. Each data center consists of 1 {jdgserver_name} server and 2 {project_name} servers. We will end up with 2 {jdgserver_name} servers and 4 {project_name} servers in total.
* `Site1` consists of one {jdgserver_name} server, `server1`, and two {project_name} servers, `node11` and `node12`.
* `Site2` consists of one {jdgserver_name} server, `server2`, and two {project_name} servers, `node21` and `node22`.
* The {jdgserver_name} servers `server1` and `server2` are connected to each other through the RELAY2 protocol and `backup`-based {jdgserver_name}
caches in a similar way as described in the link:{jdgserver_crossdcdocs_link}[{jdgserver_name} documentation]. A configuration sketch follows below.
* {project_name} servers `node11` and `node12` form a cluster with each other, but they do not communicate directly with any server in `site2`.
They communicate with the Infinispan server `server1` using the Hot Rod protocol (remote cache). See <<communication>> for more information.
* The same details apply to `node21` and `node22`. They cluster with each other and communicate only with `server2` using the Hot Rod protocol.
Our example setup assumes that the four {project_name} servers talk to the same database. In production, we recommend that you use separate synchronously replicated databases across data centers as described in <<database>>.
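For orientation, the cross-site wiring that the following procedures set up on the {jdgserver_name} servers looks roughly like the sketch below. This is an illustrative sketch only: the element names follow the Infinispan {jdgserver_version_latest} server schema, but the stack name and values are placeholders, so use the exact configuration given in the procedures.

[source,xml,options="nowrap",subs=attributes+]
----
<infinispan>
    <jgroups>
        <!-- illustrative stack name; RELAY2 bridges the local cluster to the other site -->
        <stack name="xsite" extends="udp">
            <relay.RELAY2 site="site1" xmlns="urn:org:jgroups"/>
            <remote-sites default-stack="tcp">
                <remote-site name="site1"/>
                <remote-site name="site2"/>
            </remote-sites>
        </stack>
    </jgroups>
    <cache-container name="default">
        <!-- the cluster transport uses the relay-enabled stack -->
        <transport stack="xsite"/>
    </cache-container>
</infinispan>
----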
include::proc-setting-up-infinispan.adoc[leveloffset=4]
include::proc-configuring-infinispan.adoc[leveloffset=4]
include::proc-creating-infinispan-caches.adoc[leveloffset=4]

@@ -80,14 +80,19 @@ By default, {jdgserver_name} Server uses `server/conf/infinispan.xml` for static
<endpoints socket-binding="default">
   <hotrod-connector name="hotrod">
      <authentication>
ifeval::[{project_product}==true]
         <sasl mechanisms="DIGEST-MD5" <1>
endif::[]
ifeval::[{project_community}==true]
         <sasl mechanisms="SCRAM-SHA-512" <1>
endif::[]
               server-name="infinispan" /> <2>
      </authentication>
   </hotrod-connector>
   <rest-connector name="rest"/>
</endpoints>
----
<1> Configures the SASL authentication mechanism for the Hot Rod endpoint. SCRAM-SHA-512 is the default SASL mechanism for Hot Rod. However, you can use whatever mechanism is appropriate for your environment, such as DIGEST-MD5 or GSSAPI.
<2> Defines the name that {jdgserver_name} servers present to clients. You specify this name in the Hot Rod client configuration when you set up {project_name}.
+
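The mechanism that the server advertises must match the `infinispan.client.hotrod.sasl_mechanism` property that you set in the remote-store configuration on {project_name} (the full property list appears in a later procedure), for example:
+
[source,xml,options="nowrap",subs=attributes+]
----
ifeval::[{project_product}==true]
<property name="infinispan.client.hotrod.sasl_mechanism">DIGEST-MD5</property>
endif::[]
ifeval::[{project_community}==true]
<property name="infinispan.client.hotrod.sasl_mechanism">SCRAM-SHA-512</property>
endif::[]
----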
. Create a cache template.
@@ -111,19 +116,27 @@ NOTE: Add the cache template to `infinispan.xml` on each node in the {jdgserver_
<3> Disables timeout for lock acquisition.
<4> Names the backup site for the {jdgserver_name} cluster you are configuring.
+
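The callouts above refer to the cache template defined earlier in `infinispan.xml`. As a minimal sketch only (the element names follow the Infinispan {jdgserver_version_latest} schema, but the template name and values here are illustrative; rely on the template given in this procedure), such a template looks roughly like this:
+
[source,xml,options="nowrap",subs=attributes+]
----
<cache-container name="default">
    <!-- illustrative template name -->
    <replicated-cache-configuration name="sessions-cfg">
        <!-- callout 3: an acquire-timeout of 0 disables the lock acquisition timeout -->
        <locking acquire-timeout="0"/>
        <backups>
            <!-- callout 4: the other site backs up this cache -->
            <backup site="site2" strategy="SYNC"/>
        </backups>
    </replicated-cache-configuration>
</cache-container>
----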
. Start {jdgserver_name} server1.
+
[source,bash,options="nowrap",subs=attributes+]
----
./server.sh -c infinispan.xml -b PUBLIC_IP_ADDRESS -k PUBLIC_IP_ADDRESS -Djgroups.mcast_addr=228.6.7.10
----
+
. Start {jdgserver_name} server2.
+
[source,bash,options="nowrap",subs=attributes+]
----
./server.sh -c infinispan.xml -b PUBLIC_IP_ADDRESS -k PUBLIC_IP_ADDRESS -Djgroups.mcast_addr=228.6.7.11
----
+
. Check {jdgserver_name} server logs to verify the clusters form cross-site views.
+
[source,options="nowrap",subs=attributes+]
----
INFO [org.infinispan.XSITE] (jgroups-5,${server.hostname}) ISPN000439: Received new x-site view: [site1]
INFO [org.infinispan.XSITE] (jgroups-7,${server.hostname}) ISPN000439: Received new x-site view: [site1, site2]
----
ifeval::[{project_product}==true]

@@ -1,10 +1,10 @@
[id="proc-configuring-remote-cache-{context}"]
= Configuring Remote Cache Stores on {project_name}
After you set up remote {jdgserver_name} clusters, you configure the Infinispan subsystem on {project_name} to externalize data to those clusters through remote stores.
.Prerequisites
* Set up remote {jdgserver_name} clusters for cross-site configuration.
* Create a truststore that contains the SSL certificate with the {jdgserver_name} Server identity.
.Procedure
@@ -15,8 +15,8 @@ After you set up remote {jdgserver_name} clusters to back up {jdgserver_name} da
[source,xml,options="nowrap",subs=attributes+]
----
<outbound-socket-binding name="remote-cache"> <1>
    <remote-destination host="${remote.cache.host:server_hostname}" <2>
                        port="${remote.cache.port:11222}"/> <3>
</outbound-socket-binding>
----
<1> Names the socket binding as `remote-cache`.
@@ -28,25 +28,11 @@ After you set up remote {jdgserver_name} clusters to back up {jdgserver_name} da
[source,xml,options="nowrap",subs=attributes+]
----
<subsystem xmlns="urn:jboss:domain:infinispan:11.0">
    <cache-container name="keycloak"
                     module="org.keycloak.keycloak-model-infinispan"/>
----
. Update the `work` cache in the Infinispan subsystem so it has the following configuration:
+
[source,xml,options="nowrap",subs=attributes+]
----
@@ -60,6 +46,19 @@ infinispan.client.hotrod.trust_store_password = password
shared="true">
<property name="rawValues">true</property>
<property name="marshaller">org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory</property>
<property name="infinispan.client.hotrod.auth_username">myuser</property>
<property name="infinispan.client.hotrod.auth_password">qwer1234!</property>
<property name="infinispan.client.hotrod.auth_realm">default</property>
<property name="infinispan.client.hotrod.auth_server_name">infinispan</property>
ifeval::[{project_product}==true]
<property name="infinispan.client.hotrod.sasl_mechanism">DIGEST-MD5</property>
endif::[]
ifeval::[{project_community}==true]
<property name="infinispan.client.hotrod.sasl_mechanism">SCRAM-SHA-512</property>
endif::[]
<property name="infinispan.client.hotrod.trust_store_file_name">/path/to/truststore.jks</property>
<property name="infinispan.client.hotrod.trust_store_type">JKS</property>
<property name="infinispan.client.hotrod.trust_store_password">password</property>
</remote-store>
</replicated-cache>
----
@@ -103,6 +102,19 @@ For example, add a cache named `sessions` with the following configuration:
shared="true">
<property name="rawValues">true</property>
<property name="marshaller">org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory</property>
<property name="infinispan.client.hotrod.auth_username">myuser</property>
<property name="infinispan.client.hotrod.auth_password">qwer1234!</property>
<property name="infinispan.client.hotrod.auth_realm">default</property>
<property name="infinispan.client.hotrod.auth_server_name">infinispan</property>
ifeval::[{project_product}==true]
<property name="infinispan.client.hotrod.sasl_mechanism">DIGEST-MD5</property>
endif::[]
ifeval::[{project_community}==true]
<property name="infinispan.client.hotrod.sasl_mechanism">SCRAM-SHA-512</property>
endif::[]
<property name="infinispan.client.hotrod.trust_store_file_name">/path/to/truststore.jks</property>
<property name="infinispan.client.hotrod.trust_store_type">JKS</property>
<property name="infinispan.client.hotrod.trust_store_password">password</property>
</remote-store>
</distributed-cache>
----
@@ -111,22 +123,98 @@ For example, add a cache named `sessions` with the following configuration:
<3> Names the corresponding cache on the remote {jdgserver_name} cluster.
<4> Specifies the `remote-cache` socket binding.
+
. Copy the `NODE11` directory to three other directories, referred to later as `NODE12`, `NODE21`, and `NODE22`.
. Start `NODE11`:
+
[source,subs="+quotes"]
----
cd NODE11/bin
./standalone.sh -c standalone-ha.xml -Djboss.node.name=node11 -Djboss.site.name=site1 \
   -Djboss.default.multicast.address=234.56.78.1 -Dremote.cache.host=server1 \
   -Djava.net.preferIPv4Stack=true -b _PUBLIC_IP_ADDRESS_
----
+
If you notice the following warning messages in the logs, you can safely ignore them:
+
[source,options="nowrap",subs=attributes+]
----
WARN [org.infinispan.CONFIG] (MSC service thread 1-5) ISPN000292: Unrecognized attribute 'infinispan.client.hotrod.auth_password'. Please check your configuration. Ignoring!
WARN [org.infinispan.CONFIG] (MSC service thread 1-5) ISPN000292: Unrecognized attribute 'infinispan.client.hotrod.auth_username'. Please check your configuration. Ignoring!
----
+
. Start `NODE12`:
+
[source,subs="+quotes"]
----
cd NODE12/bin
./standalone.sh -c standalone-ha.xml -Djboss.node.name=node12 -Djboss.site.name=site1 \
-Djboss.default.multicast.address=234.56.78.1 -Dremote.cache.host=server1 \
-Djava.net.preferIPv4Stack=true -b _PUBLIC_IP_ADDRESS_
----
+
The cluster nodes should now be connected. A message like this should appear in the log of both NODE11 and NODE12:
+
```
Received new cluster view for channel keycloak: [node11|1] (2) [node11, node12]
```
NOTE: The channel name in the log might be different.
. Start `NODE21`:
+
[source,subs="+quotes"]
----
cd NODE21/bin
./standalone.sh -c standalone-ha.xml -Djboss.node.name=node21 -Djboss.site.name=site2 \
-Djboss.default.multicast.address=234.56.78.2 -Dremote.cache.host=server2 \
-Djava.net.preferIPv4Stack=true -b _PUBLIC_IP_ADDRESS_
----
+
It should not be connected to the cluster with `NODE11` and `NODE12`; instead it should form a separate cluster:
+
```
Received new cluster view for channel keycloak: [node21|0] (1) [node21]
```
+
. Start `NODE22`:
+
[source,subs="+quotes"]
----
cd NODE22/bin
./standalone.sh -c standalone-ha.xml -Djboss.node.name=node22 -Djboss.site.name=site2 \
-Djboss.default.multicast.address=234.56.78.2 -Dremote.cache.host=server2 \
-Djava.net.preferIPv4Stack=true -b _PUBLIC_IP_ADDRESS_
----
+
It should be in a cluster with `NODE21`:
+
```
Received new cluster view for channel keycloak: [node21|1] (2) [node21, node22]
```
+
NOTE: The channel name in the log might be different.
. Test:
.. Go to `http://node11:8080/auth/` and create the initial admin user.
.. Go to `http://node11:8080/auth/admin` and log in as admin to the Admin Console.
.. Open a second browser and go to any of the other nodes, such as `http://node12:8080/auth/admin`, `http://node21:8080/auth/admin`, or `http://node22:8080/auth/admin`. After you log in, you should see
the same sessions in the `Sessions` tab of a particular user, client, or realm on all four servers.
.. After making a change in the {project_name} Admin Console, such as modifying a user or a realm, that change
should be immediately visible on any of the four nodes. Caches should be properly invalidated everywhere.
.. Check the server logs if needed. After a login or logout, a message like this should appear in `NODEXY/standalone/log/server.log` on all of the nodes:
+
```
2017-08-25 17:35:17,737 DEBUG [org.keycloak.models.sessions.infinispan.remotestore.RemoteCacheSessionListener] (Client-Listener-sessions-30012a77422542f5) Received event from remote store.
Event 'CLIENT_CACHE_ENTRY_REMOVED', key '193489e7-e2bc-4069-afe8-f1dfa73084ea', skip 'false'
```
ifeval::[{project_product}==true]
[role="_additional-resources"]

@@ -35,14 +35,17 @@ EOF
+
[source,bash,options="nowrap",subs=attributes+]
-----
$ bin/cli.sh -c https://server1:11222 --trustall -f /tmp/caches.batch
-----
+
NOTE: Instead of the `--trustall` argument you can specify the truststore with the `-t` argument and the truststore password with the `-s` argument.
+
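For example, assuming the truststore you created earlier is available as `./truststore.jks` with the password `password` (both illustrative values), the command becomes:
+
[source,bash,options="nowrap",subs=attributes+]
-----
$ bin/cli.sh -c https://server1:11222 -t ./truststore.jks -s password -f /tmp/caches.batch
-----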
. Create the caches on the other site.
ifeval::[{project_product}==true]
[role="_additional-resources"]
.Additional resources
link:https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_server_guide/index#start_server[Getting Started with Data Grid Server] +
link:https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_server_guide/index#create_remote_cache[Remotely Creating Caches on Data Grid Clusters] +
link:https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_command_line_interface/index#batch_operations[Performing Batch Operations with the CLI] +
link:https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html/data_grid_rest_api/rest_v2_api#rest_v2_cache_exists[Verifying Caches]
endif::[]

@@ -20,7 +20,11 @@ For Cross-Datacenter replication, you start by creating remote {jdgserver_name}
$ bin/cli.sh user create myuser -p "qwer1234!"
----
+
[NOTE]
====
You specify these credentials in the Hot Rod client configuration when you create remote caches on {project_name}.
====
+
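These values later appear as Hot Rod client properties in the remote-store configuration on {project_name} (shown in full in a later procedure), for example:
+
[source,xml,options="nowrap",subs=attributes+]
----
<property name="infinispan.client.hotrod.auth_username">myuser</property>
<property name="infinispan.client.hotrod.auth_password">qwer1234!</property>
----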
. Create an SSL keystore and truststore to secure connections between {jdgserver_name} and {project_name}, for example:
.. Create a keystore to provide an SSL identity to your {jdgserver_name} cluster.
+
@@ -29,6 +33,7 @@ Note: You specify these credentials in the Hot Rod client configuration when you
keytool -genkey -alias server -keyalg RSA -keystore server.jks -keysize 2048
----
+
.. Export an SSL certificate from the keystore.
+
[source,bash,options="nowrap",subs=attributes+]
@@ -36,10 +41,15 @@ keytool -genkey -alias server -keyalg RSA -keystore server.jks -keysize 2048
keytool -exportcert -keystore server.jks -alias server -file server.crt
----
+
.. Import the SSL certificate into a truststore that {project_name} can use to verify the SSL identity for {jdgserver_name}.
+
[source,bash,options="nowrap",subs=attributes+]
----
keytool -importcert -keystore truststore.jks -alias server -file server.crt
----
.. Remove `server.crt`.
+
[source,bash,options="nowrap",subs=attributes+]
----
rm server.crt
----
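+
The resulting `truststore.jks` is what you later reference from the remote-store properties on {project_name} (shown in full in the remote cache procedure), for example:
+
[source,xml,options="nowrap",subs=attributes+]
----
<property name="infinispan.client.hotrod.trust_store_file_name">/path/to/truststore.jks</property>
<property name="infinispan.client.hotrod.trust_store_type">JKS</property>
<property name="infinispan.client.hotrod.trust_store_password">password</property>
----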

@@ -142,7 +142,7 @@
:jdgserver_name: RHDG
:jdgserver_version: 7.3
:jdgserver_version_latest: 8.1
:jdgserver_crossdcdocs_link: https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html/data_grid_guide_to_cross-site_replication/
:fuseVersion: JBoss Fuse 6.3.0 Rollup 12
:fuseHawtioEAPVersion: JBoss EAP 6.4