[id="proc-configuring-remote-cache-{context}"] = Configuring Remote Cache Stores on {project_name} After you set up remote {jdgserver_name} clusters, you configure the Infinispan subsystem on {project_name} to externalize data to those clusters through remote stores. .Prerequisites * Set up remote {jdgserver_name} clusters for cross-site configuration. * Create a truststore that contains the SSL certificate with the {jdgserver_name} Server identity. .Procedure . Add the truststore to the {project_name} deployment. . Create a socket binding that points to your {jdgserver_name} cluster. + [source,xml,options="nowrap",subs=attributes+] ---- <1> port="${remote.cache.port:11222}"/> <3> ---- <1> Names the socket binding as `remote-cache`. <2> Specifies one or more hostnames for the {jdgserver_name} cluster. <3> Defines the port of `11222` where the Hot Rod endpoint listens. + . Add the `org.keycloak.keycloak-model-infinispan` module to the `keycloak` cache container in the Infinispan subsystem. + [source,xml,options="nowrap",subs=attributes+] ---- ---- . Update the `work` cache in the Infinispan subsystem so it has the following configuration: + [source,xml,options="nowrap",subs=attributes+] ---- <1> remote-servers="remote-cache" <3> passivation="false" fetch-state="false" purge="false" preload="false" shared="true"> true org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory myuser qwer1234! default infinispan SCRAM-SHA-512 /path/to/truststore.jks JKS password ---- <1> Names the cache in the {jdgserver_name} configuration. <2> Names the corresponding cache on the remote {jdgserver_name} cluster. <3> Specifies the `remote-cache` socket binding. + The preceding cache configuration includes recommended settings for {jdgserver_name} caches. Hot Rod client configuration properties specify the {jdgserver_name} user credentials and SSL keystore and truststore details. + Refer to the ifeval::[{project_community}==true] https://infinispan.org/docs/11.0.x/titles/xsite/xsite.html#configure_clients-xsite[{jdgserver_name} documentation] endif::[] ifeval::[{project_product}==true] https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/data_grid_guide_to_cross-site_replication/index#configure_clients-xsite[{jdgserver_name} documentation] endif::[] for descriptions of each property. . Add distributed caches to the Infinispan subsystem for each of the following caches: + * sessions * clientSessions * offlineSessions * offlineClientSessions * actionTokens * loginFailures + For example, add a cache named `sessions` with the following configuration: + [source,xml,options="nowrap",subs=attributes+] ---- owners="1"> <2> remote-servers="remote-cache" <4> passivation="false" fetch-state="false" purge="false" preload="false" shared="true"> true org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory myuser qwer1234! default infinispan SCRAM-SHA-512 /path/to/truststore.jks JKS password ---- <1> Names the cache in the {jdgserver_name} configuration. <2> Configures one replica of each cache entry across the {jdgserver_name} cluster. <3> Names the corresponding cache on the remote {jdgserver_name} cluster. <4> Specifies the `remote-cache` socket binding. + . Copy the `NODE11` to 3 other directories referred later as `NODE12`, `NODE21` and `NODE22`. . 
. Start `NODE11`:
+
[source,subs="+quotes"]
----
cd NODE11/bin
./standalone.sh -c standalone-ha.xml -Djboss.node.name=node11 -Djboss.site.name=site1 \
  -Djboss.default.multicast.address=234.56.78.1 -Dremote.cache.host=server1 \
  -Djava.net.preferIPv4Stack=true -b _PUBLIC_IP_ADDRESS_
----
+
If you notice the following warning messages in the logs, you can safely ignore them:
+
[source,options="nowrap",subs=attributes+]
----
WARN [org.infinispan.CONFIG] (MSC service thread 1-5) ISPN000292: Unrecognized attribute 'infinispan.client.hotrod.auth_password'. Please check your configuration. Ignoring!
WARN [org.infinispan.CONFIG] (MSC service thread 1-5) ISPN000292: Unrecognized attribute 'infinispan.client.hotrod.auth_username'. Please check your configuration. Ignoring!
----

. Start `NODE12`:
+
[source,subs="+quotes"]
----
cd NODE12/bin
./standalone.sh -c standalone-ha.xml -Djboss.node.name=node12 -Djboss.site.name=site1 \
  -Djboss.default.multicast.address=234.56.78.1 -Dremote.cache.host=server1 \
  -Djava.net.preferIPv4Stack=true -b _PUBLIC_IP_ADDRESS_
----
+
The cluster nodes should now be connected. A message like the following should appear in the logs of both `NODE11` and `NODE12`:
+
```
Received new cluster view for channel keycloak: [node11|1] (2) [node11, node12]
```
+
NOTE: The channel name in the log might be different.

. Start `NODE21`:
+
[source,subs="+quotes"]
----
cd NODE21/bin
./standalone.sh -c standalone-ha.xml -Djboss.node.name=node21 -Djboss.site.name=site2 \
  -Djboss.default.multicast.address=234.56.78.2 -Dremote.cache.host=server2 \
  -Djava.net.preferIPv4Stack=true -b _PUBLIC_IP_ADDRESS_
----
+
`NODE21` should not connect to the cluster with `NODE11` and `NODE12`, but to a separate one:
+
```
Received new cluster view for channel keycloak: [node21|0] (1) [node21]
```

. Start `NODE22`:
+
[source,subs="+quotes"]
----
cd NODE22/bin
./standalone.sh -c standalone-ha.xml -Djboss.node.name=node22 -Djboss.site.name=site2 \
  -Djboss.default.multicast.address=234.56.78.2 -Dremote.cache.host=server2 \
  -Djava.net.preferIPv4Stack=true -b _PUBLIC_IP_ADDRESS_
----
+
It should be in a cluster with `NODE21`:
+
```
Received new cluster view for channel keycloak: [node21|1] (2) [node21, node22]
```
+
NOTE: The channel name in the log might be different.

. Test the setup:
.. Go to `http://node11:8080/auth/` and create the initial admin user.
.. Go to `http://node11:8080/auth/admin` and log in to the Admin Console as admin.
.. Open a second browser and go to any of `http://node12:8080/auth/admin`, `http://node21:8080/auth/admin`, or `http://node22:8080/auth/admin`. After login, you should see the same sessions in the `Sessions` tab of a particular user, client, or realm on all four servers.
.. After making a change in the {project_name} Admin Console, such as modifying a user or a realm, that change should be immediately visible on any of the four nodes. Caches should be properly invalidated everywhere.
.. Check the server logs if needed. After login or logout, a message like the following should appear in `NODEXY/standalone/log/server.log` on all the nodes:
+
```
2017-08-25 17:35:17,737 DEBUG [org.keycloak.models.sessions.infinispan.remotestore.RemoteCacheSessionListener] (Client-Listener-sessions-30012a77422542f5) Received event from remote store.
Event 'CLIENT_CACHE_ENTRY_REMOVED', key '193489e7-e2bc-4069-afe8-f1dfa73084ea', skip 'false'
```
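The truststore referenced in the prerequisites and in the `trust_store_file_name` property can be created with the JDK `keytool` utility. The following is a minimal sketch, assuming the {jdgserver_name} Server certificate has been exported to a file named `server.crt`; the certificate file name and the `infinispan-server` alias are illustrative, not part of the configuration above:

[source,options="nowrap"]
----
# Import the server certificate into a new JKS truststore.
# The truststore path, type, and password must match the
# infinispan.client.hotrod.trust_store_* properties above.
keytool -importcert -noprompt \
    -alias infinispan-server \
    -file server.crt \
    -keystore /path/to/truststore.jks \
    -storetype JKS \
    -storepass password
----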
ifeval::[{project_product}==true]
[role="_additional-resources"]
.Additional resources
link:https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html-single/configuring_data_grid/index[Data Grid Configuration Guide] +
link:https://access.redhat.com/webassets/avalon/d/red-hat-data-grid/8.1/api/org/infinispan/client/hotrod/configuration/package-summary.html[Hot Rod Client Configuration API] +
link:https://access.redhat.com/webassets/avalon/d/red-hat-data-grid/8.1/configdocs/[Data Grid Configuration Schema Reference]
endif::[]