These steps are already automated for embedded Undertow; see the Cross-DC tests section in the [HOW-TO-RUN.md](../testsuite/integration-arquillian/HOW-TO-RUN.md) document. For Wildfly they are not yet automated.
The following instructions apply to the Wildfly server.
This is an example setup simulating 2 datacenters, `site1` and `site2`. Each datacenter consists of 1 Infinispan server and 2 Keycloak servers,
so the testing setup contains 2 Infinispan servers and 4 Keycloak servers in total.
* Site1 consists of the Infinispan server `jdg1` and 2 Keycloak servers, `node11` and `node12`.
* Site2 consists of the Infinispan server `jdg2` and 2 Keycloak servers, `node21` and `node22`.
* Infinispan servers `jdg1` and `jdg2` form a cluster with each other. The communication between them is the only communication between the 2 datacenters.
* Keycloak servers `node11` and `node12` form a cluster with each other, but they don't communicate with any server in `site2`. They communicate with the Infinispan server `jdg1` through the HotRod protocol (remote cache).
* The same applies to `node21` and `node22`: they form a cluster with each other and communicate only with the `jdg2` server through the HotRod protocol. A minimal sketch of such a HotRod connection is shown below.
TODO: Picture on blog
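To make the HotRod communication described above more concrete, here is a minimal sketch of how a Keycloak node in `site1` (e.g. `node11`) could obtain remote caches from its local Infinispan server `jdg1` using the HotRod Java client. The host, the port (`11222` is simply the default HotRod port) and the class name are assumptions made for illustration, not Keycloak's actual code.

```java
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

// Illustrative only: how a node in site1 (e.g. node11) could reach jdg1 over HotRod.
// localhost:11222 (the default HotRod port) is an assumption of this sketch.
public class HotRodConnectionSketch {

    public static void main(String[] args) {
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.addServer().host("localhost").port(11222); // jdg1 of site1

        RemoteCacheManager remoteCacheManager = new RemoteCacheManager(builder.build());

        // Remote caches used in the cross-DC setup, e.g. "work" and "sessions"
        RemoteCache<String, Object> work = remoteCacheManager.getCache("work");
        RemoteCache<String, Object> sessions = remoteCacheManager.getCache("sessions");

        System.out.println("Connected to jdg1. work=" + work.size() + ", sessions=" + sessions.size());

        remoteCacheManager.stop();
    }
}
```

Note that each Keycloak node only ever points at the Infinispan server of its own site; the only cross-DC traffic is the replication between `jdg1` and `jdg2`.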
* For example, when some object (realm, client, role, user, ...) is updated on `node11`, `node11` sends an invalidation message. It does so by saving a special cache entry to the remote cache `work` on `jdg1`.
`jdg1` notifies the client listeners in the same DC (hence on `node12`) and propagates the message to it. But the `work` cache on `jdg1` is replicated with `jdg2`,
so the entry is saved on `jdg2` too and `jdg2` notifies the client listeners on nodes `node21` and `node22`.
All the nodes then know that they should invalidate the updated object from their caches. The caches with the actual data (`realms`, `users` and `authorization`) are Infinispan local caches. A hedged sketch of this remote-cache write follows below.
TODO: Picture and better explanation?
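As a rough illustration of the invalidation flow above, the following sketch shows a node writing a special entry into the remote `work` cache. The key/value format used here is invented for the example and is not Keycloak's actual message format; the point is that a single `put` on `jdg1` is replicated to `jdg2` and both servers then notify their registered client listeners.

```java
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

// Hypothetical sketch of "sending an invalidation message":
// node11 saves an entry into the remote "work" cache on jdg1. Because "work" is
// replicated between jdg1 and jdg2, the entry appears on jdg2 too, and both servers
// notify their client listeners (node12 in site1, node21/node22 in site2).
public class InvalidationSenderSketch {

    public static void main(String[] args) {
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.addServer().host("localhost").port(11222); // jdg1 (assumed HotRod port)

        RemoteCacheManager rcm = new RemoteCacheManager(builder.build());
        RemoteCache<String, String> work = rcm.getCache("work");

        // Invented key/value just for illustration; Keycloak stores its own task objects here.
        work.put("invalidation::realm::demo-realm", "REALM_UPDATED");

        rcm.stop();
    }
}
```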
* For example, when some userSession is created/updated/removed on `node11`, it is saved in the cluster on the current DC, so `node12` can see it. But it is also saved to the remote cache on the `jdg1` server.
The userSession is then automatically seen on the `jdg2` server because the `sessions` cache is replicated between `jdg1` and `jdg2`. Server `jdg2` then notifies nodes `node21` and `node22` through
the client listeners (a feature of the remote cache and the HotRod protocol; see the Infinispan docs for details). The node that owns the userSession (either `node21` or `node22`) updates the userSession in the cluster
on `site2`. Hence any user requests coming to Keycloak nodes on `site2` will see the latest updates. A sketch of such a client listener follows below.
TODO: Picture and better explanation?
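The client-listener mechanism mentioned above can be sketched as follows: a listener registered on the remote `sessions` cache receives created/modified events no matter which site originated the write. The class below is illustrative only; Keycloak's real listener is `org.keycloak.models.sessions.infinispan.remotestore.RemoteCacheSessionListener`, which you can see in the log message in step 9.5.

```java
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.annotation.ClientCacheEntryCreated;
import org.infinispan.client.hotrod.annotation.ClientCacheEntryModified;
import org.infinispan.client.hotrod.annotation.ClientListener;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
import org.infinispan.client.hotrod.event.ClientCacheEntryCreatedEvent;
import org.infinispan.client.hotrod.event.ClientCacheEntryModifiedEvent;

// Hypothetical sketch of a HotRod client listener: the mechanism jdg2 uses to tell
// node21/node22 about session entries replicated from site1.
public class SessionListenerSketch {

    @ClientListener
    public static class SessionsListener {

        @ClientCacheEntryCreated
        public void created(ClientCacheEntryCreatedEvent<String> event) {
            // In Keycloak, the node owning the session would now refresh it in the local cluster cache.
            System.out.println("Session created in remote cache: " + event.getKey());
        }

        @ClientCacheEntryModified
        public void modified(ClientCacheEntryModifiedEvent<String> event) {
            System.out.println("Session modified in remote cache: " + event.getKey());
        }
    }

    public static void main(String[] args) {
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.addServer().host("localhost").port(11222); // jdg2 from node21's point of view (assumed port)

        RemoteCacheManager rcm = new RemoteCacheManager(builder.build());
        RemoteCache<String, Object> sessions = rcm.getCache("sessions");

        // Register the listener; it now receives events for entries written on either site,
        // because the "sessions" cache is replicated between jdg1 and jdg2.
        sessions.addClientListener(new SessionsListener());
    }
}
```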
The example setup assumes all 6 servers are bootstrapped on localhost, each on different ports.
Infinispan Server setup
-----------------------
1) Download Infinispan 8.2.6 server and unzip to some folder
2) Add this into `JDG1_HOME/standalone/configuration/clustered.xml` under the cache-container named `clustered`:
9.3) Open a 2nd browser and go to any of the nodes `http://localhost:12080/auth/admin` or `http://localhost:13080/auth/admin` or `http://localhost:14080/auth/admin`. After login, you should be able to see
the same sessions in the `Sessions` tab of a particular user, client or realm on all 4 servers.
9.4) After doing any change (e.g. updating some user), the update should be immediately visible on any of the 4 nodes, as caches should be properly invalidated everywhere.
9.5) Check the server logs if needed. After login or logout, a message like this should appear in `NODEXY/standalone/log/server.log` on all the nodes:

    2017-08-25 17:35:17,737 DEBUG [org.keycloak.models.sessions.infinispan.remotestore.RemoteCacheSessionListener] (Client-Listener-sessions-30012a77422542f5) Received event from remote store.
This is just a starting point and the instructions are subject to change. We plan further improvements, especially around performance. If you
have any feedback regarding the cross-dc scenario, please let us know on the keycloak-user mailing list referred to from the [Keycloak home page](http://www.keycloak.org/community.html).