Clustering
To improve availability and scalability Keycloak can be deployed in a cluster. Configuring a Keycloak cluster is fairly straightforward; the required steps are:
Configure a shared database
Configure Infinispan
Enable realm and user cache invalidation
Enable distributed user sessions
Start in HA mode
Loadbalancer (optional step)
Configure a shared database
Keycloak doesn't replicate realms and users, but instead relies on all nodes using the same
database. This can be a relational database or Mongo. To make sure your database doesn't become a single
point of failure you may also want to deploy your database to a cluster.
Migration lock
Keycloak locks the database during startup. This guarantees that startup actions such as migration and importing realms are not executed
concurrently by multiple nodes.
By default, the maximum timeout for the lock is 900 seconds. If a node is unable to acquire the lock within 900 seconds, it fails to start.
The lock is re-checked every 2 seconds by default. Typically you won't need to change these defaults, but if necessary
they can be configured in standalone/configuration/keycloak-server.json:
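As an illustration, the relevant fragment might look something like the sketch below. The dblock SPI name and the lockWaitTimeout/lockRecheckTime property names are assumptions here, so verify them against your Keycloak version (values are in seconds):

"dblock": {
    "jpa": {
        "lockWaitTimeout": 900,
        "lockRecheckTime": 2
    }
}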
or similarly if you're using Mongo (just replace jpa with mongo).
Configure Infinispan
Keycloak uses Infinispan caches to share information between nodes.
For realms and users Keycloak uses an invalidation cache. An invalidation cache doesn't share any data; it simply
removes stale data from remote caches and makes sure all nodes re-load data from the database when it is changed. This reduces network traffic
and prevents sensitive data (such as realm keys and password hashes) from being transmitted.
User sessions and login failures support either distributed caches or fully replicated caches. The default is a distributed
cache. A distributed cache splits user sessions into segments, where each node holds one or more segments. It is possible
to replicate each segment to multiple nodes, but this is not strictly necessary since the failure of a node
will only result in users having to re-authenticate. If you need to prevent node failures from requiring users to
re-authenticate, set the owners attribute to 2 or more for the sessions cache
of the infinispan/Keycloak container as described below.
For cluster configuration edit the configuration of infinispan/Keycloak container in standalone/configuration/standalone-ha.xml.
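As an illustration, the keycloak cache-container might look roughly like the sketch below with owners raised to 2 for user sessions. The exact cache names and attributes depend on your Keycloak version, so adapt the file shipped with your distribution rather than copying this verbatim:

<cache-container name="keycloak" jndi-name="infinispan/Keycloak">
    <!-- invalidation caches: no data is replicated, only invalidation messages -->
    <invalidation-cache name="realms" mode="SYNC"/>
    <invalidation-cache name="users" mode="SYNC"/>
    <!-- distributed caches: owners controls how many nodes hold a copy of each segment -->
    <distributed-cache name="sessions" mode="SYNC" owners="2"/>
    <distributed-cache name="loginFailures" mode="SYNC" owners="2"/>
</cache-container>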
Secure Private Network
Best practice is to put intra-cluster traffic on a separate network from the network handling user requests. This is both for performance reasons
and to reduce the risk of exposing clustering to unwanted, possibly malevolent, traffic. For this reason there is a separate
network interface used to configure the address for clustering. This means that changing the bind address by adding -b <address>
to the startup command will only affect user requests.
To configure the bind address for clustering add -bprivate=<private address> to the startup command. As mentioned in the previous
paragraph you should only expose this address on a secure private network.
Start in HA mode
To start the server in HA mode, start it with:
# bin/standalone.sh --server-config=standalone-ha.xml -b <public address> -bprivate=<private address>
Replace public address with the address used for user requests and private address with the address used for
cluster communication.
Enabling cluster authentication and encryption
By default anyone who has access to the secure private network is able to join the cluster and could potentially send malicious messages to the
cluster. As mentioned earlier the realm and user caches are invalidation caches, so no sensitive information is transmitted. There is also limited
risk with regards to user sessions: even though a malicious node could potentially create a new user session, it would need to be able to sign the
associated tokens, which is not possible without the realm private key. A malicious node could, however, prevent user sessions from expiring and reset
failed login attempt counts if brute force protection is enabled.
If you are not able to fully isolate the network used for clustering communication from potential attackers you may want to enable authentication
and encryption of the cluster. This will have an impact on performance.
To enable encryption of cluster messages you first have to create a shared keystore (change the key and store passwords!):
keytool -genseckey -alias keycloak -keypass PASSWORD -storepass PASSWORD \
    -keyalg Blowfish -keysize 56 -keystore defaultStore.keystore -storetype JCEKS
Copy this keystore to all nodes (for example to standalone/configuration). Then configure JGroups to encrypt all
messages by adding the ENCRYPT protocol to the JGroups sub-system. It should be added after
the pbcast.GMS protocol, in both the udp and the tcp stacks:
...
<protocol type="pbcast.GMS"/>
<!-- ENCRYPT: encrypts cluster messages using the shared keystore created above -->
<protocol type="ENCRYPT">
    <property name="key_store_name">${jboss.server.config.dir}/defaultStore.keystore</property>
    <property name="store_password">PASSWORD</property>
    <property name="key_password">PASSWORD</property>
    <property name="alias">keycloak</property>
</protocol>
...
See the JGroups manual for more details.
Loadbalancer setup
This is an optional step; however, in production, when you have multiple Keycloak nodes in a cluster, you usually want to "hide" them behind a frontend loadbalancer server, which forwards
requests to the "backend" Keycloak nodes. Consult the documentation of your loadbalancer (for example mod_cluster)
for how to configure this.
But regardless of the loadbalancer implementation used, it is important to make sure that the web server sets the X-Forwarded-For and
X-Forwarded-Proto headers properly on the requests made to Keycloak. This is described in detail in the Reverse proxy
section.
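For illustration only, a sketch assuming Apache httpd with mod_proxy, mod_proxy_balancer and mod_headers in front of two hypothetical Keycloak nodes; mod_cluster or another loadbalancer would be configured differently:

# hypothetical backend nodes; X-Forwarded-For is added automatically by mod_proxy
<Proxy "balancer://keycloak">
    BalancerMember "http://keycloak1.internal:8080"
    BalancerMember "http://keycloak2.internal:8080"
</Proxy>

# tell Keycloak which scheme the user originally used (assuming TLS is terminated at the loadbalancer)
RequestHeader set X-Forwarded-Proto "https"

ProxyPass        "/" "balancer://keycloak/"
ProxyPassReverse "/" "balancer://keycloak/"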
Troubleshooting
Note that when you run a cluster, you should see a message similar to this in the log of both cluster nodes:
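The exact wording varies between Infinispan versions, but the cluster view message has roughly this shape (node names here are examples only); note the member count and the list of members:

INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] ISPN000094: Received new cluster view: [node1/keycloak|1] (2) [node1/keycloak, node2/keycloak]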
If you see just one node mentioned, it's possible that your cluster hosts are not joined together.
Usually it's best practice to have your cluster nodes on a private network, without a firewall restricting communication among them.
The firewall could instead be enabled just on the public access point to your network. If for some reason you still need to have a firewall
enabled on the cluster nodes, you will need to open some ports. The default values are UDP port 55200 and multicast port 45688
with multicast address 230.0.0.4. Note that you may need more ports opened if you want to enable additional features like diagnostics for your
JGroups stack. Keycloak delegates most of the clustering work to Infinispan/JGroups, so consult the EAP or JGroups documentation for more info.
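For example, a minimal sketch assuming a Linux node running firewalld and the default socket bindings mentioned above (adjust the ports if you changed them, and open any extra ports required by optional JGroups features):

# allow JGroups unicast (55200/udp) and multicast discovery (45688/udp) between cluster nodes
firewall-cmd --permanent --add-port=55200/udp
firewall-cmd --permanent --add-port=45688/udp
firewall-cmd --reload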