<!--
~ Copyright 2016 Red Hat, Inc. and/or its affiliates
~ and other contributors as indicated by the @author tags.
~
~ Licensed under the Apache License, Version 2.0 (the "License");
~ you may not use this file except in compliance with the License.
~ You may obtain a copy of the License at
~
~ http://www.apache.org/licenses/LICENSE-2.0
~
~ Unless required by applicable law or agreed to in writing, software
~ distributed under the License is distributed on an "AS IS" BASIS,
~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
~ See the License for the specific language governing permissions and
~ limitations under the License.
-->
<chapter id="clustering">
<title>Clustering</title>
<para>To improve availability and scalability, Keycloak can be deployed in a cluster.</para>
<para>It's fairly straightforward to configure a Keycloak cluster; the required steps are:
<itemizedlist>
<listitem>
<para>
Configure a shared database
</para>
</listitem>
<listitem>
<para>
Configure Infinispan
</para>
</listitem>
<listitem>
<para>
Enable realm and user cache invalidation
</para>
</listitem>
<listitem>
<para>
Enable distributed user sessions
</para>
</listitem>
<listitem>
<para>
Start in HA mode
</para>
</listitem>
<listitem>
<para>
Configure a load balancer (optional)
</para>
</listitem>
</itemizedlist>
</para>
<section>
<title>Configure a shared database</title>
<para>
Keycloak doesn't replicate realms and users, but instead relies on all nodes using the same
database. This can be a relational database or Mongo. To make sure your database doesn't become a single
point of failure, you may also want to deploy your database to a cluster.
</para>
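<para>
As an illustration (not a complete setup), the JPA connection settings in <literal>standalone/configuration/keycloak-server.json</literal>
might point every node at the same shared datasource. The datasource name <literal>java:jboss/datasources/KeycloakDS</literal> below is the
one used by a stock installation and should be adjusted to whatever datasource your nodes actually define:
<programlisting>
<![CDATA[
"connectionsJpa": {
    "default": {
        "dataSource": "java:jboss/datasources/KeycloakDS",
        "databaseSchema": "update"
    }
}
]]>
</programlisting>
The datasource itself is defined in <literal>standalone/configuration/standalone-ha.xml</literal> and must point to the same database on all nodes.
</para>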
<section>
<title>Migration lock</title>
<para>
Keycloak locks the database during startup. This guarantees that startup actions, such as migration and importing realms, are not executed
concurrently by multiple nodes.
</para>
<para>
By default, the maximum timeout for the lock is 900 seconds. If a node is unable to acquire the lock within 900 seconds, it fails to start.
The lock is checked every 2 seconds by default. Typically you won't need to change these defaults, but just in case
it's possible to configure them in <literal>standalone/configuration/keycloak-server.json</literal>:
<programlisting>
<![CDATA[
"dblock": {
    "jpa": {
        "lockWaitTimeout": 900,
        "lockRecheckTime": 2
    }
}
]]>
</programlisting>
or similarly if you're using Mongo (just replace <literal>jpa</literal> with <literal>mongo</literal>).
</para>
</section>
</section>
<section>
<title id="cluster-configure-infinispan">Configure Infinispan</title>
<para>
Keycloak uses <ulink url="http://www.infinispan.org/">Infinispan</ulink> caches to share information between nodes.
</para>
<para>
For realms and users Keycloak uses an invalidation cache. An invalidation cache doesn't share any data, but simply
removes stale data from remote caches and makes sure all nodes re-load data from the database when it is changed. This reduces network traffic
and prevents sensitive data (such as realm keys and password hashes) from being transmitted.
</para>
<para>
User sessions and login failures support either distributed caches or fully replicated caches. The default is a distributed
cache. A distributed cache splits user sessions into segments, where each node holds one or more segments. It is possible
to replicate each segment to multiple nodes, but this is not strictly necessary since the failure of a node
will only result in users having to re-authenticate. If you need to prevent node failures from requiring users to
re-authenticate, set the <literal>owners</literal> attribute to 2 or more for the <literal>sessions</literal> cache
of the <literal>infinispan/Keycloak</literal> container as described below.
</para>
<para>
For cluster configuration edit the configuration of the <literal>infinispan/Keycloak</literal> container in <literal>standalone/configuration/standalone-ha.xml</literal>.
</para>
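<para>
For illustration, the relevant fragment of the <literal>infinispan/Keycloak</literal> cache container in
<literal>standalone-ha.xml</literal> might look roughly like the sketch below; element names and defaults can differ between
Keycloak/WildFly versions, so treat it as an example rather than an exact copy of your file. Here the
<literal>sessions</literal> cache is changed to keep two owners per segment:
<programlisting>
<![CDATA[
<cache-container name="keycloak" jndi-name="infinispan/Keycloak">
    <transport lock-timeout="60000"/>
    <invalidation-cache name="realms" mode="SYNC"/>
    <invalidation-cache name="users" mode="SYNC"/>
    <!-- owners="2" keeps each session segment on two nodes -->
    <distributed-cache name="sessions" mode="SYNC" owners="2"/>
    <distributed-cache name="loginFailures" mode="SYNC" owners="1"/>
    ...
</cache-container>
]]>
</programlisting>
</para>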
</section>
<section>
<title>Secure Private Network</title>
<para>
Best practice is to put intra-cluster traffic on a separate network from the network handling user requests. This is both for performance reasons
and to reduce the risk of exposing clustering to unwanted, possibly malevolent, traffic. Because this is the best practice, there's a separate
network interface for configuring the clustering address. This means that changing the bind address by adding <literal>-b &lt;address&gt;</literal>
to the startup command will only affect user requests.
</para>
<para>
To configure the bind address for clustering add <literal>-bprivate=&lt;private address&gt;</literal> to the startup command. As mentioned in the previous
paragraph, you should only expose this address on a secure private network.
</para>
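<para>
For reference, the <literal>private</literal> interface used for clustering is defined in <literal>standalone-ha.xml</literal>.
The following fragment is a typical example (the exact default may vary between versions); the <literal>-bprivate</literal> switch
simply sets the <literal>jboss.bind.address.private</literal> property it refers to:
<programlisting>
<![CDATA[
<interface name="private">
    <!-- -bprivate sets jboss.bind.address.private -->
    <inet-address value="${jboss.bind.address.private:127.0.0.1}"/>
</interface>
]]>
</programlisting>
</para>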
</section>
<section>
<title>Start in HA mode</title>
<para>
To start the server in HA mode, start it with:
<programlisting># bin/standalone --server-config=standalone-ha.xml -bpublic=&lt;public address&gt; -bprivate=&lt;private address&gt;</programlisting>
Replace <literal>public address</literal> with the address used for user requests and <literal>private address</literal> with the address used for
cluster communication.
</para>
</section>
<section>
<title>Enabling cluster authentication and encryption</title>
<para>
By default anyone who has access to the secure private network is able to join the cluster and could potentially send malicious messages to the
cluster. As mentioned earlier, the realm and user caches are invalidation caches, so no sensitive information is transmitted. There is also limited
risk with regards to user sessions: even though a malicious node could potentially create a new user session, it would need to be able to sign the
associated tokens, which is not possible without the realm private key. A malicious node could, however, prevent user sessions from expiring and reset
failed login attempt counters if brute force protection is enabled.
</para>
<para>
If you are not able to fully isolate the network used for clustering communication from potential attackers, you may want to enable authentication
and encryption for the cluster. This will have an impact on performance.
</para>
<para>
To enable encryption of cluster messages you first have to create a shared keystore (change the key and store passwords!):
<programlisting>
<![CDATA[
# keytool -genseckey -alias keycloak -keypass <PASSWORD> -storepass <PASSWORD> \
    -keyalg Blowfish -keysize 56 -keystore defaultStore.keystore -storetype JCEKS
]]>
</programlisting>
</para>
<para>
Copy this keystore to all nodes (for example to standalone/configuration). Then configure JGroups to encrypt all
messages by adding the <literal>ENCRYPT</literal> protocol to the JGroups sub-system (this should be added after
the <literal>pbcast.GMS</literal> protocol):
<programlisting>
<![CDATA[
<subsystem xmlns="urn:jboss:domain:jgroups:2.0" default-stack="udp">
    <stack name="udp">
        ...
        <protocol type="pbcast.GMS"/>
        <protocol type="ENCRYPT">
            <property name="key_store_name">
                ${jboss.server.config.dir}/defaultStore.keystore
            </property>
            <property name="key_password">PASSWORD</property>
            <property name="store_password">PASSWORD</property>
            <property name="alias">keycloak</property>
        </protocol>
        ...
    </stack>
    <stack name="tcp">
        ...
        <protocol type="pbcast.GMS"/>
        <protocol type="ENCRYPT">
            <property name="key_store_name">
                ${jboss.server.config.dir}/defaultStore.keystore
            </property>
            <property name="key_password">PASSWORD</property>
            <property name="store_password">PASSWORD</property>
            <property name="alias">keycloak</property>
        </protocol>
        ...
    </stack>
    ...
</subsystem>
]]>
</programlisting>
See the <ulink url="http://www.jgroups.org/manual/index.html#ENCRYPT">JGroups manual</ulink> for more details.
</para>
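<para>
The <literal>ENCRYPT</literal> configuration above only encrypts traffic; to also require nodes to authenticate before joining, you can
add the JGroups <literal>AUTH</literal> protocol. The fragment below is only a sketch using the <literal>MD5Token</literal> implementation
shipped with JGroups; the exact properties and the position of the protocol in the stack depend on your JGroups version, so consult the
<ulink url="http://www.jgroups.org/manual/index.html">JGroups manual</ulink> before using it:
<programlisting>
<![CDATA[
<protocol type="AUTH">
    <!-- token implementation and shared secret; pick a strong value -->
    <property name="auth_class">org.jgroups.auth.MD5Token</property>
    <property name="auth_value">CHANGE_ME</property>
    <property name="token_hash">SHA</property>
</protocol>
]]>
</programlisting>
</para>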
</section>
<section>
<title>Load balancer setup</title>
<para>
This is an optional step. However, in production, when you have multiple Keycloak nodes in a cluster, you usually want to "hide" them behind a frontend
load balancer, which forwards requests to the "backend" Keycloak nodes. Consult the documentation of your load balancer (for example <ulink url="http://mod-cluster.jboss.org/">mod_cluster</ulink>)
for how to configure this.
</para>
<para>
Regardless of the load balancer implementation used, it is important to make sure the web server properly sets the <literal>X-Forwarded-For</literal> and
<literal>X-Forwarded-Proto</literal> headers on the requests made to Keycloak. This is described in detail in the <link linkend="proxy-address-forwarding">Reverse proxy</link>
section.
</para>
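<para>
As an illustration of the kind of change described in the Reverse proxy section, enabling <literal>proxy-address-forwarding</literal> on the
HTTP listener in <literal>standalone-ha.xml</literal> might look like this (the undertow subsystem namespace version varies between WildFly releases):
<programlisting>
<![CDATA[
<subsystem xmlns="urn:jboss:domain:undertow:1.2">
    ...
    <server name="default-server">
        <!-- trust X-Forwarded-For / X-Forwarded-Proto set by the load balancer -->
        <http-listener name="default" socket-binding="http" proxy-address-forwarding="true"/>
        ...
    </server>
    ...
</subsystem>
]]>
</programlisting>
</para>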
</section>
<section>
<title>Troubleshooting</title>
<para>
Note that when you run a cluster, you should see a message similar to this in the log of each cluster node:
<programlisting>
<![CDATA[
INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-10,shared=udp)
ISPN000094: Received new cluster view: [node1/keycloak|1] (2) [node1/keycloak, node2/keycloak]
]]>
</programlisting>
If you see just one node mentioned, it's possible that your cluster nodes have not joined together.
</para>
<para>
Usually it's best practice to have your cluster nodes on a private network without a firewall between them, and to
enable a firewall only at the public access point to your network. If for some reason you still need to have a firewall
enabled on the cluster nodes, you will need to open some ports. The default values are UDP port 55200 and multicast port 45688
with multicast address 230.0.0.4. Note that you may need to open more ports if you want to enable additional features like diagnostics for your
JGroups stack. Keycloak delegates most of the clustering work to Infinispan/JGroups, so consult the EAP or JGroups documentation for more information.
</para>
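<para>
For example, on a host using firewalld the default ports mentioned above could be opened roughly like this. This is a sketch only;
adjust the ports if you changed the JGroups socket bindings, and note that multicast traffic may need additional rules depending on your network setup:
<programlisting>
<![CDATA[
# open the default JGroups unicast and multicast UDP ports
firewall-cmd --permanent --add-port=55200/udp
firewall-cmd --permanent --add-port=45688/udp
firewall-cmd --reload
]]>
</programlisting>
</para>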
</section>
</chapter>