<!--
  ~ Copyright 2016 Red Hat, Inc. and/or its affiliates
  ~ and other contributors as indicated by the @author tags.
  ~
  ~ Licensed under the Apache License, Version 2.0 (the "License");
  ~ you may not use this file except in compliance with the License.
  ~ You may obtain a copy of the License at
  ~
  ~ http://www.apache.org/licenses/LICENSE-2.0
  ~
  ~ Unless required by applicable law or agreed to in writing, software
  ~ distributed under the License is distributed on an "AS IS" BASIS,
  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  ~ See the License for the specific language governing permissions and
  ~ limitations under the License.
  -->

<chapter id="clustering">
    <title>Clustering</title>

    <para>To improve availability and scalability Keycloak can be deployed in a cluster.</para>

    <para>Configuring a Keycloak cluster is fairly straightforward. The required steps are:
        <itemizedlist>
            <listitem>
                <para>
                    Configure a shared database
                </para>
            </listitem>
            <listitem>
                <para>
                    Configure Infinispan
                </para>
            </listitem>
            <listitem>
                <para>
                    Enable realm and user cache invalidation
                </para>
            </listitem>
            <listitem>
                <para>
                    Enable distributed user sessions
                </para>
            </listitem>
            <listitem>
                <para>
                    Start in HA mode
                </para>
            </listitem>
        </itemizedlist>
    </para>

    <section>
        <title>Configure a shared database</title>
        <para>
            Keycloak doesn't replicate realms and users, but instead relies on all nodes using the same
            database. This can be a relational database or Mongo. To make sure your database doesn't become a single
            point of failure you may also want to deploy your database to a cluster.
        </para>
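        <para>
            As an illustration only: every node must point at the same datasource. The sketch below assumes a PostgreSQL
            database and shows a <literal>KeycloakDS</literal> datasource definition for the datasources subsystem in
            <literal>standalone/configuration/standalone-ha.xml</literal>; the host, database name, credentials and driver
            are placeholders to adapt, and the JDBC driver itself has to be installed separately.
            <programlisting>
<![CDATA[
<datasource jndi-name="java:jboss/datasources/KeycloakDS" pool-name="KeycloakDS" enabled="true">
    <!-- Shared database used by every cluster node (hypothetical host and database name) -->
    <connection-url>jdbc:postgresql://dbhost:5432/keycloak</connection-url>
    <driver>postgresql</driver>
    <security>
        <user-name>keycloak</user-name>
        <password>change_me</password>
    </security>
</datasource>
]]>
            </programlisting>
        </para>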
        <section>
            <title>DB lock</title>
            <para>Note that Keycloak supports concurrent startup of multiple cluster nodes at the same time. This is ensured by a DB lock, which guarantees that some
                startup actions (migrating the database from a previous version, importing realms at startup, initial bootstrap of the admin user) are executed by only one
                cluster node at a time; the other cluster nodes need to wait until the current node finishes its startup actions and releases the DB lock.
            </para>
            <para>
                By default, the maximum timeout for the lock is 900 seconds, so if a second node is not able to acquire the lock within 900 seconds, it fails to start.
                The lock is re-checked every 2 seconds by default. Typically you won't need to change the default values, but if necessary
                they can be configured in <literal>standalone/configuration/keycloak-server.json</literal>:
                <programlisting>
<![CDATA[
"dblock": {
    "jpa": {
        "lockWaitTimeout": 900,
        "lockRecheckTime": 2
    }
}
]]>
                </programlisting>
                or similarly if you're using Mongo (just replace <literal>jpa</literal> with <literal>mongo</literal>, as sketched below).
            </para>
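            <para>
                For completeness, the equivalent Mongo configuration would look like this (same values, only the provider key changes):
                <programlisting>
<![CDATA[
"dblock": {
    "mongo": {
        "lockWaitTimeout": 900,
        "lockRecheckTime": 2
    }
}
]]>
                </programlisting>
            </para>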
        </section>
    </section>

    <section>
        <title id="cluster-configure-infinispan">Configure Infinispan</title>
        <para>
            Keycloak uses <ulink url="http://www.infinispan.org/">Infinispan</ulink> caches to share information between nodes.
        </para>
        <para>
            For realms and users Keycloak uses an invalidation cache. An invalidation cache doesn't share any data, but simply
            removes stale data from remote caches and makes sure all nodes re-load data from the database when it is changed. This reduces network traffic and prevents sensitive data (such as
            realm keys and password hashes) from being sent between the nodes.
        </para>
        <para>
            User sessions and login failures support either distributed caches or fully replicated caches. We recommend using a distributed
            cache. A distributed cache splits user sessions into segments, where each node holds one or more segments. It is possible
            to replicate each segment to multiple nodes, but this is not strictly necessary since the failure of a node
            will only result in users having to log in again. If you need to prevent node failures from requiring users to
            log in again, set the <literal>owners</literal> attribute to 2 or more for the <literal>sessions</literal> cache
            of the <literal>infinispan/Keycloak</literal> container as described below.
        </para>
        <para>
            The Infinispan container is configured by default in <literal>standalone/configuration/keycloak-server.json</literal>:
            <programlisting>
"connectionsInfinispan": {
    "default" : {
        "cacheContainer" : "java:jboss/infinispan/Keycloak"
    }
}
            </programlisting>
        </para>
        <para>As you can see in this file, the realmCache, userCache and userSession providers are configured to use infinispan by default, which applies to both cluster and non-cluster environments.</para>
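        <para>
            For illustration, the corresponding provider entries in <literal>keycloak-server.json</literal> typically look something like
            the sketch below; treat the exact key names as an assumption and check the file shipped with your version before editing it:
            <programlisting>
"realmCache": {
    "provider": "infinispan"
},

"userCache": {
    "provider": "infinispan"
},

"userSessions": {
    "provider": "infinispan"
}
            </programlisting>
        </para>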
        <para>
            For a non-cluster configuration (server executed with <literal>standalone.xml</literal>), the <literal>infinispan/Keycloak</literal> container just uses local Infinispan caches for realms, users and userSessions.
        </para>
        <para>
            For a cluster configuration, you can edit the configuration of the <literal>infinispan/Keycloak</literal> container in <literal>standalone/configuration/standalone-ha.xml</literal> (or <literal>standalone-keycloak-ha.xml</literal>
            if you are using the overlay or demo distribution). A sketch of such a configuration follows.
        </para>
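        <para>
            The following is a rough sketch of what the <literal>infinispan/Keycloak</literal> cache container in
            <literal>standalone-ha.xml</literal> may look like with session replication enabled; the cache names and attributes
            are based on the descriptions above and may differ slightly in your distribution, so adjust your existing file rather than copying this verbatim:
            <programlisting>
<![CDATA[
<cache-container name="keycloak" jndi-name="infinispan/Keycloak">
    <transport lock-timeout="60000"/>
    <!-- Invalidation caches for realms and users: no data is replicated, only invalidation messages -->
    <invalidation-cache name="realms" mode="SYNC"/>
    <invalidation-cache name="users" mode="SYNC"/>
    <!-- Distributed cache for user sessions; owners="2" keeps each segment on two nodes so sessions survive a node failure -->
    <distributed-cache name="sessions" mode="SYNC" owners="2"/>
    <distributed-cache name="loginFailures" mode="SYNC" owners="1"/>
</cache-container>
]]>
            </programlisting>
        </para>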
    </section>

    <section>
        <title>Start in HA mode</title>
        <para>
            To start the server in HA mode, start it with:
            <programlisting># bin/standalone --server-config=standalone-ha.xml</programlisting>
            or, if you are using the overlay or demo distribution, with:
            <programlisting># bin/standalone --server-config=standalone-keycloak-ha.xml</programlisting>
        </para>
        <para>
            Alternatively you can copy <literal>standalone/configuration/standalone-ha.xml</literal> to <literal>standalone/configuration/standalone.xml</literal>
            to make it the default server config.
        </para>
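        <para>
            For example, from the server installation's root directory:
            <programlisting># cp standalone/configuration/standalone-ha.xml standalone/configuration/standalone.xml</programlisting>
        </para>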
    </section>

    <section>
        <title>Enabling cluster security</title>
        <para>
            By default there's nothing to prevent unauthorized nodes from joining the cluster and sending potentially malicious
            messages to it. However, as no sensitive data is sent, there's not much that can be achieved.
            For realms and users all that can be done is to send invalidation messages, which makes nodes load data from the
            database more frequently. For user sessions it would be possible to modify existing user sessions, but creating
            new sessions would have no effect as they would not be linked to any access tokens. Even modifying existing
            sessions achieves little: it would be possible, for example, to prevent sessions from expiring by changing the
            creation time, but adding additional permissions to a session would have no effect as these are rechecked against
            the user and application when the token is created or refreshed.
        </para>
        <para>
            In either case your cluster nodes should be in a private network, with a firewall protecting them from outside
            attacks, and ideally isolated from workstations and laptops. You can also enable encryption of cluster messages;
            this could for example be useful if you can't isolate cluster nodes from workstations and laptops on your private
            network. However, encryption will obviously come at the cost of reduced performance.
        </para>
        <para>
            To enable encryption of cluster messages you first have to create a shared keystore (change the key and store passwords!):
            <programlisting>
<![CDATA[
# keytool -genseckey -alias keycloak -keypass <PASSWORD> -storepass <PASSWORD> \
    -keyalg Blowfish -keysize 56 -keystore defaultStore.keystore -storetype JCEKS
]]>
            </programlisting>
        </para>
        <para>
            Copy this keystore to all nodes (for example to standalone/configuration). Then configure JGroups to encrypt all
            messages by adding the <literal>ENCRYPT</literal> protocol to the JGroups sub-system (this should be added after
            the <literal>pbcast.GMS</literal> protocol):
            <programlisting>
<![CDATA[
<subsystem xmlns="urn:jboss:domain:jgroups:2.0" default-stack="udp">
    <stack name="udp">
        ...
        <protocol type="pbcast.GMS"/>
        <protocol type="ENCRYPT">
            <property name="key_store_name">
                ${jboss.server.config.dir}/defaultStore.keystore
            </property>
            <property name="key_password">PASSWORD</property>
            <property name="store_password">PASSWORD</property>
            <property name="alias">keycloak</property>
        </protocol>
        ...
    </stack>
    <stack name="tcp">
        ...
        <protocol type="pbcast.GMS"/>
        <protocol type="ENCRYPT">
            <property name="key_store_name">
                ${jboss.server.config.dir}/defaultStore.keystore
            </property>
            <property name="key_password">PASSWORD</property>
            <property name="store_password">PASSWORD</property>
            <property name="alias">keycloak</property>
        </protocol>
        ...
    </stack>
    ...
</subsystem>
]]>
            </programlisting>
            See the <ulink url="http://www.jgroups.org/manual/index.html#ENCRYPT">JGroups manual</ulink> for more details.
        </para>
    </section>

    <section>
        <title>Troubleshooting</title>
        <para>
            Note that when you run a cluster, you should see a message similar to this in the log of both cluster nodes:
            <programlisting>
<![CDATA[
INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-10,shared=udp)
ISPN000094: Received new cluster view: [node1/keycloak|1] (2) [node1/keycloak, node2/keycloak]
]]>
            </programlisting>
            If you see just one node mentioned, it's likely that your cluster nodes have not joined together.
        </para>
        <para>
            Usually it's best practice to have your cluster nodes on a private network, without a firewall restricting communication among them.
            The firewall could be enabled just on the public access point to your network instead. If for some reason you still need to have a firewall
            enabled on the cluster nodes, you will need to open some ports (see the sketch below). The default values are UDP port 55200 and multicast port 45688,
            with multicast address 230.0.0.4. Note that you may need more ports open if you want to enable additional features like diagnostics for your JGroups stack.
            Keycloak delegates most of the clustering work to Infinispan/JGroups, so consult the EAP or JGroups documentation for more info.
        </para>
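        <para>
            As an illustration only, assuming a Linux node managed with <literal>firewalld</literal>, opening the default ports mentioned above could look like this; adapt the ports if you have changed the JGroups defaults:
            <programlisting>
# firewall-cmd --permanent --add-port=55200/udp
# firewall-cmd --permanent --add-port=45688/udp
# firewall-cmd --reload
            </programlisting>
        </para>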
    </section>

</chapter>