Merge pull request #2535 from stianst/master

KEYCLOAK-2763
Stian Thorgersen 2016-04-07 09:37:00 +02:00
commit d98cd4235c


@@ -58,13 +58,13 @@
point of failure you may also want to deploy your database to a cluster.
</para>
<section>
<title>Migration lock</title>
<para>
Keycloak locks the database during startup. This guarantees that startup actions like migration and importing realms are not executed
concurrently by multiple nodes.
</para>
<para>
By default, the maximum timeout for the lock is 900 seconds. If a node is unable to acquire the lock within 900 seconds, it fails to start.
The lock checking is done every 2 seconds by default. Typically you won't need to increase/decrease the default value, but just in case
it's possible to configure it in <literal>standalone/configuration/keycloak-server.json</literal>:
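(The snippet below is an illustrative sketch; the <literal>dblock</literal> key and its property names are assumptions that may differ between Keycloak versions.)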
<programlisting>
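"dblock": {
    "jpa": {
        "lockWaitTimeout": 900,
        "lockRecheckTime": 2
    }
}
</programlisting>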
@@ -89,35 +89,33 @@
</para>
<para>
For realms and users Keycloak uses an invalidation cache. An invalidation cache doesn't share any data, but simply
removes stale data from remote caches and makes sure all nodes re-load data from the database when it is changed. This reduces network traffic,
as well as preventing sensitive data (such as realm keys and password hashes) from being transmitted.
</para>
<para>
User sessions and login failures support either distributed caches or fully replicated caches. The default is a distributed
cache. A distributed cache splits user sessions into segments where each node holds one or more segments. It is possible
to replicate each segment to multiple nodes, but this is not strictly necessary since the failure of a node
will only result in users having to re-authenticate. If you need to prevent node failures from requiring users to
re-authenticate, set the <literal>owners</literal> attribute to 2 or more for the <literal>sessions</literal> cache
of the <literal>infinispan/Keycloak</literal> container as described below.
</para>
<para>
For cluster configuration, edit the configuration of the <literal>infinispan/Keycloak</literal> container in <literal>standalone/configuration/standalone-ha.xml</literal>.
</para>
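<para>
As an illustrative sketch only (the container and cache names here are assumptions; the exact layout of <literal>standalone-ha.xml</literal> may differ between versions), raising the number of session owners could look like:
<programlisting>&lt;cache-container name="keycloak" jndi-name="infinispan/Keycloak"&gt;
    &lt;!-- illustrative sketch: owners="2" keeps a backup copy of each segment on a second node --&gt;
    &lt;distributed-cache name="sessions" mode="SYNC" owners="2"/&gt;
    &lt;distributed-cache name="loginFailures" mode="SYNC" owners="2"/&gt;
&lt;/cache-container&gt;</programlisting>
</para>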
</section>
<section>
<title>Secure Private Network</title>
<para>
Best practice is to put intra-cluster traffic on a separate network from the network handling user requests. This is both for performance reasons as
well as reducing the risk of exposing clustering to unwanted, possibly malevolent, traffic. As this is the best practice there's a separate
network interface to configure the address for clustering. This means that changing the bind address by adding <literal>-b &lt;address&gt;</literal>
to the startup command will only affect user requests.
</para>
<para>
To configure the bind address for clustering add <literal>-bprivate=&lt;private address&gt;</literal> to the startup command. As mentioned in the previous
paragraph, you should only expose this on a secure private network.
</para>
</section>
@@ -125,33 +123,24 @@
<title>Start in HA mode</title>
<para>
To start the server in HA mode, start it with:
<programlisting># bin/standalone --server-config=standalone-ha.xml -bpublic=&lt;public address&gt; -bprivate=&lt;private address&gt;</programlisting>
Replace <literal>public address</literal> with the address used for user requests and <literal>private address</literal> with the address used for
cluster communication.
</para>
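<para>
For illustration only, with hypothetical addresses (192.0.2.10 as the public address and 10.0.1.10 on the private cluster network):
<programlisting># bin/standalone --server-config=standalone-ha.xml -bpublic=192.0.2.10 -bprivate=10.0.1.10</programlisting>
</para>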
</section>
<section>
<title>Enabling cluster authentication and encryption</title>
<para>
By default, anyone that has access to the secure private network is able to join the cluster and could potentially send malicious messages to the
cluster. As mentioned earlier, the realm and user caches are invalidation caches, so no sensitive information is transmitted. There is also limited
risk with regards to user sessions: even though a malicious node could potentially create a new user session, it would need to be able to sign the
associated tokens, which is not possible without the realm private key. It would, however, be possible to prevent user sessions from expiring and to
reset failed login attempts if brute force protection is enabled.
</para>
<para> <para>
In either case your cluster nodes should be in a private network, with a firewall protecting them from outside If you are not able to fully isolate the network used for clustering communication from potential attackers you may want to enable authentication
attacks. Ideally isolated from workstations and laptops. You can also enable encryption of cluster messages, and encryption of the cluster. This will have an impact on performance.
this could for example be useful if you can't isolate cluster nodes from workstations and laptops on your private
network. However, encryption will obviously come at a cost of reduced performance.
</para> </para>
<para>
To enable encryption of cluster messages you first have to create a shared keystore (change the key and store passwords!):
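(The following <literal>keytool</literal> invocation is an illustrative sketch; the store type, algorithm, key size, and keystore name are assumptions to adapt to your JGroups configuration.)
<programlisting># keytool -genseckey -alias keycloak -storetype JCEKS -keyalg Blowfish -keysize 56 \
    -keystore cluster.keystore -storepass &lt;store password&gt; -keypass &lt;key password&gt;</programlisting>
</para>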
@@ -219,8 +208,8 @@ ISPN000094: Received new cluster view: [node1/keycloak|1] (2) [node1/keycloak, n
Usually it's best practice to have your cluster nodes on a private network without a firewall for communication among them.
A firewall could be enabled just on the public access point to your network instead. If for some reason you still need to have a firewall
enabled on cluster nodes, you will need to open some ports. Default values are UDP port 55200 and multicast port 45688
with multicast address 230.0.0.4. Note that you may need more ports opened if you want to enable additional features like diagnostics for your
JGroups stack. Keycloak delegates most of the clustering work to Infinispan/JGroups, so consult EAP or JGroups documentation for more info.
</para>
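<para>
For example, with firewalld the default ports could be opened as follows (an illustrative sketch; adjust the ports if you have changed the JGroups stack):
<programlisting># firewall-cmd --permanent --add-port=55200/udp
# firewall-cmd --permanent --add-port=45688/udp
# firewall-cmd --reload</programlisting>
</para>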
</section>