clustering

parent 394592068d
commit fcc7ad7822
13 changed files with 147 additions and 148 deletions
@@ -31,6 +31,13 @@
 .. link:topics/network/https.adoc[HTTPS/SSL Setup]
 .. link:topics/network/outgoing.adoc[Outgoing HTTP Requests]
 . link:topics/clustering.adoc[Clustering]
+.. link:topics/clustering/recommended.adoc[Recommended Network Architecture]
+.. link:topics/clustering/load-balancer.adoc[Setting Up a Load Balancer]
+.. link:topics/clustering/multicast.adoc[Multicast Network Setup]
+.. link:topics/clustering/serialized.adoc[Serialized Cluster Startup]
+.. link:topics/clustering/booting.adoc[Booting the Cluster]
+.. link:topics/clustering/example.adoc[Cluster Example]
+.. link:topics/clustering/troubleshooting.adoc[Troubleshooting]
 . link:topics/cache.adoc[Server Cache Configuration]
 {% if book.community %}
 . link:topics/proxy.adoc[Keycloak Security Proxy]
book.json
@@ -30,6 +30,14 @@
     "socket": {
       "name": "JBoss EAP Administration and Configuration Guide",
       "link": "https://access.redhat.com/documentation/en-US/JBoss_Enterprise_Application_Platform/6.4/html/Administration_and_Configuration_Guide/sect-Socket_Binding_Groups.html"
+    },
+    "loadbalancer": {
+      "name": "JBoss EAP Administration and Configuration Guide",
+      "link": "https://access.redhat.com/documentation/en-US/JBoss_Enterprise_Application_Platform/6.4/html/Administration_and_Configuration_Guide/sect-Web_HTTP_Connectors_and_HTTP_Clustering.html"
+    },
+    "jgroups": {
+      "name": "JBoss EAP Administration and Configuration Guide",
+      "link": "https://access.redhat.com/documentation/en-US/JBoss_Enterprise_Application_Platform/6.4/html/Administration_and_Configuration_Guide/sect-JGroups.html"
     }
   },
   "caching": {
@@ -49,6 +57,10 @@
     }
   },
+  "adminguide": {
+    "name": "Keycloak Administration Guide",
+    "link": "https://keycloak.gitbooks.io/server-adminstration-guide/content/"
+  },
   "project": {
     "name": "Keycloak",
     "version": "1.9.3.Final-SNAPSHOT"
topics/clustering.adoc
@@ -1,154 +1,18 @@
 [[_clustering]]
 == Clustering
 
-Keycloak doesn't replicate realms and users, but instead relies on all nodes using the same database.
-This can be a relational database or Mongo.
-To make sure your database doesn't become a single point of failure you may also want to deploy your database to a cluster.
-
-[[_clustering_db_lock]]
-=== DB lock
-
-Note that Keycloak supports concurrent startup by more cluster nodes at the same.
-This is ensured by DB lock, which prevents that some startup actions (migrating database from previous version, importing realms at startup, initial bootstrap of admin user) are always executed just by one cluster node at a time and other cluster nodes need to wait until the current node finishes startup actions and release the DB lock.
-
-By default, the maximum timeout for lock is 900 seconds, so in case that second node is not able to acquire the lock within 900 seconds, it fails to start.
-The lock checking is done every 2 seconds by default.
-Typically you won't need to increase/decrease the default value, but just in case it's possible to configure it in `standalone/configuration/keycloak-server.json`:
-
-[source,json]
-----
-"dblock": {
-    "jpa": {
-        "lockWaitTimeout": 900,
-        "lockRecheckTime": 2
-    }
-}
-----
-or similarly if you're using Mongo (just by replace `jpa` with `mongo`)
-
-=== Configure Infinispan
-
-Keycloak uses http://www.infinispan.org/[Infinispan] caches to share information between nodes.
-
-For realm and users Keycloak uses a invalidation cache.
-An invalidation cache doesn't share any data, but simply removes stale data from remote caches and makes sure all nodes re-load data from the database when it is changed.
-This reduces network traffic, as well as preventing sensitive data (such as realm keys and password hashes) from being sent between the nodes.
-
-User sessions and login failures supports either distributed caches or fully replicated caches.
-We recommend using a distributed cache.
-A distributed cache splits user sessions into segments where each node holds one or more segment.
-It is possible to replicate each segment to multiple nodes, but this is not strictly necessary since the failure of a node will only result in users having to log in again.
-If you need to prevent node failures from requiring users to log in again, set the `owners` attribute to 2 or more for the `sessions` cache of `infinispan/Keycloak` container as described below.
-
-The infinispan container is set by default in `standalone/configuration/keycloak-server.json`:
-
-[source,json]
-----
-"connectionsInfinispan": {
-    "default" : {
-        "cacheContainer" : "java:jboss/infinispan/Keycloak"
-    }
-}
-----
-
-As you can see in this file, the realmCache, userCache and userSession providers are configured to use infinispan by default, which applies for both cluster and non-cluster environment.
-
-For non-cluster configuration (server executed with `standalone.xml` ) is the infinispan container `infinispan/Keycloak` just uses local infinispan caches for realms, users and userSessions.
-
-For cluster configuration, you can edit the configuration of `infinispan/Keycloak` container in `standalone/configuration/standalone-ha.xml` (or `standalone-keycloak-ha.xml` if you are using overlay or demo distribution) .
-
-=== Start in HA mode
-
-To start the server in HA mode, start it with:
-
-[source]
-----
-# bin/standalone --server-config=standalone-ha.xml
-----
-or if you are using overlay or demo distribution with:
-
-[source]
-----
-# bin/standalone --server-config=standalone-keycloak-ha.xml
-----
-
-Alternatively you can copy `standalone/config/standalone-ha.xml` to `standalone/config/standalone.xml` to make it the default server config.
-
-=== Enabling cluster security
-
-By default there's nothing to prevent unauthorized nodes from joining the cluster and sending potentially malicious messages to the cluster.
-However, as there's no sensitive data sent there's not much that can be achieved.
-For realms and users all that can be done is to send invalidation messages to make nodes load data from the database more frequently.
-For user sessions it would be possible to modify existing user sessions, but creating new sessions would have no affect as they would not be linked to any access tokens.
-There's not too much that can be achieved by modifying user sessions.
-For example it would be possible to prevent sessions from expiring, by changing the creation time.
-However, it would for example have no effect adding additional permissions to the sessions as these are rechecked against the user and application when the token is created or refreshed.
-
-In either case your cluster nodes should be in a private network, with a firewall protecting them from outside attacks.
-Ideally isolated from workstations and laptops.
-You can also enable encryption of cluster messages, this could for example be useful if you can't isolate cluster nodes from workstations and laptops on your private network.
-However, encryption will obviously come at a cost of reduced performance.
-
-To enable encryption of cluster messages you first have to create a shared keystore (change the key and store passwords!):
-
-[source]
-----
-# keytool -genseckey -alias keycloak -keypass <PASSWORD> -storepass <PASSWORD> \
-    -keyalg Blowfish -keysize 56 -keystore defaultStore.keystore -storetype JCEKS
-----
-
-Copy this keystore to all nodes (for example to standalone/configuration). Then configure JGroups to encrypt all messages by adding the `ENCRYPT` protocol to the JGroups sub-system (this should be added after the `pbcast.GMS` protocol):
-
-[source]
-----
-<subsystem xmlns="urn:jboss:domain:jgroups:2.0" default-stack="udp">
-    <stack name="udp">
-        ...
-        <protocol type="pbcast.GMS"/>
-        <protocol type="ENCRYPT">
-            <property name="key_store_name">
-                ${jboss.server.config.dir}/defaultStore.keystore
-            </property>
-            <property name="key_password">PASSWORD</property>
-            <property name="store_password">PASSWORD</property>
-            <property name="alias">keycloak</property>
-        </protocol>
-        ...
-    </stack>
-    <stack name="tcp">
-        ...
-        <protocol type="pbcast.GMS"/>
-        <protocol type="ENCRYPT">
-            <property name="key_store_name">
-                ${jboss.server.config.dir}/defaultStore.keystore
-            </property>
-            <property name="key_password">PASSWORD</property>
-            <property name="store_password">PASSWORD</property>
-            <property name="alias">keycloak</property>
-        </protocol>
-        ...
-    </stack>
-    ...
-</subsystem>
-----
-See the http://www.jgroups.org/manual/index.html#ENCRYPT[JGroups manual] for more details.
-
-=== Troubleshooting
-
-Note that when you run cluster, you should see message similar to this in the log of both cluster nodes:
-
-[source]
-----
-INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-10,shared=udp)
-ISPN000094: Received new cluster view: [node1/keycloak|1] (2) [node1/keycloak, node2/keycloak]
-----
-If you see just one node mentioned, it's possible that your cluster hosts are not joined together.
-
-Usually it's best practice to have your cluster nodes on private network without firewall for communication among them.
-Firewall could be enabled just on public access point to your network instead.
-If for some reason you still need to have firewall enabled on cluster nodes, you will need to open some ports.
-Default values are UDP port 55200 and multicast port 45688 with multicast address 230.0.0.4.
-Note that you may need more ports opened if you want to enable additional features like diagnostics for your JGroups stack.
-Keycloak delegates most of the clustering work to Infinispan/JGroups, so consult EAP or JGroups documentation for more info.
+This section covers some of the aspects of configuring {{book.project.name}} to run in a cluster. There are a number
+of things you have to provide and prepare for to run in a cluster, specifically:
+
+* <<fake/../operating-mode.adoc#_operating-mode,Picking an operating mode>>
+* <<fake/../database.adoc#_database,Configuring a shared external database>>
+* Setting up a load balancer
+* Supplying a private network that supports IP multicast
+
+Picking an operating mode and configuring a shared database have been discussed earlier in this guide. In this chapter
+we'll discuss setting up a load balancer and supplying a private network. We'll also discuss some issues that arise
+when bringing up a cluster.
+
+Note: It is possible to cluster {{book.project.name}} without IP multicast, but this topic is beyond the
+scope of this guide. Please see the link:{{book.appserver.jgroups.link}}[JGroups] chapter of the {{book.appserver.jgroups.name}}.
topics/clustering/booting.adoc (new executable file)
@@ -0,0 +1,17 @@
+=== Booting the Cluster
+
+Booting {{book.project.name}} in a cluster depends on your <<fake/../../operating-mode.adoc#_operating-mode,operating mode>>.
+
+.Standalone Mode
+[source]
+----
+$ bin/standalone.sh --server-config=standalone-ha.xml
+----
+
+.Domain Mode
+[source]
+----
+$ bin/domain.sh --host-config=host-master.xml
+$ bin/domain.sh --host-config=host-slave.xml
+----
topics/clustering/example.adoc (new executable file)
@@ -0,0 +1,5 @@
+=== Clustering Example
+
+{{book.project.name}} does come with an out of the box clustering demo that leverages domain mode. Review the
+<<fake/../../operating-mode/domain.adoc#_clustered-domain-example,Clustered Domain Example>> chapter for more details.
topics/clustering/load-balancer.adoc (new executable file)
@@ -0,0 +1,6 @@
+=== Setting Up a Load Balancer
+
+This guide does not cover how to set up a load balancer, as it is already documented in detail within the
+link:{{book.appserver.loadbalancer.link}}[load balancer] chapter of the {{book.appserver.loadbalancer.name}}. It will
+involve configuring the <<fake/../../network/bind-address.adoc#_bind-address,network interfaces>> and <<fake/../../network/ports.adoc#_ports,ports>> discussed in previous chapters of this guide.
topics/clustering/multicast.adoc (new executable file)
@@ -0,0 +1,35 @@
+=== Multicast Network Setup
+
+Out of the box, clustering support requires IP multicast. Multicast is a network broadcast protocol. This protocol
+is used at boot time to discover and join the cluster. It is also used to broadcast messages for the replication and invalidation
+of the distributed caches used by {{book.project.name}}.
+
+The clustering subsystem for {{book.project.name}} runs on the JGroups stack. Out of the box, the bind addresses for clustering are bound to a private network interface with a default IP address of 127.0.0.1.
+You'll have to edit the _standalone-ha.xml_ or _domain.xml_ sections discussed in <<fake/../network/bind-address.adoc#_bind-address,the Bind Address chapter>>.
+
+.private network config
+[source,xml]
+----
+<interfaces>
+    ...
+    <interface name="private">
+        <inet-address value="${jboss.bind.address.private:127.0.0.1}"/>
+    </interface>
+</interfaces>
+<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
+    ...
+    <socket-binding name="jgroups-mping" interface="private" port="0" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45700"/>
+    <socket-binding name="jgroups-tcp" interface="private" port="7600"/>
+    <socket-binding name="jgroups-tcp-fd" interface="private" port="57600"/>
+    <socket-binding name="jgroups-udp" interface="private" port="55200" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45688"/>
+    <socket-binding name="jgroups-udp-fd" interface="private" port="54200"/>
+    <socket-binding name="modcluster" port="0" multicast-address="224.0.1.105" multicast-port="23364"/>
+    ...
+</socket-binding-group>
+----
+
+Things you'll want to configure are +jboss.bind.address.private+ and +jboss.default.multicast.address+, as well as the ports of the services on the clustering stack.
+Please see the link:{{book.appserver.jgroups.link}}[JGroups] chapter of the {{book.appserver.jgroups.name}}.
+
+Note: It is possible to cluster {{book.project.name}} without IP multicast, but this topic is beyond the
+scope of this guide. Please see the link:{{book.appserver.jgroups.link}}[JGroups] chapter of the {{book.appserver.jgroups.name}}.
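As a usage sketch of the properties called out above: because the XML references them as `${...}` system-property expressions, they can be overridden on the command line when booting a node instead of editing the file. The interface address below is a placeholder, not a value from this commit:

```shell
# Boot a standalone-ha node, binding the private (clustering) interface to
# this host's NIC and selecting the multicast group for JGroups discovery.
# 192.168.10.15 is an illustrative address -- substitute your own private
# network values.
bin/standalone.sh --server-config=standalone-ha.xml \
    -Djboss.bind.address.private=192.168.10.15 \
    -Djboss.default.multicast.address=230.0.0.4
```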
topics/clustering/recommended.adoc (new executable file)
@@ -0,0 +1,9 @@
+=== Recommended Network Architecture
+
+The recommended network architecture for deploying {{book.project.name}} is to set up an HTTP/HTTPS load balancer on
+a public IP address that routes requests to {{book.project.name}} servers sitting on a private network. This
+isolates all clustering connections and provides a nice means of protecting the servers.
+
+By default, there is nothing to prevent unauthorized nodes from joining the cluster and broadcasting multicast messages.
+This is why cluster nodes should be in a private network, with a firewall protecting them from outside attacks.
topics/clustering/serialized.adoc (new executable file)
@@ -0,0 +1,23 @@
+[[_clustering_db_lock]]
+=== Serialized Cluster Startup
+
+{{book.project.name}} supports concurrent startup by multiple cluster nodes at the same time.
+When {{book.project.name}} boots up it may do some database migration, importing, or first-time initializations. The server
+makes use of a DB lock which prevents these startup actions from running and conflicting with each other when nodes are
+booted concurrently.
+
+By default, the maximum timeout for the lock is 900 seconds, so if a second node is not able to acquire the lock within 900 seconds, it fails to start.
+The lock is checked every 2 seconds by default.
+Typically you won't need to change the default values, but just in case it's possible to configure them in `keycloak-server.json`:
+
+[source,json]
+----
+"dblock": {
+    "jpa": {
+        "lockWaitTimeout": 900,
+        "lockRecheckTime": 2
+    }
+}
+----
+
+If you're using Mongo, the configuration is similar: just replace `jpa` with `mongo`.
topics/clustering/troubleshooting.adoc (new executable file)
@@ -0,0 +1,17 @@
+=== Troubleshooting
+
+Note that when you run a cluster, you should see messages similar to this in the log of both cluster nodes:
+
+[source]
+----
+INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-10,shared=udp)
+ISPN000094: Received new cluster view: [node1/keycloak|1] (2) [node1/keycloak, node2/keycloak]
+----
+
+If you see just one node mentioned, it's possible that your cluster hosts have not joined together.
+
+Usually it's best practice to have your cluster nodes on a private network, without a firewall between them for their communication.
+A firewall can instead be enabled just on the public access points to your network.
+If for some reason you still need to run a firewall on the cluster nodes, you will need to open some ports.
+The default values are UDP port 55200 and multicast port 45688 with multicast address 230.0.0.4.
+Note that you may need more ports opened if you want to enable additional features like diagnostics for your JGroups stack.
+{{book.project.name}} delegates most of the clustering work to Infinispan/JGroups. Please consult the link:{{book.appserver.jgroups.link}}[JGroups] chapter of the {{book.appserver.jgroups.name}}.
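For the ports listed above, a minimal sketch of opening them on a cluster node, assuming a Linux host running firewalld (a tool this commit does not mention; adjust for your own firewall and for any ports you have changed from the defaults):

```shell
# Open the default JGroups UDP unicast port and its multicast discovery port
# (group 230.0.0.4). Add further ports if you enable extra JGroups features.
firewall-cmd --permanent --add-port=55200/udp
firewall-cmd --permanent --add-port=45688/udp
firewall-cmd --reload
```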
@@ -1,3 +1,5 @@
+[[_bind-address]]
+
 === Bind Addresses
 
 By default {{book.project.name}} binds to the localhost loopback address 127.0.0.1. That's not a very useful default if
@@ -1,3 +1,5 @@
+[[_ports]]
+
 === Socket Port Bindings
 
 The ports for each socket also have a pre-defined default that can be overriden at the command line or within configuration.
@@ -172,7 +172,7 @@ $ .../bin/domain.sh --host-config=host-master.xml
 When running the boot script you will need pass in the host controlling configuration file you are going to use via the
 +--host-config+ switch.
 
+[[_clustered-domain-example]]
 ==== Running the Clustered Domain Example
 
 The example domain that comes with {{book.project.name}} was meant to run on one machine. It starts a: