[[sticky-sessions]]
=== Sticky sessions

A typical cluster deployment consists of a load balancer (reverse proxy) and two or more {project_name} servers on a private network. For performance purposes,
it may be useful if the load balancer forwards all requests related to a particular browser session to the same {project_name} backend node.

The reason is that {project_name} uses an Infinispan distributed cache under the covers to save data related to the current authentication session and user session.
The Infinispan distributed caches are configured with one owner by default. That means a particular session is saved on just one cluster node, and the other nodes need
to look up the session remotely if they want to access it.

For example, if the authentication session with ID `123` is saved in the Infinispan cache on `node1`, and `node2` then needs to look up this session,
it has to send a request to `node1` over the network to retrieve the particular session entity.

It is beneficial if a particular session entity is always available locally, which can be achieved with the help of sticky sessions.
The workflow in a cluster environment with a public frontend load balancer and two backend {project_name} nodes can look like this:

* The user sends an initial request to see the {project_name} login screen.
* This request is served by the frontend load balancer, which forwards it to some random node (for example, node1). Strictly speaking, the node doesn't need to be random,
but can be chosen according to some other criteria (client IP address, etc.). It all depends on the implementation and configuration of the underlying load balancer (reverse proxy).
* {project_name} creates an authentication session with a random ID (for example, 123) and saves it to the Infinispan cache.
* The Infinispan distributed cache assigns the primary owner of the session based on the hash of the session ID.
See the link:http://infinispan.org/docs/8.2.x/user_guide/user_guide.html#distribution_mode[Infinispan documentation] for more details about this.
Let's assume that Infinispan assigned `node2` to be the owner of this session.
* {project_name} creates the cookie `AUTH_SESSION_ID` with the format `<session-id>.<owner-node-id>`. In our example, it will be `123.node2`.
* The response is returned to the user with the {project_name} login screen and the AUTH_SESSION_ID cookie set in the browser.

From this point, it is beneficial if the load balancer forwards all subsequent requests to `node2`, as this is the node that owns the authentication session with ID `123`,
and hence Infinispan can look up this session locally. After authentication is finished, the authentication session is converted to a user session, which will also be saved on
`node2` because it has the same ID `123`.
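
For illustration, the response from the example above might carry a `Set-Cookie` header along these lines; this is only a sketch, assuming a realm named `myrealm`, and the exact path and flags depend on your configuration:

[source]
----
Set-Cookie: AUTH_SESSION_ID=123.node2; Path=/auth/realms/myrealm/; HttpOnly
----
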
Sticky sessions are not mandatory for a cluster setup, but they are good for performance for the reasons mentioned above. You need to configure your load balancer to stick
over the `AUTH_SESSION_ID` cookie. How exactly to do this depends on your load balancer.
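
For example, with a plain Apache HTTP Server and `mod_proxy_balancer` (rather than the mod_cluster setup described below), sticking on this cookie could look roughly like the following sketch. The host names are placeholders, and each `route` value must match the route name of the corresponding backend node (see the note about `jboss.node.name` below):

[source]
----
<Proxy "balancer://keycloak">
    # The route value must match the route name of the corresponding backend node
    BalancerMember "http://node1.example.com:8080" route=node1
    BalancerMember "http://node2.example.com:8080" route=node2
    # Stick requests to the node encoded after the "." in the AUTH_SESSION_ID cookie
    ProxySet stickysession=AUTH_SESSION_ID
</Proxy>
ProxyPass "/auth" "balancer://keycloak/auth"
ProxyPassReverse "/auth" "balancer://keycloak/auth"
----
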
It is recommended on the {project_name} side to use the system property `jboss.node.name` during startup, with the value corresponding
to the name of your route. For example, `-Djboss.node.name=node1` will use `node1` to identify the route. This route will be used by
Infinispan caches and will be attached to the AUTH_SESSION_ID cookie when the node is the owner of the particular key. An example of the
startup command with this system property can be seen in <<_example-setup-with-mod-cluster,Mod Cluster Example>>.

Typically, the route name should be the same as your backend host name, but this is not required. You can use a different route name,
for example if you want to hide the host name of your {project_name} server inside your private network.

==== Disable adding the route

Some load balancers can be configured to add the route information by themselves, instead of relying on the backend {project_name} node.
However, as described above, adding the route by {project_name} is recommended, because performance improves when it is done this way:
{project_name} knows which node is the owner of the particular session and can attach that node as the route, which is not necessarily the local node.

If you prefer, you can disable adding route information to the AUTH_SESSION_ID cookie by {project_name} by adding the following
into your `RHSSO_HOME/standalone/configuration/standalone-ha.xml` file in the {project_name} subsystem configuration:

[source,xml]
----
<subsystem xmlns="urn:jboss:domain:keycloak-server:1.1">
...
<spi name="stickySessionEncoder">
<provider name="infinispan" enabled="true">
<properties>
<property name="shouldAttachRoute" value="false"/>
</properties>
</provider>
</spi>
</subsystem>
----
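
If you prefer to manage the server configuration with `jboss-cli` instead of editing the XML by hand, a roughly equivalent pair of commands is sketched below; it assumes the `stickySessionEncoder` SPI is not yet declared in your configuration:

[source]
----
/subsystem=keycloak-server/spi=stickySessionEncoder:add
/subsystem=keycloak-server/spi=stickySessionEncoder/provider=infinispan:add(enabled=true,properties={shouldAttachRoute => "false"})
----
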
[[_example-setup-with-mod-cluster]]
==== Example cluster setup with mod_cluster

In this example, we will use link:http://mod-cluster.jboss.org/[Mod Cluster] as the load balancer. One of the key features of mod_cluster is that not much
configuration is needed on the load balancer side. Instead, it requires support on the backend node side. Backend nodes communicate with the load balancer through a
dedicated protocol called MCMP, and they notify the load balancer about various events (for example, a node joined or left the cluster, a new application was deployed, etc.).

The example setup will consist of one {appserver_name} {appserver_version} load balancer node and two {project_name} nodes.

The clustering example requires MULTICAST to be enabled on the machine's loopback network interface. This can be done by running the following commands under root privileges (on Linux):

[source]
----
route add -net 224.0.0.0 netmask 240.0.0.0 dev lo
ifconfig lo multicast
----
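
On newer Linux distributions where the legacy `route` and `ifconfig` tools are not available, the iproute2 equivalents below should achieve roughly the same thing; this is a sketch and has not been verified on every distribution:

[source]
----
ip route add 224.0.0.0/4 dev lo
ip link set dev lo multicast on
----
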
===== Load Balancer Configuration

Unzip the {appserver_name} {appserver_version} server somewhere. The assumption is that the location is `EAP_LB`.

Edit the `EAP_LB/standalone/configuration/standalone.xml` file. In the undertow subsystem, add the mod_cluster configuration under filters like this:

[source,xml]
----
<subsystem xmlns="urn:jboss:domain:undertow:4.0">
...
<filters>
...
<mod-cluster name="modcluster" advertise-socket-binding="modcluster"
advertise-frequency="${modcluster.advertise-frequency:2000}"
management-socket-binding="http" enable-http2="true"/>
</filters>
----
and a `filter-ref` under `default-host` like this:

[source,xml]
----
<host name="default-host" alias="localhost">
...
<filter-ref name="modcluster"/>
</host>
----
Then, under `socket-binding-group`, add this group:

[source,xml]
----
<socket-binding name="modcluster" port="0"
multicast-address="${jboss.modcluster.multicast.address:224.0.1.105}"
multicast-port="23364"/>
----
Save the file and run the server:

[source]
----
cd $EAP_LB/bin
./standalone.sh
----
===== Backend node configuration

Unzip the {project_name} server distribution to some location. The assumption is that the location is `RHSSO_NODE1`.

Edit `RHSSO_NODE1/standalone/configuration/standalone-ha.xml` and configure the datasource against the shared database.
See the <<_rdbms-setup-checklist, Database chapter>> for more details.
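
For orientation, the shape of such a datasource definition in the `datasources` subsystem might look roughly like the sketch below. It assumes PostgreSQL and placeholder connection details; the <<_rdbms-setup-checklist, Database chapter>> remains the authoritative reference:

[source,xml]
----
<datasource jndi-name="java:jboss/datasources/KeycloakDS" pool-name="KeycloakDS" enabled="true" use-java-context="true">
    <!-- Placeholder connection details; adjust them for your shared database -->
    <connection-url>jdbc:postgresql://db.example.com:5432/keycloak</connection-url>
    <driver>postgresql</driver>
    <security>
        <user-name>keycloak</user-name>
        <password>password</password>
    </security>
</datasource>
----
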
In the undertow subsystem, add the `session-config` under the `servlet-container` element:

[source,xml]
----
<servlet-container name="default">
<session-cookie name="AUTH_SESSION_ID" http-only="true" />
...
</servlet-container>
----
WARNING: Use the `session-config` only with the mod_cluster load balancer. In other cases, configure sticky sessions over the `AUTH_SESSION_ID` cookie directly in the load balancer.

Then you can configure `proxy-address-forwarding` as described in the <<_setting-up-a-load-balancer-or-proxy, Load Balancer>> chapter.
Note that mod_cluster uses the AJP connector by default, so you need to make sure that one is configured.

That's all; mod_cluster support is already configured in the `standalone-ha.xml` profile.
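
For instance, the relevant listeners in the undertow subsystem of `standalone-ha.xml` might end up looking similar to the sketch below. The AJP listener and its socket binding are present by default in the HA profile, so typically only the `proxy-address-forwarding` attribute needs to be added:

[source,xml]
----
<subsystem xmlns="urn:jboss:domain:undertow:4.0">
    ...
    <server name="default-server">
        <ajp-listener name="ajp" socket-binding="ajp"/>
        <http-listener name="default" socket-binding="http" redirect-socket="https"
                       proxy-address-forwarding="true" enable-http2="true"/>
        ...
    </server>
    ...
</subsystem>
----
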
The node name of the {project_name} server can be detected automatically based on the hostname of the current server. However, for more fine-grained control,
it is recommended to use the system property `jboss.node.name` to specify the node name directly. This is especially useful if you test with two backend nodes on
the same physical server. You can run the startup command like this:

[source]
----
cd $RHSSO_NODE1
./standalone.sh -c standalone-ha.xml -Djboss.socket.binding.port-offset=100 -Djboss.node.name=node1
----
Configure the second backend server in the same way, and run it with a different port offset and node name:

[source]
----
cd $RHSSO_NODE2
./standalone.sh -c standalone-ha.xml -Djboss.socket.binding.port-offset=200 -Djboss.node.name=node2
----
Access the server on `http://localhost:8080/auth`. Creating the admin user is possible only from a local address and without load balancer (proxy) access,
so you first need to access the backend node directly on `http://localhost:8180/auth` to create the admin user.

WARNING: When using mod_cluster, you should always access the cluster through the load balancer and not directly through the backend node.