KEYCLOAK-8300 Wildfly 14 upgrade
parent ba86243ffe
commit 692ccf82d6
5 changed files with 73 additions and 117 deletions
@@ -33,11 +33,16 @@ over the `AUTH_SESSION_ID` cookie. How exactly to do this is dependent on your load
It is recommended on the {project_name} side to use the system property `jboss.node.name` during startup, with the value corresponding
to the name of your route. For example, `-Djboss.node.name=node1` will use `node1` to identify the route. This route will be used by
Infinispan caches and will be attached to the AUTH_SESSION_ID cookie when the node is the owner of the particular key. Here is
an example of the start up command using this system property:

[source]
----
cd $RHSSO_NODE1
./standalone.sh -c standalone-ha.xml -Djboss.socket.binding.port-offset=100 -Djboss.node.name=node1
----

Typically, in a production environment, the route name should use the same name as your backend host, but it is not required. You can
use a different route name, for example, if you want to hide the host name of your {project_name} server inside your private network.

==== Disable adding the route

@@ -62,114 +67,3 @@ into your `RHSSO_HOME/standalone/configuration/standalone-ha.xml` file in the {p
</subsystem>
----

[[_example-setup-with-mod-cluster]]
==== Example cluster setup with mod_cluster

In the example, we will use link:http://mod-cluster.jboss.org/[Mod Cluster] as the load balancer. One of the key features of mod_cluster is that there is not much
configuration on the load balancer side. Instead, it requires support on the backend node side. Backend nodes communicate with the load balancer through a
dedicated protocol called MCMP and notify the load balancer about various events (for example, a node joined or left the cluster, or a new application was deployed).

The example setup consists of one {appserver_name} {appserver_version} load balancer node and two {project_name} nodes.

The clustering example requires MULTICAST to be enabled on the machine's loopback network interface. This can be done by running the following commands with root privileges (on Linux):

[source]
----
route add -net 224.0.0.0 netmask 240.0.0.0 dev lo
ifconfig lo multicast
----
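On modern Linux distributions where the legacy `net-tools` commands (`route`, `ifconfig`) are unavailable, the equivalent `iproute2` commands can be used instead. This is a sketch under the assumption that the loopback interface is named `lo`:

```shell
# Add a multicast route for the 224.0.0.0/4 range on the loopback interface
ip route add 224.0.0.0/4 dev lo
# Enable the MULTICAST flag on the loopback interface
ip link set lo multicast on
```

Both commands require root privileges, just like their `net-tools` counterparts.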
===== Load Balancer Configuration

Unzip the {appserver_name} {appserver_version} server to a location of your choice. This example assumes the location `EAP_LB`.

Edit the `EAP_LB/standalone/configuration/standalone.xml` file. In the undertow subsystem, add the mod_cluster configuration under `filters` like this:

[source,xml,subs="attributes+"]
----
<subsystem xmlns="{subsystem_undertow_xml_urn}">
    ...
    <filters>
        ...
        <mod-cluster name="modcluster" advertise-socket-binding="modcluster"
            advertise-frequency="${modcluster.advertise-frequency:2000}"
            management-socket-binding="http" enable-http2="true"/>
    </filters>
----

and a `filter-ref` under `default-host` like this:

[source,xml]
----
<host name="default-host" alias="localhost">
    ...
    <filter-ref name="modcluster"/>
</host>
----

Then, under `socket-binding-group`, add this binding:

[source,xml]
----
<socket-binding name="modcluster" port="0"
    multicast-address="${jboss.modcluster.multicast.address:224.0.1.105}"
    multicast-port="23364"/>
----

Save the file and run the server:

[source]
----
cd $EAP_LB/bin
./standalone.sh
----

===== Backend node configuration

Unzip the {project_name} server distribution to a location of your choice. This example assumes the location `RHSSO_NODE1`.

Edit `RHSSO_NODE1/standalone/configuration/standalone-ha.xml` and configure the datasource against the shared database.
See the <<_rdbms-setup-checklist, Database chapter>> for more details.

In the undertow subsystem, add the `session-config` under the `servlet-container` element:

[source,xml]
----
<servlet-container name="default">
    <session-cookie name="AUTH_SESSION_ID" http-only="true" />
    ...
</servlet-container>
----

WARNING: Use the `session-config` only with the mod_cluster load balancer. In other cases, configure sticky sessions over the `AUTH_SESSION_ID` cookie directly in the load balancer.

Then you can configure `proxy-address-forwarding` as described in the <<_setting-up-a-load-balancer-or-proxy, Load Balancer>> chapter.
Note that mod_cluster uses the AJP connector by default, so that is the connector you need to configure.
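As a sketch of the `proxy-address-forwarding` setup referenced above (the `default-server` and `default` listener names below are the stock profile defaults; see the Load Balancer chapter for the authoritative steps), the attribute can be set with `jboss-cli`:

```shell
# Sketch, not the authoritative procedure: enable proxy-address-forwarding
# on the default HTTP listener of a backend node via jboss-cli.
# "default-server" and "default" are the names used in the stock profile.
$RHSSO_NODE1/bin/jboss-cli.sh --connect \
  --command="/subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=proxy-address-forwarding,value=true)"
```

Reload the server afterwards for the change to take effect.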
That is all; mod_cluster itself is already configured on the backend side.

The node name of the {project_name} server can be detected automatically based on the hostname of the current server. However, for more fine-grained control,
it is recommended to use the system property `jboss.node.name` to specify the node name directly. This is especially useful when you test with two backend nodes
on the same physical server. You can run the startup command like this:

[source]
----
cd $RHSSO_NODE1
./standalone.sh -c standalone-ha.xml -Djboss.socket.binding.port-offset=100 -Djboss.node.name=node1
----

Configure the second backend server in the same way and run it with a different port offset and node name:

[source]
----
cd $RHSSO_NODE2
./standalone.sh -c standalone-ha.xml -Djboss.socket.binding.port-offset=200 -Djboss.node.name=node2
----

Access the server on `http://localhost:8080/auth`. Creating the admin user is possible only from a local address and without load balancer (proxy) access,
so you first need to access the backend node directly on `http://localhost:8180/auth` to create the admin user.
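To sanity-check the sticky-session setup through the balancer, one hypothetical check (assuming `curl` is available; paths may differ between versions) is to inspect the response headers:

```shell
# Hypothetical check: fetch a page through the load balancer and print the
# response headers; when sticky sessions work, any AUTH_SESSION_ID cookie
# that is set carries the route suffix (for example ".node1" or ".node2").
curl -s -D - -o /dev/null http://localhost:8080/auth/realms/master/account/
```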
WARNING: When using mod_cluster you should always access the cluster through the load balancer and not directly through the backend node.

@@ -400,6 +400,9 @@ In production you will likely need to have a separate database server in every d
<remote-store cache="work" remote-servers="remote-cache" passivation="false" fetch-state="false" purge="false" preload="false" shared="true">
<property name="rawValues">true</property>
<property name="marshaller">org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory</property>
ifeval::[{project_product}==true]
<property name="protocolVersion">2.6</property>
endif::[]
</remote-store>
</replicated-cache>
```
@@ -412,6 +415,9 @@ In production you will likely need to have a separate database server in every d
<remote-store cache="sessions" remote-servers="remote-cache" passivation="false" fetch-state="false" purge="false" preload="false" shared="true">
<property name="rawValues">true</property>
<property name="marshaller">org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory</property>
ifeval::[{project_product}==true]
<property name="protocolVersion">2.6</property>
endif::[]
</remote-store>
</distributed-cache>
```
@@ -424,6 +430,9 @@ In production you will likely need to have a separate database server in every d
<remote-store cache="offlineSessions" remote-servers="remote-cache" passivation="false" fetch-state="false" purge="false" preload="false" shared="true">
<property name="rawValues">true</property>
<property name="marshaller">org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory</property>
ifeval::[{project_product}==true]
<property name="protocolVersion">2.6</property>
endif::[]
</remote-store>
</distributed-cache>

@@ -431,6 +440,9 @@ In production you will likely need to have a separate database server in every d
<remote-store cache="clientSessions" remote-servers="remote-cache" passivation="false" fetch-state="false" purge="false" preload="false" shared="true">
<property name="rawValues">true</property>
<property name="marshaller">org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory</property>
ifeval::[{project_product}==true]
<property name="protocolVersion">2.6</property>
endif::[]
</remote-store>
</distributed-cache>

@@ -438,6 +450,9 @@ In production you will likely need to have a separate database server in every d
<remote-store cache="offlineClientSessions" remote-servers="remote-cache" passivation="false" fetch-state="false" purge="false" preload="false" shared="true">
<property name="rawValues">true</property>
<property name="marshaller">org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory</property>
ifeval::[{project_product}==true]
<property name="protocolVersion">2.6</property>
endif::[]
</remote-store>
</distributed-cache>

@@ -445,6 +460,9 @@ In production you will likely need to have a separate database server in every d
<remote-store cache="loginFailures" remote-servers="remote-cache" passivation="false" fetch-state="false" purge="false" preload="false" shared="true">
<property name="rawValues">true</property>
<property name="marshaller">org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory</property>
ifeval::[{project_product}==true]
<property name="protocolVersion">2.6</property>
endif::[]
</remote-store>
</distributed-cache>

@@ -454,6 +472,9 @@ In production you will likely need to have a separate database server in every d
<remote-store cache="actionTokens" remote-servers="remote-cache" passivation="false" fetch-state="false" purge="false" preload="true" shared="true">
<property name="rawValues">true</property>
<property name="marshaller">org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory</property>
ifeval::[{project_product}==true]
<property name="protocolVersion">2.6</property>
endif::[]
</remote-store>
</distributed-cache>
```
@@ -862,3 +883,23 @@ which is caused by cache items not being properly expired. In that case, update
+
you can just ignore them. To avoid the warning, the caches on the {jdgserver_name} server side could be changed to transactional caches, but this is not recommended, as it can cause
other issues due to the bug https://issues.jboss.org/browse/ISPN-9323. So for now, the warnings just need to be ignored.

* If you see errors in the {jdgserver_name} server log like:
+
```
12:08:32,921 ERROR [org.infinispan.server.hotrod.CacheDecodeContext] (HotRod-ServerWorker-7-11) ISPN005003: Exception reported: org.infinispan.server.hotrod.InvalidMagicIdException: Error reading magic byte or message id: 7
    at org.infinispan.server.hotrod.HotRodDecoder.readHeader(HotRodDecoder.java:184)
    at org.infinispan.server.hotrod.HotRodDecoder.decodeHeader(HotRodDecoder.java:133)
    at org.infinispan.server.hotrod.HotRodDecoder.decode(HotRodDecoder.java:92)
    at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:411)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248)
```
+
and you see similar errors in the {project_name} log, it can indicate that incompatible versions of the HotRod protocol are being used.
This likely happens when you try to use {project_name} with the JDG 7.2 server or an old version of the Infinispan server.
Adding the `protocolVersion` property as an additional property to the `remote-store` element in the {project_name}
configuration file will help. For example:
+
```xml
<property name="protocolVersion">2.6</property>
```

@@ -86,7 +86,7 @@
:appserver_loadbalancer_name: {appserver_name} {appserver_version} Documentation

:jdgserver_name: Infinispan
:jdgserver_version: 9.3.1
:jdgserver_crossdcdocs_link: https://access.redhat.com/documentation/en-us/red_hat_jboss_data_grid/7.2/html/administration_and_configuration_guide/set_up_cross_datacenter_replication

:fuseVersion: JBoss Fuse 6.3.0 Rollup 5

@@ -105,7 +105,7 @@ endif::[]

:jdgserver_name: JDG
:jdgserver_version: 7.2.3
:jdgserver_crossdcdocs_link: https://access.redhat.com/documentation/en-us/red_hat_jboss_data_grid/7.2/html/administration_and_configuration_guide/set_up_cross_datacenter_replication

:fuseVersion: JBoss Fuse 6.3.0 Rollup 5

@@ -41,6 +41,27 @@ due to changes in the id format of the applications. If you run into an error sa
was not found in the directory, you will have to register the client application again in the
https://account.live.com/developers/applications/create[Microsoft Application Registration] portal to obtain a new application id.

==== Upgrade to Wildfly 14

The {project_name} server was upgraded to use Wildfly 14 as the underlying container. This does not directly involve any
specific {project_name} server functionality, but there are a few changes related to the migration that are worth mentioning.

Dependency updates::
The dependencies were updated to the versions used by the Wildfly 14 server. For example, Infinispan is now 9.3.1.Final.

Configuration changes::
There are a few configuration changes in the `standalone(-ha).xml` and `domain.xml` files. You should follow the <<_install_new_version>>
section to handle the migration of the configuration files automatically.

Cross-Datacenter Replication changes::
* You will need to upgrade the {jdgserver_name} server to version {jdgserver_version}. The older version may still work, but it is
not guaranteed, as we no longer test against it.
ifeval::[{project_product}==true]
* You need to add the `protocolVersion` property with the value `2.6` to the configuration of the `remote-store` element in the
{project_name} configuration. This is required to downgrade the version of the HotRod protocol so that it is compatible with the
version used by {jdgserver_name} {jdgserver_version}.
endif::[]

=== Migrating to 4.4.0

==== Upgrade to Wildfly 13
