Remove JGroups thread pool docs from HA Guide

Clustering is disabled with multi-site deployment and there is no
JGroups thread pool to configure.

Closes #34715

Signed-off-by: Pedro Ruivo <pruivo@redhat.com>
Signed-off-by: Alexander Schwartz <aschwart@redhat.com>
Co-authored-by: Alexander Schwartz <aschwart@redhat.com>
Pedro Ruivo 2024-11-07 09:00:48 +00:00 committed by GitHub
parent 226daa41c7
commit 33cae33ae4
6 changed files with 20 additions and 38 deletions

View file

@@ -22,7 +22,7 @@ These APIs are no longer needed as initialization is done automatically on demand
 = Virtual Threads enabled for Infinispan and JGroups thread pools
 Starting from this release, {project_name} automatically enables the virtual thread pool support in both the embedded Infinispan and JGroups when running on OpenJDK 21.
-This removes the need to configure the thread pool and reduces overall memory footprint.
+This removes the need to configure the JGroups thread pool, the need to align the JGroups thread pool with the HTTP worker thread pool, and reduces the overall memory footprint.
 To disable the virtual threads, add one of the following Java system property combinations to your deployment:
 * `-Dorg.infinispan.threads.virtual=false`: disables virtual threads in both Infinispan and JGroups.
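For a containerized deployment, one way to pass that property is through the `JAVA_OPTS_APPEND` environment variable that appears in the generated examples later in this commit. A minimal sketch; the placement in the pod spec is an assumption:

[source,yaml]
----
# Sketch: disable virtual threads in both Infinispan and JGroups.
# Assumes the Keycloak container honors JAVA_OPTS_APPEND, as in the
# generated examples later in this commit.
env:
  - name: JAVA_OPTS_APPEND
    value: "-Dorg.infinispan.threads.virtual=false"
----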

View file

@@ -12,6 +12,15 @@ For a configuration where this is applied, visit <@links.ha id="deploy-keycloak-
 == Concepts
+=== JGroups communications
+// remove this paragraph once OpenJDK 17 is no longer supported on the server side.
+// https://github.com/keycloak/keycloak/issues/31101
+JGroups communication, which is used in single-site setups for the communication between {project_name} nodes, benefits from the use of virtual threads, which are available in OpenJDK 21.
+This reduces memory usage and removes the need to configure thread pool sizes.
+Therefore, the use of OpenJDK 21 is recommended.
 === Quarkus executor pool
 {project_name} requests, as well as blocking probes, are handled by an executor pool. Depending on the available CPU cores, it has a maximum size of 50 or more threads.
@@ -31,32 +40,6 @@ If you increase the number of database connections and the number of threads too
 The number of database connections is configured via the link:{links_server_all-config_url}?q=db-pool[`Database` settings `db-pool-initial-size`, `db-pool-min-size` and `db-pool-max-size`], as sketched below.
 Low numbers ensure fast response times for all clients, even if there is an occasionally failing request when there is a load spike.
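A sketch of these `db-pool` settings as they might appear under `additionalOptions` in a Keycloak CR; the values are placeholders, not recommendations:

[source,yaml]
----
spec:
  additionalOptions:
    # initial, minimum, and maximum size of the database connection pool
    # (placeholder values, not sizing advice)
    - name: db-pool-initial-size
      value: "10"
    - name: db-pool-min-size
      value: "10"
    - name: db-pool-max-size
      value: "30"
----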
-=== JGroups connection pool
-[NOTE]
-====
-* This currently applies to single-site setups only.
-In a multi-site setup with an external {jdgserver_name}, this is not a restriction.
-* This currently applies if virtual threads are disabled.
-Since {project_name} 26.1, virtual threads are enabled in both embedded Infinispan and JGroups if running on OpenJDK 21 or higher.
-====
-The combined number of executor threads in all {project_name} nodes in the cluster should not greatly exceed the number of threads available in the JGroups thread pool, in order to avoid the warning `thread pool is full (max=<value>, active=<value>)`.
-The warning includes a thread dump when the Java system property `-Djgroups.thread_dumps_enabled=true` is set.
-Collecting those thread dumps may incur a performance penalty.
---
-include::partials/threads/executor-jgroups-thread-calculation.adoc[]
---
-Use metrics to monitor the total number of JGroups threads in the pool and the number of threads active in the pool.
-When using TCP as the JGroups transport protocol, the metrics `vendor_jgroups_tcp_get_thread_pool_size` and `vendor_jgroups_tcp_get_thread_pool_size_active` are available for monitoring. When using UDP, the metrics `vendor_jgroups_udp_get_thread_pool_size` and `vendor_jgroups_udp_get_thread_pool_size_active` are available.
-This is useful to verify that limiting the Quarkus thread pool size keeps the number of active JGroups threads below the maximum JGroups thread pool size.
-WARNING: The metrics are not available when virtual threads are enabled in JGroups.
 [#load-shedding]
 === Load Shedding

View file

@@ -17,6 +17,7 @@ Use it together with the other building blocks outlined in the <@links.ha id="bb
 * Understanding of a <@links.operator id="basic-deployment" /> of {project_name} with the {project_name} Operator.
 * Aurora AWS database deployed using the <@links.ha id="deploy-aurora-multi-az" /> {section}.
 * {jdgserver_name} server deployed using the <@links.ha id="deploy-infinispan-kubernetes-crossdc" /> {section}.
+* Running {project_name} with OpenJDK 21, which is the default for the containers distributed for {project_name}, as this enables virtual threads for the JGroups communication.
 == Procedure
@@ -46,8 +47,6 @@ See the <@links.ha id="concepts-database-connections" /> {section} for details.
 <5> To be able to analyze the system under load, enable the metrics endpoint.
 The disadvantage of this setting is that the metrics will be available at the external {project_name} endpoint, so you must add a filter to ensure the endpoint is not available from the outside.
 Use a reverse proxy in front of {project_name} to filter out those URLs.
-<6> You might consider limiting the number of {project_name} threads further because multiple concurrent threads will lead to throttling by Kubernetes once the requested CPU limit is reached.
-See the <@links.ha id="concepts-threads" /> {section} for details.
 == Verifying the deployment
@@ -70,7 +69,11 @@ spec:
 additionalOptions:
 include::examples/generated/keycloak.yaml[tag=keycloak-queue-size]
 ----
 All requests that exceed the queue limit are rejected with an HTTP 503.
+You might consider limiting the value for `http-pool-max-threads` further because multiple concurrent threads will lead to throttling by Kubernetes once the requested CPU limit is reached.
+See the <@links.ha id="concepts-threads" /> {section} about load shedding for details.
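For reference, the `keycloak-queue-size` tag included above resolves to an options fragment along these lines, matching the generated examples later in this commit:

[source,yaml]
----
spec:
  additionalOptions:
    # requests beyond this queue depth are rejected with HTTP 503 (load shedding)
    - name: http-max-queued-requests
      value: "1000"
----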
 == Optional: Disable sticky sessions

View file

@@ -464,16 +464,16 @@ spec:
 - name: http-metrics-slos
   value: '5,10,25,50,250,500'
 # tag::keycloak[]
+# end::keycloak[]
 # tag::keycloak-queue-size[]
 - name: http-max-queued-requests
   value: "1000"
 # end::keycloak-queue-size[]
+# tag::keycloak[]
 - name: log-console-output
   value: json
 - name: metrics-enabled # <5>
   value: 'true'
-- name: http-pool-max-threads # <6>
-  value: "200"
 # tag::keycloak-ispn[]
 - name: cache-remote-host # <1>
   value: "infinispan.keycloak.svc"

View file

@@ -453,16 +453,16 @@ spec:
 - name: http-metrics-slos
   value: '5,10,25,50,250,500'
 # tag::keycloak[]
+# end::keycloak[]
 # tag::keycloak-queue-size[]
 - name: http-max-queued-requests
   value: "1000"
 # end::keycloak-queue-size[]
+# tag::keycloak[]
 - name: log-console-output
   value: json
 - name: metrics-enabled # <5>
   value: 'true'
-- name: http-pool-max-threads # <6>
-  value: "66"
 # end::keycloak[]
 # This block is just for documentation purposes, as we need both versions of the Infinispan config, with and without callout numbers for the corresponding options
 # tag::keycloak[]
@@ -510,6 +510,7 @@ spec:
 - name: JAVA_OPTS_APPEND # <5>
   value: ""
 ports:
+# end::keycloak[]
 # readinessProbe:
 #   exec:
 #     command:

View file

@@ -1,5 +0,0 @@
-The number of JGroups threads is `200` by default.
-While it can be configured using the Java system property `jgroups.thread_pool.max_threads`, we advise keeping it at this value.
-As shown in experiments, the total number of Quarkus worker threads in the cluster should not exceed the number of threads in the JGroups thread pool of `200` in each node to avoid requests being dropped in the JGroups communication.
-Given a {project_name} cluster with four nodes, each node should then have around 50 Quarkus worker threads.
-Use the {project_name} configuration option `http-pool-max-threads` to configure the maximum number of Quarkus worker threads.
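A worked instance of the sizing rule from this removed partial, assuming a hypothetical four-node cluster: 200 JGroups threads ÷ 4 nodes ≈ 50 Quarkus worker threads per node.

[source,yaml]
----
spec:
  instances: 4 # hypothetical cluster size
  additionalOptions:
    # 200 (default JGroups thread pool) / 4 nodes = 50 worker threads per node
    - name: http-pool-max-threads
      value: "50"
----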