<#import "/templates/guide.adoc" as tmpl>
<#import "/templates/links.adoc" as links>

<@tmpl.guide
title="Concepts for configuring thread pools"
summary="Understand these concepts to avoid resource exhaustion and congestion"
preview="true"
previewDiscussionLink="https://github.com/keycloak/keycloak/discussions/25269"
tileVisible="false" >

This section is intended for when you want to understand the considerations and best practices on how to configure thread pools and connection pools for {project_name}.
For a configuration where this is applied, visit <@links.ha id="deploy-keycloak-kubernetes" />.

== Concepts

=== Quarkus executor pool

{project_name} requests, as well as all probes, are handled by the Quarkus executor pool.

The Quarkus executor thread pool is configured in https://quarkus.io/guides/all-config#quarkus-core_quarkus.thread-pool.max-threads[`quarkus.thread-pool.max-threads`] and has a maximum size of at least 200 threads.
Depending on the available CPU cores, it can grow even larger.
Threads are created on demand and terminated when no longer needed, so the pool scales up and down with the load.
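For example, to cap the executor pool explicitly, the option can be set in `conf/quarkus.properties`. This is a sketch only; the value `50` is an arbitrary starting point, not a recommendation:

[source,properties]
----
# conf/quarkus.properties
# Illustrative starting point only -- tune based on measured throughput and response times
quarkus.thread-pool.max-threads=50
----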

When running on Kubernetes, adjust the number of worker threads so it does not create more load than the CPU limit of the Pod allows, as CPU throttling would lead to congestion.
When running on physical machines, adjust the number of worker threads so the load stays within what the node can handle, to avoid congestion.
Congestion would result in longer response times, increased memory usage, and eventually an unstable system.

Ideally, you should start with a low limit of threads and adjust it according to the target throughput and response time.
When the load and the number of threads increase, the bottleneck can become the database connections.
Once a request cannot acquire a database connection within 5 seconds, it will fail with a message in the log like `Unable to acquire JDBC Connection`.
The caller will receive a response with a 5xx HTTP status code indicating a server side error.
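The size of the database connection pool can be tuned alongside the thread pool. A minimal sketch, assuming the standard `db-pool-*` options of the Quarkus distribution; the values are illustrative only:

[source,bash]
----
bin/kc.sh start \
  --db-pool-initial-size=10 \
  --db-pool-min-size=10 \
  --db-pool-max-size=10
----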

If you increase both the number of database connections and the number of threads too much, the system will be congested under high load, with requests queueing up and all callers experiencing poor performance.
Low numbers ensure fast response times for all clients, even if an occasional request fails during a load spike.

=== JGroups connection pool

The combined number of executor threads in all {project_name} nodes in the cluster should not exceed the number of threads available in the JGroups thread pool, to avoid the error `org.jgroups.util.ThreadPool: thread pool is full`.
To see the error the first time it happens, set the system property `jgroups.thread_dumps_threshold` to `1`; otherwise, the message appears only after 10000 threads have been rejected.
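Assuming a setup where JVM system properties are passed through the `JAVA_OPTS_APPEND` environment variable (adapt this to how your deployment passes JVM options), this could look like:

[source,bash]
----
# Report the first rejected thread instead of waiting for 10000 rejections
export JAVA_OPTS_APPEND="-Djgroups.thread_dumps_threshold=1"
bin/kc.sh start
----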

--
include::partials/threads/executor-jgroups-thread-calculation.adoc[]
--

Use the metrics `vendor_jgroups_tcp_get_thread_pool_size` to monitor the total number of JGroups threads in the pool and `vendor_jgroups_tcp_get_thread_pool_size_active` for the number of active threads in the pool.
This is useful to verify that limiting the Quarkus thread pool size keeps the number of active JGroups threads below the maximum JGroups thread pool size.
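These metrics are only available when metrics are enabled. A sketch, assuming the metrics endpoint is served on the management interface on port 9000 (the default in recent releases; adjust host and port to your setup):

[source,bash]
----
bin/kc.sh start --metrics-enabled=true
# From a host that can reach the management interface:
curl -s http://localhost:9000/metrics | grep vendor_jgroups_tcp_get_thread_pool_size
----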

[#load-shedding]
=== Load Shedding

By default, {project_name} queues all incoming requests without an upper bound, even if request processing stalls.
This uses additional memory in the Pod, can exhaust resources in the load balancers, and the requests eventually time out on the client side without the client knowing whether the request has been processed.
To limit the number of queued requests in {project_name}, set an additional Quarkus configuration option.

Configure `quarkus.thread-pool.queue-size` to specify a maximum queue length to allow for effective load shedding once this queue size is exceeded.
Assuming a {project_name} Pod processes around 200 requests per second, a queue of 1000 would lead to maximum waiting times of around 5 seconds.
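Continuing that example, the queue size could be set in `conf/quarkus.properties` as follows; the value mirrors the example above and should be tuned to your own throughput and acceptable waiting time:

[source,properties]
----
# With ~200 requests/s per Pod, a queue of 1000 bounds the queueing delay at roughly 5 seconds
quarkus.thread-pool.queue-size=1000
----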

// KC22.0.6 - this is still 500
When this setting is active, requests that exceed the queue size are rejected with an HTTP 503 error.
{project_name} logs an error message when this happens.

[#probes]
=== Probes

All health probes are handled in the Quarkus executor worker pool by default.
Only the liveness probe is non-blocking.
Future versions of {project_name} and Quarkus plan to make other probes non-blocking as well.
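As an illustration, assuming health checks are enabled via `--health-enabled=true` and served on the management interface on port 9000 (defaults of recent releases; adjust to your setup), the probes can be exercised manually:

[source,bash]
----
curl -s http://localhost:9000/health/live    # liveness probe
curl -s http://localhost:9000/health/ready   # readiness probe, handled by the executor pool
----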

=== OS Resources

When running on Linux, Java needs file handles to be available in order to create threads.
Therefore, the number of open files (as retrieved by `ulimit -n` on Linux) needs to provide headroom for {project_name} to increase the number of threads as needed.
Each thread also consumes memory, and the container memory limits need to be set to a value that allows for this, or the Pod will be killed by Kubernetes.
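To inspect the relevant limits on a Linux host, a quick sketch:

[source,shell]
----
# Soft limit on open file descriptors for the current shell
ulimit -n
# Hard limit, the ceiling up to which the soft limit can be raised
ulimit -Hn
----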

</@tmpl.guide>