This section is intended for readers who want to understand the considerations and best practices on how to configure thread pools and connection pools for {project_name}.
For a configuration where this is applied, visit <@links.ha id="deploy-keycloak-kubernetes" />.
== Concepts
=== Quarkus executor pool
{project_name} requests, as well as all probes, are handled by the Quarkus executor pool.
The Quarkus executor thread pool is configured in https://quarkus.io/guides/all-config#quarkus-core_quarkus.thread-pool.max-threads[`quarkus.thread-pool.max-threads`] and has a maximum size of at least 200 threads.
Depending on the available CPU cores, it can grow even larger.
When running on Kubernetes, adjust the number of worker threads to avoid creating more load than the CPU limit of the Pod allows, as exceeding it causes throttling, which would lead to congestion.
If you increase the number of database connections and the number of threads too far, the system will be congested under high load, with requests queuing up and performance degrading.
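As a minimal sketch, the executor pool can be capped with the raw Quarkus property below, placed in the `conf/quarkus.properties` file; the value `50` is an illustrative assumption to be sized against the Pod's CPU limit, not a recommendation:

[source,properties]
----
# Cap the Quarkus executor pool to match the CPU limit of the Pod.
# 50 is an illustrative value, not a recommendation.
quarkus.thread-pool.max-threads=50
----
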
The number of database connections is configured via the https://www.keycloak.org/server/all-config#category-database[`Database`] settings `db-pool-initial-size`, `db-pool-min-size` and `db-pool-max-size` respectively.
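For illustration, the following server invocation pins the connection pool to a fixed size, which avoids the latency of opening new connections under load; the value `30` is an assumption for this sketch:

[source,bash]
----
bin/kc.sh start \
  --db-pool-initial-size=30 \
  --db-pool-min-size=30 \
  --db-pool-max-size=30
----
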
The combined number of executor threads in all {project_name} nodes in the cluster should not exceed the number of threads available in JGroups thread pool to avoid the error `org.jgroups.util.ThreadPool: thread pool is full`.
To see the error the first time it happens, the system property `jgroups.thread_dumps_threshold` needs to be set to `1`, as otherwise the message appears only after 10000 threads have been rejected.
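A sketch of setting this system property at startup, assuming the `JAVA_OPTS_APPEND` environment variable is used to pass additional JVM options to {project_name}:

[source,bash]
----
# Dump threads on the first rejected JGroups task instead of after 10000 rejections.
export JAVA_OPTS_APPEND="-Djgroups.thread_dumps_threshold=1"
bin/kc.sh start
----
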
Use the metrics `vendor_jgroups_tcp_get_thread_pool_size` to monitor the total JGroups threads in the pool and `vendor_jgroups_tcp_get_thread_pool_size_active` for the threads active in the pool.
This is useful to verify that limiting the Quarkus thread pool size keeps the number of active JGroups threads below the maximum JGroups thread pool size.
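As a sketch, assuming these metrics are scraped into Prometheus, a query like the following compares the active threads against the total pool size per node; values approaching `1` indicate that the pool is close to full:

[source]
----
# Fraction of the JGroups thread pool that is active (sketch, per node)
vendor_jgroups_tcp_get_thread_pool_size_active
  / vendor_jgroups_tcp_get_thread_pool_size
----
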
[#load-shedding]
=== Load Shedding
By default, {project_name} will queue all incoming requests infinitely, even if the request processing stalls.
This will use additional memory in the Pod, can exhaust resources in the load balancers, and the requests will eventually time out on the client side without the client knowing if the request has been processed.
To limit the number of queued requests in {project_name}, set an additional Quarkus configuration option.
Assuming a {project_name} Pod processes around 200 requests per second, a queue of 1000 would lead to maximum waiting times of around 5 seconds.
When this setting is active, requests that exceed the maximum queue length are rejected with an HTTP 503 error.
{project_name} records the error message in its log.
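A minimal sketch of such a limit, assuming the raw Quarkus property `quarkus.thread-pool.queue-size` in `conf/quarkus.properties` is the applicable option for your version; the queue length of `1000` matches the example above:

[source,properties]
----
# Queue at most 1000 requests; further requests are rejected with HTTP 503.
quarkus.thread-pool.queue-size=1000
----
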
[#probes]
=== Probes
All health probes are handled in the Quarkus executor worker pool by default.
Only the liveness probe is non-blocking.
Future versions of {project_name} and Quarkus plan to make the other probes non-blocking as well.
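As an illustration, assuming health checks are enabled with `health-enabled=true` and served on the management port (`9000` by default in recent releases), the probe endpoints can be exercised manually:

[source,bash]
----
# Liveness probe (non-blocking)
curl -s http://localhost:9000/health/live
# Readiness probe (handled by the Quarkus executor worker pool)
curl -s http://localhost:9000/health/ready
----
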
=== OS Resources
When running on Linux, Java needs available file handles to create threads.
Therefore, the limit on open files (as retrieved by `ulimit -n` on Linux) needs to provide headroom for {project_name} to increase the number of threads as needed.
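To verify the available headroom, the current limit can be inspected from inside the running container; this is a simple check, not a tuning recommendation:

[source,bash]
----
# Maximum number of open files for the current shell and its children;
# each Java thread consumes file handles from this budget.
ulimit -n
----
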
Each thread also consumes memory, and the container memory limits need to be set to a value that allows for this; otherwise, the Pod will be killed by Kubernetes.