Revisit Multi Site Guide
Closes #32745

Signed-off-by: Pedro Ruivo <pruivo@redhat.com>
Signed-off-by: Alexander Schwartz <aschwart@redhat.com>
Co-authored-by: Alexander Schwartz <aschwart@redhat.com>

commit 2d6140cb34 (parent d9c567a5e4)
4 changed files with 15 additions and 14 deletions
@ -44,8 +44,8 @@ Most CPU time goes into creating new TLS connections, as each client runs only a
* For each 120 refresh token requests per second, add 1 vCPU to the cluster (tested with up to 435 refresh token requests per second).
* Leave 200% extra head-room for CPU usage to handle spikes in the load.
This ensures a fast startup of the node, and sufficient capacity to handle failover tasks like, for example, re-balancing Infinispan caches, when one node fails.
* Leave 150% extra head-room for CPU usage to handle spikes in the load.
This ensures a fast startup of the node, and sufficient capacity to handle failover tasks.
Performance of {project_name} dropped significantly when its Pods were throttled in our tests.
{project_name}, which by default stores user sessions in the database, requires the following resources for optimal performance on an Aurora PostgreSQL multi-AZ database:
@ -73,9 +73,9 @@ Limits calculated:
+
(45 logins per second = 3 vCPU, 600 client credential grants per second = 3 vCPU, 345 refresh token requests per second = 3 vCPU. This sums up to 9 vCPU total. With 3 Pods running in the cluster, each Pod then requests 3 vCPU)
* CPU limit per Pod: 9 vCPU
* CPU limit per Pod: 7.5 vCPU
+
(Allow for three times the CPU requested to handle peaks, startups and failover tasks)
(Allow for an additional 150% CPU requested to handle peaks, startups and failover tasks)
* Memory requested per Pod: 1250 MB
+
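Taken together, the requests and limits above map onto a standard Kubernetes `resources` block. The following is a minimal sketch rather than part of this change: it only restates the numbers from the calculation (3 vCPU requested, 7.5 vCPU limit, 1250 MB memory requested per Pod); no memory limit is shown because none is given here.

[source,yaml]
----
# Sketch: the calculated per-Pod values expressed as Kubernetes resources.
# Only the numbers stated above are used; a memory limit is intentionally omitted.
resources:
  requests:
    cpu: "3"          # 3 vCPU requested per Pod
    memory: "1250M"   # 1250 MB requested per Pod
  limits:
    cpu: "7500m"      # 7.5 vCPU limit per Pod (request plus 150% head-room)
----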
@ -16,7 +16,7 @@ Use this setup to provide {project_name} deployments that are able to tolerate s
Two independent {project_name} deployments running in different sites are connected with a low latency network connection.
Users, realms, clients, sessions, and other entities are stored in a database that is replicated synchronously across the two sites.
The data is also cached in the {project_name} Infinispan caches as local caches.
When the data is changed in one {project_name} instance, that data is updated in the database, and an invalidation message is sent to the other site using the replicated `work` cache.
When the data is changed in one {project_name} instance, that data is updated in the database, and an invalidation message is sent to the other site using the `work` cache.
In the following paragraphs and diagrams, references to deploying {jdgserver_name} apply to the external {jdgserver_name}.
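As background for the external {jdgserver_name} mentioned above, {project_name} is typically pointed at it through its remote-cache options. The following sketch is an illustration only and not part of this change; the option names `cache-remote-host` and `cache-remote-port`, the operator's `additionalOptions` field, and the host value are all assumptions to be checked against your version.

[source,yaml]
----
# Hypothetical sketch: wiring Keycloak to the external Infinispan of a site.
# Option names, the additionalOptions field, and the host value are assumptions.
spec:
  additionalOptions:
    - name: cache-remote-host
      value: infinispan.keycloak-cluster.svc   # placeholder service name
    - name: cache-remote-port
      value: "11222"                           # default Hot Rod port
----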
@ -46,14 +46,14 @@ Monitoring is necessary to detect degraded setups.
| Seconds to minutes (depending on the database)
| {project_name} node
| Multiple {project_name} instances run in each site. If one instance fails, it takes a few seconds for the other nodes to notice the change, and some incoming requests might receive an error message or are delayed for some seconds.
| Multiple {project_name} instances run in each site. If one instance fails, some incoming requests might receive an error message or are delayed for some seconds.
| No data loss
| Less than one minute
| Less than 30 seconds
| {jdgserver_name} node
| Multiple {jdgserver_name} instances run in each site. If one instance fails, it takes a few seconds for the other nodes to notice the change. Entities are stored in at least two {jdgserver_name} nodes, so a single node failure does not lead to data loss.
| No data loss
| Less than one minute
| Less than 30 seconds
| {jdgserver_name} cluster failure
| If the {jdgserver_name} cluster fails in one of the sites, {project_name} will not be able to communicate with the external {jdgserver_name} on that site, and the {project_name} service will be unavailable.
@ -83,7 +83,7 @@ Manual operations might be necessary depending on the database.
| If none of the {project_name} nodes are available, the loadbalancer will detect the outage and redirect the traffic to the other site.
Some requests might receive an error message until the loadbalancer detects the failure.
| No data loss^3^
| Less than one minute
| Less than two minutes
|===
@ -121,7 +121,7 @@ A synchronously replicated database ensures that data written in one site is alw
It also ensures that the next request will not return stale data, regardless of which site serves it.
Why synchronous {jdgserver_name} replication?::
A synchronously replicated {jdgserver_name} ensures that sessions created, updated and deleted in one site are always available on the other site after a site failure and no data is lost.
A synchronously replicated {jdgserver_name} ensures that cached data in one site are always available on the other site after a site failure and no data is lost.
It also ensures that the next request will not return stale data, regardless of which site serves it.
Why is a low-latency network between sites needed?::
@ -134,7 +134,8 @@ However, as the two sites would never be fully up-to-date, this setup could lead
This would include:
+
--
* Lost logouts, meaning sessions are logged in one site although they are logged out in the other site at the point of failure when using an asynchronous {jdgserver_name} replication of sessions.
// TODO storing sessions in Infinispan is experimental. Add this bullet point back when we support it
//* Lost logouts, meaning sessions are logged in one site although they are logged out in the other site at the point of failure when using an asynchronous {jdgserver_name} replication of sessions.
* Lost changes leading to users being able to log in with an old password because database changes are not replicated to the other site at the point of failure when using asynchronous database replication.
* Invalid caches leading to users being able to log in with an old password because cache invalidations are not propagated to the other site at the point of failure when using asynchronous {jdgserver_name} replication.
--
@ -33,6 +33,8 @@ Low numbers ensure fast response times for all clients, even if there is an occa
=== JGroups connection pool
NOTE: This currently applies to single-site setups only. In a multi-site setup with an external {jdgserver_name} this is no longer a restriction.
The combined number of executor threads in all {project_name} nodes in the cluster should not exceed the number of threads available in the JGroups thread pool to avoid the error `org.jgroups.util.ThreadPool: thread pool is full`.
To see the error the first time it happens, the system property `jgroups.thread_dumps_threshold` needs to be set to `1`, as otherwise the message appears only after 10000 requests have been rejected.
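One way to set that system property in a containerized deployment is through an environment variable on the {project_name} container. A minimal sketch, assuming the image honors `JAVA_OPTS_APPEND` for extra JVM arguments; adapt it to however your deployment passes JVM options.

[source,yaml]
----
# Sketch: report the thread-pool-full error on its first occurrence.
# Assumes the Keycloak container image reads JAVA_OPTS_APPEND.
env:
  - name: JAVA_OPTS_APPEND
    value: "-Djgroups.thread_dumps_threshold=1"
----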
@ -46,9 +46,7 @@ See the <@links.ha id="concepts-database-connections" /> {section} for details.
<5> To be able to analyze the system under load, enable the metrics endpoint.
The disadvantage of the setting is that the metrics will be available at the external {project_name} endpoint, so you must add a filter to ensure the endpoint is not accessible from the outside.
Use a reverse proxy in front of {project_name} to filter out those URLs.
<6> The default setting for the internal JGroup thread pools is 200 threads maximum.
The number of all {project_name} threads in the StatefulSet should not exceed the number of JGroup threads to avoid a JGroup thread pool exhaustion which could stall {project_name} request processing.
You might consider limiting the number of {project_name} threads further because multiple concurrent threads will lead to throttling by Kubernetes once the requested CPU limit is reached.
<6> You might consider limiting the number of {project_name} threads further because multiple concurrent threads will lead to throttling by Kubernetes once the requested CPU limit is reached.
See the <@links.ha id="concepts-threads" /> {section} for details.
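For orientation, callouts <5> and <6> could be expressed in the CR roughly as follows. This is a sketch under assumptions: `metrics-enabled` and `http-pool-max-threads` are assumed to be the applicable options in your version, and the thread count shown is only a placeholder.

[source,yaml]
----
# Hypothetical sketch for callouts <5> and <6>: enable metrics and cap the
# request-processing threads. Option names and the value "50" are assumptions.
spec:
  additionalOptions:
    - name: metrics-enabled
      value: "true"
    - name: http-pool-max-threads
      value: "50"
----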
== Verifying the deployment