High availability guide updates (#32093)
* Remove connecting Infinispan to Keycloak building block
* Rephrase two sites restriction limitation
* Update the KCB generated yaml files for HA guide
* Remove setting number of owners to 1 for session caches as it is no longer necessary
* Add multi-site feature
* Remove histograms and SLOs
* Replace stonith with fencing
* Switch for DG in community and product

Closes #31029

Signed-off-by: Michal Hajas <mhajas@redhat.com>
Signed-off-by: Alexander Schwartz <aschwart@redhat.com>
Co-authored-by: Alexander Schwartz <aschwart@redhat.com>
This commit is contained in:
parent
841e2790ad
commit
709165a90a
16 changed files with 300 additions and 224 deletions
@@ -52,7 +52,7 @@ It might be considered in the future.
A clustered deployment of {project_name} in each site, connected to an external {jdgserver_name}.
-*Blueprint:* <@links.ha id="deploy-keycloak-kubernetes" /> together with <@links.ha id="connect-keycloak-to-external-infinispan"/> and the Aurora database.
+*Blueprint:* <@links.ha id="deploy-keycloak-kubernetes" /> that includes connecting to the Aurora database and the {jdgserver_name} server.
</@tmpl.guide>
@@ -14,13 +14,10 @@ Use this setup to provide {project_name} deployments that are able to tolerate s
== Deployment, data storage and caching

Two independent {project_name} deployments running in different sites are connected with a low latency network connection.
-Users, realms, clients, offline sessions, and other entities are stored in a database that is replicated synchronously across the two sites.
+Users, realms, clients, sessions, and other entities are stored in a database that is replicated synchronously across the two sites.
The data is also cached in the {project_name} Infinispan caches as local caches.
When the data is changed in one {project_name} instance, that data is updated in the database, and an invalidation message is sent to the other site using the replicated `work` cache.

Session-related data is stored in the replicated caches of the Infinispan caches of {project_name}, and forwarded to the external {jdgserver_name}, which forwards information to the external {jdgserver_name} running synchronously in the other site.
As session data of the external {jdgserver_name} is also cached in the Infinispan caches, invalidation messages of the replicated `work` cache are needed for invalidation.

In the following paragraphs and diagrams, references to deploying {jdgserver_name} apply to the external {jdgserver_name}.

image::high-availability/active-active-sync.dio.svg[]
@@ -54,7 +51,7 @@ Monitoring is necessary to detect degraded setups.
| Less than one minute

| {jdgserver_name} node
-| Multiple {jdgserver_name} instances run in each site. If one instance fails, it takes a few seconds for the other nodes to notice the change. Sessions are stored in at least two {jdgserver_name} nodes, so a single node failure does not lead to data loss.
+| Multiple {jdgserver_name} instances run in each site. If one instance fails, it takes a few seconds for the other nodes to notice the change. Entities are stored in at least two {jdgserver_name} nodes, so a single node failure does not lead to data loss.
| No data loss
| Less than one minute
@@ -62,15 +59,15 @@ Monitoring is necessary to detect degraded setups.
| If the {jdgserver_name} cluster fails in one of the sites, {project_name} will not be able to communicate with the external {jdgserver_name} on that site, and the {project_name} service will be unavailable.
The load balancer will detect the situation as `/lb-check` returns an error, and will direct all traffic to the other site.

-The setup is degraded until the {jdgserver_name} cluster is restored and the session data is re-synchronized.
+The setup is degraded until the {jdgserver_name} cluster is restored and the data is re-synchronized.
| No data loss^3^
| Seconds to minutes (depending on load balancer setup)

| Connectivity {jdgserver_name}
-| If the connectivity between the two sites is lost, session information cannot be sent to the other site.
+| If the connectivity between the two sites is lost, data cannot be sent to the other site.
Incoming requests might receive an error message or are delayed for some seconds.
The {jdgserver_name} will mark the other site offline, and will stop sending data.
-One of the sites needs to be taken offline in the load balancer until the connection is restored and the session data is re-synchronized between the two sites.
+One of the sites needs to be taken offline in the load balancer until the connection is restored and the data is re-synchronized between the two sites.
In the blueprints, we show how this can be automated.
| No data loss^3^
| Seconds to minutes (depending on load balancer setup)
@@ -100,17 +97,22 @@ The statement "`No data loss`" depends on the setup not being degraded from prev
== Known limitations

Site Failure::
-* A successful failover requires a setup not degraded from previous failures.
+A successful failover requires a setup not degraded from previous failures.
All manual operations like a re-synchronization after a previous failure must be complete to prevent data loss.
Use monitoring to ensure degradations are detected and handled in a timely manner.

Out-of-sync sites::
-* The sites can become out of sync when a synchronous {jdgserver_name} request fails.
+The sites can become out of sync when a synchronous {jdgserver_name} request fails.
This situation is currently difficult to monitor, and it would need a full manual re-sync of {jdgserver_name} to recover.
Monitoring the number of cache entries in both sites and the {project_name} log file can show when a re-sync becomes necessary.

Manual operations::
-* Manual operations that re-synchronize the {jdgserver_name} state between the sites will issue a full state transfer which will put a stress on the system (network, CPU, Java heap in {jdgserver_name} and {project_name}).
+Manual operations that re-synchronize the {jdgserver_name} state between the sites will issue a full state transfer which will put a stress on the system.

+Two sites restriction::
+This setup is tested and supported only with two sites.
+Each additional site increases overall latency as it is necessary for data to be synchronously written to each site.
+Furthermore, the probability of network failures, and therefore downtime, also increases. Therefore, we do not support more than two sites as we believe it would lead to a deployment with inferior stability and performance.

== Questions and answers
@@ -126,12 +128,6 @@ Why is a low-latency network between sites needed?::
Synchronous replication defers the response to the caller until the data is received at the other site.
For synchronous database replication and synchronous {jdgserver_name} replication, a low latency is necessary as each request can have potentially multiple interactions between the sites when data is updated which would amplify the latency.

-Is this setup limited to two sites?::
-This setup could be extended to multiple sites, and there are no fundamental changes necessary to have, for example, three sites.
-Once more sites are added, the overall latency between the sites increases, and the likeliness of network failures, and therefore short downtimes, increases as well.
-Therefore, such a deployment is expected to have worse performance and inferior stability.
-For now, it has been tested and documented with blueprints only for two sites.

Is a synchronous cluster less stable than an asynchronous cluster?::
An asynchronous setup would handle network failures between the sites gracefully, while the synchronous setup would delay requests and will throw errors to the caller where the asynchronous setup would have deferred the writes to {jdgserver_name} or the database on the other site.
However, as the two sites would never be fully up-to-date, this setup could lead to data loss during failures.
@@ -1,54 +0,0 @@
<#import "/templates/guide.adoc" as tmpl>
<#import "/templates/links.adoc" as links>

<@tmpl.guide
title="Connect {project_name} with an external {jdgserver_name}"
summary="Building block for an Infinispan deployment on Kubernetes"
tileVisible="false"
includedOptions="cache-remote-*" >

This topic describes advanced {jdgserver_name} configurations for {project_name} on Kubernetes.

== Architecture

This connects {project_name} to {jdgserver_name} using TCP connections secured by TLS 1.3.
It uses {project_name}'s truststore to verify {jdgserver_name}'s server certificate.
As {project_name} is deployed using its Operator on OpenShift in the prerequisites listed below, the Operator already added the `service-ca.crt` to the truststore which is used to sign {jdgserver_name}'s server certificates.
In other environments, add the necessary certificates to {project_name}'s truststore.

== Prerequisites

* <@links.ha id="deploy-keycloak-kubernetes" /> as it will be extended.
* <@links.ha id="deploy-infinispan-kubernetes-crossdc" />.

== Procedure

. Create a Secret with the username and password to connect to the external {jdgserver_name} deployment:
+
[source,yaml]
----
include::examples/generated/keycloak-ispn.yaml[tag=keycloak-ispn-secret]
----

. Extend the {project_name} Custom Resource with `additionalOptions` as shown below.
+
[NOTE]
====
All the memory, resource and database configurations are skipped from the CR below as they have been described in <@links.ha id="deploy-keycloak-kubernetes" /> {section} already.
Administrators should leave those configurations untouched.
====
+
[source,yaml]
----
include::examples/generated/keycloak-ispn.yaml[tag=keycloak-ispn]
----
<1> The hostname of the remote {jdgserver_name} cluster.
<2> The port of the remote {jdgserver_name} cluster.
This is optional, and it defaults to `11222`.
<3> The Secret `name` and `key` with the {jdgserver_name} username credential.
<4> The Secret `name` and `key` with the {jdgserver_name} password credential.
<5> The `spi-connections-infinispan-quarkus-site-name` is an arbitrary {jdgserver_name} site name which {project_name} needs for its Infinispan caches deployment when a remote store is used.
This site name relates only to the Infinispan caches and does not need to match any value from the external {jdgserver_name} deployment.
If you are using multiple sites for {project_name} in a cross-DC setup such as <@links.ha id="deploy-infinispan-kubernetes-crossdc" />, the site name must be different in each site.

</@tmpl.guide>
@@ -49,10 +49,10 @@ include::partials/aurora/aurora-create-peering-connections.adoc[]
include::partials/aurora/aurora-verify-peering-connections.adoc[]

-== Deploying {project_name}
+[#connecting-aurora-to-keycloak]
+== Connecting the Aurora database with {project_name}

-Now that an Aurora database has been established and linked with all of your ROSA clusters, the next step is to deploy {project_name} as described in the <@links.ha id="deploy-keycloak-kubernetes" /> {section} with the JDBC URL configured to use the Aurora database writer endpoint.
-To do this, create a `Keycloak` CR with the following adjustments:
+Now that an Aurora database has been established and linked with all of your ROSA clusters, here are the relevant {project_name} CR options to connect the Aurora database with {project_name}. These changes will be required in the <@links.ha id="deploy-keycloak-kubernetes" /> {section}. The JDBC URL is configured to use the Aurora database writer endpoint.

. Update `spec.db.url` to be `jdbc:aws-wrapper:postgresql://$HOST:5432/keycloak` where `$HOST` is the
<<aurora-writer-url, Aurora writer endpoint URL>>.
@@ -60,3 +60,7 @@ To do this, create a `Keycloak` CR with the following adjustments:
. Ensure that the Secrets referenced by `spec.db.usernameSecret` and `spec.db.passwordSecret` contain usernames and passwords defined when creating Aurora.
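The two adjustments above can be sketched as the following `Keycloak` CR fragment. This is a minimal sketch, not the generated example from the repository: replace `$HOST` with the Aurora writer endpoint, and the Secret name `keycloak-db-secret` is a placeholder for the Secret created alongside Aurora.

```yaml
apiVersion: k8s.keycloak.org/v2alpha1
kind: Keycloak
metadata:
  name: keycloak
spec:
  db:
    vendor: postgres
    # Aurora writer endpoint goes here in place of $HOST
    url: jdbc:aws-wrapper:postgresql://$HOST:5432/keycloak
    usernameSecret:
      name: keycloak-db-secret # placeholder Secret name
      key: username
    passwordSecret:
      name: keycloak-db-secret # placeholder Secret name
      key: password
```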
== Next steps

After successful deployment of the Aurora database, continue with <@links.ha id="deploy-infinispan-kubernetes-crossdc" />.

</@tmpl.guide>
@@ -15,7 +15,7 @@ include::partials/blueprint-disclaimer.adoc[]

== Architecture
In the event of a network communication failure between the two sites in a multi-site deployment, it is no
-longer possible for the two sites to continue to replicate session state between themselves and the two sites
+longer possible for the two sites to continue to replicate data between themselves and the two sites
will become increasingly out-of-sync. As it is possible for subsequent Keycloak requests to be routed to different
sites, this may lead to unexpected behaviour as previous updates will not have been applied to both sites.
@@ -247,6 +247,7 @@ aws lambda get-function-url-config \
</#noparse>
----
<1> The AWS region where the Lambda was created
+
.Output:
[source,bash]
@@ -296,7 +297,7 @@ kubectl -n ${NAMESPACE} scale --replicas=0 deployment/infinispan-router #<2>
kubectl -n ${NAMESPACE} rollout status -w deployment/infinispan-router
</#noparse>
----
-<1> Scale down the {jdgserver_name} Operator so that the next steop does not result in the deployment being recreated by the operator
+<1> Scale down the {jdgserver_name} Operator so that the next step does not result in the deployment being recreated by the operator
<2> Scale down the Gossip Router deployment. Replace `$\{NAMESPACE}` with the namespace containing your {jdgserver_name} server
+
. Verify the `SiteOffline` event has been fired on a cluster by inspecting the *Observe* -> *Alerting* menu in the OpenShift
@@ -1,10 +1,12 @@
<#import "/templates/guide.adoc" as tmpl>
<#import "/templates/links.adoc" as links>
+<#import "/templates/profile.adoc" as profile>

<@tmpl.guide
title="Deploy {jdgserver_name} for HA with the {jdgserver_name} Operator"
summary="Building block for an {jdgserver_name} deployment on Kubernetes"
-tileVisible="false" >
+tileVisible="false"
+includedOptions="cache-remote-*">

include::partials/infinispan/infinispan-attributes.adoc[]
@@ -19,7 +21,12 @@ See the <@links.ha id="introduction" /> {section} for an overview.

[IMPORTANT]
====
-Only versions based on Infinispan version ${properties["infinispan.version"]} or more recent patch releases are supported for external {jdgserver_name} deployments.
+<@profile.ifCommunity>
+Only Infinispan version ${properties["infinispan.version"]} or more recent patch releases are supported for external {jdgserver_name} deployments.
+</@profile.ifCommunity>
+<@profile.ifProduct>
+Only {jdgserver_name} version {jdgserver_min_version} or more recent patch releases are supported for external {jdgserver_name} deployments.
+</@profile.ifProduct>
====

== Architecture
@@ -171,17 +178,6 @@ include::examples/generated/ispn-site-a.yaml[tag=infinispan-crossdc]
<14> The secret with the access token to authenticate into the remote site.
--
+
-When using persistent sessions, limit the cache size for `sessions`, `offlineSessions`, `clientSessions`, and `offlineClientSessions` by extending the configuration as follows:
-+
-[source,yaml]
-----
-distributedCache:
-  owners: "1"
-  memory:
-    maxCount: 10000
-  # ...
-----
-+
For `{site-b}`, the `Infinispan` CR looks similar to the above.
Note the differences in point 4, 11 and 13.
+
@@ -195,41 +191,26 @@ include::examples/generated/ispn-site-b.yaml[tag=infinispan-crossdc]
+
{project_name} requires the following caches to be present: `sessions`, `actionTokens`, `authenticationSessions`, `offlineSessions`, `clientSessions`, `offlineClientSessions`, `loginFailures`, and `work`.
+
Due to the use of persistent user sessions, the configuration of the caches `sessions`, `offlineSessions`, `clientSessions` and `offlineClientSessions` differs from the `authenticationSessions`, `actionTokens`,
`loginFailures` and `work` caches. In this section we detail how to configure both types of cache for multi-site deployments.
+
The {jdgserver_name} {infinispan-operator-docs}#creating-caches[Cache CR] allows deploying the caches in the {jdgserver_name} cluster.
Cross-site needs to be enabled per cache as documented in the {infinispan-xsite-docs}[Cross Site Documentation].
The documentation contains more details about the options used by this {section}.
The following example shows the `Cache` CR for `{site-a}`.
+
--
-.`sessions`, `offlineSessions`, `clientSessions` and `offlineClientSessions` caches in `{site-a}`
+. In `{site-a}` create a `Cache` CR for each of the caches mentioned above with the following content.
[source,yaml]
----
include::examples/generated/ispn-site-a.yaml[tag=infinispan-cache-sessions]
----
<1> Number of owners is only 1 as sessions are stored in the DB, so there is no need to hold redundant copies in memory.
<2> The maximum number of entries which will be cached in memory.
<3> The remote site name.
<4> The cross-site communication, in this case, `SYNC`.
--
+
--
.Remaining caches in `{site-a}`
[source,yaml]
----
include::examples/generated/ispn-site-a.yaml[tag=infinispan-cache-actionTokens]
----
<1> The cross-site merge policy, invoked when there is a write-write conflict.
Set this for the caches `sessions`, `offlineSessions`, `clientSessions` and `offlineClientSessions`, and do not set it for all other caches.
<2> The remote site name.
<3> The cross-site communication, in this case, `SYNC`.
--
+
-For `{site-b}`, the `Cache` CR is similar for both `*sessions` and all other caches, except for the `backups.<name>` outlined in point 2 of the above diagram.
+For `{site-b}`, the `Cache` CR is similar, except for the `backups.<name>` outlined in point 2 of the above diagram.
+
-.session in `{site-b}`
+.sessions `Cache` CR in `{site-b}`
[source,yaml]
----
include::examples/generated/ispn-site-b.yaml[tag=infinispan-cache-sessions]
@@ -251,8 +232,48 @@ kubectl wait --for condition=WellFormed --timeout=300s infinispans.infinispan.or
kubectl wait --for condition=CrossSiteViewFormed --timeout=300s infinispans.infinispan.org -n {ns} {cluster-name}
----

[#connecting-infinispan-to-keycloak]
== Connecting {jdgserver_name} with {project_name}

Now that an {jdgserver_name} server is running, here are the relevant {project_name} CR changes necessary to connect it to {project_name}. These changes will be required in the <@links.ha id="deploy-keycloak-kubernetes" /> {section}.

. Create a Secret with the username and password to connect to the external {jdgserver_name} deployment:
+
[source,yaml]
----
include::examples/generated/keycloak-ispn.yaml[tag=keycloak-ispn-secret]
----

. Extend the {project_name} Custom Resource with `additionalOptions` as shown below.
+
[NOTE]
====
All the memory, resource and database configurations are skipped from the CR below as they have been described in <@links.ha id="deploy-keycloak-kubernetes" /> {section} already.
Administrators should leave those configurations untouched.
====
+
[source,yaml]
----
include::examples/generated/keycloak-ispn.yaml[tag=keycloak-ispn]
----
<1> The hostname of the remote {jdgserver_name} cluster.
<2> The port of the remote {jdgserver_name} cluster.
This is optional, and it defaults to `11222`.
<3> The Secret `name` and `key` with the {jdgserver_name} username credential.
<4> The Secret `name` and `key` with the {jdgserver_name} password credential.
<5> The `spi-connections-infinispan-quarkus-site-name` is an arbitrary {jdgserver_name} site name which {project_name} needs for its Infinispan caches deployment when a remote store is used.
This site name relates only to the Infinispan caches and does not need to match any value from the external {jdgserver_name} deployment.
If you are using multiple sites for {project_name} in a cross-DC setup such as <@links.ha id="deploy-infinispan-kubernetes-crossdc" />, the site name must be different in each site.
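The callouts above correspond roughly to a fragment like the following. This is a hand-written sketch, not the generated `keycloak-ispn.yaml` example: the hostname, Secret name, and site-name values are placeholders to substitute for your environment.

```yaml
apiVersion: k8s.keycloak.org/v2alpha1
kind: Keycloak
metadata:
  name: keycloak
spec:
  additionalOptions:
    - name: cache-remote-host        # <1> hostname of the remote Infinispan cluster (placeholder)
      value: infinispan.keycloak.svc
    - name: cache-remote-port        # <2> optional, defaults to 11222
      value: "11222"
    - name: cache-remote-username    # <3> Secret reference (placeholder name)
      secret:
        name: remote-store-secret
        key: username
    - name: cache-remote-password    # <4> Secret reference (placeholder name)
      secret:
        name: remote-store-secret
        key: password
    - name: spi-connections-infinispan-quarkus-site-name # <5> arbitrary, must differ per site
      value: keycloak-kubernetes
```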
=== Architecture

This connects {project_name} to {jdgserver_name} using TCP connections secured by TLS 1.3.
It uses {project_name}'s truststore to verify {jdgserver_name}'s server certificate.
As {project_name} is deployed using its Operator on OpenShift in the prerequisites listed below, the Operator already added the `service-ca.crt` to the truststore which is used to sign {jdgserver_name}'s server certificates.
In other environments, add the necessary certificates to {project_name}'s truststore.

== Next steps

-After {jdgserver_name} is deployed and running, use the procedure in the <@links.ha id="connect-keycloak-to-external-infinispan"/> {section} to connect your {project_name} cluster with the {jdgserver_name} cluster.
+After the Aurora AWS database and {jdgserver_name} are deployed and running, use the procedure in the <@links.ha id="deploy-keycloak-kubernetes" /> {section} to deploy {project_name} and connect it to all previously created building blocks.

</@tmpl.guide>
@@ -15,6 +15,8 @@ Use it together with the other building blocks outlined in the <@links.ha id="bb

* OpenShift or Kubernetes cluster running.
* Understanding of a <@links.operator id="basic-deployment" /> of {project_name} with the {project_name} Operator.
+* Aurora AWS database deployed using the <@links.ha id="deploy-aurora-multi-az" /> {section}.
+* {jdgserver_name} server deployed using the <@links.ha id="deploy-infinispan-kubernetes-crossdc" /> {section}.

== Procedure
@@ -22,7 +24,9 @@ Use it together with the other building blocks outlined in the <@links.ha id="bb

. Install the {project_name} Operator as described in the <@links.operator id="installation" /> {section}.

-. Deploy Aurora AWS as described in the <@links.ha id="deploy-aurora-multi-az" /> {section}.
+. Notice the configuration file below contains options relevant for connecting to the Aurora database from <@links.ha id="deploy-aurora-multi-az" anchor="connecting-aurora-to-keycloak" />
+
+. Notice the configuration file below contains options relevant for connecting to the {jdgserver_name} server from <@links.ha id="deploy-infinispan-kubernetes-crossdc" anchor="connecting-infinispan-to-keycloak" />

. Build a custom {project_name} image which is link:{links_server_db_url}#preparing-keycloak-for-amazon-aurora-postgresql[prepared for usage with the Amazon Aurora PostgreSQL database].
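The custom image step above can be sketched as follows. This is only an illustration under stated assumptions: the base image tag, wrapper version, and target image name are placeholders, and the linked guide remains the authoritative build instruction.

```shell
# Sketch only: bundle the AWS Advanced JDBC Wrapper into a Keycloak image.
# Base image tag, wrapper version, and registry name below are placeholders.
cat > Containerfile <<'EOF'
FROM quay.io/keycloak/keycloak:latest
# Place the AWS Advanced JDBC Wrapper jar into the providers directory
ADD --chmod=0644 https://github.com/aws/aws-advanced-jdbc-wrapper/releases/download/2.3.7/aws-advanced-jdbc-wrapper-2.3.7.jar /opt/keycloak/providers/
EOF

# Then build and push with your container tooling, for example:
# podman build -t <your-registry>/keycloak-aurora:latest -f Containerfile .
```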
@@ -23,7 +23,6 @@ data:
    cacheContainer:
      metrics:
        namesAsTags: true
        gauges: true
-        histograms: false
    server:
      endpoints:
@@ -52,9 +51,16 @@ spec:
        mode: "SYNC"
        owners: "2"
        statistics: "true"
-        remoteTimeout: 14000
+        remoteTimeout: "5000"
        encoding:
          media-type: "application/x-protostream"
        locking:
          acquireTimeout: "4000"
        transaction:
          mode: "NONE"
          locking: "PESSIMISTIC"
        stateTransfer:
-          chunkSize: 16
+          chunkSize: "16"

# end::infinispan-cache-actionTokens[]
---
@@ -73,9 +79,16 @@ spec:
        mode: "SYNC"
        owners: "2"
        statistics: "true"
-        remoteTimeout: 14000
+        remoteTimeout: "5000"
        encoding:
          media-type: "application/x-protostream"
        locking:
          acquireTimeout: "4000"
        transaction:
          mode: "NONE"
          locking: "PESSIMISTIC"
        stateTransfer:
-          chunkSize: 16
+          chunkSize: "16"

# end::infinispan-cache-authenticationSessions[]
---
@@ -94,9 +107,16 @@ spec:
        mode: "SYNC"
        owners: "2"
        statistics: "true"
-        remoteTimeout: 14000
+        remoteTimeout: "5000"
        encoding:
          media-type: "application/x-protostream"
        locking:
          acquireTimeout: "4000"
        transaction:
          mode: "NONE"
          locking: "PESSIMISTIC"
        stateTransfer:
-          chunkSize: 16
+          chunkSize: "16"

# end::infinispan-cache-clientSessions[]
---
@@ -115,9 +135,16 @@ spec:
        mode: "SYNC"
        owners: "2"
        statistics: "true"
-        remoteTimeout: 14000
+        remoteTimeout: "5000"
        encoding:
          media-type: "application/x-protostream"
        locking:
          acquireTimeout: "4000"
        transaction:
          mode: "NONE"
          locking: "PESSIMISTIC"
        stateTransfer:
-          chunkSize: 16
+          chunkSize: "16"

# end::infinispan-cache-loginFailures[]
---
@@ -136,9 +163,16 @@ spec:
        mode: "SYNC"
        owners: "2"
        statistics: "true"
-        remoteTimeout: 14000
+        remoteTimeout: "5000"
        encoding:
          media-type: "application/x-protostream"
        locking:
          acquireTimeout: "4000"
        transaction:
          mode: "NONE"
          locking: "PESSIMISTIC"
        stateTransfer:
-          chunkSize: 16
+          chunkSize: "16"

# end::infinispan-cache-offlineClientSessions[]
---
@@ -157,9 +191,16 @@ spec:
        mode: "SYNC"
        owners: "2"
        statistics: "true"
-        remoteTimeout: 14000
+        remoteTimeout: "5000"
        encoding:
          media-type: "application/x-protostream"
        locking:
          acquireTimeout: "4000"
        transaction:
          mode: "NONE"
          locking: "PESSIMISTIC"
        stateTransfer:
-          chunkSize: 16
+          chunkSize: "16"

# end::infinispan-cache-offlineSessions[]
---
@@ -178,9 +219,16 @@ spec:
        mode: "SYNC"
        owners: "2"
        statistics: "true"
-        remoteTimeout: 14000
+        remoteTimeout: "5000"
        encoding:
          media-type: "application/x-protostream"
        locking:
          acquireTimeout: "4000"
        transaction:
          mode: "NONE"
          locking: "PESSIMISTIC"
        stateTransfer:
-          chunkSize: 16
+          chunkSize: "16"

# end::infinispan-cache-sessions[]
---
@@ -199,9 +247,16 @@ spec:
        mode: "SYNC"
        owners: "2"
        statistics: "true"
-        remoteTimeout: 14000
+        remoteTimeout: "5000"
        encoding:
          media-type: "application/x-protostream"
        locking:
          acquireTimeout: "4000"
        transaction:
          mode: "NONE"
          locking: "PESSIMISTIC"
        stateTransfer:
-          chunkSize: 16
+          chunkSize: "16"

# end::infinispan-cache-work[]
---
@@ -217,6 +272,8 @@ metadata:
    infinispan.org/monitoring: 'true' # <2>
spec:
  replicas: 3
+  jmx:
+    enabled: true
# end::infinispan-single[]
# end::infinispan-crossdc[]
# This exposes the http endpoint to interact with its caches - more info - https://infinispan.org/docs/stable/titles/rest/rest.html
@@ -224,7 +281,8 @@ spec:
  expose:
    type: Route
  configMapName: "cluster-config"
-  image: quay.io/infinispan/server:15.0.7.Final
+  image: quay.io/infinispan-test/server:15.0.x
+  version: 15.0.4
  configListener:
    enabled: false
  container:
@@ -35,7 +35,6 @@ data:
    cacheContainer:
      metrics:
        namesAsTags: true
        gauges: true
-        histograms: false
    server:
      endpoints:
@@ -144,6 +143,10 @@ metadata:
spec:
  route:
    receiver: default
+    groupBy:
+      - accelerator
+    groupInterval: 90s
+    groupWait: 60s
    matchers:
      - matchType: =
        name: alertname
@@ -185,7 +188,7 @@ spec:
        locking:
          acquireTimeout: "4000"
        transaction:
-          mode: "NON_XA"
+          mode: "NONE"
          locking: "PESSIMISTIC"
        stateTransfer:
          chunkSize: "16"
@@ -194,7 +197,7 @@ spec:
      backup:
        strategy: "SYNC" # <3>
        timeout: "4500"
-        failurePolicy: "FAIL"
+        failurePolicy: "WARN"
      stateTransfer:
        chunkSize: "16"
# end::infinispan-cache-actionTokens[]
@@ -220,16 +223,17 @@ spec:
        locking:
          acquireTimeout: "4000"
        transaction:
-          mode: "NON_XA"
+          mode: "NONE"
          locking: "PESSIMISTIC"
        stateTransfer:
          chunkSize: "16"
        backups:
-          site-b: # <1>
+          mergePolicy: "ALWAYS_REMOVE" # <1>
+          site-b: # <2>
            backup:
-              strategy: "SYNC" # <2>
+              strategy: "SYNC" # <3>
              timeout: "4500"
-              failurePolicy: "FAIL"
+              failurePolicy: "WARN"
            stateTransfer:
              chunkSize: "16"
# end::infinispan-cache-authenticationSessions[]
@@ -255,16 +259,17 @@ spec:
        locking:
          acquireTimeout: "4000"
        transaction:
-          mode: "NON_XA"
+          mode: "NONE"
          locking: "PESSIMISTIC"
        stateTransfer:
          chunkSize: "16"
        backups:
-          site-b: # <1>
+          mergePolicy: "ALWAYS_REMOVE" # <1>
+          site-b: # <2>
            backup:
-              strategy: "SYNC" # <2>
+              strategy: "SYNC" # <3>
              timeout: "4500"
-              failurePolicy: "FAIL"
+              failurePolicy: "WARN"
            stateTransfer:
              chunkSize: "16"
# end::infinispan-cache-clientSessions[]
@@ -290,16 +295,16 @@ spec:
        locking:
          acquireTimeout: "4000"
        transaction:
-          mode: "NON_XA"
+          mode: "NONE"
          locking: "PESSIMISTIC"
        stateTransfer:
          chunkSize: "16"
        backups:
-          site-b: # <1>
+          site-b: # <2>
            backup:
-              strategy: "SYNC" # <2>
+              strategy: "SYNC" # <3>
              timeout: "4500"
-              failurePolicy: "FAIL"
+              failurePolicy: "WARN"
            stateTransfer:
              chunkSize: "16"
# end::infinispan-cache-loginFailures[]
@@ -325,16 +330,17 @@ spec:
        locking:
          acquireTimeout: "4000"
        transaction:
-          mode: "NON_XA"
+          mode: "NONE"
          locking: "PESSIMISTIC"
        stateTransfer:
          chunkSize: "16"
        backups:
-          site-b: # <1>
+          mergePolicy: "ALWAYS_REMOVE" # <1>
+          site-b: # <2>
            backup:
-              strategy: "SYNC" # <2>
+              strategy: "SYNC" # <3>
              timeout: "4500"
-              failurePolicy: "FAIL"
+              failurePolicy: "WARN"
            stateTransfer:
              chunkSize: "16"
# end::infinispan-cache-offlineClientSessions[]
@ -352,7 +358,7 @@ spec:
|
|||
template: |-
|
||||
distributedCache:
|
||||
mode: "SYNC"
|
||||
owners: "1" # <1>
|
||||
owners: "2"
|
||||
statistics: "true"
|
||||
remoteTimeout: "5000"
|
||||
encoding:
|
||||
|
@ -360,16 +366,17 @@ spec:
|
|||
locking:
|
||||
acquireTimeout: "4000"
|
||||
transaction:
|
||||
mode: "NON_XA"
|
||||
mode: "NONE"
|
||||
locking: "PESSIMISTIC"
|
||||
stateTransfer:
|
||||
chunkSize: "16"
|
||||
backups:
|
||||
mergePolicy: "ALWAYS_REMOVE" # <1>
|
||||
site-b: # <2>
|
||||
backup:
|
||||
strategy: "SYNC" # <3>
|
||||
timeout: "4500"
|
||||
failurePolicy: "FAIL"
|
||||
failurePolicy: "WARN"
|
||||
stateTransfer:
|
||||
chunkSize: "16"
|
||||
# end::infinispan-cache-offlineSessions[]
|
||||
|
@ -387,7 +394,7 @@ spec:
|
|||
template: |-
|
||||
distributedCache:
|
||||
mode: "SYNC"
|
||||
owners: "1" # <1>
|
||||
owners: "2"
|
||||
statistics: "true"
|
||||
remoteTimeout: "5000"
|
||||
encoding:
|
||||
|
@ -395,18 +402,17 @@ spec:
|
|||
locking:
|
||||
acquireTimeout: "4000"
|
||||
transaction:
|
||||
mode: "NON_XA"
|
||||
mode: "NONE"
|
||||
locking: "PESSIMISTIC"
|
||||
stateTransfer:
|
||||
chunkSize: "16"
|
||||
memory:
|
||||
maxCount: 10000 # <2>
|
||||
backups:
|
||||
site-b: # <3>
|
||||
mergePolicy: "ALWAYS_REMOVE" # <1>
|
||||
site-b: # <2>
|
||||
backup:
|
||||
strategy: "SYNC" # <4>
|
||||
strategy: "SYNC" # <3>
|
||||
timeout: "4500"
|
||||
failurePolicy: "FAIL"
|
||||
failurePolicy: "WARN"
|
||||
stateTransfer:
|
||||
chunkSize: "16"
|
||||
# end::infinispan-cache-sessions[]
|
||||
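Read together, the hunks above leave the session caches non-transactional, with two owners and a warn-only synchronous backup. A sketch of the resulting `sessions` cache template, assembled only from the lines shown in the diff (nesting and indentation reconstructed, so verify against the generated file):

```yaml
distributedCache:
  mode: "SYNC"
  owners: "2"                      # was: owners: "1"
  statistics: "true"
  remoteTimeout: "5000"
  locking:
    acquireTimeout: "4000"
  transaction:
    mode: "NONE"                   # was: NON_XA
    locking: "PESSIMISTIC"
  stateTransfer:
    chunkSize: "16"
  backups:
    mergePolicy: "ALWAYS_REMOVE"   # newly added conflict-resolution policy
    site-b:
      backup:
        strategy: "SYNC"
        timeout: "4500"
        failurePolicy: "WARN"      # was: FAIL, so a failed backup no longer fails the operation
        stateTransfer:
          chunkSize: "16"
```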
@@ -432,7 +438,7 @@ spec:
       locking:
         acquireTimeout: "4000"
       transaction:
-        mode: "NON_XA"
+        mode: "NONE"
         locking: "PESSIMISTIC"
       stateTransfer:
         chunkSize: "16"

@@ -441,7 +447,7 @@ spec:
           backup:
             strategy: "SYNC" # <3>
             timeout: "4500"
-            failurePolicy: "FAIL"
+            failurePolicy: "WARN"
             stateTransfer:
               chunkSize: "16"
 # end::infinispan-cache-work[]

@@ -458,6 +464,8 @@ metadata:
     infinispan.org/monitoring: 'true' # <2>
 spec:
   replicas: 3
+  jmx:
+    enabled: true
 # end::infinispan-single[]
 # end::infinispan-crossdc[]
 # This exposes the http endpoint to interact with its caches - more info - https://infinispan.org/docs/stable/titles/rest/rest.html

@@ -465,7 +473,8 @@ spec:
   expose:
     type: Route
   configMapName: "cluster-config"
-  image: quay.io/infinispan/server:15.0.7.Final
+  image: quay.io/infinispan-test/server:15.0.x
+  version: 15.0.4
   configListener:
     enabled: false
   container:
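The last two hunks touch the `Infinispan` CR itself. Combined, the affected `spec` fields read roughly as follows (other fields are omitted and the resource name is not shown in the diff, so it is assumed here):

```yaml
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: infinispan        # name assumed; not part of the hunks above
spec:
  replicas: 3
  jmx:
    enabled: true         # newly added
  expose:
    type: Route
  configMapName: "cluster-config"
  image: quay.io/infinispan-test/server:15.0.x   # replaces quay.io/infinispan/server:15.0.7.Final
  version: 15.0.4                                # newly added
  configListener:
    enabled: false
```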
@@ -23,7 +23,6 @@ data:
     cacheContainer:
       metrics:
         namesAsTags: true
         gauges: true
-        histograms: false
     server:
       endpoints:

@@ -144,7 +143,7 @@ spec:
       locking:
         acquireTimeout: "4000"
       transaction:
-        mode: "NON_XA"
+        mode: "NONE"
         locking: "PESSIMISTIC"
       stateTransfer:
         chunkSize: "16"

@@ -153,7 +152,7 @@ spec:
           backup:
             strategy: "SYNC" # <3>
             timeout: "4500"
-            failurePolicy: "FAIL"
+            failurePolicy: "WARN"
             stateTransfer:
               chunkSize: "16"
 # end::infinispan-cache-actionTokens[]

@@ -179,16 +178,17 @@ spec:
       locking:
         acquireTimeout: "4000"
       transaction:
-        mode: "NON_XA"
+        mode: "NONE"
         locking: "PESSIMISTIC"
       stateTransfer:
         chunkSize: "16"
       backups:
-        site-a: # <1>
+        mergePolicy: "ALWAYS_REMOVE" # <1>
+        site-a: # <2>
           backup:
-            strategy: "SYNC" # <2>
+            strategy: "SYNC" # <3>
             timeout: "4500"
-            failurePolicy: "FAIL"
+            failurePolicy: "WARN"
             stateTransfer:
               chunkSize: "16"
 # end::infinispan-cache-authenticationSessions[]

@@ -214,16 +214,17 @@ spec:
       locking:
         acquireTimeout: "4000"
       transaction:
-        mode: "NON_XA"
+        mode: "NONE"
         locking: "PESSIMISTIC"
       stateTransfer:
         chunkSize: "16"
       backups:
-        site-a: # <1>
+        mergePolicy: "ALWAYS_REMOVE" # <1>
+        site-a: # <2>
           backup:
-            strategy: "SYNC" # <2>
+            strategy: "SYNC" # <3>
             timeout: "4500"
-            failurePolicy: "FAIL"
+            failurePolicy: "WARN"
             stateTransfer:
               chunkSize: "16"
 # end::infinispan-cache-clientSessions[]

@@ -249,7 +250,7 @@ spec:
       locking:
         acquireTimeout: "4000"
       transaction:
-        mode: "NON_XA"
+        mode: "NONE"
         locking: "PESSIMISTIC"
       stateTransfer:
         chunkSize: "16"

@@ -258,7 +259,7 @@ spec:
           backup:
             strategy: "SYNC" # <3>
             timeout: "4500"
-            failurePolicy: "FAIL"
+            failurePolicy: "WARN"
             stateTransfer:
               chunkSize: "16"
 # end::infinispan-cache-loginFailures[]

@@ -284,16 +285,17 @@ spec:
       locking:
         acquireTimeout: "4000"
       transaction:
-        mode: "NON_XA"
+        mode: "NONE"
         locking: "PESSIMISTIC"
       stateTransfer:
         chunkSize: "16"
       backups:
-        site-a: # <1>
+        mergePolicy: "ALWAYS_REMOVE" # <1>
+        site-a: # <2>
           backup:
-            strategy: "SYNC" # <2>
+            strategy: "SYNC" # <3>
             timeout: "4500"
-            failurePolicy: "FAIL"
+            failurePolicy: "WARN"
             stateTransfer:
               chunkSize: "16"
 # end::infinispan-cache-offlineClientSessions[]

@@ -319,16 +321,17 @@ spec:
       locking:
         acquireTimeout: "4000"
       transaction:
-        mode: "NON_XA"
+        mode: "NONE"
         locking: "PESSIMISTIC"
       stateTransfer:
         chunkSize: "16"
       backups:
-        site-a: # <1>
+        mergePolicy: "ALWAYS_REMOVE" # <1>
+        site-a: # <2>
           backup:
-            strategy: "SYNC" # <2>
+            strategy: "SYNC" # <3>
             timeout: "4500"
-            failurePolicy: "FAIL"
+            failurePolicy: "WARN"
             stateTransfer:
               chunkSize: "16"
 # end::infinispan-cache-offlineSessions[]

@@ -346,7 +349,7 @@ spec:
   template: |-
     distributedCache:
       mode: "SYNC"
-      owners: "1" # <1>
+      owners: "2"
       statistics: "true"
       remoteTimeout: "5000"
       encoding:

@@ -354,16 +357,17 @@ spec:
       locking:
         acquireTimeout: "4000"
       transaction:
-        mode: "NON_XA"
+        mode: "NONE"
         locking: "PESSIMISTIC"
       stateTransfer:
         chunkSize: "16"
       backups:
-        site-a: # <3>
+        mergePolicy: "ALWAYS_REMOVE" # <1>
+        site-a: # <2>
           backup:
-            strategy: "SYNC" # <4>
+            strategy: "SYNC" # <3>
             timeout: "4500"
-            failurePolicy: "FAIL"
+            failurePolicy: "WARN"
             stateTransfer:
               chunkSize: "16"
 # end::infinispan-cache-sessions[]

@@ -389,7 +393,7 @@ spec:
       locking:
         acquireTimeout: "4000"
       transaction:
-        mode: "NON_XA"
+        mode: "NONE"
         locking: "PESSIMISTIC"
       stateTransfer:
         chunkSize: "16"

@@ -398,7 +402,7 @@ spec:
           backup:
             strategy: "SYNC" # <3>
             timeout: "4500"
-            failurePolicy: "FAIL"
+            failurePolicy: "WARN"
             stateTransfer:
               chunkSize: "16"
 # end::infinispan-cache-work[]

@@ -415,6 +419,8 @@ metadata:
     infinispan.org/monitoring: 'true' # <2>
 spec:
   replicas: 3
+  jmx:
+    enabled: true
 # end::infinispan-single[]
 # end::infinispan-crossdc[]
 # This exposes the http endpoint to interact with its caches - more info - https://infinispan.org/docs/stable/titles/rest/rest.html

@@ -422,7 +428,8 @@ spec:
   expose:
     type: Route
   configMapName: "cluster-config"
-  image: quay.io/infinispan/server:15.0.7.Final
+  image: quay.io/infinispan-test/server:15.0.x
+  version: 15.0.4
   configListener:
     enabled: false
   container:
@@ -54,7 +54,7 @@ metadata:
   name: keycloak-providers
   namespace: keycloak
 binaryData:
-  keycloak-benchmark-dataset-0.12-SNAPSHOT.jar: ...
+  keycloak-benchmark-dataset-0.13-SNAPSHOT.jar: ...
 ---
 # Source: keycloak/templates/postgres/postgres-exporter-configmap.yaml
 apiVersion: v1

@@ -206,12 +206,11 @@ spec:
           value: keycloak
         - name: POSTGRES_DB
           value: keycloak
-        image: postgres:13.2
+        args:
+          # default of max_prepared_transactions is 0, and this setting should match the number of active connections
+          # so that running Quarkus with JTA and more than one data store can prepare transactions.
+          - -c
+          - max_prepared_transactions=100
+        image: postgres:15
+        volumeMounts:
+          # Using volume mount for PostgreSQL's data folder as it is otherwise not writable
+          - mountPath: /var/lib/postgresql
+            name: cache-volume
         resources:
           requests:
             cpu: "0"

@@ -235,6 +234,9 @@ spec:
         ports:
           - containerPort: 5432
             protocol: TCP
+      volumes:
+        - name: cache-volume
+          emptyDir: {}
       restartPolicy: Always
       # The rhel9/postgresql-13 is known to take ~30 seconds to shut down
       # As this is a deployment with ephemeral storage, there is no need to wait as the data will be gone anyway
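Taken together, the PostgreSQL hunks above raise `max_prepared_transactions` (needed once Quarkus runs JTA transactions across more than one data store), bump the image, and mount the data directory on an ephemeral volume. A sketch of the resulting container spec under those assumptions (container name and surrounding Deployment fields are not shown in the diff and are assumed here):

```yaml
containers:
  - name: postgres            # container name assumed; not shown in the diff
    image: postgres:15        # replaces postgres:13.2
    args:
      # default of max_prepared_transactions is 0; it should match the number of
      # active connections so JTA can prepare transactions on this data store
      - -c
      - max_prepared_transactions=100
    volumeMounts:
      # PostgreSQL's data folder is otherwise not writable in this setup
      - mountPath: /var/lib/postgresql
        name: cache-volume
volumes:
  - name: cache-volume
    emptyDir: {}              # ephemeral storage; data is gone after a restart
```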
@@ -455,6 +457,12 @@ spec:
         # tag::keycloak-ispn[]
         additionalOptions:
           # end::keycloak-ispn[]
+          # end::keycloak[]
+          - name: http-metrics-histograms-enabled
+            value: 'true'
+          - name: http-metrics-slos
+            value: '5,10,25,50,250,500'
+          # tag::keycloak[]
           # tag::keycloak-queue-size[]
           - name: http-max-queued-requests
             value: "1000"

@@ -491,7 +499,7 @@ spec:
     podTemplate:
       metadata:
         annotations:
-          checksum/config: 385f54cb8e4bf326f6970aa2a0c8e573d35d9071e69ab2baee252728748bca76-4832924b47210161956e3b1718daf07ff52d801545186a76c391485eaf1897d3-<KEYCLOAK_IMAGE_HERE>-01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b-v1.27.0
+          checksum/config: d0810a69cecfbd99a7527cb6d30decb179d7a7ee548119fc19d34f3ce2a777a7-9bfd430c6539df907f0421bb34c92fb32194d461565bd342f7f96ff5a5408273-<KEYCLOAK_IMAGE_HERE>-01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b-v1.27.0
       spec:
         containers:
           - env:

@@ -508,6 +516,7 @@ spec:
                   name: keycloak-preconfigured-admin
                   key: password
                   optional: false
+            # JMX is disabled as it breaks Quarkus configuration. Issue is tracked in https://github.com/keycloak/keycloak-benchmark/issues/840
             - name: JAVA_OPTS_APPEND # <5>
               value: ""
             ports:

@@ -522,8 +531,8 @@ spec:
             #   - 'true'
             volumeMounts:
               - name: keycloak-providers
-                mountPath: /opt/keycloak/providers/keycloak-benchmark-dataset-0.12-SNAPSHOT.jar
-                subPath: keycloak-benchmark-dataset-0.12-SNAPSHOT.jar
+                mountPath: /opt/keycloak/providers/keycloak-benchmark-dataset-0.13-SNAPSHOT.jar
+                subPath: keycloak-benchmark-dataset-0.13-SNAPSHOT.jar
                 readOnly: true
         volumes:
           - name: keycloak-providers

@@ -541,8 +550,7 @@ spec:
       matchLabels:
         app: keycloak
   podMetricsEndpoints:
-    # todo: targetPort is deprecated, ask the operator to specify a name instead
-    - targetPort: 8443
+    - port: management
       scheme: https
       tlsConfig:
         insecureSkipVerify: true
@@ -41,7 +41,7 @@ metadata:
   name: keycloak-providers
   namespace: keycloak
 binaryData:
-  keycloak-benchmark-dataset-0.12-SNAPSHOT.jar: ...
+  keycloak-benchmark-dataset-0.13-SNAPSHOT.jar: ...
 ---
 # Source: keycloak/templates/postgres/postgres-exporter-configmap.yaml
 apiVersion: v1

@@ -193,12 +193,11 @@ spec:
           value: keycloak
         - name: POSTGRES_DB
           value: keycloak
-        image: postgres:13.2
+        args:
+          # default of max_prepared_transactions is 0, and this setting should match the number of active connections
+          # so that running Quarkus with JTA and more than one data store can prepare transactions.
+          - -c
+          - max_prepared_transactions=100
+        image: postgres:15
+        volumeMounts:
+          # Using volume mount for PostgreSQL's data folder as it is otherwise not writable
+          - mountPath: /var/lib/postgresql
+            name: cache-volume
         resources:
           requests:
             cpu: "2"

@@ -222,6 +221,9 @@ spec:
         ports:
           - containerPort: 5432
             protocol: TCP
+      volumes:
+        - name: cache-volume
+          emptyDir: {}
       restartPolicy: Always
       # The rhel9/postgresql-13 is known to take ~30 seconds to shut down
       # As this is a deployment with ephemeral storage, there is no need to wait as the data will be gone anyway

@@ -444,6 +446,12 @@ spec:
         # tag::keycloak-ispn[]
         additionalOptions:
           # end::keycloak-ispn[]
+          # end::keycloak[]
+          - name: http-metrics-histograms-enabled
+            value: 'true'
+          - name: http-metrics-slos
+            value: '5,10,25,50,250,500'
+          # tag::keycloak[]
           # tag::keycloak-queue-size[]
           - name: http-max-queued-requests
             value: "1000"

@@ -454,6 +462,23 @@ spec:
             value: 'true'
           - name: http-pool-max-threads # <6>
             value: "66"
+          # end::keycloak[]
+          # This block is just for documentation purposes as we need both versions of Infinispan config, with and without numbers to corresponding options
+          # tag::keycloak[]
+          - name: cache-remote-host
+            value: "infinispan.keycloak.svc"
+          - name: cache-remote-port
+            value: "11222"
+          - name: cache-remote-username
+            secret:
+              name: remote-store-secret
+              key: username
+          - name: cache-remote-password
+            secret:
+              name: remote-store-secret
+              key: password
+          - name: spi-connections-infinispan-quarkus-site-name
+            value: keycloak
           - name: db-driver
             value: software.amazon.jdbc.Driver
         http:
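The options added above wire {project_name} to the external {jdgserver_name} through its remote-cache settings, with the credentials taken from a Kubernetes Secret. Extracted from the hunk, the relevant `additionalOptions` entries are:

```yaml
additionalOptions:
  - name: cache-remote-host
    value: "infinispan.keycloak.svc"   # in-cluster service name of the Infinispan deployment
  - name: cache-remote-port
    value: "11222"                     # default Hot Rod/REST single port
  - name: cache-remote-username
    secret:
      name: remote-store-secret
      key: username
  - name: cache-remote-password
    secret:
      name: remote-store-secret
      key: password
  - name: spi-connections-infinispan-quarkus-site-name
    value: keycloak
```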
@@ -464,7 +489,7 @@ spec:
     podTemplate:
       metadata:
         annotations:
-          checksum/config: 2cae63c85a3485c135aebe1472971dd056b1dda42fb54ef2f891bc521e31fc1a-34c125a6d541ad11d915b6d4f128a9281329070f67d06de917c9c3201e9326c1-<KEYCLOAK_IMAGE_HERE>-01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b-v1.27.0
+          checksum/config: d0810a69cecfbd99a7527cb6d30decb179d7a7ee548119fc19d34f3ce2a777a7-9af6f9e8393229798cfb789798e36f84e39803616fe3e51b2a38e3ce05830565-<KEYCLOAK_IMAGE_HERE>-01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b-v1.27.0
       spec:
         containers:
           - env:

@@ -481,6 +506,7 @@ spec:
                   name: keycloak-preconfigured-admin
                   key: password
                   optional: false
+            # JMX is disabled as it breaks Quarkus configuration. Issue is tracked in https://github.com/keycloak/keycloak-benchmark/issues/840
             - name: JAVA_OPTS_APPEND # <5>
               value: ""
             ports:

@@ -495,8 +521,8 @@ spec:
             #   - 'true'
             volumeMounts:
               - name: keycloak-providers
-                mountPath: /opt/keycloak/providers/keycloak-benchmark-dataset-0.12-SNAPSHOT.jar
-                subPath: keycloak-benchmark-dataset-0.12-SNAPSHOT.jar
+                mountPath: /opt/keycloak/providers/keycloak-benchmark-dataset-0.13-SNAPSHOT.jar
+                subPath: keycloak-benchmark-dataset-0.13-SNAPSHOT.jar
                 readOnly: true
         volumes:
           - name: keycloak-providers

@@ -514,8 +540,7 @@ spec:
       matchLabels:
         app: keycloak
   podMetricsEndpoints:
-    # todo: targetPort is deprecated, ask the operator to specify a name instead
-    - targetPort: 8443
+    - port: management
       scheme: https
       tlsConfig:
         insecureSkipVerify: true
@@ -29,9 +29,8 @@ Additional performance tuning and security hardening are still recommended when
 == Blueprints for building blocks
 
 * <@links.ha id="deploy-aurora-multi-az" />
+* <@links.ha id="deploy-keycloak-kubernetes" />
 * <@links.ha id="deploy-infinispan-kubernetes-crossdc" />
-* <@links.ha id="connect-keycloak-to-external-infinispan" />
-* <@links.ha id="deploy-keycloak-kubernetes" />
 * <@links.ha id="deploy-aws-accelerator-loadbalancer" />
 * <@links.ha id="deploy-aws-accelerator-fencing-lambda" />
@@ -12,8 +12,7 @@ include::partials/infinispan/infinispan-attributes.adoc[]
 Use this when the state of {jdgserver_name} clusters of two sites become disconnected and the contents of the caches are out-of-sync.
 Perform this for example after a split-brain or when one site has been taken offline for maintenance.
 
-At the end of the procedure, the session contents on the secondary site have been discarded and replaced by the session
-contents of the active site. All caches in the offline site are cleared to prevent invalid cache contents.
+At the end of the procedure, the data on the secondary site have been discarded and replaced by the data of the active site. All caches in the offline site are cleared to prevent invalid cache contents.
 
 == Procedures
@@ -2,9 +2,8 @@ introduction
 concepts-multi-site
 bblocks-multi-site
 deploy-aurora-multi-az
+deploy-keycloak-kubernetes
 deploy-infinispan-kubernetes-crossdc
-connect-keycloak-to-external-infinispan
-deploy-keycloak-kubernetes
 deploy-aws-route53-loadbalancer
 deploy-aws-route53-failover-lambda
 operate-failover
File diff suppressed because one or more lines are too long
Image changed: Before Size: 23 KiB | After Size: 24 KiB