<#import "/templates/guide.adoc" as tmpl>
<#import "/templates/links.adoc" as links>
<#import "/templates/profile.adoc" as profile>

<@tmpl.guide
title="Deploy {jdgserver_name} for HA with the {jdgserver_name} Operator"
summary="Building block for an {jdgserver_name} deployment on Kubernetes"
tileVisible="false"
includedOptions="cache-remote-*">

include::partials/infinispan/infinispan-attributes.adoc[]

This {section} describes the procedures required to deploy {jdgserver_name} in a multiple-cluster environment (cross-site).
For simplicity, this topic uses the minimum configuration possible that allows {project_name} to be used with an external {jdgserver_name}.

This {section} assumes two {ocp} clusters named `{site-a}` and `{site-b}`.

This is a building block following the concepts described in the <@links.ha id="concepts-multi-site" /> {section}.
See the <@links.ha id="introduction" /> {section} for an overview.

[IMPORTANT]
====
<@profile.ifCommunity>
Only Infinispan version ${properties["infinispan.version"]} or more recent patch releases are supported for external {jdgserver_name} deployments.
</@profile.ifCommunity>
<@profile.ifProduct>
Only {jdgserver_name} version {jdgserver_min_version} or more recent patch releases are supported for external {jdgserver_name} deployments.
</@profile.ifProduct>
====

== Architecture

This setup deploys two synchronously replicating {jdgserver_name} clusters in two sites with a low-latency network connection.
An example of this scenario could be two availability zones in one AWS region.

{project_name}, the load balancer, and the database have been removed from the following diagram for simplicity.

image::high-availability/infinispan-crossdc-az.dio.svg[]

== Prerequisites

include::partials/infinispan/infinispan-prerequisites.adoc[]

== Procedure

include::partials/infinispan/infinispan-install-operator.adoc[]
include::partials/infinispan/infinispan-credentials.adoc[]
+
These commands must be executed on both {ocp} clusters.
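+
For example, assuming `kubectl` contexts named after the sites exist in your local configuration (an assumption, not something created by this {section}), you can repeat the commands against each cluster by switching contexts:
+
[source,bash,subs="+attributes"]
----
# Run the commands against the first cluster ...
kubectl config use-context {site-a}

# ... then switch contexts and repeat them against the second cluster
kubectl config use-context {site-b}
----
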
. Create a service account.
+
A service account is required to establish a connection between clusters.
The {ispn-operator} uses it to inspect the network configuration from the remote site and to configure the local {jdgserver_name} cluster accordingly.
+
For more details, see the {infinispan-operator-docs}#managed-cross-site-connections_cross-site[Managing Cross-Site Connections] documentation.
+
.. Create a secret of type `service-account-token` as follows.
The same YAML file can be used in both {ocp} clusters.
+
.xsite-sa-secret-token.yaml
[source,yaml,subs="+attributes"]
----
apiVersion: v1
kind: Secret
metadata:
  name: ispn-xsite-sa-token #<1>
  annotations:
    kubernetes.io/service-account.name: "{sa}" #<2>
type: kubernetes.io/service-account-token
----
<1> The secret name.
<2> The service account name.
.. Create the service account and generate an access token in both {ocp} clusters.
+
.Create the service account in `{site-a}`
[source,bash,subs="+attributes"]
----
kubectl create sa -n {ns} {sa}
oc policy add-role-to-user view -n {ns} -z {sa}
kubectl create -f xsite-sa-secret-token.yaml -n {ns}
kubectl get secrets ispn-xsite-sa-token -n {ns} -o jsonpath="{.data.token}" | base64 -d > {site-a}-token.txt
----
+
.Create the service account in `{site-b}`
[source,bash,subs="+attributes"]
----
kubectl create sa -n {ns} {sa}
oc policy add-role-to-user view -n {ns} -z {sa}
kubectl create -f xsite-sa-secret-token.yaml -n {ns}
kubectl get secrets ispn-xsite-sa-token -n {ns} -o jsonpath="{.data.token}" | base64 -d > {site-b}-token.txt
----
+
.. Deploy the token from `{site-a}` into `{site-b}` and vice versa:
+
.Deploy `{site-b}` token into `{site-a}`
[source,bash,subs="+attributes"]
----
kubectl create secret generic -n {ns} {sa-secret} \
  --from-literal=token="$(cat {site-b}-token.txt)"
----
+
.Deploy `{site-a}` token into `{site-b}`
[source,bash,subs="+attributes"]
----
kubectl create secret generic -n {ns} {sa-secret} \
  --from-literal=token="$(cat {site-a}-token.txt)"
----

. Create TLS secrets.
+
In this {section}, {jdgserver_name} uses an {ocp} Route for the cross-site communication.
It uses the SNI extension of TLS to direct the traffic to the correct Pods.
To achieve that, JGroups uses TLS sockets, which require a Keystore and Truststore with the correct certificates.
+
For more information, see the {infinispan-operator-docs}#securing-cross-site-connections_cross-site[Securing Cross Site Connections] documentation or this https://developers.redhat.com/learn/openshift/cross-site-and-cross-applications-red-hat-openshift-and-red-hat-data-grid[Red Hat Developer Guide].
+
Upload the Keystore and the Truststore in an {ocp} Secret.
The secret contains the file content, the password to access it, and the type of the store.
Instructions for creating the certificates and the stores are beyond the scope of this {section}.
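+
As an illustration only, a self-signed Keystore and Truststore matching the file names and passwords used in the commands below could be generated as follows; the subject name and alias are example values to adapt to your PKI:
+
.Generate an example Keystore and Truststore (illustrative sketch)
[source,bash]
----
# Generate a self-signed key pair in a PKCS12 Keystore
keytool -genkeypair -keyalg RSA -keysize 2048 \
  -dname "CN=example-xsite" -alias xsite \
  -storetype PKCS12 -keystore ./certs/keystore.p12 -storepass secret

# Export the certificate and import it into a PKCS12 Truststore
keytool -exportcert -alias xsite \
  -keystore ./certs/keystore.p12 -storepass secret \
  -file ./certs/xsite.crt
keytool -importcert -noprompt -alias xsite \
  -file ./certs/xsite.crt \
  -storetype PKCS12 -keystore ./certs/truststore.p12 -storepass caSecret
----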
+
To upload the Keystore as a Secret, use the following command:
+
.Deploy a Keystore
[source,bash,subs="+attributes"]
----
kubectl -n {ns} create secret generic {ks-secret} \
  --from-file=keystore.p12="./certs/keystore.p12" \ # <1>
  --from-literal=password=secret \ # <2>
  --from-literal=type=pkcs12 # <3>
----
<1> The filename and the path to the Keystore.
<2> The password to access the Keystore.
<3> The Keystore type.
+
To upload the Truststore as a Secret, use the following command:
+
.Deploy a Truststore
[source,bash,subs="+attributes"]
----
kubectl -n {ns} create secret generic {ts-secret} \
  --from-file=truststore.p12="./certs/truststore.p12" \ # <1>
  --from-literal=password=caSecret \ # <2>
  --from-literal=type=pkcs12 # <3>
----
<1> The filename and the path to the Truststore.
<2> The password to access the Truststore.
<3> The Truststore type.
+
NOTE: Keystore and Truststore must be uploaded in both {ocp} clusters.

. Create a Cluster for {jdgserver_name} with Cross-Site enabled.
+
The {infinispan-operator-docs}#setting-up-xsite[Setting Up Cross-Site] documentation provides all the information on how to create and configure your {jdgserver_name} cluster with cross-site enabled, including the previous steps.
+
A basic example is provided in this {section} using the credentials, tokens, and TLS Keystore/Truststore created by the commands from the previous steps.
+
--
.The `Infinispan` CR for `{site-a}`
[source,yaml]
----
include::examples/generated/ispn-site-a.yaml[tag=infinispan-crossdc]
----
<1> The cluster name.
<2> Allows the cluster to be monitored by Prometheus.
<3> If using a custom credential, configure the secret name here.
<4> The name of the local site, in this case `{site-a}`.
<5> Exposing the cross-site connection using an {ocp} Route.
<6> The secret name where the Keystore exists as defined in the previous step.
<7> The alias of the certificate inside the Keystore.
<8> The secret key (filename) of the Keystore as defined in the previous step.
<9> The secret name where the Truststore exists as defined in the previous step.
<10> The secret key (filename) of the Truststore as defined in the previous step.
<11> The remote site's name, in this case `{site-b}`.
<12> The namespace of the {jdgserver_name} cluster from the remote site.
<13> The {ocp} API URL for the remote site.
<14> The secret with the access token to authenticate into the remote site.
--
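+
Because the `Infinispan` CR above is included from a generated file, a condensed sketch of its cross-site-related structure follows for orientation. All names are assumptions carried over from the previous steps, the comments refer to the callout numbers above, and the generated example is authoritative:
+
[source,yaml]
----
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: infinispan                      # (1) assumed cluster name
  annotations:
    infinispan.org/monitoring: "true"   # (2) enables Prometheus monitoring
spec:
  replicas: 3
  security:
    endpointSecretName: connect-secret  # (3) assumed custom credential secret
  service:
    type: DataGrid
    sites:
      local:
        name: site-a                    # (4) assumed local site name
        expose:
          type: Route                   # (5) cross-site connection via a Route
        encryption:
          transportKeyStore:
            secretName: xsite-keystore-secret    # (6) assumed Keystore secret
            alias: xsite                         # (7) assumed certificate alias
            filename: keystore.p12               # (8) Keystore secret key
          trustStore:
            secretName: xsite-truststore-secret  # (9) assumed Truststore secret
            filename: truststore.p12             # (10) Truststore secret key
      locations:
        - name: site-b                  # (11) assumed remote site name
          clusterName: infinispan
          namespace: keycloak           # (12) assumed remote namespace
          url: openshift://api.site-b.example.com:6443  # (13) assumed remote API URL
          secretName: xsite-token-secret          # (14) assumed access-token secret
----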
+
For `{site-b}`, the `Infinispan` CR looks similar to the above.
Note the differences in points 4, 11, and 13.
+
.The `Infinispan` CR for `{site-b}`
[source,yaml]
----
include::examples/generated/ispn-site-b.yaml[tag=infinispan-crossdc]
----

. Create the caches for {project_name}.
+
{project_name} requires the following caches to be present: `actionTokens`, `authenticationSessions`, `loginFailures`, and `work`.
+
The {jdgserver_name} {infinispan-operator-docs}#creating-caches[Cache CR] allows deploying the caches in the {jdgserver_name} cluster.
Cross-site needs to be enabled per cache as documented in the {infinispan-xsite-docs}[Cross Site Documentation].
The documentation contains more details about the options used by this {section}.
The following example shows the `Cache` CR for `{site-a}`.
+
--
. In `{site-a}`, create a `Cache` CR for each of the caches mentioned above with the following content.
This is an example for the `authenticationSessions` cache:
+
[source,yaml]
----
include::examples/generated/ispn-site-a.yaml[tag=infinispan-cache-authenticationSessions]
----
<1> The transaction mode.
<2> The locking mode used by the transaction.
<3> The remote site name.
<4> The cross-site communication strategy, in this case `SYNC`.
<5> The cross-site replication timeout.
<6> The cross-site replication failure policy.
--
+
The example above is the recommended configuration to achieve the best data consistency.
+
====
*Background information*

Deadlocks may occur in an active-active setup as entries are modified concurrently in both sites.

The `transaction.mode: NON_XA` ensures that the transaction is rolled back, keeping the data consistent, if this occurs.
The setting `backup.failurePolicy: FAIL` is required in this case.
It will throw an error that allows the transaction to be safely rolled back.
When this occurs, {project_name} will attempt a retry.

The `transaction.locking: PESSIMISTIC` is the only supported locking mode; `OPTIMISTIC` is not recommended due to its network costs.
The same settings also prevent one site from being updated while the other site is unreachable.

The `backup.strategy: SYNC` ensures the data is visible and stored in the other site when the {project_name} request is completed.

NOTE: The `locking.acquireTimeout` can be reduced to fail fast in a deadlock scenario.
The `backup.timeout` must always be higher than the `locking.acquireTimeout`.
====
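+
Since the `Cache` CR above is included from a generated file, a condensed sketch of how these settings fit together follows. The cluster name `infinispan`, the remote site `site-b`, and the timeout values are assumptions for illustration; the generated example is authoritative:
+
[source,yaml]
----
apiVersion: infinispan.org/v2alpha1
kind: Cache
metadata:
  name: authenticationsessions
spec:
  clusterName: infinispan          # assumed Infinispan CR name
  name: authenticationSessions     # the cache name used by Keycloak
  template: |-
    distributedCache:
      mode: "SYNC"
      locking:
        acquireTimeout: "4000"     # reduce to fail fast in a deadlock scenario
      transaction:
        mode: "NON_XA"             # roll back on conflicts to keep data consistent
        locking: "PESSIMISTIC"     # the only supported locking mode
      backups:
        site-b:                    # assumed remote site name
          backup:
            strategy: "SYNC"       # synchronous cross-site replication
            timeout: "13000"       # must stay higher than locking.acquireTimeout
            failurePolicy: "FAIL"  # surface an error so the transaction rolls back
----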
+
For `{site-b}`, the `Cache` CR is similar, except for the `backups.<name>` outlined in point 3 of the above example.
+
.authenticationSessions `Cache` CR in `{site-b}`
[source,yaml]
----
include::examples/generated/ispn-site-b.yaml[tag=infinispan-cache-authenticationSessions]
----

== Verifying the deployment

Confirm that the {jdgserver_name} cluster is formed, and the cross-site connection is established between the {ocp} clusters.

.Wait until the {jdgserver_name} cluster is formed
[source,bash,subs="+attributes"]
----
kubectl wait --for condition=WellFormed --timeout=300s infinispans.infinispan.org -n {ns} {cluster-name}
----

.Wait until the {jdgserver_name} cross-site connection is established
[source,bash,subs="+attributes"]
----
kubectl wait --for condition=CrossSiteViewFormed --timeout=300s infinispans.infinispan.org -n {ns} {cluster-name}
----

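If either command times out, inspecting the status conditions on the `Infinispan` CR can help with troubleshooting, for example:

.Inspect the `Infinispan` CR status conditions
[source,bash,subs="+attributes"]
----
kubectl get infinispans.infinispan.org -n {ns} {cluster-name} -o jsonpath='{.status.conditions}'
----
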
[#connecting-infinispan-to-keycloak]
== Connecting {jdgserver_name} with {project_name}

Now that the {jdgserver_name} server is running, here are the relevant {project_name} CR changes necessary to connect {project_name} to it. These changes will be required in the <@links.ha id="deploy-keycloak-kubernetes" /> {section}.

. Create a Secret with the username and password to connect to the external {jdgserver_name} deployment:
+
[source,yaml]
----
include::examples/generated/keycloak-ispn.yaml[tag=keycloak-ispn-secret]
----

. Extend the {project_name} Custom Resource with `additionalOptions` as shown below.
+
[NOTE]
====
All the memory, resource, and database configurations are omitted from the CR below as they have already been described in the <@links.ha id="deploy-keycloak-kubernetes" /> {section}.
Administrators should leave those configurations untouched.
====
+
[source,yaml]
----
include::examples/generated/keycloak-ispn.yaml[tag=keycloak-ispn]
----
<1> The hostname of the remote {jdgserver_name} cluster.
<2> The port of the remote {jdgserver_name} cluster.
This is optional and defaults to `11222`.
<3> The Secret `name` and `key` with the {jdgserver_name} username credential.
<4> The Secret `name` and `key` with the {jdgserver_name} password credential.
<5> The `spi-connections-infinispan-quarkus-site-name` is an arbitrary {jdgserver_name} site name which {project_name} needs for its embedded Infinispan caches deployment when a remote store is used.
This site name is related only to the embedded Infinispan caches and does not need to match any value from the external {jdgserver_name} deployment.
If you are using multiple sites for {project_name} in a cross-DC setup such as <@links.ha id="deploy-infinispan-kubernetes-crossdc" />, the site name must be different in each site.

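Because the CR above is included from a generated file, a minimal sketch of the `additionalOptions` entries discussed in the callouts follows. The hostname, Secret name, and site name are placeholder assumptions; the generated example is authoritative:

[source,yaml]
----
apiVersion: k8s.keycloak.org/v2alpha1
kind: Keycloak
metadata:
  name: keycloak
spec:
  additionalOptions:
    - name: cache-remote-host
      value: infinispan.keycloak.svc   # assumed hostname of the remote cluster
    - name: cache-remote-port
      value: "11222"                   # optional, defaults to 11222
    - name: cache-remote-username
      secret:                          # references the Secret created earlier
        name: remote-store-secret      # assumed Secret name
        key: username
    - name: cache-remote-password
      secret:
        name: remote-store-secret
        key: password
    - name: spi-connections-infinispan-quarkus-site-name
      value: keycloak-kubernetes       # arbitrary; must differ per site in cross-DC setups
----
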
=== Architecture

This connects {project_name} to {jdgserver_name} using TCP connections secured by TLS 1.3.
It uses {project_name}'s truststore to verify {jdgserver_name}'s server certificate.
As {project_name} is deployed using its Operator on OpenShift as listed in the prerequisites, the Operator has already added the `service-ca.crt`, which is used to sign {jdgserver_name}'s server certificates, to the truststore.
In other environments, add the necessary certificates to {project_name}'s truststore.

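In such environments, a minimal sketch of adding a CA certificate to {project_name}'s truststore through the {project_name} CR could look as follows; the Secret name is an assumption, and the Secret must contain the CA certificate used to sign {jdgserver_name}'s server certificates:

[source,yaml]
----
apiVersion: k8s.keycloak.org/v2alpha1
kind: Keycloak
metadata:
  name: keycloak
spec:
  truststores:
    infinispan-ca:                  # arbitrary key for this truststore entry
      secret:
        name: infinispan-ca-bundle  # assumed Secret holding the CA certificate
----
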
== Next steps

After the Aurora AWS database and {jdgserver_name} are deployed and running, use the procedure in the <@links.ha id="deploy-keycloak-kubernetes" /> {section} to deploy {project_name} and connect it to all previously created building blocks.

</@tmpl.guide>