The current distributed cache implementation is built on top of https://infinispan.org[Infinispan], a high-performance, distributable in-memory data grid.
When you start {project_name} in production mode, by using the `start` command, caching is enabled and all {project_name} nodes in your network are discovered.
By default, caches use a UDP transport stack, so nodes are discovered using IP multicast transport based on UDP. For most production environments, there are better discovery alternatives to UDP available. {project_name} allows you to either choose from a set of pre-defined default transport stacks or to define your own custom stack, as you will see later in this {section}.
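For example, one of the pre-defined stacks can be selected at startup with the `cache-stack` option. The following sketch assumes the default distribution layout and uses the `kubernetes` stack purely for illustration:

[source,bash]
----
# Select a pre-defined transport stack instead of the default UDP-based one
bin/kc.sh start --cache-stack=kubernetes
----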
When you start {project_name} in development mode, by using the `start-dev` command, {project_name} uses only local caches and distributed caches are completely disabled by implicitly setting the `--cache=local` option.
To achieve an optimal runtime and avoid additional round-trips to the database, you should review
the configuration for each cache to make sure the maximum number of entries is aligned with the size of your database. The more entries
you can cache, the less often the server needs to fetch data from the database. You should evaluate the trade-offs between memory utilization and performance.
When one {project_name} node updates data in the shared database, all other nodes need to be aware of it, so they invalidate that data from their caches.
Authentication sessions are created whenever a user tries to authenticate. They are automatically destroyed once the authentication process
completes or when they reach their expiration time.
The `authenticationSessions` distributed cache is used to store authentication sessions and any other data associated with them
during the authentication process.
By relying on a distributable cache, authentication sessions are available to any node in the cluster so that users can be redirected
to any node without losing their authentication state. However, production-ready deployments should always consider session affinity and favor redirecting users
to the node where their sessions were initially created. By doing that, you avoid unnecessary state transfer between nodes and improve resource utilization.

Once a user is authenticated, a user session is created to track the user and their state so that they can seamlessly
authenticate to any application without being asked for their credentials again. For each application, the user authenticates with a client session, so that the server can track the applications the user is authenticated with and their state on a per-application basis.
User and client sessions are automatically destroyed whenever the user performs a logout, the client performs a token revocation, or when they reach their expiration time.
By relying on a distributable cache, cached user and client sessions are available to any node in the cluster so that users can be redirected
to any node without the need to load session data from the database. However, production-ready deployments should always consider session affinity and favor redirecting users
to the node where their sessions were initially created.
By default, these in-memory caches for user sessions and client sessions are limited to 10,000 entries per node, which reduces the overall memory usage of {project_name} for larger installations.
The internal caches will run with only a single owner for each cache entry.
Consider the trade-off between memory consumption and database utilization when setting different sizes for these caches. To do so, edit {project_name}'s cache config file (`conf/cache-ispn.xml`) and set a `+<memory max-count="..."/>+` for those caches.
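For example, raising the limit for the `sessions` cache might look like the following sketch. The surrounding attributes are assumptions based on the default cache definition and may differ in your file; the value `20000` is purely illustrative:

[source,xml]
----
<!-- Illustrative sketch: raise the per-node entry limit for the sessions cache -->
<distributed-cache name="sessions" owners="1">
    <expiration lifespan="-1"/>
    <memory max-count="20000"/>
</distributed-cache>
----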
.Volatile user sessions
By default, user sessions are stored in the database and loaded on-demand to the cache.
It is possible to configure {project_name} to store user sessions in the cache only and minimize the database utilization.
1. Since the cache is the only source of truth for user and client sessions, configure caches to not limit the number of entries and to replicate each entry to at least two nodes. To do so, edit {project_name}'s cache config file (`conf/cache-ispn.xml`) for caches `sessions` and `clientSessions` with the following update:
+
--
* Remove the `+<memory max-count="..."/>+`
* Change `owners` attribute of the `distributed-cache` tag to 2 or more
--
+
An example of the resulting configuration for the `sessions` cache might look like the following sketch, which assumes the default cache definition; apply the same change to the `clientSessions` cache.
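+
[source,xml]
----
<distributed-cache name="sessions" owners="2">
    <expiration lifespan="-1"/>
</distributed-cache>
----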
As an OpenID Connect Provider, the server is also capable of authenticating users and issuing offline tokens. Similarly to regular user and client sessions,
when an offline token is issued by the server upon successful authentication, the server also creates an offline user session and an offline client session.
Similar to the regular user and client session caches, the offline user and client session caches are also limited to 10,000 entries per node by default. Items that are evicted from memory are loaded on demand from the database when needed.
Consider the trade-off between memory consumption and database utilization when setting different sizes for these caches. To do so, edit {project_name}'s cache config file (`conf/cache-ispn.xml`) and set a `+<memory max-count="..."/>+` for those caches.
Action tokens are used for scenarios when a user needs to confirm an action asynchronously, for example in the emails sent by the forgot password flow.
The `actionTokens` distributed cache is used to track metadata about action tokens.
Each distributed cache that is a primary source of truth for its data (`authenticationSessions`, `loginFailures`, and `actionTokens`) has two owners by default, which means that two nodes hold a copy of the specific cache entries.
The default number of owners is enough to survive the failure of one node (owner) in a cluster setup with at least three nodes. You are free
to change the number of owners to better fit your availability requirements. To change the number of owners, open `conf/cache-ispn.xml` and change the value of `owners=<value>` for the distributed caches to your desired value.
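For example, raising the number of owners for the `authenticationSessions` cache to three might look like the following sketch; the other attributes are assumptions based on the default definition and may differ in your file:

[source,xml]
----
<!-- Illustrative sketch: three nodes hold a copy of each authentication session entry -->
<distributed-cache name="authenticationSessions" owners="3">
    <expiration lifespan="-1"/>
</distributed-cache>
----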
To configure the {project_name} server for high availability and a multi-node clustered setup, the CLI options `cache-remote-host`, `cache-remote-port`, `cache-remote-username`, and `cache-remote-password` were introduced to simplify configuration that would otherwise require changes to the XML file.
Once any of these CLI options is set, it is expected that no configuration related to a remote store is present in the XML file.
The CLI options `cache-remote-username` and `cache-remote-password` are optional and, if not set, {project_name} will connect to the {jdgserver_name} server without presenting any credentials.
If the {jdgserver_name} server has authentication enabled, {project_name} will fail to start.
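A minimal sketch of a start command using these options follows; the hostname, port, and credentials are placeholders, not defaults:

[source,bash]
----
# Point the embedded caches at an external cache server (placeholder values)
bin/kc.sh start \
  --cache-remote-host=infinispan.example.com \
  --cache-remote-port=11222 \
  --cache-remote-username=keycloak-cache-user \
  --cache-remote-password=change-me
----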
The following table shows transport stacks that are supported by {project_name} but that require some extra steps to work.
Note that _none_ of these stacks are Kubernetes / OpenShift stacks, so there is no need to enable the `google` stack if you want to run {project_name} on top of the Google Kubernetes Engine.
Instead, when you have a distributed cache setup running on AWS EC2 instances, you would need to set the stack to `ec2`, because EC2 does not support a default discovery mechanism such as UDP.
For more information and links to repositories with these dependencies, see the https://infinispan.org/docs/dev/titles/embedding/embedding.html#jgroups-cloud-discovery-protocols_cluster-transport[Infinispan documentation].
If none of the available transport stacks are enough for your deployment, you can change your cache configuration file
and define your own transport stack.
For more details, see https://infinispan.org/docs/stable/titles/server/server.html#using-inline-jgroups-stacks_cluster-transport[Using inline JGroups stacks].
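As a rough sketch of that approach, a custom stack could be declared in `conf/cache-ispn.xml` and referenced from the transport. The stack name and the base stack it extends are assumptions for illustration only:

[source,xml]
----
<infinispan>
    <!-- Declare a custom JGroups stack that extends the bundled tcp stack -->
    <jgroups>
        <stack name="my-custom-stack" extends="tcp">
            <!-- add or override protocols here -->
        </stack>
    </jgroups>
    <cache-container name="keycloak">
        <!-- Reference the custom stack from the transport -->
        <transport stack="my-custom-stack"/>
        <!-- existing cache definitions stay unchanged -->
    </cache-container>
</infinispan>
----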
The current Infinispan cache implementation should be secured by various security measures such as RBAC, ACLs, and transport stack encryption.
JGroups handles all the communication between {project_name} server instances, and it supports Java SSL sockets for TCP communication.
{project_name} uses CLI options to configure the TLS communication without having to create a customized JGroups stack or modify the cache XML file.
To enable TLS, `cache-embedded-mtls-enabled` must be set to `true`.
It requires a keystore with the certificate to use: `cache-embedded-mtls-key-store-file` sets the path to the keystore, and `cache-embedded-mtls-key-store-password` sets the password to decrypt it.
The truststore contains the valid certificates to accept connection from, and it can be configured with `cache-embedded-mtls-trust-store-file` (path to the truststore), and `cache-embedded-mtls-trust-store-password` (password to decrypt it).
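A minimal sketch of enabling mutual TLS for the embedded caches follows; the file paths and passwords are placeholders:

[source,bash]
----
# Encrypt and authenticate JGroups traffic between cluster nodes (placeholder values)
bin/kc.sh start \
  --cache-embedded-mtls-enabled=true \
  --cache-embedded-mtls-key-store-file=/path/to/cache-keystore.p12 \
  --cache-embedded-mtls-key-store-password=change-me \
  --cache-embedded-mtls-trust-store-file=/path/to/cache-truststore.p12 \
  --cache-embedded-mtls-trust-store-password=change-me
----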
To restrict unauthorized access, use a self-signed certificate for each {project_name} deployment.
For JGroups stacks with `UDP` or `TCP_NIO2`, see the http://jgroups.org/manual5/index.html#ENCRYPT[JGroups Encryption documentation] on how to set up the protocol stack.
For more information about securing cache communication, see the https://infinispan.org/docs/stable/titles/security/security.html#[Infinispan security guide].
To enable histograms for the cache metrics, set `cache-metrics-histograms-enabled` to `true`.
While these metrics provide more insight into the latency distribution, collecting them might have a performance impact, so you should be cautious about activating them in an already saturated system.
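For example, assuming metrics are already exposed via the `metrics-enabled` option, histograms could be turned on as in the following sketch:

[source,bash]
----
# Expose cache metrics including latency histograms (mind the collection overhead)
bin/kc.sh start --metrics-enabled=true --cache-metrics-histograms-enabled=true
----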