[[_eviction]]
=== Eviction and Expiration

There are multiple different caches configured for {project_name}.
There is a realm cache that holds information about secured applications, general security data, and configuration options.
There is also a user cache that contains user metadata. Both caches default to a maximum of 10000 entries and use a least recently used eviction strategy.
Each of them is also tied to an object revisions cache that controls eviction in a clustered setup.
This cache is created implicitly and has twice the configured size. The same applies to the `authorization` cache, which holds
the authorization data. The `keys` cache holds data about external keys and does not need a dedicated revisions cache. Instead,
it has `expiration` explicitly declared on it, so the keys are periodically expired and must be downloaded again from external clients or
identity providers.

The eviction policy and max entries for these caches can be configured in the _standalone.xml_, _standalone-ha.xml_, or
_domain.xml_ file, depending on your <<_operating-mode, operating mode>>. In the configuration file, the `infinispan`
subsystem section looks similar to this:

[source,xml,subs="attributes+"]
----
<subsystem xmlns="{subsystem_infinispan_xml_urn}">
    <cache-container name="keycloak">
        <local-cache name="realms">
            <object-memory size="10000"/>
        </local-cache>
        <local-cache name="users">
            <object-memory size="10000"/>
        </local-cache>
        ...
        <local-cache name="keys">
            <object-memory size="1000"/>
            <expiration max-idle="3600000"/>
        </local-cache>
        ...
    </cache-container>
----

To limit or expand the number of allowed entries, simply add or edit the `object-memory` element or the `expiration` element of the particular cache
configuration.
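
For example, a minimal sketch of such an edit, raising the `users` cache limit and expiring entries that have been idle for 30 minutes (the values here are purely illustrative, not recommendations), could look like this:

[source,xml,subs="attributes+"]
----
<!-- Illustrative values only: allow more cached users and expire entries idle for 30 minutes (milliseconds) -->
<local-cache name="users">
    <object-memory size="20000"/>
    <expiration max-idle="1800000"/>
</local-cache>
----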

In addition, there are also the separate caches `sessions`, `clientSessions`, `offlineSessions`, `offlineClientSessions`,
`loginFailures` and `actionTokens`. These caches are distributed in a cluster environment and they are unbounded in size by default.
If they were bounded, it would be possible for some sessions to be lost. Expired sessions are cleared internally
by {project_name} itself to prevent these caches from growing
without limit. If you see memory issues due to a large number of sessions, you can try to:

* Increase the size of the cluster (more nodes in the cluster means that sessions are spread more equally among the nodes)
* Increase the memory available to the {project_name} server process
* Decrease the number of owners to ensure that caches are saved in only a single place (see the sketch after this list). See <<_replication>> for more details
* Disable `l1-lifespan` for distributed caches, also shown in the sketch after this list. See the Infinispan documentation for more details
* Decrease session timeouts, which can be done individually for each realm in the {project_name} admin console. However, this could affect
usability for end users. See link:{adminguide_timeouts_link}[{adminguide_timeouts_name}] for more details.
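
As a rough sketch of the owners and `l1-lifespan` suggestions above, the distributed session caches in _standalone-ha.xml_ could be declared along these lines (the attribute values are illustrative assumptions; verify them against the `infinispan` subsystem schema of your distribution):

[source,xml,subs="attributes+"]
----
<!-- Illustrative sketch: keep a single owner per cache entry and disable the L1 cache -->
<distributed-cache name="sessions" owners="1" l1-lifespan="0"/>
<distributed-cache name="offlineSessions" owners="1" l1-lifespan="0"/>
----
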

There is an additional replicated cache, `work`, which is mostly used to send messages among cluster nodes; it is also unbounded
by default. However, this cache should not cause any memory issues, as its entries are very short-lived.
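
For illustration, in a clustered configuration this cache is typically declared as a plain replicated cache with no size limit (a sketch, assuming the same `infinispan` subsystem as above):

[source,xml,subs="attributes+"]
----
<!-- Illustrative: the work cache is replicated to all nodes and left unbounded -->
<replicated-cache name="work"/>
----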