KEYCLOAK-4626 Authentication sessions

This commit is contained in:
mposolda 2017-05-26 22:25:01 +02:00
parent f322eba712
commit 53231b8a0f
6 changed files with 179 additions and 13 deletions


@ -224,6 +224,28 @@ This also allows you to omit the generation of the ID Token on purpose - for
Direct grants (OAuth2 Resource Owner Password Credentials Grant) and Service account logins (OAuth2 Client Credentials Grant) also don't use the ID Token by default now.
You need to explicitly add the `scope=openid` parameter to have the ID Token included.
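As a hedged illustration (the endpoint path follows standard OIDC conventions; the realm, client, and redirect URI below are hypothetical placeholders), an authorization request that explicitly includes `scope=openid` could be built like this:

[source,java]
----
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Hypothetical example of an OIDC authorization request URL that explicitly
// asks for an ID Token via scope=openid. Realm, client, and redirect URI
// are placeholders, not values from this document.
public class AuthUrlExample {

    public static String authorizationUrl(String baseUrl, String realm,
                                          String clientId, String redirectUri) {
        String encodedRedirect = URLEncoder.encode(redirectUri, StandardCharsets.UTF_8);
        return baseUrl + "/realms/" + realm + "/protocol/openid-connect/auth"
                + "?client_id=" + clientId
                + "&response_type=code"
                + "&scope=openid"        // without this, no ID Token is issued
                + "&redirect_uri=" + encodedRedirect;
    }

    public static void main(String[] args) {
        System.out.println(authorizationUrl(
                "http://localhost:8080/auth", "demo", "my-app",
                "http://localhost:8000/callback"));
    }
}
----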
===== Authentication sessions and Action tokens
We are working on support for multiple datacenters. As an initial step, we have introduced authentication sessions and action tokens.
The authentication session replaces the client session, which was used in previous versions. Action tokens are used especially for scenarios where an authenticator or required action provider
needs to send an email to the user and requires the user to click on a link in that email.
Here are the concrete changes related to this, which may affect you during migration.
The first change is the introduction of the new Infinispan caches `authenticationSessions` and `actionTokens` in `standalone.xml` or `standalone-ha.xml`. If you use our migration CLI, you
don't need to do anything, as your `standalone(-ha).xml` files will be migrated automatically.
The second change is to the signatures of some methods in the authentication SPI. This may affect you if you use a custom `Authenticator` or
`RequiredActionProvider`. The classes `AuthenticationFlowContext` and `RequiredActionContext` now allow you to retrieve the authentication session
instead of the client session.
We also added some initial support for sticky sessions. You may want to change your load balancer/proxy server configuration if you want to benefit from this and have better performance.
The route is added to the new `AUTH_SESSION_ID` cookie. More info is in the clustering documentation.
Another change is that `token.getClientSession()` was removed. This may affect you, for example, if you're using the Client Initiated Identity Broker Linking feature.
Finally, we added some new timeouts to the admin console. This allows you, for example, to specify different timeouts for email actions triggered by the admin and by the user himself.
==== Migrating to 2.5.1
===== Migration of old offline tokens


@ -70,23 +70,23 @@ The Username/Password form is not executed if there is an SSO Cookie set or a su
Let's walk through the steps from when a client first redirects to Keycloak to authenticate the user.
. The OpenID Connect or SAML protocol provider unpacks relevant data, verifies the client and any signatures.
It creates an AuthenticationSessionModel.
It looks up what the browser flow should be, then starts executing the flow.
. The flow looks at the cookie execution and sees that it is an alternative.
It loads the cookie provider.
It checks to see if the cookie provider requires that a user already be associated with the authentication session.
Cookie provider does not require a user.
If it did, the flow would abort and the user would see an error screen.
Cookie provider then executes.
Its purpose is to see if there is an SSO cookie set.
If there is one set, it is validated and the UserSessionModel is verified and associated with the AuthenticationSessionModel.
The Cookie provider returns a success() status if the SSO cookie exists and is validated.
Since the cookie provider returned success and each execution at this level of the flow is ALTERNATIVE, no other execution is executed and this results in a successful login.
If there is no SSO cookie, the cookie provider returns with a status of attempted(). This means there was no error condition, but no success either.
The provider tried, but the request just wasn't set up to handle this authenticator.
. Next the flow looks at the Kerberos execution.
This is also an alternative.
The Kerberos provider also does not require a user to already be set up and associated with the AuthenticationSessionModel, so this provider is executed.
Kerberos uses the SPNEGO browser protocol.
This requires a series of challenge/responses between the server and client exchanging negotiation headers.
The kerberos provider does not see any negotiate header, so it assumes that this is the first interaction between the server and client.
@ -94,7 +94,7 @@ Let's walk through the steps from when a client first redirects to keycloak to a
A forceChallenge() means that this HTTP response cannot be ignored by the flow and must be returned to the client.
If instead the provider returned a challenge() status, the flow would hold the challenge response until all other alternatives are attempted.
So, in this initial phase, the flow would stop and the challenge response would be sent back to the browser.
If the browser then responds with a successful negotiate header, the provider associates the user with the AuthenticationSession and the flow ends because the rest of the executions on this level of the flow are all alternatives.
Otherwise, again, the kerberos provider sets an attempted() status and the flow continues.
. The next execution is a subflow called Forms.
The executions for this subflow are loaded and the same processing logic occurs
@ -107,7 +107,7 @@ Let's walk through the steps from when a client first redirects to keycloak to a
If the user entered an invalid username or password, a new challenge response is created and a status of failureChallenge() is set for this execution.
A failureChallenge() means that there is a challenge, but that the flow should log this as an error in the error log.
This error log can be used to lock accounts or IP Addresses that have had too many login failures.
If the username and password are valid, the provider associates the UserModel with the AuthenticationSessionModel and returns a status of success()
. The next execution is the OTP Form.
This provider requires that a user has been associated with the flow.
This requirement is satisfied because the UsernamePassword provider already associated the user with the flow.
@ -115,7 +115,7 @@ Let's walk through the steps from when a client first redirects to keycloak to a
If the user is not configured, and this execution is required, then the flow will set up a required action that the user must perform after authentication is complete.
For OTP, this means the OTP setup page.
If the execution was optional, then this execution is skipped.
. After the flow is complete, the authentication processor creates a UserSessionModel and associates it with the AuthenticationSessionModel.
It then checks to see if the user is required to complete any required actions before logging in.
. First, each required action's evaluateTriggers() method is called.
This allows the required action provider to figure out if there is some state that might trigger the action to be fired.
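The ALTERNATIVE handling described in the steps above can be sketched with a simplified model. This is not Keycloak's actual SPI, just an illustration of the status logic: the first execution that returns success() ends the level, while attempted() falls through to the next alternative.

[source,java]
----
import java.util.List;
import java.util.function.Supplier;

// Simplified model (not Keycloak's real SPI) of a flow evaluating a list of
// ALTERNATIVE executions: the first authenticator returning SUCCESS ends the
// flow; ATTEMPTED means "no error, but not handled here, try the next one".
public class AlternativeFlow {

    public enum Status { SUCCESS, ATTEMPTED }

    public static Status run(List<Supplier<Status>> executions) {
        for (Supplier<Status> execution : executions) {
            if (execution.get() == Status.SUCCESS) {
                return Status.SUCCESS;   // e.g. SSO cookie validated: stop here
            }
            // ATTEMPTED: fall through to the next alternative (Kerberos, Forms, ...)
        }
        return Status.ATTEMPTED;         // nothing succeeded at this level
    }

    public static void main(String[] args) {
        // Cookie and Kerberos only "attempt"; the Forms subflow succeeds.
        Status result = run(List.of(
                () -> Status.ATTEMPTED,  // cookie: no SSO cookie present
                () -> Status.ATTEMPTED,  // kerberos: no negotiate header
                () -> Status.SUCCESS));  // forms: username/password valid
        System.out.println(result);      // SUCCESS
    }
}
----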


@ -43,8 +43,8 @@ nonce::
This is a random string that your application must generate
hash::
This is a Base64 URL encoded hash, generated by Base64 URL encoding a SHA_256 hash of `nonce` + `token.getSessionState()` + `token.getIssuedFor()` + `provider`
The token variables are obtained from the OIDC access token. Basically you are hashing the random nonce, the user session id, the client id, and the identity
provider alias you want to access.
Here's an example of Java Servlet code that generates the URL to establish the account link.
@ -54,7 +54,7 @@ Here's an example of Java Servlet code that generates the URL to establish the a
----
KeycloakSecurityContext session = (KeycloakSecurityContext) httpServletRequest.getAttribute(KeycloakSecurityContext.class.getName());
AccessToken token = session.getToken();
String clientId = token.getIssuedFor();
String nonce = UUID.randomUUID().toString();
MessageDigest md = null;
try {
@ -62,7 +62,7 @@ Here's an example of Java Servlet code that generates the URL to establish the a
} catch (NoSuchAlgorithmException e) {
throw new RuntimeException(e);
}
String input = nonce + token.getSessionState() + clientId + provider;
byte[] check = md.digest(input.getBytes(StandardCharsets.UTF_8));
String hash = Base64Url.encode(check);
request.getSession().setAttribute("hash", hash);
@ -71,7 +71,7 @@ Here's an example of Java Servlet code that generates the URL to establish the a
.path("/auth/realms/{realm}/broker/{provider}/link")
.queryParam("nonce", nonce)
.queryParam("hash", hash)
.queryParam("client_id", clientId)
.queryParam("redirect_uri", redirectUri).build(realm, provider).toString();
----


@ -36,6 +36,7 @@
... link:server_installation/topics/clustering/recommended.adoc[Recommended Network Architecture]
... link:server_installation/topics/clustering/example.adoc[Cluster Example]
... link:server_installation/topics/clustering/load-balancer.adoc[Setting Up a Load Balancer or Proxy]
... link:server_installation/topics/clustering/sticky-sessions.adoc[Sticky Sessions Support]
... link:server_installation/topics/clustering/multicast.adoc[Multicast Network Setup]
... link:server_installation/topics/clustering/securing-cluster-comm.adoc[Securing Cluster Communication]
... link:server_installation/topics/clustering/serialized.adoc[Serialized Cluster Startup]


@ -1,7 +1,7 @@
=== Replication and Failover
The `sessions`, `authenticationSessions`, `offlineSessions` and `loginFailures` caches are the only caches that may perform replication. Entries are
not replicated to every single node; instead, one or more nodes are chosen as owners of that data. If a node is not the owner of a specific cache entry, it queries
the cluster to obtain it. What this means for failover is that if all the nodes that own a piece of data go down, that data
is lost forever. By default, {{book.project.name}} only specifies one owner for data. So if that one node goes down


@ -0,0 +1,143 @@
=== Sticky sessions
A typical cluster deployment consists of a load balancer (reverse proxy) and two or more {{book.project.name}} servers on a private network. For performance purposes,
it may be useful if the load balancer forwards all requests related to a particular browser session to the same {{book.project.name}} backend node.
The reason is that {{book.project.name}} uses an Infinispan distributed cache under the covers to save data related to the current authentication session and user session.
Infinispan distributed caches are configured with one owner by default. That means that a particular session is saved on just one cluster node, and the other nodes need
to look up the session remotely if they want to access it.
For example, if the authentication session with ID `123` is saved in the Infinispan cache on `node1`, and `node2` then needs to look up this session,
it has to send a request to `node1` over the network to retrieve the particular session entity.
It is beneficial if a particular session entity is always available locally, which can be achieved with the help of sticky sessions.
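As an illustration of the owner-assignment idea (this is not Infinispan's actual consistent-hash algorithm, and the node names are hypothetical), a simplified sketch might look like this:

[source,java]
----
import java.util.List;

// Simplified sketch of how a distributed cache might pick the primary owner
// of a session entry: hash the session ID onto the list of cluster nodes.
// Infinispan's real consistent-hash algorithm is more involved; this only
// illustrates the principle that every node deterministically computes the
// same owner for the same session ID.
public class SessionOwner {

    public static String primaryOwner(String sessionId, List<String> nodes) {
        // Math.floorMod keeps the index non-negative even for negative hash codes
        int index = Math.floorMod(sessionId.hashCode(), nodes.size());
        return nodes.get(index);
    }

    public static void main(String[] args) {
        List<String> nodes = List.of("node1", "node2");
        // Any node can compute the owner locally, so any node knows where
        // to find (or route) the session entry without a broadcast.
        System.out.println("Session 123 is owned by " + primaryOwner("123", nodes));
    }
}
----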
The workflow in a cluster environment with a public frontend load balancer and two backend {{book.project.name}} nodes can look like this:
* The user sends an initial request to see the {{book.project.name}} login screen.
* The request is served by the frontend load balancer, which forwards it to some random node (e.g. `node1`). Strictly speaking, the node doesn't need to be random,
but can be chosen according to some other criteria (client IP address etc). It all depends on the implementation and configuration of the underlying load balancer (reverse proxy).
* {{book.project.name}} creates an authentication session with a random ID (e.g. `123`) and saves it to the Infinispan cache.
* The Infinispan distributed cache assigns the primary owner of the session based on the hash of the session ID.
See the link:http://infinispan.org/docs/8.2.x/user_guide/user_guide.html#distribution_mode[Infinispan documentation] for more details around this.
Let's assume that Infinispan assigned `node2` to be the owner of this session.
* {{book.project.name}} creates the cookie `AUTH_SESSION_ID` with the format `<session-id>.<owner-node-id>` . In our example, it will be `123.node2` .
* The response is returned to the user with the {{book.project.name}} login screen and the `AUTH_SESSION_ID` cookie in the browser.
From this point on, it is beneficial if the load balancer forwards all subsequent requests to `node2`, as this is the node that owns the authentication session with ID `123`,
and hence Infinispan can look up the session locally. After authentication is finished, the authentication session is converted to a user session, which will also be saved on
`node2` because it has the same ID `123` .
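For illustration, the `AUTH_SESSION_ID` value from the example above can be split back into its session ID and route parts. This is a hedged sketch of what a load balancer or debugging tool might do, not Keycloak code:

[source,java]
----
// Sketch: the AUTH_SESSION_ID cookie value has the form
// "<session-id>.<owner-node-id>", e.g. "123.node2". Splitting on the last
// '.' recovers the route a sticky load balancer keys on.
public class AuthSessionCookie {

    public static String sessionId(String cookieValue) {
        int dot = cookieValue.lastIndexOf('.');
        return dot < 0 ? cookieValue : cookieValue.substring(0, dot);
    }

    public static String route(String cookieValue) {
        int dot = cookieValue.lastIndexOf('.');
        return dot < 0 ? null : cookieValue.substring(dot + 1);
    }

    public static void main(String[] args) {
        String cookie = "123.node2";
        System.out.println(sessionId(cookie)); // 123
        System.out.println(route(cookie));     // node2
    }
}
----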
Sticky sessions are not mandatory for the cluster setup; however, they are good for performance for the reasons mentioned above. You need to configure your load balancer to stick
on the `AUTH_SESSION_ID` cookie. How exactly to do this depends on your load balancer.
Some load balancers can be configured to add the route information themselves instead of relying on the backend {{book.project.name}} node. However, adding the route by
{{book.project.name}} is smarter, as {{book.project.name}} knows who the owner of the particular session entity is and can route to that node, which is not necessarily the local node.
We plan to support a setup where the route info will be added to the `AUTH_SESSION_ID` cookie by the load balancer instead of the {{book.project.name}} server. However,
adding the cookie by {{book.project.name}} will still be the preferred option.
==== Example cluster setup with mod_cluster
In this example, we will use link:http://mod-cluster.jboss.org/[Mod Cluster] as the load balancer. One of the key features of mod_cluster is that there is not much
configuration on the load balancer side. Instead, it requires support on the backend node side. Backend nodes communicate with the load balancer through a
dedicated protocol called MCMP and notify the load balancer about various events (e.g. a node joined or left the cluster, a new application was deployed, etc).
The example setup will consist of one {{book.appserver.name}} {{book.appserver.version}} load balancer node and two {{book.project.name}} nodes.
The clustering example requires multicast to be enabled on the machine's loopback network interface. This can be done by running the following commands with root privileges (on Linux):
[source]
----
route add -net 224.0.0.0 netmask 240.0.0.0 dev lo
ifconfig lo multicast
----
===== Load Balancer Configuration
Unzip the {{book.appserver.name}} {{book.appserver.version}} server somewhere. The assumed location is `EAP_LB` .
Edit the `EAP_LB/standalone/configuration/standalone.xml` file. In the undertow subsystem, add the mod_cluster configuration under filters like this:
[source,xml]
----
<subsystem xmlns="urn:jboss:domain:undertow:4.0">
...
<filters>
...
<mod-cluster name="modcluster" advertise-socket-binding="modcluster"
advertise-frequency="${modcluster.advertise-frequency:2000}"
management-socket-binding="http" enable-http2="true"/>
</filters>
----
and `filter-ref` under `default-host` like this:
[source,xml]
----
<host name="default-host" alias="localhost">
...
<filter-ref name="modcluster"/>
</host>
----
Then under `socket-binding-group` add this group:
[source,xml]
----
<socket-binding name="modcluster" port="0"
multicast-address="${jboss.modcluster.multicast.address:224.0.1.105}"
multicast-port="23364"/>
----
Save the file and run the server:
[source]
----
cd $EAP_LB/bin
./standalone.sh
----
===== Backend node configuration
Unzip the {{book.project.name}} server distribution to some location. The assumed location is `RHSSO_NODE1` .
Edit `RHSSO_NODE1/standalone/configuration/standalone-ha.xml` and configure the datasource against the shared database.
See the <<fake/../../database/checklist.adoc#_rdbms-setup-checklist, Database chapter >> for more details.
In the undertow subsystem, add the `session-config` under the `servlet-container` element:
[source,xml]
----
<servlet-container name="default">
<session-cookie name="AUTH_SESSION_ID" http-only="true" />
...
</servlet-container>
----
Then you can configure `proxy-address-forwarding` as described in the chapter <<fake/../../clustering/load-balancer.adoc#_setting-up-a-load-balancer-or-proxy, Load Balancer >> .
Note that mod_cluster uses the AJP connector by default, so that is the one you need to configure.
That's all, as mod_cluster is already configured.
The node name of the {{book.project.name}} server can be detected automatically based on the hostname of the current server. However, for more fine-grained control,
it is recommended to use the system property `jboss.node.name` to specify the node name directly. This is especially useful when you test with two backend nodes on the
same physical server. You can then run the startup command like this:
[source]
----
cd $RHSSO_NODE1
./standalone.sh -c standalone-ha.xml -Djboss.socket.binding.port-offset=100 -Djboss.node.name=node1
----
Configure the second backend server in the same way and run it with a different port offset and node name.
[source]
----
cd $RHSSO_NODE2
./standalone.sh -c standalone-ha.xml -Djboss.socket.binding.port-offset=200 -Djboss.node.name=node2
----
Access the server at `http://localhost:8080/auth` . Creation of the admin user is possible only from a local address and without load balancer (proxy) access,
so you first need to access a backend node directly at `http://localhost:8180/auth` to create the admin user.