Fix typos found by codespell in docs (#28890)

Run `chmod -x` on files that need not be executable.

Signed-off-by: Dimitri Papadopoulos <3234522+DimitriPapadopoulos@users.noreply.github.com>
Signed-off-by: Alexander Schwartz <aschwart@redhat.com>
Co-authored-by: Alexander Schwartz <aschwart@redhat.com>
Authored by Dimitri Papadopoulos Orfanos on 2024-05-03 14:41:16 +02:00, committed by GitHub
parent df47dee924
commit 9db1443367
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
45 changed files with 32 additions and 32 deletions


@@ -12,7 +12,7 @@ To manually determine a license, clone/checkout the source code at the tag or co
## How to store license info
Typically, each zip that gets distibuted to users needs to contain a license XML and individual license files, plus an html file generated at build time.
Typically, each zip that gets distributed to users needs to contain a license XML and individual license files, plus an html file generated at build time.
The XML and individual files are maintained in git. When you change or add a dependency that is a part of:

docs/documentation/License.html Executable file → Normal file

docs/documentation/README.md Executable file → Normal file



@@ -118,7 +118,7 @@ ifeval::[{project_community}==true]
Declarative user profile is still a preview feature in this release, but we are working hard on promoting it to a supported feature. Feedback is welcome.
If you find any issues or have any improvements in mind, you are welcome to create https://github.com/keycloak/keycloak/issues/new/choose[Github issue],
ideally with the label `area/user-profile`. It is also recommended to check the link:{upgradingguide_link}[{upgradingguide_name}] with the migration changes for this
release for some additional informations related to the migration.
release for some additional information related to the migration.
endif::[]




docs/documentation/server_admin/images/add-user.png Executable file → Normal file

docs/documentation/server_admin/images/credentials.png Executable file → Normal file

docs/documentation/server_admin/images/domain-mode.png Executable file → Normal file






docs/documentation/server_admin/images/users.png Executable file → Normal file


@@ -196,7 +196,7 @@ The `create` and `update` commands send a JSON body to the server. You can use `
====
The value in name=value pairs used in --set, -s options, are assumed to be JSON. If it cannot be parsed as valid JSON, then it will be sent to the server as a text value.
If the value is enclosed in quotes after shell processing, but is not valid JSON, the quotes will be stripped and the rest of the value will be sent as text. This behavior is deprecated, please consider specifying your value without qoutes or a valid JSON string literal with double quotes.
If the value is enclosed in quotes after shell processing, but is not valid JSON, the quotes will be stripped and the rest of the value will be sent as text. This behavior is deprecated, please consider specifying your value without quotes or a valid JSON string literal with double quotes.
====
Several methods are available in {project_name} to update a resource using the `update` command. You can determine the current state of a resource and save it to a file, edit that file, and send it to the server for an update.


@@ -271,7 +271,7 @@ Now you configure the flow for the first authentication level.
. Enter `Level 1` as an alias.
. Enter `1` for the Level of Authentication (LoA).
. Set Max Age to *36000*. This value is in seconds and it is equivalent to 10 hours, which is the default `SSO Session Max` timeout set in the realm.
As a result, when a user authenticates with this level, subsequent SSO logins can re-use this level and the user does not need to authenticate
As a result, when a user authenticates with this level, subsequent SSO logins can reuse this level and the user does not need to authenticate
with this level until the end of the user session, which is 10 hours by default.
. Click *Save*
+


@@ -139,7 +139,7 @@ endif::[]
==== Credential delegation
Kerberos supports the credential delegation. Applications may need access to the Kerberos ticket so they can re-use it to interact with other services secured by Kerberos. Because the {project_name} server processed the SPNEGO protocol, you must propagate the GSS credential to your application within the OpenID Connect token claim or a SAML assertion attribute. {project_name} transmits this to your application from the {project_name} server. To insert this claim into the token or assertion, each application must enable the built-in protocol mapper `gss delegation credential`. This mapper is available in the *Mappers* tab of the application's client page. See <<_protocol-mappers, Protocol Mappers>> chapter for more details.
Kerberos supports the credential delegation. Applications may need access to the Kerberos ticket so they can reuse it to interact with other services secured by Kerberos. Because the {project_name} server processed the SPNEGO protocol, you must propagate the GSS credential to your application within the OpenID Connect token claim or a SAML assertion attribute. {project_name} transmits this to your application from the {project_name} server. To insert this claim into the token or assertion, each application must enable the built-in protocol mapper `gss delegation credential`. This mapper is available in the *Mappers* tab of the application's client page. See <<_protocol-mappers, Protocol Mappers>> chapter for more details.
Applications must deserialize the claim it receives from {project_name} before using it to make GSS calls against other services. When you deserialize the credential from the access token to the GSSCredential object, create the GSSContext with this credential passed to the `GSSManager.createContext` method. For example:
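A minimal sketch of this pattern (reading the serialized credential from the token claim, deserializing it, and creating a `GSSContext` from it) could look like the following; the `KerberosSerializationUtils` helper and the claim handling are assumptions, not part of this commit, and should be verified against the Keycloak APIs in use:

```java
import org.ietf.jgss.GSSContext;
import org.ietf.jgss.GSSCredential;
import org.ietf.jgss.GSSManager;
import org.ietf.jgss.GSSName;
import org.ietf.jgss.Oid;

public class DelegatedCredentialExample {

    // serializedGssCredential: value of the `gss delegation credential` claim taken
    // from the access token (assumed to be the serialized credential as a string).
    public GSSContext contextFor(String serializedGssCredential, String servicePrincipal) throws Exception {
        // Assumption: org.keycloak.common.util.KerberosSerializationUtils can turn the
        // claim value back into a GSSCredential.
        GSSCredential credential = org.keycloak.common.util.KerberosSerializationUtils
                .deserializeCredential(serializedGssCredential);

        GSSManager manager = GSSManager.getInstance();
        GSSName serviceName = manager.createName(servicePrincipal, GSSName.NT_HOSTBASED_SERVICE);
        Oid krb5Mechanism = new Oid("1.2.840.113554.1.2.2"); // Kerberos v5 mechanism OID

        // Create the context with the delegated credential so that calls to other
        // Kerberos-secured services run on behalf of the original user.
        return manager.createContext(serviceName, krb5Mechanism, credential, GSSContext.DEFAULT_LIFETIME);
    }
}
```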


@@ -24,7 +24,7 @@ With Counter-Based One Time Passwords (HOTP), {project_name} uses a shared count
TOTP is more secure than HOTP because the matchable OTP is valid for a short window of time, while the OTP for HOTP is valid for an indeterminate amount of time. HOTP is more user-friendly than TOTP because no time limit exists to enter the OTP.
HOTP requires a database update every time the server increments the counter. This update is a performance drain on the authentication server during heavy load. To increase efficiency, TOTP does not remember passwords used, so there is no need to perform database updates. The drawback is that it is possible to re-use TOTPs in the valid time interval.
HOTP requires a database update every time the server increments the counter. This update is a performance drain on the authentication server during heavy load. To increase efficiency, TOTP does not remember passwords used, so there is no need to perform database updates. The drawback is that it is possible to reuse TOTPs in the valid time interval.
==== TOTP configuration options


@@ -14,7 +14,7 @@ image:images/paypal-add-identity-provider.png[Add Identity Provider]
.. Note the *Client ID* and *Client Secret*. Click the *Show* link to view the secret.
.. Ensure *Log in with PayPal* is checked.
.. Under Log in with PayPal click on *Advanced Settings*.
.. Set the value of the *Return URL* field to the value of *Redirect URI* from {project_name}. Note that the URL can not contain `localhost`. If you want to use {project_name} locally, replace the `localhost` in the *Return URL* by `127.0.0.1` and then access {project_name} using `127.0.0.1` in the browser intead of `localhost`.
.. Set the value of the *Return URL* field to the value of *Redirect URI* from {project_name}. Note that the URL can not contain `localhost`. If you want to use {project_name} locally, replace the `localhost` in the *Return URL* by `127.0.0.1` and then access {project_name} using `127.0.0.1` in the browser instead of `localhost`.
.. Ensure *Full Name* and *Email* fields are checked.
.. Click *Save* and then *Save Changes*.
. In {project_name}, paste the value of the `Client ID` into the *Client ID* field.


@@ -4,7 +4,7 @@
= Role scope mappings
[role="_abstract"]
On creation of an OIDC access token or SAML assertion, the user role mappings become claims within the token or assertion. Applications use these claims to make access decisions on the resources controlled by the application. {project_name} digitally signs access tokens and applications re-use them to invoke remotely secured REST services. However, these tokens have an associated risk. An attacker can obtain these tokens and use their permissions to compromise your networks. To prevent this situation, use _Role Scope Mappings_.
On creation of an OIDC access token or SAML assertion, the user role mappings become claims within the token or assertion. Applications use these claims to make access decisions on the resources controlled by the application. {project_name} digitally signs access tokens and applications reuse them to invoke remotely secured REST services. However, these tokens have an associated risk. An attacker can obtain these tokens and use their permissions to compromise your networks. To prevent this situation, use _Role Scope Mappings_.
_Role Scope Mappings_ limit the roles declared inside an access token. When a client requests a user authentication, the access token they receive contains only the role mappings that are explicitly specified for the client's scope. The result is that you limit the permissions of each individual access token instead of giving the client access to all the users permissions.


@@ -404,7 +404,7 @@ number of digits expected in a determined position. For instance, a format `(\{2
|Number UnFormat
|If set, the `data-kcNumberUnFormat` attribute is added to the field to format the value based on a given format before submitting the form. This annotation
is useful if you do not want to store any format for a specific attribute but only format the value on the client side. For instance, if the current value
is `(00) 00000-0000`, the value will change to `00000000000` if you set the value `\{11}` to this annotation or any other format you want by specifying a set of one or ore group of digits.
is `(00) 00000-0000`, the value will change to `00000000000` if you set the value `\{11}` to this annotation or any other format you want by specifying a set of one or more group of digits.
Make sure to add validators to perform server-side validations before storing values.
|===
@@ -897,4 +897,4 @@ If you want to use internationalized messages when configuring attributes, attri
set their display name, description, and values, using a placeholder that will translate to a message from a message bundle.
For that, you can use a placeholder to resolve messages keys such as `${myAttributeName}`, where `myAttributeName` is the key for a message in a message bundle. For more details,
look at link:{developerguide_link}#messages[{developerguide_name}] about how to add message bundles to custom themes.
look at link:{developerguide_link}#messages[{developerguide_name}] about how to add message bundles to custom themes.



@@ -67,7 +67,7 @@ action token is invalidated.
As action token is just a signed JWT with few mandatory fields (see <<_action_token_anatomy,Anatomy of action token>>
above), it can be serialized and signed as such using Keycloak's `JWSBuilder` class. This way has been already
implemented in `serialize(session, realm, uriInfo)` method of `org.keycloak.authentication.actiontoken.DefaultActionToken`
and can be leveraged by implementors by using that class for tokens instead of plain `JsonWebToken`.
and can be leveraged by implementers by using that class for tokens instead of plain `JsonWebToken`.
The following example shows the implementation of a simple action token. Note that the class must have a private constructor without any arguments.
This is necessary to deserialize the token class from JWT.
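A minimal sketch of such a token class, assuming `DefaultActionToken` offers a constructor taking the user id, token type, absolute expiration and compound authentication session id (verify the exact signature against the Keycloak version in question):

```java
import org.keycloak.authentication.actiontoken.DefaultActionToken;

public class DemoActionToken extends DefaultActionToken {

    public static final String TOKEN_TYPE = "demo-token";

    public DemoActionToken(String userId, int absoluteExpirationInSecs, String compoundAuthenticationSessionId) {
        // Assumption: DefaultActionToken has a constructor of roughly this shape;
        // the null argument stands in for an optional action verification nonce.
        super(userId, TOKEN_TYPE, absoluteExpirationInSecs, null, compoundAuthenticationSessionId);
    }

    private DemoActionToken() {
        // Private no-argument constructor, required so the token class
        // can be deserialized from the JWT.
    }
}
```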


@@ -291,7 +291,7 @@ following content:
[source]
----
usernameOrEmail=Brukernavn
password=Passord
password=Password
----
+
If you omit a translation for messages, they will use English.


@@ -15,7 +15,7 @@ Default `Client ID` mapper of `Service Account Client` has been changed. `Token
`clientId` userSession note still exists.
= Keycloak JS adapter must be instanciated with the `new` operator
= Keycloak JS adapter must be instantiated with the `new` operator
Historically it has been possible to create an instance of the Keycloak JS adapter by calling the `Keycloak()` function directly:


@@ -256,9 +256,9 @@ your realm configuration to eliminate the use of the `http challenge` flow befor
If you use the `http challenge` flow as `Authentication Flow Binding Override` for any client, the migration would complete, but you could no longer log in to that client.
After the migration, you would need to re-create the flow and update the configuration of your clients to use the new/differentJson flow.
= Removing thirdparty dependencies
= Removing third party dependencies
The removal of openshift-integration allows us to remove few thirdparty dependencies from Keycloak distribution. This includes
The removal of openshift-integration allows us to remove few third party dependencies from Keycloak distribution. This includes
`openshift-rest-client`, `okio-jvm`, `okhttp`, `commons-lang`, `commons-compress`, `jboss-dmr` and `kotlin-stdlib`. This means that if you use
any of these libraries as dependencies of your own providers deployed to Keycloak server, you may also need to copy those `jar` files
explicitly to the Keycloak distribution `providers` directory as well.


@@ -1,6 +1,6 @@
= Never expires option removed from client advanced settings combos
The option `Never expires` is now removed from all the combos of the Advanced Settings client tab. This option was misleading because the different lifespans or idle timeouts were never infinite, but limited by the general user session or realm values. Therefore, this option is removed in favor of the other two remaining options: `Inherits from the realm settings` (the client uses general realm timeouts) and `Expires in` (the value is overriden for the client). Internally the `Never expires` was represented by `-1`. Now that value is shown with a warning in the Admin Console and cannot be set directly by the administrator.
The option `Never expires` is now removed from all the combos of the Advanced Settings client tab. This option was misleading because the different lifespans or idle timeouts were never infinite, but limited by the general user session or realm values. Therefore, this option is removed in favor of the other two remaining options: `Inherits from the realm settings` (the client uses general realm timeouts) and `Expires in` (the value is overridden for the client). Internally the `Never expires` was represented by `-1`. Now that value is shown with a warning in the Admin Console and cannot be set directly by the administrator.
= New LinkedIn OpenID Connect social provider


@@ -5,7 +5,7 @@ To properly support multiple user storage providers within a realm, the `org.key
interface has changed.
The `decorateUserProfile` method is no longer invoked when parsing the user profile configuration for the first time (and caching it),
but everytime a user is being managed through the user profile provider. As a result, the method changed its contract to:
but every time a user is being managed through the user profile provider. As a result, the method changed its contract to:
```java
List<AttributeMetadata> decorateUserProfile(String providerId, UserProfileMetadata metadata)


@@ -271,7 +271,7 @@ current version, the update is not possible and you will be immediately informed
The fields `username`, `email`, `firstName` and `lastName` in the `UserModel` are migrated to custom attributes as a preparation for adding more sophisticated user profiles to {project_name} in an upcoming version.
If a database contains users with custom attributes of that exact name, the custom attributes will need to be migrated before upgrading. This migration does not occur automatically. Otherwise, they will not be read from the database anymore and possibly deleted.
This situation implies that the `username` can now also be accessed and set via `UserModel.getFirstAttribute(UserModel.USERNAME)`. Similar implications exist for other fields.
Implementors of SPIs subclassing the `UserModel` directly or indirectly should ensure that the behavior between `setUsername` and `setSingleAttribute(UserModel.USERNAME, ...)` (and similar for the other fields) is consistent.
Implementers of SPIs subclassing the `UserModel` directly or indirectly should ensure that the behavior between `setUsername` and `setSingleAttribute(UserModel.USERNAME, ...)` (and similar for the other fields) is consistent.
Users of the policy evaluation feature have to adapt their policies if they use the number of attributes in their evaluations since every user will now have four new attributes by default.
The public API of `UserModel` did not change. No changes to frontend resources or SPIs accessing user data are necessary.
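As a rough illustration of the consistency requirement above (class name and storage details are purely illustrative, not from this commit), a custom `UserModel` implementation should let `setUsername` and `setSingleAttribute(UserModel.USERNAME, ...)` observe the same value:

```java
import org.keycloak.models.UserModel;

public abstract class CustomUserAdapter implements UserModel {

    private String username;

    @Override
    public String getUsername() {
        return username;
    }

    @Override
    public void setUsername(String username) {
        // Delegate to the attribute-based setter so both code paths stay consistent.
        setSingleAttribute(UserModel.USERNAME, username);
    }

    @Override
    public void setSingleAttribute(String name, String value) {
        if (UserModel.USERNAME.equals(name)) {
            this.username = value;
        }
        // ... store other attributes as appropriate for the provider
    }

    @Override
    public String getFirstAttribute(String name) {
        if (UserModel.USERNAME.equals(name)) {
            return username;
        }
        return null;
    }
}
```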
@@ -894,7 +894,7 @@ You also need to update `keycloak-server.json` as it's changed due to this.
==== Adapter subsystems only bring in dependencies if Keycloak is on
Previously, if you had installed our SAML or OIDC Keycloak subsystem adapters into WildFly or JBoss EAP, we would automatically include Keycloak client jars into EVERY application irregardless if you were using Keycloak or not.
Previously, if you had installed our SAML or OIDC Keycloak subsystem adapters into WildFly or JBoss EAP, we would automatically include Keycloak client jars into EVERY application regardless if you were using Keycloak or not.
These libraries are now only added to your deployment if you have Keycloak authentication turned on for that adapter (via the subsystem, or auth-method in web.xml)
==== Client Registration service endpoints moved


@@ -20,7 +20,7 @@ include::partials/blueprint-disclaimer.adoc[]
Ensures that synchronous replication is available for both the database and the external {jdgserver_name}.
*Suggested setup:* Two AWS Availablity Zones within the same AWS Region.
*Suggested setup:* Two AWS Availability Zones within the same AWS Region.
*Not considered:* Two regions on the same or different continents, as it would increase the latency and the likelihood of network failures.
Synchronous replication of databases as a services with Aurora Regional Deployments on AWS is only available within the same region.


@@ -14,7 +14,7 @@ include::partials/blueprint-disclaimer.adoc[]
== Architecture
All {project_name} client requests are routed by a DNS name managed by Route53 records.
Route53 is responsibile to ensure that all client requests are routed to the Primary cluster when it is available and healthy, or to the backup cluster in the event of the primary availability-zone or {project_name} deployment failing.
Route53 is responsible to ensure that all client requests are routed to the Primary cluster when it is available and healthy, or to the backup cluster in the event of the primary availability-zone or {project_name} deployment failing.
If the primary site fails, the DNS changes will need to propagate to the clients.
Depending on the client's settings, the propagation may take some minutes based on the client's configuration.


@@ -161,7 +161,7 @@ include::examples/generated/ispn-site-a.yaml[tag=infinispan-crossdc]
<11> The remote site's name, in this case `{site-b}`.
<12> The namespace of the {jdgserver_name} cluster from the remote site.
<13> The {ocp} API URL for the remote site.
<14> The secret with the access toke to authenticate into the remote site.
<14> The secret with the access token to authenticate into the remote site.
+
For `{site-b}`, the `Infinispan` CR looks similar to the above.
Note the differences in point 4, 11 and 13.


@@ -341,7 +341,7 @@ spec:
- name: SQLPAD_SEED_DATA_PATH
value: /etc/sqlpad/seed-data
- name: SQLPAD_CONNECTIONS__pgdemo__name
value: PostgresSQL Keycloak
value: PostgreSQL Keycloak
- name: SQLPAD_CONNECTIONS__pgdemo__port
value: '5432'
- name: SQLPAD_CONNECTIONS__pgdemo__host


@@ -328,7 +328,7 @@ spec:
- name: SQLPAD_SEED_DATA_PATH
value: /etc/sqlpad/seed-data
- name: SQLPAD_CONNECTIONS__pgdemo__name
value: PostgresSQL Keycloak
value: PostgreSQL Keycloak
- name: SQLPAD_CONNECTIONS__pgdemo__port
value: '5432'
- name: SQLPAD_CONNECTIONS__pgdemo__host


@@ -103,7 +103,7 @@ The following caches are used to store both user and client sessions:
* clientSessions
By relying on a distributable cache, user and client sessions are available to any node in the cluster so that users can be redirected
to any node without loosing their state. However, production-ready deployments should always consider session affinity and favor redirecting users
to any node without losing their state. However, production-ready deployments should always consider session affinity and favor redirecting users
to the node where their sessions were initially created. By doing that, you are going to avoid unnecessary state transfer between nodes and improve
CPU, memory, and network utilization.


@@ -23,7 +23,7 @@ It is possible to enable metrics using the build time option `metrics-enabled`:
* `/metrics`
For more information about the management interface, see <@links.server id="management-interface" />.
The response from the endpoint uses a `application/openmetrics-text` content type and it is based on the Prometheus (OpenMetrics) text format. The snippet bellow
The response from the endpoint uses a `application/openmetrics-text` content type and it is based on the Prometheus (OpenMetrics) text format. The snippet below
is an example of a response:
[source]


@@ -207,7 +207,7 @@ We recommend optimizing {project_name} to provide faster startup and better memo
=== Creating an optimized {project_name} build
By default, when you use the `start` or `start-dev` command, {project_name} runs a `build` command under the covers for convenience reasons.
This `build` command performs a set of optimizations for the startup and runtime behavior. The build process can take a few seconds. Especially when running {project_name} in containerized environments such as Kubernetes or OpenShift, startup time is important. To avoid losing that time, run a `build` explicity before starting up, such as a separate step in a CI/CD pipeline.
This `build` command performs a set of optimizations for the startup and runtime behavior. The build process can take a few seconds. Especially when running {project_name} in containerized environments such as Kubernetes or OpenShift, startup time is important. To avoid losing that time, run a `build` explicitly before starting up, such as a separate step in a CI/CD pipeline.
==== First step: Run a build explicitly
To run a `build`, enter the following command:


@@ -34,7 +34,7 @@ only exists for development use-cases. The `dev-file` database is not suitable f
== Installing a database driver
Database drivers are shipped as part of {project_name} except for the Oracle Database and Micrsoft SQL Server drivers which need to be installed separately.
Database drivers are shipped as part of {project_name} except for the Oracle Database and Microsoft SQL Server drivers which need to be installed separately.
Install the necessary driver if you want to connect to one of these databases or skip this section if you want to connect to a different database for which the database driver is already included.
@@ -280,7 +280,7 @@ NOTE: Certain vendors, such as Azure SQL and MariaDB Galera, do not support or r
XA recovery defaults to enabled and will use the file system location `KEYCLOAK_HOME/data/transaction-logs` to store transaction logs.
NOTE: Enabling XA transactions in a containerized envionment does not fully support XA recovery unless stable storage is available at that path.
NOTE: Enabling XA transactions in a containerized environment does not fully support XA recovery unless stable storage is available at that path.
== Setting JPA provider configuration option for migrationStrategy


@@ -268,7 +268,7 @@ earlier. If you prefer to avoid this option, you can for instance ask all your u
the non-RHEL compatible platform or on the non-FIPS enabled platform, the FIPS compliance cannot be strictly guaranteed and cannot be officially supported.
If you are still restricted to running {project_name} on such a system, you can at least update your security providers configured in `java.security` file. This update does not amount to FIPS compliance, but
at least the setup is closer to it. It can be done by providing a custom security file with only an overriden list of security providers as described earlier. For a list of recommended providers,
at least the setup is closer to it. It can be done by providing a custom security file with only an overridden list of security providers as described earlier. For a list of recommended providers,
see the https://access.redhat.com/documentation/en-us/openjdk/17/html/configuring_openjdk_17_on_rhel_with_fips/openjdk-default-fips-configuration[OpenJDK 17 documentation].
You can check the {project_name} server log at startup to see if the correct security providers are used. TRACE logging should be enabled for crypto-related {project_name} packages as described in the Keycloak startup command earlier.


@@ -81,7 +81,7 @@ It is beneficial if particular session entity is always available locally, which
* User sends initial request to see the {project_name} login screen
* This request is served by the frontend load balancer, which forwards it to some random node (eg. node1). Strictly said, the node doesn't need to be random, but can be chosen according to some other criterias (client IP address etc). It all depends on the implementation and configuration of underlying load balancer (reverse proxy).
* This request is served by the frontend load balancer, which forwards it to some random node (eg. node1). Strictly said, the node doesn't need to be random, but can be chosen according to some other criteria (client IP address etc). It all depends on the implementation and configuration of underlying load balancer (reverse proxy).
* {project_name} creates authentication session with random ID (eg. 123) and saves it to the Infinispan cache.

docs/pom.xml Executable file → Normal file

@@ -27,7 +27,7 @@ and then restart server.
Reason: Keycloak supports rotation and dynamically downloading client's keys from configured jwks_url. However by default there is 10 seconds timeout
between 2 requests for download public keys to avoid DoS attacks.
The OIDC test OP-Rotation-RP-Sig registers client, then login user for the 1st time (which downloads new client keys) and
then immediatelly rotates client keys and expects refreshToken request to succeed with new key. This is just the testing scenario.
then immediately rotates client keys and expects refreshToken request to succeed with new key. This is just the testing scenario.
In real production environment, clients will very unlikely do something like this. Hence just for testing purposes is DoS protection disabled by set -1 here.