Merge pull request #5049 from tkyjovsk/KEYCLOAK-6165

KEYCLOAK-6165 Ability to load performance provisioning and test parameters from a properties file
This commit is contained in:
Marko Strukelj 2018-03-07 15:20:57 +01:00 committed by GitHub
commit b29edc7056
No known key found for this signature in database
GPG key ID: 4AEE18F83AFDEB23
23 changed files with 316 additions and 226 deletions


@@ -26,9 +26,8 @@ mvn clean install
 # Make sure your Docker daemon is running THEN
 mvn verify -Pprovision
-mvn verify -Pgenerate-data -Ddataset=100u -DnumOfWorkers=10 -DhashIterations=100
-mvn verify -Ptest -Ddataset=100u -DusersPerSec=4.5 -DrampUpPeriod=10 -DuserThinkTime=0 -DbadLoginAttempts=1 -DrefreshTokenCount=1 -DmeasurementPeriod=60 -DfilterResults=true
+mvn verify -Pgenerate-data -Ddataset=100u2c -DnumOfWorkers=10 -DhashIterations=100
+mvn verify -Ptest -Ddataset=100u2c -DusersPerSec=2 -DrampUpPeriod=10 -DuserThinkTime=0 -DbadLoginAttempts=1 -DrefreshTokenCount=1 -DmeasurementPeriod=60 -DfilterResults=true
 ```
 Now open the generated report in a browser - the link to .html file is displayed at the end of the test.
@@ -40,7 +39,7 @@ mvn verify -Pteardown
 You can perform all phases in a single run:
 ```
-mvn verify -Pprovision,generate-data,test,teardown -Ddataset=100u -DnumOfWorkers=10 -DhashIterations=100 -DusersPerSec=5 -DrampUpPeriod=10
+mvn verify -Pprovision,generate-data,test,teardown -Ddataset=100u2c -DnumOfWorkers=10 -DhashIterations=100 -DusersPerSec=4 -DrampUpPeriod=10
 ```
 Note: The order in which maven profiles are listed does not determine the order in which profile related plugins are executed. `teardown` profile always executes last.
@@ -49,21 +48,44 @@ Keep reading for more information.
 ## Provisioning
-### Available provisioners:
-- `docker-compose` **Default.** See [`README.docker-compose.md`](README.docker-compose.md) for more details.
 ### Provision
-Usage: `mvn verify -Pprovision [-Dprovisioner=<PROVISIONER>] [-D<PARAMETER>=<VALUE>] …`.
+#### Provisioners
+Depending on the target environment different provisioners may be used.
+The provisioner can be selected via property `-Dprovisioner=PROVISIONER`.
+The default value is `docker-compose`, which is intended for testing on a local docker host.
+This is currently the only implemented option. See [`README.docker-compose.md`](README.docker-compose.md) for more details.
 #### Deployment Types
-- Single node: `mvn verify -Pprovision`
-- Cluster: `mvn verify -Pprovision,cluster [-Dkeycloak.scale=N] [-Dkeycloak.cpusets="cpuset1 cpuset2 … cpusetM"]`. `N ∈ {1 .. M}`.
-- Cross-DC: `mvn verify -Pprovision,crossdc [-Dkeycloak.dc1.scale=K] [-Dkeycloak.dc2.scale=L] [-Dkeycloak.dc1.cpusets=…] [-Dkeycloak.dc2.cpusets=…]`
+Different types of deployment can be provisioned.
+The default deployment is `singlenode`, with only a single instance of Keycloak server and a database.
+Additional options are `cluster` and `crossdc`, which can be enabled with a profile (see below).
+#### Usage
+Usage: `mvn verify -P provision[,DEPLOYMENT_PROFILE] [-Dprovisioning.properties=NAMED_PROPERTY_SET]`.
+The properties are loaded from the `tests/parameters/provisioning/${provisioning.properties}.properties` file.
+Individual parameters can be overridden from the command line via `-D` params.
+The default property set is `docker-compose/4cpus/singlenode`.
+To load a custom properties file specify `-Dprovisioning.properties.file=ABSOLUTE_PATH_TO_FILE` instead of `-Dprovisioning.properties`.
+This file needs to contain all properties required by the specific combination of provisioner and deployment type.
+See examples in folder `tests/parameters/provisioning/docker-compose/4cpus`.
+Available parameters are described in [`README.provisioning-parameters.md`](README.provisioning-parameters.md).
+#### Examples:
+- Provision a single-node deployment with docker-compose: `mvn verify -P provision`
+- Provision a cluster deployment with docker-compose: `mvn verify -P provision,cluster`
+- Provision a cluster deployment with docker-compose, overriding some properties: `mvn verify -P provision,cluster -Dkeycloak.scale=2 -Dlb.worker.task-max-threads=32`
+- Provision a cross-DC deployment with docker-compose: `mvn verify -P provision,crossdc`
+- Provision a cross-DC deployment with docker-compose using a custom properties file: `mvn verify -P provision,crossdc -Dprovisioning.properties.file=/tmp/custom-crossdc.properties`
-All available parameters are described in [`README.provisioning-parameters.md`](README.provisioning-parameters.md).
 #### Provisioned System
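As an aside (not part of this commit), the property-set lookup described above can be sketched in shell. The path layout and the default set name `docker-compose/4cpus/singlenode` are taken from the README text; the variable name mirrors the Maven property `provisioning.properties`:

```shell
#!/bin/sh
# Illustrative sketch only: resolve a named provisioning property set to the
# file Maven will read, as described in the README.
PROVISIONING_PROPERTIES="${PROVISIONING_PROPERTIES:-docker-compose/4cpus/singlenode}"
PROPERTIES_FILE="tests/parameters/provisioning/${PROVISIONING_PROPERTIES}.properties"
echo "$PROPERTIES_FILE"
```

Running it with `PROVISIONING_PROPERTIES` unset prints the default set's path; `-D` overrides on the Maven command line then take precedence over values read from that file.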
@@ -71,8 +93,9 @@ The `provision` operation will produce a `provisioned-system.properties` inside
 with information about the provisioned system such as the type of deployment and URLs of Keycloak servers and load balancers.
 This information is then used by operations `generate-data`, `import-dump`, `test`, `teardown`.
-Provisioning can be run multiple times with different parameters. The system will be updated/reprovisioned based on the new parameters.
+The provisioning operation is idempotent for a specific combination of provisioner+deployment.
-However when switching between different deployment types (e.g. from `singlenode` to `cluster`) it is always necessary
+When running multiple times the system will simply be updated based on the new parameters.
+However when switching between different provisioners or deployment types it is **always necessary**
 to tear down the currently running system.
 **Note:** When switching deployment type from `singlenode` or `cluster` to `crossdc` (or the other way around)
@@ -99,18 +122,19 @@ because it contains the `provisioned-system.properties` with information about t
 ### Generate Test Data
-Usage: `mvn verify -Pgenerate-data [-Ddataset=DATASET] [-D<dataset.property>=<value>]`.
-Dataset properties are loaded from `datasets/${dataset}.properties` file. Individual properties can be overriden by specifying `-D` params.
-Dataset data is first generated as a .json file, and then imported into Keycloak via Admin Client REST API.
+Usage: `mvn verify -P generate-data [-Ddataset=NAMED_PROPERTY_SET] [-DnumOfWorkers=N]`. The default dataset is `2u2c`. Workers default to `1`.
+The parameters are loaded from the `tests/parameters/datasets/${dataset}.properties` file.
+Individual properties can be overridden from the command line via `-D` params.
+To use a custom properties file specify `-Ddataset.properties.file=ABSOLUTE_PATH_TO_FILE` instead of `-Ddataset`.
-#### Dataset Properties
+#### Dataset Parameters
 | Property | Description | Value in the Default Dataset |
 | --- | --- | --- |
 | `numOfRealms` | Number of realms to be created. | `1` |
-| `usersPerRealm` | Number of users per realm. | `100` |
+| `usersPerRealm` | Number of users per realm. | `2` |
 | `clientsPerRealm` | Number of clients per realm. | `2` |
 | `realmRoles` | Number of realm-roles per realm. | `2` |
 | `realmRolesPerUser` | Number of realm-roles assigned to a created user. Has to be less than or equal to `realmRoles`. | `2` |
@@ -120,53 +144,73 @@ Dataset data is first generated as a .json file, and then imported into Keycloak
 #### Examples:
-- `mvn verify -Pgenerate-data` - generate default dataset
-- `mvn verify -Pgenerate-data -DusersPerRealm=5` - generate default dataset, override the `usersPerRealm` property
-- `mvn verify -Pgenerate-data -Ddataset=100u` - generate `100u` dataset
-- `mvn verify -Pgenerate-data -Ddataset=100r/default` - generate dataset based on `datasets/100r/default.properties`
+- Generate the default dataset: `mvn verify -P generate-data`
+- Generate the `100u2c` dataset: `mvn verify -P generate-data -Ddataset=100u2c`
+- Generate the `100u2c` dataset but override some parameters: `mvn verify -P generate-data -Ddataset=100u2c -DclientRolesPerUser=5 -DclientRolesPerClient=5`
-#### Export / Import Database Dump
-To speed up dataset initialization part, it is possible to pass `-Dexport-dump` option to have the generated dataset
-exported right after it has been generated. Then, if there is a data dump file available then `-Pimport-dump`
-can be used to import the data directly into the database, bypassing Keycloak server completely.
-**Usage:** `mvn verify -Pimport-dump [-Ddataset=DATASET]`
-**For example:**
-- `mvn verify -Pgenerate-data -Ddataset=100u -Dexport-dump` will generate data based on `datasets/100u.properties` and export a database dump to a file: `datasets/100u.sql.gz`.
-- `mvn verify -Pimport-dump -Ddataset=100u` will import the database dump from a file: `datasets/100u.sql.gz`, and reboot the server(s)
+#### Export Database
+To export the generated data to a data-dump file enable the profile `-P export-dump`. This will create a `${DATASET}.sql.gz` file next to the dataset properties file.
+Example: `mvn verify -P generate-data,export-dump -Ddataset=100u2c`
+#### Import Database
+To import data from an existing data-dump file use the profile `-P import-dump`.
+Example: `mvn verify -P import-dump -Ddataset=100u2c`
+If the dump file doesn't exist locally the script will attempt to download it from `${db.dump.download.site}`, which defaults to `https://downloads.jboss.org/keycloak-qe/${server.version}`,
+with `server.version` defaulting to `${project.version}` from `pom.xml`.
+**Warning:** Don't override dataset parameters (with `-Dparam=value`) when running export/import, because then the contents of the dump file might not match the properties file.
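Not part of this commit, but as an illustration, the dump download URL is composed from `db.dump.download.site` and the dataset name as described above. `SERVER_VERSION` below is a hypothetical placeholder for `${server.version}`:

```shell
#!/bin/sh
# Illustrative sketch only: compose the dump download URL for a dataset,
# mirroring the ${db.dump.download.site}/${DATASET}.sql.gz layout from the
# README. SERVER_VERSION is a made-up placeholder.
DATASET="100u2c"
SERVER_VERSION="X.Y.Z"
DUMP_DOWNLOAD_SITE="${DUMP_DOWNLOAD_SITE:-https://downloads.jboss.org/keycloak-qe/$SERVER_VERSION}"
DUMP_URL="$DUMP_DOWNLOAD_SITE/$DATASET.sql.gz"
echo "$DUMP_URL"
```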
 ### Run Tests
-Usage: `mvn verify -Ptest[,cluster] [-DtestParameter=value]`.
-#### Common Parameters
+Usage: `mvn verify -P test [-Dtest.properties=NAMED_PROPERTY_SET]`. The default property set is `basic-oidc`.
+The parameters are loaded from the `tests/parameters/test/${test.properties}.properties` file.
+Individual properties can be overridden from the command line via `-D` params.
+To use a custom properties file specify `-Dtest.properties.file=ABSOLUTE_PATH_TO_FILE` instead of `-Dtest.properties`.
+When running the tests it is also necessary to define a dataset to use. Usage is described in the section above.
+#### Common Test Run Parameters
 | Parameter | Description | Default Value |
 | --- | --- | --- |
 | `gatling.simulationClass` | Classname of the simulation to be run. | `keycloak.BasicOIDCSimulation` |
 | `dataset` | Name of the dataset to use. (Individual dataset properties can be overridden with `-Ddataset.property=value`.) | `default` |
-| `usersPerSec` | Arrival rate of new users per second. Can be a floating point number. | `1.0` |
+| `usersPerSec` | Arrival rate of new users per second. Can be a floating point number. | `1.0` for BasicOIDCSimulation, `0.2` for AdminConsoleSimulation |
-| `rampUpPeriod` | Period during which the users will be ramped up. (seconds) | `0` |
+| `rampUpPeriod` | Period during which the users will be ramped up. (seconds) | `15` |
-| `warmUpPeriod` | Period with steady number of users intended for the system under test to warm up. (seconds) | `0` |
+| `warmUpPeriod` | Period with steady number of users intended for the system under test to warm up. (seconds) | `15` |
 | `measurementPeriod` | A measurement period after the system is warmed up. (seconds) | `30` |
 | `filterResults` | Whether to filter out requests which are outside of the `measurementPeriod`. | `false` |
 | `userThinkTime` | Pause between individual scenario steps. | `5` |
 | `refreshTokenPeriod`| Period after which token should be refreshed. | `10` |
-#### Addtional Parameters of `keycloak.BasicOIDCSimulation`
+#### Test Run Parameters specific to `BasicOIDCSimulation`
 | Parameter | Description | Default Value |
 | --- | --- | --- |
 | `badLoginAttempts` | | `0` |
 | `refreshTokenCount` | | `0` |
-Example:
-`mvn verify -Ptest -Dgatling.simulationClass=keycloak.AdminConsoleSimulation -Ddataset=100u -DusersPerSec=1 -DmeasurementPeriod=60 -DuserThinkTime=0 -DrefreshTokenPeriod=15`
+#### Examples:
+- Run test with default test and dataset parameters:
+`mvn verify -P test`
+- Run test with specific test and dataset parameters:
+`mvn verify -P test -Dtest.properties=basic-oidc -Ddataset=100u2c`
+- Run test with specific test and dataset parameters, overriding some from the command line:
+`mvn verify -P test -Dtest.properties=admin-console -Ddataset=100u2c -DrampUpPeriod=30 -DwarmUpPeriod=60 -DusersPerSec=0.3`
 ## Monitoring
@@ -209,32 +253,6 @@ To compress the binary output with bzip add `-Dbzip=true` to the commandline.
 Results will be stored in folder: `tests/target/sar`.
-## Examples
-### Single-node
-- Provision single node of KC + DB, generate data, run test, and tear down the provisioned system:
-`mvn verify -Pprovision,generate-data,test,teardown -Ddataset=100u -DusersPerSec=5`
-- Provision single node of KC + DB, generate data, no test, no teardown:
-`mvn verify -Pprovision,generate-data -Ddataset=100u`
-- Run test against provisioned system generating 5 new users per second, ramped up over 10 seconds, then tear it down:
-`mvn verify -Ptest,teardown -Ddataset=100u -DusersPerSec=5 -DrampUpPeriod=10`
-### Cluster
-- Provision a 1-node KC cluster + DB, generate data, run test against the provisioned system, then tear it down:
-`mvn verify -Pprovision,cluster,generate-data,test,teardown -Ddataset=100u -DusersPerSec=5`
-- Provision a 2-node KC cluster + DB, generate data, run test against the provisioned system, then tear it down:
-`mvn verify -Pprovision,cluster,generate-data,test,teardown -Dkeycloak.scale=2 -DusersPerRealm=200 -DusersPerSec=5`
 ## Developing tests in IntelliJ IDEA


@@ -37,7 +37,7 @@
 <properties>
 <server.groupId>org.keycloak</server.groupId>
 <server.artifactId>keycloak-server-dist</server.artifactId>
-<server.version>${product.version}</server.version>
+<!-- `server.version` is defined one level up -->
 <server.unpacked.folder.name>keycloak-${server.version}</server.unpacked.folder.name>
 <server.unpacked.home>${project.build.directory}/${server.unpacked.folder.name}</server.unpacked.home>


@@ -32,6 +32,7 @@
 <packaging>pom</packaging>
 <properties>
+<server.version>${product.version}</server.version>
 <management.user/>
 <management.user.password/>
 </properties>


@@ -366,13 +366,16 @@ case "$OPERATION" in
 crossdc) export DB_CONTAINER=${PROJECT_NAME}_mariadb_dc1_1 ;;
 *) echo "Deployment '$DEPLOYMENT' doesn't support operation '$OPERATION'." ; exit 1 ;;
 esac
-if [ -z "$DATASET" ]; then echo "Operation '$OPERATION' requires DATASET parameter."; exit 1; fi
+if [ ! -f "$DATASET_PROPERTIES_FILE" ]; then echo "Operation '$OPERATION' requires a valid DATASET_PROPERTIES_FILE parameter."; exit 1; fi
+DATASET_PROPERTIES_FILENAME=`basename $DATASET_PROPERTIES_FILE`
+DATASET=${DATASET_PROPERTIES_FILENAME%.properties}
+echo "DATASET_PROPERTIES_FILE: $DATASET_PROPERTIES_FILE"
 echo "DATASET: $DATASET"
 echo "Stopping Keycloak services."
 runCommand "docker-compose -f $DOCKER_COMPOSE_FILE -p ${PROJECT_NAME} stop $KEYCLOAK_SERVICES"
-cd $PROJECT_BASEDIR/datasets
+cd `dirname $DATASET_PROPERTIES_FILE`
 case "$OPERATION" in
 export-dump)
 echo "Exporting $DATASET.sql."
@@ -384,7 +387,7 @@ case "$OPERATION" in
 import-dump)
 DUMP_DOWNLOAD_SITE=${DUMP_DOWNLOAD_SITE:-https://downloads.jboss.org/keycloak-qe}
 if [ ! -f "$DATASET.sql.gz" ]; then
-echo "Downloading dump file."
+echo "Downloading dump file: $DUMP_DOWNLOAD_SITE/$DATASET.sql.gz"
 if ! curl -f -O $DUMP_DOWNLOAD_SITE/$DATASET.properties -O $DUMP_DOWNLOAD_SITE/$DATASET.sql.gz ; then
 echo Download failed.
 exit 1
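The dataset-name derivation added by this hunk can be exercised in isolation: take the base name of the properties file and strip the `.properties` suffix with shell parameter expansion. The example path below is illustrative, not a path from the repo:

```shell
#!/bin/sh
# Sketch of the derivation introduced above: DATASET is the properties file's
# base name without its .properties suffix.
DATASET_PROPERTIES_FILE="/tmp/parameters/datasets/100u2c.properties"
DATASET_PROPERTIES_FILENAME=$(basename "$DATASET_PROPERTIES_FILE")
DATASET=${DATASET_PROPERTIES_FILENAME%.properties}
echo "DATASET: $DATASET"
```

The `${var%suffix}` expansion removes the shortest matching suffix, so `100u2c.properties` yields `100u2c`.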


@@ -0,0 +1,8 @@
numOfRealms=200
usersPerRealm=1000000
clientsPerRealm=2
realmRoles=2
realmRolesPerUser=2
clientRolesPerUser=2
clientRolesPerClient=2
hashIterations=27500


@@ -0,0 +1,8 @@
numOfRealms=20
usersPerRealm=10000
clientsPerRealm=2
realmRoles=2
realmRolesPerUser=2
clientRolesPerUser=2
clientRolesPerClient=2
hashIterations=27500


@@ -0,0 +1,8 @@
numOfRealms=2
usersPerRealm=1000
clientsPerRealm=2
realmRoles=2
realmRolesPerUser=2
clientRolesPerUser=2
clientRolesPerClient=2
hashIterations=27500


@@ -0,0 +1,32 @@
#provisioner=docker-compose
#deployment=cluster
# Keycloak Settings
keycloak.scale=1
keycloak.docker.cpusets=2 3
keycloak.docker.memlimit=2500m
keycloak.jvm.memory=-Xms64m -Xmx2g -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m
keycloak.http.max-connections=50000
keycloak.ajp.max-connections=50000
keycloak.worker.io-threads=2
keycloak.worker.task-max-threads=16
keycloak.ds.min-pool-size=10
keycloak.ds.max-pool-size=100
keycloak.ds.pool-prefill=true
keycloak.ds.ps-cache-size=100
# Database Settings
db.docker.cpusets=1
db.docker.memlimit=2g
db.max.connections=100
# Load Balancer Settings
lb.docker.cpusets=1
lb.docker.memlimit=1500m
lb.jvm.memory=-Xms64m -Xmx1024m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m
lb.http.max-connections=50000
lb.worker.io-threads=2
lb.worker.task-max-threads=16
# Monitoring Settings
monitoring.docker.cpusets=0


@@ -0,0 +1,42 @@
#provisioner=docker-compose
#deployment=crossdc
# Keycloak Settings
keycloak.dc1.scale=1
keycloak.dc2.scale=1
keycloak.dc1.docker.cpusets=2
keycloak.dc2.docker.cpusets=3
keycloak.docker.memlimit=2500m
keycloak.jvm.memory=-Xms64m -Xmx2g -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m
keycloak.http.max-connections=50000
keycloak.ajp.max-connections=50000
keycloak.worker.io-threads=2
keycloak.worker.task-max-threads=16
keycloak.ds.min-pool-size=10
keycloak.ds.max-pool-size=100
keycloak.ds.pool-prefill=true
keycloak.ds.ps-cache-size=100
# Database Settings
db.dc1.docker.cpusets=1
db.dc2.docker.cpusets=1
db.docker.memlimit=2g
db.max.connections=100
# Load Balancer Settings
lb.dc1.docker.cpusets=1
lb.dc2.docker.cpusets=1
lb.docker.memlimit=1500m
lb.jvm.memory=-Xms64m -Xmx1024m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m
lb.http.max-connections=50000
lb.worker.io-threads=2
lb.worker.task-max-threads=16
# Infinispan Settings
infinispan.dc1.docker.cpusets=1
infinispan.dc2.docker.cpusets=1
infinispan.docker.memlimit=1500m
infinispan.jvm.memory=-Xms64m -Xmx1g -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -XX:+DisableExplicitGC
# Monitoring Settings
monitoring.docker.cpusets=0


@@ -0,0 +1,23 @@
#provisioner=docker-compose
#deployment=singlenode
# Keycloak Settings
keycloak.scale=1
keycloak.docker.cpusets=2-3
keycloak.docker.memlimit=2500m
keycloak.jvm.memory=-Xms64m -Xmx2g -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m
keycloak.http.max-connections=50000
keycloak.worker.io-threads=2
keycloak.worker.task-max-threads=16
keycloak.ds.min-pool-size=10
keycloak.ds.max-pool-size=100
keycloak.ds.pool-prefill=true
keycloak.ds.ps-cache-size=100
# Database Settings
db.docker.cpusets=1
db.docker.memlimit=2g
db.max.connections=100
# Monitoring Settings
monitoring.docker.cpusets=0


@@ -0,0 +1,8 @@
gatling.simulationClass=keycloak.AdminConsoleSimulation
usersPerSec=0.2
rampUpPeriod=15
warmUpPeriod=15
measurementPeriod=30
filterResults=false
userThinkTime=0
refreshTokenPeriod=0


@@ -0,0 +1,10 @@
gatling.simulationClass=keycloak.BasicOIDCSimulation
usersPerSec=1.0
rampUpPeriod=15
warmUpPeriod=15
measurementPeriod=30
filterResults=false
userThinkTime=0
refreshTokenPeriod=0
refreshTokenCount=1
badLoginAttempts=1


@@ -33,55 +33,19 @@
 <provisioner>docker-compose</provisioner>
 <deployment>singlenode</deployment>
+<provisioning.properties>${provisioner}/4cpus/${deployment}</provisioning.properties>
+<dataset>2u2c</dataset>
+<test.properties>basic-oidc</test.properties>
+<provisioning.properties.file>${project.basedir}/parameters/provisioning/${provisioning.properties}.properties</provisioning.properties.file>
+<dataset.properties.file>${project.basedir}/parameters/datasets/${dataset}.properties</dataset.properties.file>
+<test.properties.file>${project.basedir}/parameters/test/${test.properties}.properties</test.properties.file>
 <provisioned.system.properties.file>${project.build.directory}/provisioned-system.properties</provisioned.system.properties.file>
-<!-- Keycloak Server Settings -->
+<!--other-->
-<keycloak.scale/>
-<keycloak.dc1.scale/>
-<keycloak.dc2.scale/>
-<keycloak.docker.cpusets>2-3</keycloak.docker.cpusets>
-<keycloak.dc1.docker.cpusets>2</keycloak.dc1.docker.cpusets>
-<keycloak.dc2.docker.cpusets>3</keycloak.dc2.docker.cpusets>
-<keycloak.docker.memlimit>2500m</keycloak.docker.memlimit>
-<keycloak.jvm.memory>-Xms64m -Xmx2g -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m</keycloak.jvm.memory>
-<keycloak.http.max-connections>50000</keycloak.http.max-connections>
-<keycloak.ajp.max-connections>50000</keycloak.ajp.max-connections>
-<keycloak.worker.io-threads>2</keycloak.worker.io-threads>
-<keycloak.worker.task-max-threads>16</keycloak.worker.task-max-threads>
-<keycloak.ds.min-pool-size>10</keycloak.ds.min-pool-size>
-<keycloak.ds.max-pool-size>100</keycloak.ds.max-pool-size>
-<keycloak.ds.pool-prefill>true</keycloak.ds.pool-prefill>
-<keycloak.ds.ps-cache-size>100</keycloak.ds.ps-cache-size>
-<!-- Database Settings -->
+<db.dump.download.site>https://downloads.jboss.org/keycloak-qe/${server.version}</db.dump.download.site>
-<db.docker.cpusets>1</db.docker.cpusets>
-<db.dc1.docker.cpusets>1</db.dc1.docker.cpusets>
-<db.dc2.docker.cpusets>1</db.dc2.docker.cpusets>
-<db.docker.memlimit>2g</db.docker.memlimit>
-<db.max.connections>100</db.max.connections>
-<db.dump.download.site>https://downloads.jboss.org/keycloak-qe</db.dump.download.site>
-<!-- Load Balancer Settings -->
-<lb.docker.cpusets>1</lb.docker.cpusets>
-<lb.dc1.docker.cpusets>1</lb.dc1.docker.cpusets>
-<lb.dc2.docker.cpusets>1</lb.dc2.docker.cpusets>
-<lb.docker.memlimit>1500m</lb.docker.memlimit>
-<lb.jvm.memory>-Xms64m -Xmx1024m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m</lb.jvm.memory>
-<lb.http.max-connections>50000</lb.http.max-connections>
-<lb.worker.io-threads>2</lb.worker.io-threads>
-<lb.worker.task-max-threads>16</lb.worker.task-max-threads>
-<!-- Infinispan Settings -->
-<infinispan.dc1.docker.cpusets>1</infinispan.dc1.docker.cpusets>
-<infinispan.dc2.docker.cpusets>1</infinispan.dc2.docker.cpusets>
-<infinispan.docker.memlimit>1500m</infinispan.docker.memlimit>
-<infinispan.jvm.memory>-Xms64m -Xmx1g -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -XX:+DisableExplicitGC</infinispan.jvm.memory>
-<!-- Monitoring Settings -->
-<monitoring.docker.cpusets>0</monitoring.docker.cpusets>
-<!-- Other -->
-<dataset>default</dataset>
 <numOfWorkers>1</numOfWorkers>
 <maven.compiler.target>1.8</maven.compiler.target>
@@ -206,6 +170,20 @@
 <artifactId>properties-maven-plugin</artifactId>
 <version>1.0.0</version>
 <executions>
+<execution>
+<id>read-parameters</id>
+<phase>initialize</phase>
+<goals>
+<goal>read-project-properties</goal>
+</goals>
+<configuration>
+<files>
+<file>${provisioning.properties.file}</file>
+<file>${dataset.properties.file}</file>
+<file>${test.properties.file}</file>
+</files>
+</configuration>
+</execution>
 <execution>
 <id>read-existing-provisioned-system-properties</id>
 <phase>initialize</phase>
@@ -274,12 +252,11 @@
 <skip>${gatling.skip.run}</skip>
 <disableCompiler>true</disableCompiler>
 <runMultipleSimulations>true</runMultipleSimulations>
-<!--includes>
-<include>keycloak.DemoSimulation2</include>
-</includes-->
 <jvmArgs>
-<!--common params-->
 <param>-Dproject.build.directory=${project.build.directory}</param>
 <param>-Dkeycloak.server.uris=${keycloak.frontend.servers}</param>
-<!--dataset params-->
 <param>-DnumOfRealms=${numOfRealms}</param>
 <param>-DusersPerRealm=${usersPerRealm}</param>
 <param>-DclientsPerRealm=${clientsPerRealm}</param>
@@ -288,6 +265,16 @@
 <param>-DclientRolesPerUser=${clientRolesPerUser}</param>
 <param>-DclientRolesPerClient=${clientRolesPerClient}</param>
 <param>-DhashIterations=${hashIterations}</param>
+<!--test params-->
+<param>-DusersPerSec=${usersPerSec}</param>
+<param>-DrampUpPeriod=${rampUpPeriod}</param>
+<param>-DwarmUpPeriod=${warmUpPeriod}</param>
+<param>-DmeasurementPeriod=${measurementPeriod}</param>
+<param>-DfilterResults=${filterResults}</param>
+<param>-DuserThinkTime=${userThinkTime}</param>
+<param>-DrefreshTokenPeriod=${refreshTokenPeriod}</param>
+<param>-DrefreshTokenCount=${refreshTokenCount}</param>
+<param>-DbadLoginAttempts=${badLoginAttempts}</param>
 </jvmArgs>
 </configuration>
@@ -313,59 +300,10 @@
 <profiles>
-<profile>
-<id>docker-compose</id>
-<activation>
-<property>
-<name>!provisioner</name>
-</property>
-</activation>
-<build>
-<plugins>
-<plugin>
-<groupId>org.apache.maven.plugins</groupId>
-<artifactId>maven-antrun-plugin</artifactId>
-<executions>
-<execution>
-<id>copy-dockerfiles-etc</id>
-<phase>generate-resources</phase>
-<goals>
-<goal>run</goal>
-</goals>
-<configuration>
-<target>
-<copy todir="${project.build.directory}/docker-compose" overwrite="false" >
-<fileset dir="${project.basedir}/src/main/docker-compose"/>
-</copy>
-<copy todir="${project.build.directory}/docker-compose" overwrite="false" >
-<fileset dir="${project.basedir}/..">
-<include name="db/**"/>
-<include name="monitoring/**"/>
-</fileset>
-</copy>
-<copy todir="${project.build.directory}/docker-compose/infinispan" overwrite="false" >
-<fileset dir="${project.basedir}/../infinispan/target/docker"/>
-</copy>
-<copy todir="${project.build.directory}/docker-compose/load-balancer/wildfly-modcluster" overwrite="false" >
-<fileset dir="${project.basedir}/../load-balancer/wildfly-modcluster/target/docker"/>
-</copy>
-<copy todir="${project.build.directory}/docker-compose/keycloak" overwrite="false" >
-<fileset dir="${project.basedir}/../keycloak/target/docker"/>
-</copy>
-</target>
-</configuration>
-</execution>
-</executions>
-</plugin>
-</plugins>
-</build>
-</profile>
 <profile>
 <id>cluster</id>
 <properties>
 <deployment>cluster</deployment>
-<keycloak.docker.cpusets>2 3</keycloak.docker.cpusets>
 </properties>
 </profile>
 <profile>
@@ -377,8 +315,30 @@
 <profile>
 <id>provision</id>
+<properties>
+<project.basedir>${project.basedir}</project.basedir>
+<project.build.directory>${project.build.directory}</project.build.directory>
+</properties>
 <build>
 <plugins>
+<plugin>
+<groupId>org.apache.maven.plugins</groupId>
+<artifactId>maven-antrun-plugin</artifactId>
+<executions>
+<execution>
+<id>prepare-provisioning</id>
+<phase>generate-resources</phase>
+<goals>
+<goal>run</goal>
+</goals>
+<configuration>
+<target>
+<ant antfile="prepare-provisioning.xml" target="prepare-${provisioner}" />
+</target>
+</configuration>
+</execution>
+</executions>
+</plugin>
 <plugin>
 <groupId>org.codehaus.mojo</groupId>
 <artifactId>exec-maven-plugin</artifactId>
@@ -465,39 +425,6 @@
 </build>
 </profile>
-<profile>
-<id>initialize-dataset-properties</id>
-<activation>
-<property>
-<name>dataset</name>
-</property>
-</activation>
-<build>
-<plugins>
-<plugin>
-<groupId>org.codehaus.mojo</groupId>
-<artifactId>properties-maven-plugin</artifactId>
-<version>1.0.0</version>
-<executions>
-<execution>
-<id>initialize-dataset-properties</id>
-<phase>pre-integration-test</phase>
-<goals>
-<goal>read-project-properties</goal>
-</goals>
-<configuration>
-<files>
-<file>${project.basedir}/datasets/${dataset}.properties</file>
-</files>
-<quiet>true</quiet>
-</configuration>
-</execution>
-</executions>
-</plugin>
-</plugins>
-</build>
-</profile>
 <profile>
 <id>generate-data</id>
 <build>
@@ -559,26 +486,6 @@
 <id>export-dump</id>
 <build>
 <plugins>
-<plugin>
-<artifactId>maven-enforcer-plugin</artifactId>
-<executions>
-<execution>
-<id>enforce-nondefault-dataset</id>
-<goals>
-<goal>enforce</goal>
-</goals>
-<configuration>
-<rules>
-<requireProperty>
-<property>dataset</property>
-<regex>(?!default).*</regex>
-<regexMessage>For the "export-dump" task property "dataset" cannot be set to "default".</regexMessage>
-</requireProperty>
-</rules>
-</configuration>
-</execution>
-</executions>
-</plugin>
 <plugin>
 <groupId>org.codehaus.mojo</groupId>
 <artifactId>exec-maven-plugin</artifactId>
@@ -595,7 +502,7 @@
 <PROVISIONER>${provisioner}</PROVISIONER>
 <DEPLOYMENT>${deployment}</DEPLOYMENT>
 <OPERATION>export-dump</OPERATION>
-<DATASET>${dataset}</DATASET>
+<DATASET_PROPERTIES_FILE>${dataset.properties.file}</DATASET_PROPERTIES_FILE>
 </environmentVariables>
 </configuration>
 </execution>
@@ -625,7 +532,7 @@
 <PROVISIONER>${provisioner}</PROVISIONER>
 <DEPLOYMENT>${deployment}</DEPLOYMENT>
 <OPERATION>import-dump</OPERATION>
-<DATASET>${dataset}</DATASET>
+<DATASET_PROPERTIES_FILE>${dataset.properties.file}</DATASET_PROPERTIES_FILE>
 <DUMP_DOWNLOAD_SITE>${db.dump.download.site}</DUMP_DOWNLOAD_SITE>
 </environmentVariables>
 </configuration>


@@ -0,0 +1,24 @@
<project name="prepare-provisioning" basedir="." >
<target name="prepare-docker-compose">
<copy todir="${project.build.directory}/docker-compose" overwrite="false" >
<fileset dir="${project.basedir}/src/main/docker-compose"/>
</copy>
<copy todir="${project.build.directory}/docker-compose" overwrite="false" failonerror="true">
<fileset dir="${project.basedir}/..">
<include name="db/**"/>
<include name="monitoring/**"/>
</fileset>
</copy>
<copy todir="${project.build.directory}/docker-compose/infinispan" overwrite="false" failonerror="true">
<fileset dir="${project.basedir}/../infinispan/target/docker"/>
</copy>
<copy todir="${project.build.directory}/docker-compose/load-balancer/wildfly-modcluster" overwrite="false" failonerror="true">
<fileset dir="${project.basedir}/../load-balancer/wildfly-modcluster/target/docker"/>
</copy>
<copy todir="${project.build.directory}/docker-compose/keycloak" overwrite="false" failonerror="true">
<fileset dir="${project.basedir}/../keycloak/target/docker"/>
</copy>
</target>
</project>


@@ -58,9 +58,7 @@ public class TestConfig {
 public static final int rampUpPeriod = Integer.getInteger("rampUpPeriod", 0);
 public static final int warmUpPeriod = Integer.getInteger("warmUpPeriod", 0);
 public static final int measurementPeriod = Integer.getInteger("measurementPeriod", 30);
-public static final boolean rampDownASAP = Boolean.getBoolean("rampDownASAP"); // check for rampdown condition after each scenario step
 public static final boolean filterResults = Boolean.getBoolean("filterResults"); // filter out results outside of measurementPeriod
-public static final int pace = Integer.getInteger("pace", 0); // additional dynamic "pause buffer" between scenario loops
 public static final int userThinkTime = Integer.getInteger("userThinkTime", 0);
 public static final int refreshTokenPeriod = Integer.getInteger("refreshTokenPeriod", 0);