Removing testsuite/performance from main Keycloak repository (#15950)

There's a separate repository keycloak-benchmark for this.

Closes #14192
This commit is contained in:
Alexander Schwartz 2022-12-15 14:43:24 +01:00 committed by GitHub
parent 38db622822
commit bd4acfe4c6
No known key found for this signature in database
GPG key ID: 4AEE18F83AFDEB23
269 changed files with 1 additions and 17982 deletions

View file

@ -6,6 +6,5 @@
[allowlist]
paths = [
'''saml-core/src/test/java/org/keycloak/saml/processing/core/saml/v2/util/AssertionUtilTest.java''',
'''testsuite/performance/tests/pom.xml''',
'''saml-core/src/test/java/org/keycloak/saml/processing/core/saml/v2/util/AssertionUtilTest.java'''
]

View file

@ -1,152 +0,0 @@
# Keycloak Datasets
## Provision Keycloak Server
Before generating data it is necessary to provision/start Keycloak server. This can
be done automatically by running:
```
cd testsuite/performance
mvn clean install
mvn verify -P provision
```
To tear down the system after testing run:
```
mvn verify -P teardown
```
The teardown step also deletes the database, so it can be used between generating different datasets.
It is also possible to start the server externally (manually). In that case it is necessary
to provide information in file `tests/target/provisioned-system.properties`.
See the main README for details.
## Generate Data
To generate the *default dataset* run:
```
cd testsuite/performance
mvn verify -P generate-data
```
To generate a *specific dataset* from within the project run:
```
mvn verify -P generate-data -Ddataset=<NAMED_DATASET>
```
This will load dataset properties from `tests/src/test/resources/dataset/${dataset}.properties`.
To generate a specific dataset from a *custom properties file* run:
```
mvn verify -P generate-data -Ddataset.properties.file=<FULL_PATH_TO_PROPERTIES_FILE>
```
To delete a dataset run:
```
mvn verify -P generate-data -Ddataset=… -Ddelete=true
```
This will delete all realms specified by the dataset.
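As a sketch, a named dataset file referenced by `-Ddataset` could contain entries like the following (the parameter names are ones mentioned elsewhere in this README; the file name is hypothetical and an actual dataset file may define more parameters):
```
# tests/src/test/resources/dataset/example.properties (hypothetical)
numOfRealms=1
usersPerRealm=100
realmRolesPerUser=5
clientRolesPerUser=5
```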
## Indexed Model
The model is hierarchical with the parent-child relationships determined by primary foreign keys of entities.
Size of the dataset is determined by specifying a "count per parent" parameter for each entity.
The number of mappings between the entities created by the primary "count per parent" parameters
can be specified by "count per other entity" parameters.
Each nested entity has a unique index which identifies it inside its parent entity.
For example:
- Realm X --> Client Y --> Client Role Z
- Realm X --> Client Y --> Resource Server --> Resource Z
- Realm X --> User Y
- etc.
Hash code of each entity is computed based on its index coordinates within the model and its class name.
Each entity holds entity representation, and a list of mappings to other entities in the indexed model.
The attributes and mappings are initialized by a related *entity template* class.
Each entity class also acts as a wrapper around a Keycloak Admin Client using it
to provide CRUD operations for its entity.
The `id` attribute in the entity representation is set upon entity creation; if an already-initialized
entity has been evicted from the LRU cache it is reloaded from the server.
This may happen if the number of entities is larger than the entity cache size (see below).
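The idea of index-based, deterministic hash codes can be illustrated with a small sketch (this is not the actual generator code; `seed_for` is a hypothetical helper):

```
# Hypothetical sketch: derive a stable seed from an entity's class name
# and its index coordinates within the model. Identical coordinates
# always produce an identical seed, which is what makes the generated
# data deterministic.
seed_for() {
  # join all arguments and reduce them to a CRC via cksum
  printf '%s' "$*" | cksum | cut -d' ' -f1
}

s1=$(seed_for Realm 0 Client 3 ClientRole 7)
s2=$(seed_for Realm 0 Client 3 ClientRole 7)
[ "$s1" = "$s2" ] && echo "deterministic"
```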
### Attribute Templating
Attributes of each supported entity representation can be set via FreeMarker templates.
The process is based on templates defined in a properties configuration file.
The first template in the list can use the `index` of the entity and any attributes of its parent entity.
Each subsequent attribute template can use any previously set attribute values.
Note: The output of the FreeMarker engine is always a String. Conversion to the actual type
of the attribute is done with the Jackson 2.9+ parser using the `ObjectMapper.update()` method,
which allows gradual updates of an existing Java object.
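To illustrate the idea (the attribute names and template keys below are hypothetical, not taken from the actual template files): the first template can reference the entity's `index` and parent attributes, and later templates can reference attributes already set.
```
# hypothetical attribute templates
username=user_${index}
email=${username}@realm_${parent.index}.example.com
```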
### Randomness
Randomness in the indexed model is deterministic (pseudorandom) because the
random seeds are based on deterministic hash codes.
There are two types of seeds. One is for using randoms in the FreeMarker templates
via the methods `indexBasedRandomInt(int bound)` and `indexBasedRandomBool(int percentage)`;
it is based on the class of the current entity plus the hash code of its parent entity.
The other seed is for generating mappings to other entities, which are just
random sequences of integer indexes; it is based on the hash code of the current entity.
### Generator Settings
#### Timeouts
- `queue.timeout`: How long to wait for an entity to be processed by a thread-pool executor. Default is `60` seconds.
You might want to increase this setting when deleting many realms with many nested entities using a low number of workers.
- `shutdown.timeout`: How long to wait for the executor thread-pool to shut down. Default is `60` seconds.
#### Caching and Memory
- `template.cache.size`: Size of cache of FreeMarker template models. Default is `10000`.
- `randoms.cache.size`: Size of cache of random integer sequences which are used for mappings between entities. Default is `10000`.
- `entity.cache.size`: Size of cache of initialized entities. Default is `100000`.
- `max.heap`: Max heap size of the data generator JVM.
## Notes:
- Mappings are random so it can sometimes happen that the same mappings are generated multiple times.
Only distinct mappings are created.
This means, for example, that if you specify `realmRolesPerUser=5` it can happen
that only 4 or fewer roles will actually be mapped.
There is an option to use unique random sequences, but it is disabled right now
because checking for uniqueness is CPU-intensive.
- Mapping of client roles to a user is currently determined by a single parameter: `clientRolesPerUser`.
The actually created mappings -- each of which contains a specific client plus a set of its roles -- are created
based on a list of randomly selected client roles across all clients in the realm.
This means the count of the actual client mappings isn't predictable.
Making it predictable would require two parameters: `clientsPerUser` and `clientRolesPerClientPerUser`,
specifying how many clients a user has roles assigned from, and the number of roles per each of these clients.
- Number of resource servers depends on how the attribute `authorizationServicesEnabled`
is set for each client. This means the number isn't specified by any "perRealm" parameter.
If this is needed it can be implemented via a random mapping from a resource server entity
to a set of existing clients in a similar fashion to how a resource is selected for each resource permission.
- The "resource type" attribute for each resource and resource-based permission defaults to
the default type of the parent resource server.
If it's needed a separate abstract/non-persistable entity ResourceType can be created in the model
to represent a set of resource types. The "resource type" attributes can then be set based on random mappings into this set.
- Generating a large number of users can take a long time with the default realm settings,
which set the password hashing iterations to a default value of 27500.
If you wish to speed this process up, decrease the value of `hashIterations()` in the attribute `realm.passwordPolicy`.
Note that this will also significantly affect the performance results of the tests because
password hashing takes up a major part of the server's compute resources. The results may
improve by a factor of 10 or more when hashing is set to the minimum value of 1 iteration.
However, this comes at the expense of security.
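For instance, the hashing policy could be lowered like this (assuming `realm.passwordPolicy` is set as a dataset attribute, as referenced above; the exact value syntax is illustrative):
```
realm.passwordPolicy=hashIterations(1)
```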

View file

@ -1,35 +0,0 @@
# Keycloak Performance Testsuite - Docker Compose Provisioner
## Supported Deployments
| Deployment | Available Operations | Orchestration Template |
| ------------ | ----------------------------------------------------- | ------------------------------- |
| `singlenode` | `provision`, `teardown`, `export-dump`, `import-dump` | `docker-compose.yml` |
| `cluster` | `provision`, `teardown`, `export-dump`, `import-dump` | `docker-compose-cluster.yml`* |
| `crossdc` | `provision`, `teardown`, `export-dump`, `import-dump` | `docker-compose-crossdc.yml`* |
| `monitoring` | `provision`, `teardown` | `docker-compose-monitoring.yml` |
The docker-compose orchestration templates are located in `tests/src/main/docker-compose` directory.
**[*]** The cluster and crossdc templates are generated dynamically during the `provision` operation based on provided `cpusets` parameter.
One Keycloak service entry is generated for each cpuset. This is a workaround for limitations of the default docker-compose scaling mechanism
which only allows setting `cpuset` per service, not per container. For more predictable performance results it is necessary for each
Keycloak server to have exclusive access to specific CPU cores.
## Debugging docker containers:
- List started containers: `docker ps`. It's useful to watch continuously: `watch docker ps`.
To list compose-only containers use: `docker-compose ps`, but this doesn't show container health status.
- Watch logs of a specific container: `docker logs -f crossdc_mariadb_dc1_1`.
- Watch logs of all containers managed by docker-compose: `docker-compose logs -f`.
- List networks: `docker network ls`
- Inspect network: `docker network inspect NETWORK_NAME`. Shows network configuration and currently connected containers.
## Network Addresses
### KC
`10.i.1.0/24` One network per DC. For single-DC deployments `i = 0`; for cross-DC deployments `i ∈ {1, 2}` is the index of the particular DC.
### Load Balancing
`10.0.2.0/24` Network spans all DCs.
### DB Replication
`10.0.3.0/24` Network spans all DCs.
### ISPN Replication
`10.0.4.0/24` Network spans all DCs.

View file

@ -1,78 +0,0 @@
Example of using log-tool.sh
----------------------------
Perform the usual test run:
```
mvn verify -Pteardown
mvn verify -Pprovision
mvn verify -Pgenerate-data -Ddataset=100users -Dimport.workers=10 -DhashIterations=100
mvn verify -Ptest -Ddataset=100users -DusersPerSec=5 -DrampUpPeriod=10 -DuserThinkTime=0 -DbadLoginAttempts=1 -DrefreshTokenCount=1 -DmeasurementPeriod=60
```
Now analyze the generated simulation.log (adjust LOG_DIR, FROM, and TO):
```
LOG_DIR=$HOME/devel/keycloak/keycloak/testsuite/performance/tests/target/gatling/keycloaksimulation-1502735555123
```
Get general statistics about the run to help with deciding about the interval to extract:
```
tests/log-tool.sh -s -f $LOG_DIR/simulation.log
tests/log-tool.sh -s -f $LOG_DIR/simulation.log --lastRequest "Browser logout"
```
Set start and end times for the extraction, and create new directory for results:
```
FROM=1502735573285
TO=1502735581063
RESULT_DIR=tests/target/gatling/keycloaksimulation-$FROM\_$TO
mkdir $RESULT_DIR
```
Extract a portion of the original log, and inspect statistics of resulting log:
```
tests/log-tool.sh -f $LOG_DIR/simulation.log -o $RESULT_DIR/simulation-$FROM\_$TO.log -e --start $FROM --end $TO
tests/log-tool.sh -f $RESULT_DIR/simulation-$FROM\_$TO.log -s
```
Generate another set of reports from extracted log:
```
GATLING_HOME=$HOME/devel/gatling-charts-highcharts-bundle-2.1.7
cd $GATLING_HOME
bin/gatling.sh -ro $RESULT_DIR
```
Installing Gatling Highcharts 2.1.7
-----------------------------------
```
git clone http://github.com/gatling/gatling
cd gatling
git checkout v2.1.7
git checkout -b v2.1.7
sbt clean compile
sbt publishLocal publishM2
cd ..
git clone http://github.com/gatling/gatling-highcharts
cd gatling-highcharts/
git checkout v2.1.7
git checkout -b v2.1.7
sbt clean compile
sbt publishLocal publishM2
cd ..
unzip ~/.ivy2/local/io.gatling.highcharts/gatling-charts-highcharts-bundle/2.1.7/zips/gatling-charts-highcharts-bundle-bundle.zip
cd gatling-charts-highcharts-bundle-2.1.7
bin/gatling.sh
```

View file

@ -1,354 +0,0 @@
# Keycloak Performance Testsuite
## Requirements:
- Bash 2.05+
- Maven 3.5.4+
- Keycloak server distribution installed in the local Maven repository. To do this run `mvn install -Pdistribution` from the root of the Keycloak project.
### Docker Compose Provisioner
- Docker 1.13+
- Docker Compose 1.14+
## Getting started for the impatient
Here's how to perform a simple test run:
```
# Clone keycloak repository if you don't have it yet
# git clone https://github.com/keycloak/keycloak.git
# Build Keycloak distribution - needed to build docker image with latest Keycloak server
mvn clean install -DskipTests -Pdistribution
# Now build, provision and run the test
cd testsuite/performance
mvn clean install
# Make sure your Docker daemon is running THEN
mvn verify -Pprovision
mvn verify -Pgenerate-data -Ddataset=1r_10c_100u -DnumOfWorkers=10
mvn verify -Ptest -Ddataset=1r_10c_100u -DusersPerSec=2 -DrampUpPeriod=10 -DuserThinkTime=0 -DbadLoginAttempts=1 -DrefreshTokenCount=1 -DmeasurementPeriod=60 -DfilterResults=true
```
Now open the generated report in a browser - the link to the .html file is displayed at the end of the test.
After the test run you may want to tear down the docker instances so that the next run is able to import data:
```
mvn verify -Pteardown
```
You can perform all phases in a single run:
```
mvn verify -Pprovision,generate-data,test,teardown -Ddataset=1r_10c_100u -DnumOfWorkers=10 -DusersPerSec=4 -DrampUpPeriod=10
```
Note: The order in which maven profiles are listed does not determine the order in which profile-related plugins are executed. The `teardown` profile always executes last.
Keep reading for more information.
## Provisioning
### Provision
#### Provisioners
Depending on the target environment different provisioners may be used.
Provisioner can be selected via property `-Dprovisioner=PROVISIONER`.
Default value is `docker-compose` which is intended for testing on a local docker host.
This is currently the only implemented option. See [`README.docker-compose.md`](README.docker-compose.md) for more details.
#### Deployment Types
Different types of deployment can be provisioned.
The default deployment is `singlenode` with only a single instance of Keycloak server and a database.
Additional options are `cluster` and `crossdc` which can be enabled with a profile (see below).
#### Usage
Usage: `mvn verify -P provision[,DEPLOYMENT_PROFILE] [-Dprovisioning.properties=NAMED_PROPERTY_SET]`.
The properties are loaded from `tests/parameters/provisioning/${provisioning.properties}.properties` file.
Individual parameters can be overridden from the command line via `-D` params.
Default property set is `docker-compose/4cpus/singlenode`.
To load a custom properties file specify `-Dprovisioning.properties.file=ABSOLUTE_PATH_TO_FILE` instead of `-Dprovisioning.properties`.
This file needs to contain all properties required by the specific combination of provisioner and deployment type.
See examples in folder `tests/parameters/provisioning/docker-compose/4cpus`.
Available parameters are described in [`README.provisioning-parameters.md`](README.provisioning-parameters.md).
#### Examples:
- Provision a single-node deployment with docker-compose: `mvn verify -P provision`
- Provision a cluster deployment with docker-compose: `mvn verify -P provision,cluster`
- Provision a cluster deployment with docker-compose, overriding some properties: `mvn verify -P provision,cluster -Dkeycloak.scale=2 -Dlb.worker.task-max-threads=32`
- Provision a cross-DC deployment with docker-compose: `mvn verify -P provision,crossdc`
- Provision a cross-DC deployment with docker-compose using a custom properties file: `mvn verify -P provision,crossdc -Dprovisioning.properties.file=/tmp/custom-crossdc.properties`
#### Provisioned System
The `provision` operation will produce a `provisioned-system.properties` inside the `tests/target` directory
with information about the provisioned system such as the type of deployment and URLs of Keycloak servers and load balancers.
This information is then used by operations `generate-data`, `import-dump`, `test`, `teardown`.
The provisioning operation is idempotent for a specific combination of provisioner and deployment.
When run multiple times, the system will simply be updated based on the new parameters.
However, when switching between different provisioners or deployment types it is **always necessary**
to tear down the currently running system.
**Note:** When switching deployment type from `singlenode` or `cluster` to `crossdc` (or the other way around)
it is necessary to update the generated Keycloak server configuration (inside `keycloak/target` directory) by
adding a `clean` goal to the provisioning command like so: `mvn clean verify -Pprovision …`. It is *not* necessary to update this configuration
when switching between `singlenode` and `cluster` deployments.
#### Manual Provisioning
If you want to generate data or run the test against an already running instance of Keycloak server
you need to provide information about the system in a properties file.
Create file: `tests/target/provisioned-system.properties` with the following properties:
```
keycloak.frontend.servers=http://localhost:8080/auth
keycloak.admin.user=admin
keycloak.admin.password=admin
```
and replace the values with your actual information. Then it will be possible to run tasks: `generate-data` and `test`.
The tasks `export-dump`, `import-dump`, and `collect` (see below) are only available with the automated provisioning
because they require direct access to the provisioned services.
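Creating that file from the shell might look like this (using the illustrative values from the example above):

```
# write the manual-provisioning descriptor expected by the testsuite
mkdir -p tests/target
cat > tests/target/provisioned-system.properties <<'EOF'
keycloak.frontend.servers=http://localhost:8080/auth
keycloak.admin.user=admin
keycloak.admin.password=admin
EOF
```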
### Collect Artifacts
Usage: `mvn verify -Pcollect`
Collects artifacts such as logs from the provisioned system and stores them in `tests/target/collected-artifacts/${deployment}-TIMESTAMP/`.
When used in combination with teardown (see below) the artifacts are collected just before the system is torn down.
### Teardown
Usage: `mvn verify -Pteardown [-Dprovisioner=<PROVISIONER>]`
**Note:** Unless the provisioned system has been properly torn down the maven build will not allow a cleanup of the `tests/target` directory
because it contains the `provisioned-system.properties` with information about the still-running system.
## Testing
### Generate Test Data
Usage: `mvn verify -P generate-data [-Ddataset=NAMED_PROPERTY_SET] [-DnumOfWorkers=N]`. Workers default to `1`.
The parameters are loaded from `tests/src/test/resources/dataset/${dataset}.properties` file with `${dataset}` defaulting to `default`.
To use a custom properties file specify `-Ddataset.properties.file=ABSOLUTE_PATH_TO_FILE` instead of `-Ddataset`.
To generate data using a different version of Keycloak Admin Client set property `-Dserver.version=SERVER_VERSION` to match the version of the provisioned server.
To delete the generated dataset add `-Ddelete=true` to the above command. Dataset is deleted by deleting individual realms.
#### Examples:
- Generate the default dataset. `mvn verify -P generate-data`
- Generate the `1r_10c_100u` dataset. `mvn verify -P generate-data -Ddataset=1r_10c_100u`
#### Export Database
To export the generated data to a data-dump file enable profile `-P export-dump`. This will create a `${DATASET}.sql.gz` file next to the dataset properties file.
Example: `mvn verify -P generate-data,export-dump -Ddataset=1r_10c_100u`
#### Import Database
To import data from an existing data-dump file use profile `-P import-dump`.
Example: `mvn verify -P import-dump -Ddataset=1r_10c_100u`
If the dump file doesn't exist locally the script will attempt to download it from `${db.dump.download.site}` which defaults to `https://downloads.jboss.org/keycloak-qe/${server.version}`
with `server.version` defaulting to `${project.version}` from `pom.xml`.
**Warning:** Don't override dataset parameters (with `-Dparam=value`) when running export/import, because the contents of the dump file might then not match the properties file.
### Run Tests
Usage: `mvn verify -P test [-Dtest.properties=NAMED_PROPERTY_SET]`. Default property set is `oidc-login-logout`.
The parameters are loaded from `tests/parameters/test/${test.properties}.properties` file.
Individual properties can be overridden from the command line via `-D` params.
To use a custom properties file specify `-Dtest.properties.file=ABSOLUTE_PATH_TO_FILE` instead of `-Dtest.properties`.
#### Dataset
When running the tests it is necessary to define the dataset to be used.
| Parameter | Description | Default Value |
| --- | --- | --- |
| `dataset` | Name of the dataset to use. Individual parameters can be overridden from CLI. For details see the section above. | `default` |
| `sequentialRealmsFrom` | Use sequential realm iteration starting from specific index. Must be lower than `numOfRealms` parameter from dataset properties. Useful for user registration scenario. | `-1` random iteration |
| `sequentialUsersFrom` | Use sequential user iteration starting from specific index. Must be lower than `usersPerRealm` parameter from dataset properties. Useful for user registration scenario. | `-1` random iteration |
#### Common Test Run Parameters
| Parameter | Description | Default Value |
| --- | --- | --- |
| `gatling.simulationClass` | Classname of the simulation to be run. | `keycloak.OIDCLoginAndLogoutSimulation` |
| `usersPerSec` | Arrival rate of new users per second. Can be a floating point number. | `1.0` for OIDCLoginAndLogoutSimulation, `0.2` for AdminConsoleSimulation |
| `rampUpPeriod` | Period during which the users will be ramped up. (seconds) | `15` |
| `warmUpPeriod` | Period with steady number of users intended for the system under test to warm up. (seconds) | `15` |
| `measurementPeriod` | A measurement period after the system is warmed up. (seconds) | `30` |
| `filterResults` | Whether to filter out requests which are outside of the `measurementPeriod`. | `false` |
| `userThinkTime` | Pause between individual scenario steps. | `5` |
| `refreshTokenPeriod`| Period after which token should be refreshed. | `10` |
| `logoutPct`| Percentage of users who should log out at the end of scenario. | `100` |
| Test Assertion | Description | Default Value |
| --- | --- | --- |
| `maxFailedRequests`| Maximum number of failed requests. | `0` |
| `maxMeanReponseTime`| Maximum mean response time of all requests. | `300` |
#### Test Run Parameters specific to `OIDCLoginAndLogoutSimulation`
| Parameter | Description | Default Value |
| --- | --- | --- |
| `badLoginAttempts` | | `0` |
| `refreshTokenCount` | | `0` |
#### Examples:
- Run test with default test and dataset parameters:
`mvn verify -P test`
- Run test specific test and dataset parameters:
`mvn verify -P test -Dtest.properties=oidc-login-logout -Ddataset=1r_10c_100u`
- Run test with specific test and dataset parameters, overriding some from command line:
`mvn verify -P test -Dtest.properties=admin-console -Ddataset=1r_10c_100u -DrampUpPeriod=30 -DwarmUpPeriod=60 -DusersPerSec=0.3`
#### Running `OIDCRegisterAndLogoutSimulation`
Running the user registration simulation requires a different approach to the dataset and how it is iterated.
- It requires sequential iteration instead of the default random one.
- In case some users are already registered it requires starting the iteration from a specific index.
##### Example A:
1. Generate dataset with 0 users: `mvn verify -P generate-data -DusersPerRealm=0`
2. Run the registration test:
`mvn verify -P test -Dtest.properties=oidc-register-logout -DsequentialUsersFrom=0 -DusersPerRealm=<MAX_EXPECTED_REGISTRATIONS>`
##### Example B:
1. Generate or import dataset with 100 users: `mvn verify -P generate-data -Ddataset=1r_10c_100u`. This will create 1 realm and users 0-99.
2. Run the registration test starting from user 100:
`mvn verify -P test -Dtest.properties=oidc-register-logout -DsequentialUsersFrom=100 -DusersPerRealm=<MAX_EXPECTED_REGISTRATIONS>`
### Testing with HTTPS
If the provisioned server is secured with HTTPS it is possible to set the truststore which contains the server certificate.
The truststore is used in phases `generate-data` and `test`.
Usage: `mvn verify -P generate-data,test -DtrustStore=<PATH_TO_TRUSTSTORE> -DtrustStorePassword=<TRUSTSTORE_PASSWORD>`
To automatically generate the truststore file run a utility script `tests/create-truststore.sh HOST:PORT [TRUSTSTORE_PASSWORD]`.
The script requires `openssl` and `keytool` (included in JDK).
Example: `tests/create-truststore.sh localhost:8443 truststorepass`
## Monitoring
### JMX
To enable access to JMX on the WildFly-backed services set properties `management.user` and `management.user.password` during the provisioning phase.
#### JVisualVM
- Set `JBOSS_HOME` variable to point to a valid WildFly 10+ installation.
- Start JVisualVM with `jboss-client.jar` on classpath: `./jvisualvm --cp:a $JBOSS_HOME/bin/client/jboss-client.jar`.
- Add a local JMX connection: `service:jmx:remote+http://localhost:9990`. <sup>**[*]**</sup>
- Check "Use security credentials" and set `admin:admin`. (The default credentials can be overridden by providing the env. variables `DEBUG_USER` and `DEBUG_USER_PASSWORD` to the container.)
- Open the added connection.
**[*]** For `singlenode` this points to the JMX console of the Keycloak server.
To get the connection URLs for `cluster` or `crossdc` deployments see the JMX section in the generated `provisioned-system.properties` file.
- Property `keycloak.frontend.servers.jmx` contains JMX URLs of the Load Balancers.
- Property `keycloak.backend.servers.jmx` contains JMX URLs of the clustered Keycloak servers.
- Property `infinispan.servers.jmx` contains JMX URLs of the Infinispan servers, in Cross-DC deployment.
### Docker Monitoring
There is a docker-based solution for monitoring CPU, memory and network usage per container.
It uses the CAdvisor service to export container metrics into an InfluxDB time-series database, and the Grafana web app to query the DB and present the results as graphs.
- To enable run: `mvn verify -Pmonitoring`
- To disable run: `mvn verify -Pmonitoring-off[,delete-monitoring-data]`.
By default the monitoring history is preserved. If you wish to delete it enable the `delete-monitoring-data` profile when turning monitoring off.
To view monitoring dashboard open Grafana UI at: `http://localhost:3000/dashboard/file/resource-usage-combined.json`.
### Sysstat metrics
To enable recording of sysstat metrics use `-Psar`.
This will run the `sar` command during the test and process its binary output to produce textual and CSV files with CPU utilisation stats.
To also enable creation of PNG charts use `-Psar,gnuplot`. For this to work Gnuplot needs to be installed on the machine.
To compress the binary output with bzip add `-Dbzip=true` to the commandline.
Results will be stored in folder: `tests/target/sar`.
### JStat - JVM memory statistics
To enable jstat monitoring use `-Pjstat` option.
This will start a `jstat` process in each container with a WildFly-based service (Keycloak, Infinispan, load balancer)
and record the statistics in the `standalone/log/jstat-gc.log` file. These can be then collected by running the `mvn verify -Pcollect` operation.
To enable creation of PNG charts based on the jstat output use `-Pgnuplot`.
## Developing tests in IntelliJ IDEA
### Add scala support to IDEA
#### Install the correct Scala SDK
First you need to install the Scala SDK. In the Scala ecosystem it's very important that all libraries used are compatible with a specific version of Scala.
The Gatling version we use is built with Scala 2.11.7. To avoid conflicts between the Scala used by IDEA and the Scala dependencies in pom.xml,
it's very important to use that same version of the Scala SDK for development.
Thus, it's best to download and install [this SDK version](http://scala-lang.org/download/2.11.7.html).
#### Install IntelliJ's official Scala plugin
Open Preferences in IntelliJ. Type 'plugins' in the search box. In the right pane click on 'Install JetBrains plugin'.
Type 'scala' in the search box, and click Install button of the Scala plugin.
#### Run OIDCLoginAndLogoutSimulation from IntelliJ
Make sure that `performance` maven profile is enabled for IDEA to treat `performance` directory as a project module.
You may also need to rebuild the module in IDEA for scala objects to become available.
Then find the Engine object in the Project Explorer (you can use Ctrl-N / Cmd-O). Right-click on the class name and select Run or Debug as if it were
a JUnit test.
You'll have to edit the test configuration and set 'VM options' to a list of -Dkey=value pairs to override default configuration values in the TestConfig class.
Make sure to set 'Use classpath of module' to 'performance-test'.
When tests are executed via maven, the Engine object is not used. It exists only for running tests in IDE.
If test startup fails due to not being able to find the test classes try reimporting the 'performance' module from pom.xml (right click on 'performance' directory, select 'Maven' at the bottom of context menu, then 'Reimport')
If you want to run a different simulation - not DefaultSimulation - you can edit the Engine object's source, or create another Engine object for the other simulation.
## Troubleshoot
### Verbose logging
You can find the `logback-test.xml` file in the `tests/src/test/resources` directory. This file contains the logging configuration in Logback XML format.
The root logger is set to WARN by default, but if you want to increase verbosity you can change it to DEBUG or INFO.
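Raising verbosity then amounts to changing the root logger's level in that file, roughly like this (a sketch; the real file also defines appenders, and the `STDOUT` appender name here is an assumption):
```
<configuration>
    <!-- change WARN to DEBUG or INFO for more verbose logging -->
    <root level="DEBUG">
        <appender-ref ref="STDOUT" />
    </root>
</configuration>
```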

View file

@ -1,132 +0,0 @@
# Keycloak Performance Testsuite - Provisioning Parameters
## Overview of Provisioned Services
### Testing
| Deployment | Keycloak Server | Database | Load Balancer | Infinispan Server |
|-----------------|------------------------------------------|--------------------|--------------------|--------------------|
| *Singlenode* | 1 instance | 1 instance | - | - |
| *Cluster* | N instances | 1 instance | 1 instance | - |
| *Cross-DC* | K instances in DC1 + L instances in DC2 | 1 instance per DC | 1 instance per DC | 1 instance per DC |
### Monitoring
| Deployment | CAdvisor | Influx DB | Grafana |
|-----------------|-------------|-------------|-------------|
| *Monitoring* | 1 instance | 1 instance | 1 instance |
## Service Parameters
### Keycloak Server
| Category | Setting | Property | Default Value |
|-------------|-------------------------------|------------------------------------|--------------------------------------------------------------------|
| Keycloak | Server version | `server.version` | `${project.version}` from the project `pom.xml` file. |
| | Admin user | `keycloak.admin.user` | `admin` |
| | Admin user's password | `keycloak.admin.password` | `admin` |
| Scaling<sup>[1]</sup> | Scale for cluster | `keycloak.scale` | Maximum size<sup>[2]</sup> of cluster. |
| | Scale for DC1 | `keycloak.dc1.scale` | Maximum size of DC1. |
| | Scale for DC2 | `keycloak.dc2.scale` | Maximum size of DC2. |
| Docker | Allocated CPUs | `keycloak.docker.cpusets` | `2-3` for singlenode, `2 3` for cluster deployment |
| | Allocated CPUs for DC1 | `keycloak.dc1.docker.cpusets` | `2` |
| | Allocated CPUs for DC2 | `keycloak.dc2.docker.cpusets` | `3` |
| | Available memory | `keycloak.docker.memlimit` | `2500m` |
| JVM | Memory settings | `keycloak.jvm.memory` | `-Xms64m -Xmx2g -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m` |
| Undertow | HTTP Listener max connections | `keycloak.http.max-connections` | `50000` |
| | AJP Listener max connections | `keycloak.ajp.max-connections` | `50000` |
| IO | Worker IO thread pool | `keycloak.worker.io-threads` | `2` |
| | Worker Task thread pool | `keycloak.worker.task-max-threads` | `16` |
| Datasources | Connection pool min size | `keycloak.ds.min-pool-size` | `10` |
| | Connection pool max size | `keycloak.ds.max-pool-size` | `100` |
| | Connection pool prefill | `keycloak.ds.pool-prefill` | `true` |
| | Prepared statement cache size | `keycloak.ds.ps-cache-size` | `100` |
**[ 1 ]** The scaling parameters are optional. They can be set within the interval from 1 to the maximum cluster size.
If not set they are automatically set to the maximum size of the cluster (DC1/DC2 respectively).
**[ 2 ]** Maximum cluster size is determined by provisioner-specific parameter such as `keycloak.docker.cpusets` for the default *docker-compose* provisioner.
The maximum cluster size corresponds to the number of cpusets.
### Database
| Category | Setting | Property | Default Value |
|-------------|-------------------------------|------------------------------------|--------------------------------------------------------------------|
| Docker | Allocated CPUs | `db.docker.cpusets` | `1` |
| | Allocated CPUs for DC1 | `db.dc1.docker.cpusets` | `1` |
| | Allocated CPUs for DC2 | `db.dc2.docker.cpusets` | `1` |
| | Available memory | `db.docker.memlimit` | `2g` |
### Load Balancer
| Category | Setting | Property | Default Value |
|-------------|-------------------------------|------------------------------|---------------------------------------------------------------------|
| Docker | Allocated CPUs | `lb.docker.cpusets` | `1` |
| | Allocated CPUs for DC1 | `lb.dc1.docker.cpusets` | `1` |
| | Allocated CPUs for DC2 | `lb.dc2.docker.cpusets` | `1` |
| | Available memory | `lb.docker.memlimit` | `1500m` |
| JVM | Memory settings | `lb.jvm.memory` | `-Xms64m -Xmx1024m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m` |
| Undertow | HTTP Listener max connections | `lb.http.max-connections` | `50000` |
| IO | Worker IO thread pool | `lb.worker.io-threads` | `2` |
| | Worker Task thread pool | `lb.worker.task-max-threads` | `16` |
### Infinispan Server
| Category | Setting | Property | Default Value |
|-------------|-------------------------------|---------------------------------|-------------------------------------------------------------------------------------------|
| Docker | Allocated CPUs for DC1 | `infinispan.dc1.docker.cpusets` | `1` |
| | Allocated CPUs for DC2 | `infinispan.dc2.docker.cpusets` | `1` |
| | Available memory | `infinispan.docker.memlimit` | `1500m` |
| JVM | Memory settings | `infinispan.jvm.memory` | `-Xms64m -Xmx1g -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -XX:+DisableExplicitGC` |
### Monitoring
| Category | Setting | Property | Default Value |
|-------------|-------------------------------|-----------------------------|-----------------|
| Docker | Allocated CPUs | `monitoring.docker.cpusets` | `0` |
| JMX | Management user | `management.user` | Not set. |
| | Management user's password | `management.user.password` | Not set. |
By setting the `management.user` and `management.user.password` parameters it is possible
to add a management user to all WildFly-backed services (*Keycloak Server*, *Infinispan Server* and the *Load Balancer*).
Unless both parameters are explicitly provided during the provisioning phase, the user will not be added,
and it won't be possible to log into the management console or access JMX.
## Note on Docker settings
By default, 4 CPU cores are allocated: core 0 for monitoring, core 1 for the database (MariaDB), and cores 2 and 3 for the Keycloak server.
The default memory limit for both the database and the Keycloak server is 2g. The `cpuset` and `memlimit` parameters described here map to the
`cpuset` and `mem_limit` parameters of the docker-compose configuration; see the docker-compose documentation for their exact meaning. Setting them
correctly depends on a number of factors (number of CPU cores, NUMA topology, available memory, etc.) and is therefore out of scope for this document.
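In docker-compose (version 2.x) terms the mapping looks roughly like this illustrative fragment:

```yaml
services:
  keycloak:
    cpuset: "2-3"      # from keycloak.docker.cpusets
    mem_limit: 2500m   # from keycloak.docker.memlimit
```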
### Example CPU Settings
| HW | Development Machine | "Fat Box" |
|-------------|----------------------|--------------|
| Cores | 4 | 48 |
| NUMA Nodes | 0-3 | 0-23, 24-47 |
#### Cluster
| Setting | Development Machine | "Fat Box" |
|------------------------------------|----------------------|-----------------------------|
| `monitoring.docker.cpusets` | 0 | 0 |
| `db.docker.cpusets` | 1 | 1 |
| `lb.docker.cpusets` | 1 | 2 |
| `keycloak.docker.cpusets` | 2-3 | 3-6 7-10 11-16 … 43-46 |
#### Cross-DC
| Setting | Development Machine | "Fat Box" |
|------------------------------------|----------------------|--------------------------------|
| `monitoring.docker.cpusets` | 0 | 0 |
| `db.dc1.docker.cpusets` | 1 | 1 |
| `lb.dc1.docker.cpusets` | 1 | 2 |
| `infinispan.dc1.docker.cpusets` | 1 | 3 |
| `keycloak.dc1.docker.cpusets` | 2 | 4-7 8-11 12-15 16-19 20-23 |
| `db.dc2.docker.cpusets` | 1 | 24 |
| `lb.dc2.docker.cpusets` | 1 | 25 |
| `infinispan.dc2.docker.cpusets` | 1 | 26 |
| `keycloak.dc2.docker.cpusets` | 3 | 27-30 31-34 35-38 39-42 43-46 |


@ -1,79 +0,0 @@
# Keycloak Performance Testsuite - Stress Testing
## Requirements
- Bash
- `bc`: Arbitrary precision calculator.
## Stress Test
The performance testsuite contains a stress-testing script: `stress-test.sh`.
The stress test is implemented as a loop of individual performance test runs.
The script supports two algorithms:
- incremental (default)
- bisection
The *incremental algorithm* loop starts from a base load and then increases the load by a specified amount in each iteration.
The loop ends when a performance test fails, or when the maximum number of iterations is reached.
The *bisection algorithm* loop has a lower and an upper bound, and a resolution parameter.
In each iteration the middle of the interval is used as a value for the performance test load.
If the test passes, the upper half of the interval is used for the next iteration; if it fails, the lower half is used.
The loop ends when the size of the interval drops below the specified resolution, or when the maximum number of iterations is reached.
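The bisection loop can be sketched in Bash as follows. This is an illustrative sketch with integer loads and a stand-in pass/fail predicate; the actual `stress-test.sh` uses `bc` for fractional values and runs a real performance test in each iteration.

```shell
#!/bin/bash
# Bisection sketch: find the highest load that still passes the test.
lower=0; upper=10; resolution=1; max_iterations=10
passes() { [ "$1" -le 6 ]; }   # stand-in for a real performance test run
i=0
while [ $((upper - lower)) -gt "$resolution" ] && [ "$i" -lt "$max_iterations" ]; do
  mid=$(( (lower + upper) / 2 ))
  # If the test passes at mid, the answer lies in the upper half [mid, upper];
  # otherwise it lies in the lower half [lower, mid].
  if passes "$mid"; then lower=$mid; else upper=$mid; fi
  i=$((i + 1))
done
echo "highest passing load: $lower users/sec"
```

With these bounds the loop narrows the interval to [6, 7] in three iterations and reports 6 as the highest passing load.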
## Usage
```
export PARAMETER1=value1
export PARAMETER2=value2
...
stress-test.sh [-DadditionalTestsuiteParam1=value1 -DadditionalTestsuiteParam2=value2 ...]
```
## Parameters
### Script Execution Parameters
| Variable | Description | Default Value |
| --- | --- | --- |
| `MVN` | The base Maven command to be used. | `mvn` |
| `KEYCLOAK_PROJECT_HOME` | Root directory of the Keycloak project. | Root directory relative to the location of the `stress-test.sh` script. |
| `DRY_RUN` | Don't execute performance tests. Only print out execution information for each iteration. | `false` |
### Performance Testsuite Parameters
| Variable | Description | Default Value |
| --- | --- | --- |
| `DATASET` | Dataset to be used. | `1r_10c_100u` |
| `WARMUP_PERIOD` | Value of `warmUpPeriod` testsuite parameter. | `120` seconds |
| `RAMPUP_PERIOD` | Value of `rampUpPeriod` testsuite parameter. | `60` seconds |
| `MEASUREMENT_PERIOD` | Value of `measurementPeriod` testsuite parameter. | `120` seconds |
| `FILTER_RESULTS` | Value of `filterResults` testsuite parameter. Should be enabled. | `true` |
| `@` | Any parameters provided to the `stress-test.sh` script will be passed to the performance testsuite. Optional. | |
### Stress Test Parameters
| Variable | Description | Default Value |
| --- | --- | --- |
| `STRESS_TEST_ALGORITHM` | Stress test loop algorithm: `incremental` or `bisection`. | `incremental` |
| `STRESS_TEST_MAX_ITERATIONS` | Maximum number of stress test loop iterations. | `10` iterations |
| `STRESS_TEST_PROVISIONING` | Should the system be re-provisioned in each iteration? If enabled the dataset DB dump is re-imported and the warmup is run in each iteration. | `false` |
| `STRESS_TEST_PROVISIONING_GENERATE_DATASET` | Should the dataset be generated, instead of imported from DB dump? | `false` |
| `STRESS_TEST_PROVISIONING_PARAMETERS` | Additional parameters for the provisioning command. Optional. | |
#### Incremental Algorithm
| Variable | Description | Default Value |
| --- | --- | --- |
| `STRESS_TEST_UPS_FIRST` | Value of `usersPerSec` parameter in the first iteration. | `1.000` users per second |
| `STRESS_TEST_UPS_INCREMENT` | Increment of `usersPerSec` parameter for each subsequent iteration. | `1.000` users per second |
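With the defaults above, the load used in iteration *N* works out as follows (integer math for brevity; the script itself uses `bc` to support decimal values):

```shell
# usersPerSec in iteration N = UPS_FIRST + (N - 1) * UPS_INCREMENT
ups_first=1
ups_increment=1
iteration=4
load=$(( ups_first + (iteration - 1) * ups_increment ))
echo "usersPerSec in iteration $iteration: $load"   # prints 4
```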
#### Bisection Algorithm
| Variable | Description | Default Value |
| --- | --- | --- |
| `STRESS_TEST_UPS_LOWER_BOUND` | Lower bound of `usersPerSec` parameter. | `0.000` users per second |
| `STRESS_TEST_UPS_UPPER_BOUND` | Upper bound of `usersPerSec` parameter. | `10.000` users per second |
| `STRESS_TEST_UPS_RESOLUTION` | Required resolution of the bisection algorithm. | `1.000` users per second |
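A hypothetical bisection run could be configured like this before invoking the script as shown in the Usage section (all values are illustrative):

```shell
export STRESS_TEST_ALGORITHM=bisection
export STRESS_TEST_UPS_LOWER_BOUND=0.000
export STRESS_TEST_UPS_UPPER_BOUND=10.000
export STRESS_TEST_UPS_RESOLUTION=1.000
# ./stress-test.sh   # not executed here
```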


@ -1,67 +0,0 @@
# DB Failover Testing Utilities
A set of scripts for testing DB failover scenarios.
The scripts expect to be run with relative execution prefix `./` from within the `db-failover` directory.
Provisioned services are defined in `../docker-compose-db-failover.yml` template.
## Set the size of DB cluster
The default size is 2 nodes. For a 3-node cluster, run `export NODES=3` before executing any of the scripts.
For clusters larger than 3 nodes, additional service definitions need to be added to the docker-compose template.
## Set up the environment
Run `./setup.sh`
This script will:
1. Start a bootstrap DB instance of MariaDB cluster
2. Start additional DB instances connected to the bootstrapped cluster
3. Stop the bootstrap DB instance
4. Optionally start Keycloak server
Parameterized by environment variables:
- `MARIADB_HA_MODE` See: [MariaDB HA parameters](https://mariadb.com/kb/en/library/failover-and-high-availability-with-mariadb-connector-j/#failover-high-availability-parameters)
Defaults to `replication:`.
- `MARIADB_OPTIONS` See: [MariaDB HA options](https://mariadb.com/kb/en/library/failover-and-high-availability-with-mariadb-connector-j/#failover-high-availability-options).
Use format: `?option1=value1[&option2=value2]...`. Default is an empty string.
- `START_KEYCLOAK` Default is `false`. Use `export START_KEYCLOAK=true` to enable.
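For example, a hypothetical environment could be set up like this before running `./setup.sh` (the option names shown come from the MariaDB Connector/J documentation linked above; verify them there before use):

```shell
export MARIADB_HA_MODE="replication:"
export MARIADB_OPTIONS="?connectTimeout=5000&socketTimeout=10000"   # illustrative options
export START_KEYCLOAK=true
# ./setup.sh   # not executed here
```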
More options relevant to MariaDB clustering can be found in `../db/mariadb/wsrep.cnf`.
## Test the failover
### Manual failover
To induce a failure of specific DB node run: `./kill-node.sh X` where `X ∈ {1..3}`
To reconnect the node back run: `./reconnect-node.sh X`
### Automated failover loop
Run `./loop.sh`
This script will run an infinite loop of failover/failback of DB nodes, switching to the next node in each loop.
Parameterized by environment variables:
- `TIME_BETWEEN_FAILURES` Default is `60` (seconds).
- `FAILURE_DURATION` Default is `60` (seconds).
To exit the script press `Ctrl+C`.
### Check number of table rows across the cluster
Run: `./check-rows.sh`
## Tear down the environment
Run `./teardown.sh`
This will stop all services and delete the database.


@ -1,22 +0,0 @@
#!/bin/bash
. ./common.sh
CONCAT_SQL=$(< db-failover/concat.sql)
CONCAT_SQL_COMMAND='mysql -N -B -u keycloak --password=keycloak -e "$CONCAT_SQL" keycloak'
# concat.sql generates one "SELECT ... UNION " clause per table in the keycloak
# schema; strip the trailing "UNION " to obtain a runnable row-count query.
ROWS_SQL=$(eval docker-compose -f docker-compose-db-failover.yml exec mariadb_1 $CONCAT_SQL_COMMAND | tr -dc '[:print:]')
ROWS_SQL=${ROWS_SQL%UNION }
ROWS_SQL_COMMAND='mysql -u keycloak --password=keycloak -e "$ROWS_SQL" keycloak'
for (( i=1; i <= $NODES; i++)); do
ROWS[i]=$(eval docker-compose -f docker-compose-db-failover.yml exec mariadb_$i $ROWS_SQL_COMMAND)
done
DIFF=0
for (( i=2; i <= $NODES; i++)); do
echo Node 1 vs Node $(( i )):
diff -y --suppress-common-lines <(echo "${ROWS[1]}") <(echo "${ROWS[i]}")
if [ $? -eq 0 ]; then echo No difference.; else DIFF=1; fi
done
exit $DIFF


@ -1,18 +0,0 @@
function killNode {
echo Killing mariadb_${1}
docker-compose -f docker-compose-db-failover.yml kill mariadb_${1}
}
function reconnectNode {
N=$1
NR=$(( N + 1 )); if [ "$NR" -gt "$NODES" ]; then NR=1; fi
export MARIADB_RUNNING_HOST=mariadb_${NR}
echo Attempting failback of mariadb_${N}, connecting to running cluster member mariadb_${NR}
docker-compose -f docker-compose-db-failover.yml up -d mariadb_${N}
}
if [ -z "$NODES" ]; then export NODES=2; fi
if [ -z "$MARIADB_OPTIONS" ]; then export MARIADB_OPTIONS=""; fi
if [ -z "$START_KEYCLOAK" ]; then export START_KEYCLOAK=false; fi
cd ..


@ -1,3 +0,0 @@
SELECT CONCAT(
'SELECT "', table_name, '" AS table_name, COUNT(*) AS exact_row_count FROM `', table_schema, '`.`', table_name, '` UNION '
) FROM INFORMATION_SCHEMA.TABLES WHERE table_schema = 'keycloak';


@ -1,7 +0,0 @@
#!/bin/bash
. ./common.sh
if [ -z "$1" ]; then echo "Specify DB node to kill."; exit 1; fi
killNode $1


@ -1,35 +0,0 @@
#!/bin/bash
. ./common.sh
if [ -z "$TIME_BETWEEN_FAILURES" ]; then export TIME_BETWEEN_FAILURES=60; fi
if [ -z "$FAILURE_DURATION" ]; then export FAILURE_DURATION=60; fi
echo Running DB failover loop with the following parameters:
echo NODES=$NODES
echo TIME_BETWEEN_FAILURES=$TIME_BETWEEN_FAILURES
echo FAILURE_DURATION=$FAILURE_DURATION
echo
echo Press Ctrl+C to interrupt.
echo
N=1
while :
do
killNode $N
echo Waiting $FAILURE_DURATION s before attempting to reconnect mariadb_${N}
sleep $FAILURE_DURATION
reconnectNode $N
echo Waiting $TIME_BETWEEN_FAILURES s before inducing another failure.
echo
sleep $TIME_BETWEEN_FAILURES
N=$((N+1))
if [ "$N" -gt "$NODES" ]; then N=1; fi
done


@ -1,7 +0,0 @@
#!/bin/bash
. ./common.sh
if [ -z "$1" ]; then echo "Specify DB node to reconnect to cluster."; exit 1; fi
reconnectNode $1


@ -1,38 +0,0 @@
#!/bin/bash
. ./common.sh
if [ -z "$DB_BOOTSTRAP_TIMEOUT" ]; then DB_BOOTSTRAP_TIMEOUT=10; fi
if [ -z "$DB_JOIN_TIMEOUT" ]; then DB_JOIN_TIMEOUT=5; fi
echo Setting up Keycloak DB failover environment:
echo Starting DB bootstrap instance.
docker-compose -f docker-compose-db-failover.yml up -d --build mariadb_bootstrap
echo Waiting $DB_BOOTSTRAP_TIMEOUT s for the DB to initialize.
sleep $DB_BOOTSTRAP_TIMEOUT
MARIADB_HOSTS=""
for (( i=1; i<=$NODES; i++ )); do
MARIADB_HOSTS=$MARIADB_HOSTS,mariadb_$i:3306
echo Starting DB node $i.
docker-compose -f docker-compose-db-failover.yml up -d mariadb_$i
echo Waiting $DB_JOIN_TIMEOUT s for the DB node to join
echo
sleep $DB_JOIN_TIMEOUT
done
echo Turning off the DB bootstrap instance.
docker-compose -f docker-compose-db-failover.yml stop mariadb_bootstrap
export MARIADB_HOSTS=${MARIADB_HOSTS/,/}
echo MARIADB_HOSTS=$MARIADB_HOSTS
if $START_KEYCLOAK; then
echo Starting Keycloak server.
docker-compose -f docker-compose-db-failover.yml up -d --build keycloak
./healthcheck.sh
fi


@ -1,8 +0,0 @@
#!/bin/bash
. ./common.sh
echo Stopping Keycloak DB failover environment.
docker-compose -f docker-compose-db-failover.yml down -v


@ -1,14 +0,0 @@
FROM mariadb:10.3.3
ARG MAX_CONNECTIONS=100
ADD wsrep.cnf.template /etc/mysql/conf.d/
RUN sed -e s/@MAX_CONNECTIONS@/$MAX_CONNECTIONS/ /etc/mysql/conf.d/wsrep.cnf.template > /etc/mysql/conf.d/wsrep.cnf; cat /etc/mysql/conf.d/wsrep.cnf
ADD mariadb-healthcheck.sh /usr/local/bin/
RUN chmod -v +x /usr/local/bin/mariadb-healthcheck.sh
HEALTHCHECK --interval=5s --timeout=5s --retries=12 CMD ["mariadb-healthcheck.sh"]
ENV DATADIR /var/lib/mysql
ADD docker-entrypoint-wsrep.sh /usr/local/bin/
RUN chmod -v +x /usr/local/bin/docker-entrypoint-wsrep.sh


@ -1 +0,0 @@


@ -1,22 +0,0 @@
#!/bin/bash
echo "docker-entrypoint-wsrep.sh: $1"
if [ "$1" == "--wsrep-new-cluster" ]; then
echo "docker-entrypoint-wsrep.sh: Cluster 'bootstrap' node."
else
echo "docker-entrypoint-wsrep.sh: Cluster 'member' node."
if [ ! -d "$DATADIR/mysql" ]; then
echo "docker-entrypoint-wsrep.sh: Creating empty datadir to be populated from the cluster 'seed' node."
mkdir -p "$DATADIR/mysql"
chown -R mysql:mysql "$DATADIR"
fi
fi
echo "docker-entrypoint-wsrep.sh: Delegating to original mariadb docker-entrypoint.sh"
docker-entrypoint.sh "$@";


@ -1,25 +0,0 @@
#!/bin/bash
set -eo pipefail
if [ "$MYSQL_RANDOM_ROOT_PASSWORD" ] && [ -z "$MYSQL_USER" ] && [ -z "$MYSQL_PASSWORD" ]; then
# there's no way we can guess what the random MySQL password was
echo >&2 'healthcheck error: cannot determine random root password (and MYSQL_USER and MYSQL_PASSWORD were not set)'
exit 0
fi
host="$(hostname --ip-address || echo '127.0.0.1')"
user="${MYSQL_USER:-root}"
export MYSQL_PWD="${MYSQL_PASSWORD:-$MYSQL_ROOT_PASSWORD}"
args=(
# force mysql to not use the local "mysqld.sock" (test "external" connectibility)
-h"$host"
-u"$user"
--silent
)
if select="$(echo 'SELECT 1' | mysql "${args[@]}")" && [ "$select" = '1' ]; then
exit 0
fi
exit 1


@ -1,30 +0,0 @@
[mysqld]
general_log=OFF
bind-address=0.0.0.0
innodb_flush_log_at_trx_commit=0
query_cache_size=0
query_cache_type=0
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
innodb_doublewrite=1
innodb_buffer_pool_size=122M
max_connections=@MAX_CONNECTIONS@
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_provider_options="gcache.size=300M; gcache.page_size=300M"
#wsrep_provider_options="gcache.size=300M; gcache.page_size=300M; pc.bootstrap=YES"
#wsrep_provider_options="gcache.size=300M; gcache.page_size=300M; pc.bootstrap=YES; pc.ignore_sb=TRUE"
# See: http://galeracluster.com/documentation-webpages/twonode.html
wsrep_cluster_address="gcomm://"
wsrep_cluster_name="galera_cluster_keycloak"
wsrep_sst_method=rsync
#wsrep_sst_auth="root:root"
wsrep_slave_threads=16


@ -1,101 +0,0 @@
version: "2.2"
networks:
keycloak:
ipam:
config:
- subnet: 10.0.1.0/24
db_replication:
ipam:
config:
- subnet: 10.0.3.0/24
services:
mariadb_bootstrap:
build: db/mariadb
image: keycloak_test_mariadb:${KEYCLOAK_VERSION:-latest}
networks:
- db_replication
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_INITDB_SKIP_TZINFO: foo
MYSQL_DATABASE: keycloak
MYSQL_USER: keycloak
MYSQL_PASSWORD: keycloak
entrypoint: docker-entrypoint-wsrep.sh
command: --wsrep-new-cluster
mariadb_1:
build: db/mariadb
image: keycloak_test_mariadb:${KEYCLOAK_VERSION:-latest}
networks:
- db_replication
- keycloak
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_INITDB_SKIP_TZINFO: foo
entrypoint: docker-entrypoint-wsrep.sh
command: --wsrep_cluster_address=gcomm://${MARIADB_RUNNING_HOST:-mariadb_bootstrap}
ports:
- "3316:3306"
mariadb_2:
build: db/mariadb
image: keycloak_test_mariadb:${KEYCLOAK_VERSION:-latest}
networks:
- db_replication
- keycloak
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_INITDB_SKIP_TZINFO: foo
entrypoint: docker-entrypoint-wsrep.sh
command: --wsrep_cluster_address=gcomm://${MARIADB_RUNNING_HOST:-mariadb_bootstrap}
ports:
- "3326:3306"
mariadb_3:
build: db/mariadb
image: keycloak_test_mariadb:${KEYCLOAK_VERSION:-latest}
networks:
- db_replication
- keycloak
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_INITDB_SKIP_TZINFO: foo
entrypoint: docker-entrypoint-wsrep.sh
command: --wsrep_cluster_address=gcomm://${MARIADB_RUNNING_HOST:-mariadb_bootstrap}
ports:
- "3336:3306"
keycloak:
build: keycloak/target/docker
image: keycloak_test_keycloak:${KEYCLOAK_VERSION:-latest}
networks:
- keycloak
environment:
MARIADB_HA_MODE: ${MARIADB_HA_MODE:-replication:}
MARIADB_HOSTS: ${MARIADB_HOSTS:-mariadb_1:3306,mariadb_2:3306}
MARIADB_OPTIONS: ${MARIADB_OPTIONS}
MARIADB_DATABASE: keycloak
MARIADB_USER: keycloak
MARIADB_PASSWORD: keycloak
KEYCLOAK_ADMIN_USER: admin
KEYCLOAK_ADMIN_PASSWORD: admin
# docker-compose syntax note: ${ENV_VAR:-<DEFAULT_VALUE>}
JAVA_OPTS: ${KEYCLOAK_JVM_MEMORY:--Xms64m -Xmx2g -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m} -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true
HTTP_MAX_CONNECTIONS: ${KEYCLOAK_HTTP_MAX_CONNECTIONS:-500}
WORKER_IO_THREADS: ${KEYCLOAK_WORKER_IO_THREADS:-2}
WORKER_TASK_MAX_THREADS: ${KEYCLOAK_WORKER_TASK_MAX_THREADS:-16}
DS_MIN_POOL_SIZE: ${KEYCLOAK_DS_MIN_POOL_SIZE:-10}
DS_MAX_POOL_SIZE: ${KEYCLOAK_DS_MAX_POOL_SIZE:-100}
DS_POOL_PREFILL: "${KEYCLOAK_DS_POOL_PREFILL:-true}"
DS_PS_CACHE_SIZE: ${KEYCLOAK_DS_PS_CACHE_SIZE:-100}
ports:
- "8080:8080"
- "8443:8443"
- "7989:7989"
- "9990:9990"
- "9999:9999"


@ -1,109 +0,0 @@
<project name="infinispan" basedir="." >
<target name="download-infinispan">
<property environment="env"/>
<property name="infinispan.zip.filename" value="infinispan-server-${infinispan.version}.zip"/>
<property name="infinispan.zip.src" value="https://downloads.jboss.org/infinispan/${infinispan.version}"/>
<property name="infinispan.zip.dir" value="${env.HOME}/.m2/repository/org/infinispan/server/infinispan-server/${infinispan.version}"/>
<mkdir dir="${infinispan.zip.dir}"/>
<get
src="${infinispan.zip.src}/${infinispan.zip.filename}"
dest="${infinispan.zip.dir}/"
usetimestamp="true"
verbose="true"
/>
<echo file="${infinispan.zip.dir}/info.txt" append="false">This artifact was downloaded from: ${infinispan.zip.src}/${infinispan.zip.filename}</echo>
<unzip
src="${infinispan.zip.dir}/${infinispan.zip.filename}" dest="${project.build.directory}"
overwrite="false"
/>
</target>
<target name="check-configuration-state" >
<available property="configured" file="${infinispan.unpacked.home}/../configured"/>
<available property="management.configured" file="${infinispan.unpacked.home}/../management-configured"/>
<echo>configured: ${configured}</echo>
<echo>management.configured: ${management.configured}</echo>
</target>
<target name="configure-infinispan" unless="configured" depends="check-configuration-state">
<!-- configuration common for both DC sites-->
<chmod perm="ug+x">
<fileset dir="${infinispan.unpacked.home}/bin">
<include name="*.sh"/>
</fileset>
</chmod>
<copy file="${scripts.dir}/jboss-cli/add-private-network-interface.cli" todir="${infinispan.unpacked.home}/bin" />
<exec executable="./${jboss.cli.script}" dir="${infinispan.unpacked.home}/bin" failonerror="true">
<arg value="--file=add-private-network-interface.cli"/>
</exec>
<!-- DC-specific configuration (dc1 and dc2) -->
<copy file="${infinispan.unpacked.home}/standalone/configuration/clustered.xml"
tofile="${infinispan.unpacked.home}/standalone/configuration/clustered-dc1.xml" />
<move file="${infinispan.unpacked.home}/standalone/configuration/clustered.xml"
tofile="${infinispan.unpacked.home}/standalone/configuration/clustered-dc2.xml" />
<copy file="${scripts.dir}/jboss-cli/add-keycloak-caches.cli" tofile="${infinispan.unpacked.home}/bin/add-keycloak-caches-dc1.cli" >
<filterset>
<filter token="LOCAL_SITE" value="dc1"/>
<filter token="REMOTE_SITE" value="dc2"/>
</filterset>
</copy>
<copy file="${scripts.dir}/jboss-cli/add-keycloak-caches.cli" tofile="${infinispan.unpacked.home}/bin/add-keycloak-caches-dc2.cli" >
<filterset>
<filter token="LOCAL_SITE" value="dc2"/>
<filter token="REMOTE_SITE" value="dc1"/>
</filterset>
</copy>
<exec executable="./${jboss.cli.script}" dir="${infinispan.unpacked.home}/bin" failonerror="true">
<arg value="--file=add-keycloak-caches-dc1.cli"/>
</exec>
<exec executable="./${jboss.cli.script}" dir="${infinispan.unpacked.home}/bin" failonerror="true">
<arg value="--file=add-keycloak-caches-dc2.cli"/>
</exec>
<!--cleanup-->
<delete dir="${infinispan.unpacked.home}/standalone/configuration/standalone_xml_history"/>
<delete dir="${infinispan.unpacked.home}/standalone/log"/>
<delete dir="${infinispan.unpacked.home}/standalone/data"/>
<delete dir="${infinispan.unpacked.home}/standalone/tmp"/>
<replace file="${infinispan.unpacked.home}/bin/standalone.sh">
<replacetoken><![CDATA[JBOSS_PID=$!]]></replacetoken>
<replacevalue><![CDATA[JBOSS_PID=$!
if [ "$JSTAT" = "true" ] ; then
echo "Starting jstat"
mkdir -p $JBOSS_LOG_DIR
jstat -gc -t $JBOSS_PID 1000 > $JBOSS_LOG_DIR/jstat-gc.log &
fi
]]></replacevalue>
</replace>
<touch file="${infinispan.unpacked.home}/../configured"/>
</target>
<target name="add-management-user" unless="management.configured" depends="check-configuration-state">
<echo>Adding management user: `${management.user}`</echo>
<exec executable="./${add.user.script}" dir="${infinispan.unpacked.home}/bin" failonerror="true">
<arg value="-u"/>
<arg value="${management.user}"/>
<arg value="-p"/>
<arg value="${management.user.password}"/>
</exec>
<touch file="${infinispan.unpacked.home}/../management-configured"/>
</target>
<target name="prepare-docker-config">
<copy todir="${infinispan.unpacked.home}/../docker" overwrite="false">
<fileset dir="${scripts.dir}">
<include name="Dockerfile"/>
<include name="*.sh"/>
</fileset>
</copy>
<copy todir="${infinispan.unpacked.home}/../docker/infinispan-server" overwrite="false">
<fileset dir="${infinispan.unpacked.home}">
<exclude name="bin/*.cli"/>
</fileset>
</copy>
</target>
</project>


@ -1,145 +0,0 @@
<?xml version="1.0"?>
<!--
~ Copyright 2016 Red Hat, Inc. and/or its affiliates
~ and other contributors as indicated by the @author tags.
~
~ Licensed under the Apache License, Version 2.0 (the "License");
~ you may not use this file except in compliance with the License.
~ You may obtain a copy of the License at
~
~ http://www.apache.org/licenses/LICENSE-2.0
~
~ Unless required by applicable law or agreed to in writing, software
~ distributed under the License is distributed on an "AS IS" BASIS,
~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
~ See the License for the specific language governing permissions and
~ limitations under the License.
-->
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<parent>
<groupId>org.keycloak.testsuite</groupId>
<artifactId>performance</artifactId>
<version>999-SNAPSHOT</version>
<relativePath>../pom.xml</relativePath>
</parent>
<modelVersion>4.0.0</modelVersion>
<artifactId>performance-keycloak-infinispan-server</artifactId>
<name>Keycloak Performance TestSuite - Infinispan Server</name>
<packaging>pom</packaging>
<properties>
<infinispan.groupId>org.infinispan.server</infinispan.groupId>
<infinispan.artifactId>infinispan-server-build</infinispan.artifactId>
<!--infinispan.version is located in the root pom.xml-->
<infinispan.unpacked.folder.name>infinispan-server-${infinispan.version}</infinispan.unpacked.folder.name>
<infinispan.unpacked.home>${project.build.directory}/${infinispan.unpacked.folder.name}</infinispan.unpacked.home>
<script.extension>sh</script.extension>
<jboss.cli.script>ispn-cli.${script.extension}</jboss.cli.script>
<add.user.script>add-user.${script.extension}</add.user.script>
<scripts.dir>${project.build.scriptSourceDirectory}</scripts.dir>
<resources.dir>${project.basedir}/src/main/resources</resources.dir>
<skip.add.management.user>true</skip.add.management.user>
<skip.docker.config>false</skip.docker.config>
</properties>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-antrun-plugin</artifactId>
<executions>
<execution>
<id>download-infinispan</id>
<phase>generate-resources</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<target>
<ant antfile="infinispan.xml" target="download-infinispan" />
</target>
</configuration>
</execution>
<execution>
<id>configure-infinispan</id>
<phase>process-resources</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<target>
<ant antfile="infinispan.xml" target="configure-infinispan" />
</target>
</configuration>
</execution>
<execution>
<id>add-management-user</id>
<phase>process-resources</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<skip>${skip.add.management.user}</skip>
<target>
<ant antfile="infinispan.xml" target="add-management-user" />
</target>
</configuration>
</execution>
<execution>
<id>prepare-docker-config</id>
<phase>process-resources</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<skip>${skip.docker.config}</skip>
<target>
<ant antfile="infinispan.xml" target="prepare-docker-config" />
</target>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
<profiles>
<profile>
<id>add-management-user</id>
<activation>
<property>
<name>management.user</name>
</property>
</activation>
<properties>
<skip.add.management.user>false</skip.add.management.user>
<!--it seems to be necessary to explicitly re-set these properties here
otherwise the antrun plugin won't pick them up-->
<management.user>${management.user}</management.user>
<management.user.password>${management.user.password}</management.user.password>
</properties>
</profile>
<profile>
<id>windows</id>
<activation>
<os>
<family>windows</family>
</os>
</activation>
<properties>
<script.extension>ps1</script.extension>
</properties>
</profile>
</profiles>
</project>


@ -1,19 +0,0 @@
FROM jboss/base-jdk:8
ENV LAUNCH_JBOSS_IN_BACKGROUND 1
ENV JSTAT false
ENV CONFIGURATION clustered.xml
ENV INFINISPAN_SERVER_HOME /opt/jboss/infinispan-server
WORKDIR $INFINISPAN_SERVER_HOME
USER root
RUN yum -y install iproute
ADD infinispan-server ./
ADD *.sh /usr/local/bin/
RUN chown -R jboss .; chgrp -R jboss .; chmod -R -v ug+x bin/*.sh ; chmod -R -v ug+x /usr/local/bin/
USER jboss
EXPOSE 7600 8080 8181 8888 9990 11211 11222 57600
HEALTHCHECK --interval=5s --timeout=5s --retries=12 CMD ["infinispan-healthcheck.sh"]
ENTRYPOINT [ "docker-entrypoint-custom.sh" ]


@ -1,28 +0,0 @@
#!/bin/bash
cat $INFINISPAN_SERVER_HOME/standalone/configuration/$CONFIGURATION
. get-ips.sh
PARAMS="-b $PUBLIC_IP -bmanagement $PUBLIC_IP -bprivate $PRIVATE_IP -Djgroups.bind_addr=$PUBLIC_IP -c $CONFIGURATION $@"
echo "Server startup params: $PARAMS"
# Note: External container connectivity is always provided by eth0 -- irrespective of which is considered public/private by KC.
# In case the container needs to be accessible on the host computer override -b $PUBLIC_IP by adding: `-b 0.0.0.0` to the docker command.
if [ $MGMT_USER ] && [ $MGMT_USER_PASSWORD ]; then
echo Adding mgmt user: $MGMT_USER
$INFINISPAN_SERVER_HOME/bin/add-user.sh -u $MGMT_USER -p $MGMT_USER_PASSWORD
fi
if [ $APP_USER ] && [ $APP_USER_PASSWORD ]; then
echo Adding app user: $APP_USER
if [ -z $APP_USER_GROUPS ]; then
$INFINISPAN_SERVER_HOME/bin/add-user.sh -a --user $APP_USER --password $APP_USER_PASSWORD
else
$INFINISPAN_SERVER_HOME/bin/add-user.sh -a --user $APP_USER --password $APP_USER_PASSWORD --group $APP_USER_GROUPS
fi
fi
exec $INFINISPAN_SERVER_HOME/bin/standalone.sh $PARAMS
exit $?


@ -1,37 +0,0 @@
#!/bin/bash
# This script:
# - assumes there are 2 network interfaces connected to $PUBLIC_SUBNET and $PRIVATE_SUBNET respectively
# - finds IP addresses from those interfaces using `ip` command
# - exports the IPs as variables
function getIpForSubnet {
if [ -z $1 ]; then
echo Subnet parameter undefined
exit 1
else
ip r | grep $1 | sed 's/.*src\ \([^\ ]*\).*/\1/'
fi
}
if [ -z $PUBLIC_SUBNET ]; then
PUBLIC_IP=0.0.0.0
echo PUBLIC_SUBNET undefined. PUBLIC_IP defaults to $PUBLIC_IP
else
export PUBLIC_IP=`getIpForSubnet $PUBLIC_SUBNET`
fi
if [ -z $PRIVATE_SUBNET ]; then
PRIVATE_IP=127.0.0.1
echo PRIVATE_SUBNET undefined. PRIVATE_IP defaults to $PRIVATE_IP
else
export PRIVATE_IP=`getIpForSubnet $PRIVATE_SUBNET`
fi
echo Routing table:
ip r
echo PUBLIC_SUBNET=$PUBLIC_SUBNET
echo PRIVATE_SUBNET=$PRIVATE_SUBNET
echo PUBLIC_IP=$PUBLIC_IP
echo PRIVATE_IP=$PRIVATE_IP


@ -1,12 +0,0 @@
#!/bin/bash
. get-ips.sh
CODE=`curl -s -o /dev/null -w "%{http_code}" http://$PUBLIC_IP:9990/console/index.html`
if [ "$CODE" -eq "200" ]; then
exit 0
fi
exit 1


@ -1,32 +0,0 @@
embed-server --server-config=clustered-@LOCAL_SITE@.xml
# 2)
cd /subsystem=datagrid-jgroups
# 2.a)
./channel=xsite:add(stack=tcp-private)
# 2.b)
./stack=udp/relay=RELAY:add(site="@LOCAL_SITE@", properties={relay_multicasts=false})
./stack=udp/relay=RELAY/remote-site=@REMOTE_SITE@:add(channel=xsite)
# 3)
cd /subsystem=datagrid-infinispan/cache-container=clustered/configurations=CONFIGURATIONS
./replicated-cache-configuration=sessions-cfg:add(mode=SYNC, start=EAGER, batching=false)
cd replicated-cache-configuration=sessions-cfg
./transaction=TRANSACTION:add(mode=NON_DURABLE_XA, locking=PESSIMISTIC)
./locking=LOCKING:add(acquire-timeout=0)
./backup=@REMOTE_SITE@:add(failure-policy=FAIL, strategy=SYNC, enabled=true, min-wait=60000, after-failures=3)
cd /subsystem=datagrid-infinispan/cache-container=clustered
./replicated-cache=work:add(configuration=sessions-cfg)
./replicated-cache=sessions:add(configuration=sessions-cfg)
./replicated-cache=clientSessions:add(configuration=sessions-cfg)
./replicated-cache=offlineSessions:add(configuration=sessions-cfg)
./replicated-cache=offlineClientSessions:add(configuration=sessions-cfg)
./replicated-cache=actionTokens:add(configuration=sessions-cfg)
./replicated-cache=loginFailures:add(configuration=sessions-cfg)


@ -1,33 +0,0 @@
embed-server --server-config=clustered.xml
echo *** Adding private network interface for cross-DC communication
/interface=private:add(inet-address=${jboss.bind.address.private:127.0.0.1})
echo *** Adding jgroups socket bindings for private network interface
cd /socket-binding-group=standard-sockets
./socket-binding=jgroups-mping-private:add( interface=private, port=0, multicast-address="${jboss.private.multicast.address:234.99.54.14}", multicast-port="45700")
./socket-binding=jgroups-tcp-private:add( interface=private, port=7600)
./socket-binding=jgroups-tcp-fd-private:add(interface=private, port=57600)
./socket-binding=jgroups-udp-private:add( interface=private, port=55200, multicast-address="${jboss.private.multicast.address:234.99.54.14}", multicast-port="45688")
./socket-binding=jgroups-udp-fd-private:add(interface=private, port=54200)
echo *** Adding TCP protocol stack for private network interface
/subsystem=datagrid-jgroups/stack=tcp-private:add(transport={type=TCP, socket-binding=jgroups-tcp-private}, protocols=[ \
{type=MPING, socket-binding=jgroups-mping-private}, \
{type=MERGE3}, \
{type=FD_SOCK, socket-binding=jgroups-tcp-fd-private}, \
{type=FD_ALL}, \
{type=VERIFY_SUSPECT}, \
{type=pbcast.NAKACK2, properties={"use_mcast_xmit" => "false"}}, \
{type=UNICAST3}, \
{type=pbcast.STABLE}, \
{type=pbcast.GMS}, \
{type=MFC_NB}, \
{type=FRAG3} \
])
# Note: for Infinispan 8.x change the above FRAG3 to FRAG2

@@ -1,101 +0,0 @@
<project name="keycloak-server-configuration" basedir="." >
<target name="check-configuration-state">
<available property="performance.configured" file="${project.build.directory}/performance-configured"/>
<available property="management.configured" file="${project.build.directory}/management-configured"/>
<available property="crossdc.configured" file="${project.build.directory}/crossdc-configured"/>
<echo>performance.configured: ${performance.configured}</echo>
<echo>management.configured: ${management.configured}</echo>
<echo>crossdc.configured: ${crossdc.configured}</echo>
</target>
<target name="keycloak-performance-configuration" unless="performance.configured" depends="check-configuration-state">
<echo>Applying keycloak performance configuration.</echo>
<chmod perm="ug+x">
<fileset dir="${server.unpacked.home}/bin">
<include name="*.sh"/>
</fileset>
</chmod>
<filter token="MODULE_NAME" value="${jdbc.driver.groupId}"/>
<filter token="RESOURCE_ROOT_PATH" value="${jdbc.driver.artifactId}-${jdbc.driver.version}.jar"/>
<copy file="${resources.dir}/module.xml"
todir="${server.unpacked.home}/modules/system/layers/base/${jdbc.driver.module.path}/main"
filtering="true"
/>
<copy todir="${server.unpacked.home}/bin" >
<fileset dir="${scripts.dir}/jboss-cli"/>
</copy>
<exec executable="./${jboss.cli.script}" dir="${server.unpacked.home}/bin" failonerror="true">
<arg value="--file=set-keycloak-ds.cli"/>
<env key="JBOSS_HOME" value="${server.unpacked.home}"/>
</exec>
<exec executable="./${jboss.cli.script}" dir="${server.unpacked.home}/bin" failonerror="true">
<arg value="--file=io-worker-threads.cli"/>
<env key="JBOSS_HOME" value="${server.unpacked.home}"/>
</exec>
<exec executable="./${jboss.cli.script}" dir="${server.unpacked.home}/bin" failonerror="true">
<arg value="--file=undertow.cli"/>
<env key="JBOSS_HOME" value="${server.unpacked.home}"/>
</exec>
<delete dir="${server.unpacked.home}/standalone/configuration/standalone_xml_history"/>
<delete dir="${server.unpacked.home}/standalone/log"/>
<delete dir="${server.unpacked.home}/standalone/data"/>
<delete dir="${server.unpacked.home}/standalone/tmp"/>
<replace file="${server.unpacked.home}/bin/standalone.sh">
<replacetoken><![CDATA[JBOSS_PID=$!]]></replacetoken>
<replacevalue><![CDATA[JBOSS_PID=$!
if [ "$JSTAT" = "true" ] ; then
echo "Starting jstat"
mkdir -p $JBOSS_LOG_DIR
jstat -gc -t $JBOSS_PID 1000 > $JBOSS_LOG_DIR/jstat-gc.log &
fi
]]></replacevalue>
</replace>
<touch file="${project.build.directory}/performance-configured"/>
</target>
<target name="add-management-user" unless="management.configured" depends="check-configuration-state">
<echo>Adding management user: `${management.user}`</echo>
<exec executable="./${add.user.script}" dir="${server.unpacked.home}/bin" failonerror="true">
<arg value="-u"/>
<arg value="${management.user}"/>
<arg value="-p"/>
<arg value="${management.user.password}"/>
<env key="JBOSS_HOME" value="${server.unpacked.home}"/>
</exec>
<touch file="${project.build.directory}/management-configured"/>
</target>
<target name="keycloak-crossdc-configuration" unless="crossdc.configured" depends="check-configuration-state">
<echo>keycloak-crossdc-configuration</echo>
<exec executable="./${jboss.cli.script}" dir="${server.unpacked.home}/bin" failonerror="true">
<arg value="--file=add-remote-cache-stores.cli"/>
<env key="JBOSS_HOME" value="${server.unpacked.home}"/>
</exec>
<delete dir="${server.unpacked.home}/standalone/configuration/standalone_xml_history"/>
<delete dir="${server.unpacked.home}/standalone/log"/>
<delete dir="${server.unpacked.home}/standalone/data"/>
<delete dir="${server.unpacked.home}/standalone/tmp"/>
<touch file="${project.build.directory}/crossdc-configured"/>
</target>
<target name="keycloak-docker">
<copy todir="${project.build.directory}/docker" overwrite="false">
<fileset dir="${scripts.dir}">
<include name="Dockerfile"/>
<include name="*.sh"/>
</fileset>
</copy>
<copy todir="${project.build.directory}/docker/keycloak" overwrite="false">
<fileset dir="${server.unpacked.home}">
<exclude name="bin/*.cli"/>
</fileset>
</copy>
</target>
</project>

@@ -1,217 +0,0 @@
<?xml version="1.0"?>
<!--
~ Copyright 2016 Red Hat, Inc. and/or its affiliates
~ and other contributors as indicated by the @author tags.
~
~ Licensed under the Apache License, Version 2.0 (the "License");
~ you may not use this file except in compliance with the License.
~ You may obtain a copy of the License at
~
~ http://www.apache.org/licenses/LICENSE-2.0
~
~ Unless required by applicable law or agreed to in writing, software
~ distributed under the License is distributed on an "AS IS" BASIS,
~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
~ See the License for the specific language governing permissions and
~ limitations under the License.
-->
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<parent>
<groupId>org.keycloak.testsuite</groupId>
<artifactId>performance</artifactId>
<version>999-SNAPSHOT</version>
<relativePath>../pom.xml</relativePath>
</parent>
<modelVersion>4.0.0</modelVersion>
<artifactId>performance-keycloak-server</artifactId>
<name>Keycloak Performance TestSuite - Keycloak Server</name>
<packaging>pom</packaging>
<description>
Uses maven-dependency-plugin to unpack keycloak-server-dist artifact into `target/keycloak` which is then added to Docker image.
</description>
<properties>
<server.groupId>org.keycloak</server.groupId>
<server.artifactId>keycloak-server-dist</server.artifactId>
<!-- `server.version` is defined one level up -->
<server.unpacked.folder.name>keycloak-${server.version}</server.unpacked.folder.name>
<server.unpacked.home>${project.build.directory}/${server.unpacked.folder.name}</server.unpacked.home>
<jdbc.driver.groupId>org.mariadb.jdbc</jdbc.driver.groupId>
<jdbc.driver.artifactId>mariadb-java-client</jdbc.driver.artifactId>
<jdbc.driver.version>2.2.4</jdbc.driver.version>
<jdbc.driver.module.path>org/mariadb/jdbc</jdbc.driver.module.path>
<script.extension>sh</script.extension>
<jboss.cli.script>jboss-cli.${script.extension}</jboss.cli.script>
<add.user.script>add-user.${script.extension}</add.user.script>
<skip.crossdc.configuration>true</skip.crossdc.configuration>
<skip.configuration>false</skip.configuration>
<skip.add.management.user>true</skip.add.management.user>
<skip.keycloak.docker>false</skip.keycloak.docker>
<scripts.dir>${project.build.scriptSourceDirectory}</scripts.dir>
<resources.dir>${project.basedir}/src/main/resources</resources.dir>
</properties>
<build>
<plugins>
<plugin>
<artifactId>maven-dependency-plugin</artifactId>
<executions>
<execution>
<id>unpack-keycloak-server-dist</id>
<phase>generate-resources</phase>
<goals>
<goal>unpack</goal>
</goals>
<configuration>
<overWriteIfNewer>true</overWriteIfNewer>
<artifactItems>
<artifactItem>
<groupId>${server.groupId}</groupId>
<artifactId>${server.artifactId}</artifactId>
<version>${server.version}</version>
<type>zip</type>
<outputDirectory>${project.build.directory}</outputDirectory>
</artifactItem>
</artifactItems>
</configuration>
</execution>
<execution>
<id>copy-jdbc-driver</id>
<phase>generate-resources</phase>
<goals>
<goal>copy</goal>
</goals>
<configuration>
<overWriteIfNewer>true</overWriteIfNewer>
<artifactItems>
<artifactItem>
<groupId>${jdbc.driver.groupId}</groupId>
<artifactId>${jdbc.driver.artifactId}</artifactId>
<version>${jdbc.driver.version}</version>
<type>jar</type>
<outputDirectory>${server.unpacked.home}/modules/system/layers/base/${jdbc.driver.module.path}/main</outputDirectory>
</artifactItem>
</artifactItems>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-antrun-plugin</artifactId>
<executions>
<execution>
<id>keycloak-performance-configuration</id>
<phase>process-resources</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<target>
<ant antfile="configure.xml" target="keycloak-performance-configuration" />
</target>
</configuration>
</execution>
<execution>
<id>keycloak-crossdc-configuration</id>
<phase>process-resources</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<skip>${skip.crossdc.configuration}</skip>
<target>
<ant antfile="configure.xml" target="keycloak-crossdc-configuration" />
</target>
</configuration>
</execution>
<execution>
<id>add-management-user</id>
<phase>process-resources</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<skip>${skip.add.management.user}</skip>
<target>
<ant antfile="configure.xml" target="add-management-user" />
</target>
</configuration>
</execution>
<execution>
<id>keycloak-docker</id>
<phase>process-resources</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<skip>${skip.keycloak.docker}</skip>
<target>
<ant antfile="configure.xml" target="keycloak-docker" />
</target>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
<profiles>
<profile>
<id>windows</id>
<activation>
<os>
<family>windows</family>
</os>
</activation>
<properties>
<script.extension>ps1</script.extension>
</properties>
</profile>
<profile>
<id>add-management-user</id>
<activation>
<property>
<name>management.user</name>
</property>
</activation>
<properties>
<skip.add.management.user>false</skip.add.management.user>
<!--it seems to be necessary to explicitly re-set these properties here
otherwise the antrun plugin won't pick them up-->
<management.user>${management.user}</management.user>
<management.user.password>${management.user.password}</management.user.password>
</properties>
</profile>
<profile>
<id>crossdc</id>
<properties>
<skip.crossdc.configuration>false</skip.crossdc.configuration>
</properties>
</profile>
<profile>
<id>integration-testsuite-server</id>
<properties>
<server.groupId>org.keycloak.testsuite</server.groupId>
<server.artifactId>integration-arquillian-servers-auth-server-wildfly</server.artifactId>
<server.unpacked.home>auth-server-wildfly</server.unpacked.home>
</properties>
</profile>
</profiles>
</project>

@@ -1,31 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
~ JBoss, Home of Professional Open Source.
~ Copyright 2010, Red Hat, Inc., and individual contributors
~ as indicated by the @author tags. See the copyright.txt file in the
~ distribution for a full listing of individual contributors.
~
~ This is free software; you can redistribute it and/or modify it
~ under the terms of the GNU Lesser General Public License as
~ published by the Free Software Foundation; either version 2.1 of
~ the License, or (at your option) any later version.
~
~ This software is distributed in the hope that it will be useful,
~ but WITHOUT ANY WARRANTY; without even the implied warranty of
~ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
~ Lesser General Public License for more details.
~
~ You should have received a copy of the GNU Lesser General Public
~ License along with this software; if not, write to the Free
~ Software Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
~ 02110-1301 USA, or see the FSF site: http://www.fsf.org.
-->
<module xmlns="urn:jboss:module:1.5" name="@MODULE_NAME@">
<resources>
<resource-root path="@RESOURCE_ROOT_PATH@"/>
</resources>
<dependencies>
<module name="javax.api"/>
<module name="javax.transaction.api"/>
</dependencies>
</module>

@@ -1,29 +0,0 @@
FROM jboss/base-jdk:8
ENV JBOSS_HOME /opt/jboss/keycloak
WORKDIR $JBOSS_HOME
ENV CONFIGURATION standalone.xml
# Enables signals getting passed from startup script to JVM
# ensuring clean shutdown when container is stopped.
ENV LAUNCH_JBOSS_IN_BACKGROUND 1
ENV PROXY_ADDRESS_FORWARDING false
ENV JSTAT false
USER root
RUN yum install -y epel-release jq iproute && yum clean all
ADD keycloak ./
ADD *.sh /usr/local/bin/
USER root
RUN chown -R jboss .; chgrp -R jboss .; chmod -R -v ug+x bin/*.sh ; \
chmod -R -v +x /usr/local/bin/
USER jboss
EXPOSE 8080
EXPOSE 9990
HEALTHCHECK --interval=5s --timeout=5s --retries=12 CMD ["keycloak-healthcheck.sh"]
ENTRYPOINT ["docker-entrypoint.sh"]

@@ -1,17 +0,0 @@
#!/bin/bash
cat $JBOSS_HOME/standalone/configuration/$CONFIGURATION
. get-ips.sh
PARAMS="-b $PUBLIC_IP -bmanagement $PUBLIC_IP -bprivate $PRIVATE_IP -c $CONFIGURATION $@"
echo "Server startup params: $PARAMS"
# Note: External container connectivity is always provided by eth0 -- irrespective of which is considered public/private by KC.
# In case the container needs to be accessible on the host computer override -b $PUBLIC_IP by adding: `-b 0.0.0.0` to the docker command.
if [ $KEYCLOAK_ADMIN_USER ] && [ $KEYCLOAK_ADMIN_PASSWORD ]; then
$JBOSS_HOME/bin/add-user-keycloak.sh --user $KEYCLOAK_ADMIN_USER --password $KEYCLOAK_ADMIN_PASSWORD
fi
exec /opt/jboss/keycloak/bin/standalone.sh $PARAMS

@@ -1,37 +0,0 @@
#!/bin/bash
# This script:
# - assumes there are 2 network interfaces connected to $PUBLIC_SUBNET and $PRIVATE_SUBNET respectively
# - finds IP addresses from those interfaces using `ip` command
# - exports the IPs as variables
function getIpForSubnet {
if [ -z $1 ]; then
echo Subnet parameter undefined
exit 1
else
ip r | grep $1 | sed 's/.*src\ \([^\ ]*\).*/\1/'
fi
}
if [ -z $PUBLIC_SUBNET ]; then
PUBLIC_IP=0.0.0.0
echo PUBLIC_SUBNET undefined. PUBLIC_IP defaults to $PUBLIC_IP
else
export PUBLIC_IP=`getIpForSubnet $PUBLIC_SUBNET`
fi
if [ -z $PRIVATE_SUBNET ]; then
PRIVATE_IP=127.0.0.1
echo PRIVATE_SUBNET undefined. PRIVATE_IP defaults to $PRIVATE_IP
else
export PRIVATE_IP=`getIpForSubnet $PRIVATE_SUBNET`
fi
echo Routing table:
ip r
echo PUBLIC_SUBNET=$PUBLIC_SUBNET
echo PRIVATE_SUBNET=$PRIVATE_SUBNET
echo PUBLIC_IP=$PUBLIC_IP
echo PRIVATE_IP=$PRIVATE_IP
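
As a sketch of how `getIpForSubnet` above extracts the source address, the same `grep`/`sed` pipeline can be exercised against a captured `ip r` route line (the subnet and IP below are hypothetical examples, not values from this testsuite):

```shell
# A route line in the format printed by `ip r` (addresses are made up):
route_line='10.0.2.0/24 dev eth1 proto kernel scope link src 10.0.2.15'
# Same extraction as getIpForSubnet: filter by subnet, keep the token after "src".
ip=$(echo "$route_line" | grep '10.0.2.0/24' | sed 's/.*src\ \([^\ ]*\).*/\1/')
echo "$ip"   # prints 10.0.2.15
```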

@@ -1,26 +0,0 @@
embed-server --server-config=standalone-ha.xml
/subsystem=jgroups/stack=udp/transport=UDP:write-attribute(name=site, value=${env.SITE:dc1})
/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=remote-cache:add(host=${env.INFINISPAN_HOST:localhost}, port=${env.INFINISPAN_PORT:11222})
cd /subsystem=infinispan/cache-container=keycloak
:write-attribute(name=module, value=org.keycloak.keycloak-model-infinispan)
./replicated-cache=work/store=remote:add(cache=work, fetch-state=false, passivation=false, preload=false, purge=false, remote-servers=["remote-cache"], shared=true, properties={rawValues=true, marshaller=org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory, protocolVersion=${env.HOTROD_VERSION:2.8}})
./distributed-cache=sessions/store=remote:add(cache=sessions, fetch-state=false, passivation=false, preload=false, purge=false, remote-servers=["remote-cache"], shared=true, properties={rawValues=true, marshaller=org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory, protocolVersion=${env.HOTROD_VERSION:2.8}})
./distributed-cache=offlineSessions/store=remote:add(cache=offlineSessions, fetch-state=false, passivation=false, preload=false, purge=false, remote-servers=["remote-cache"], shared=true, properties={rawValues=true, marshaller=org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory, protocolVersion=${env.HOTROD_VERSION:2.8}})
./distributed-cache=clientSessions/store=remote:add(cache=clientSessions, fetch-state=false, passivation=false, preload=false, purge=false, remote-servers=["remote-cache"], shared=true, properties={rawValues=true, marshaller=org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory, protocolVersion=${env.HOTROD_VERSION:2.8}})
./distributed-cache=offlineClientSessions/store=remote:add(cache=offlineClientSessions, fetch-state=false, passivation=false, preload=false, purge=false, remote-servers=["remote-cache"], shared=true, properties={rawValues=true, marshaller=org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory, protocolVersion=${env.HOTROD_VERSION:2.8}})
./distributed-cache=loginFailures/store=remote:add(cache=loginFailures, fetch-state=false, passivation=false, preload=false, purge=false, remote-servers=["remote-cache"], shared=true, properties={rawValues=true, marshaller=org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory, protocolVersion=${env.HOTROD_VERSION:2.8}})
./distributed-cache=actionTokens/store=remote:add(cache=actionTokens, fetch-state=false, passivation=false, preload=false, purge=false, remote-servers=["remote-cache"], shared=true, properties={rawValues=true, marshaller=org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory, protocolVersion=${env.HOTROD_VERSION:2.8}})
./replicated-cache=work:write-attribute (name=statistics-enabled, value=${env.CACHE_STATISTICS:true})
./distributed-cache=sessions:write-attribute (name=statistics-enabled, value=${env.CACHE_STATISTICS:true})
./distributed-cache=offlineSessions:write-attribute (name=statistics-enabled, value=${env.CACHE_STATISTICS:true})
./distributed-cache=clientSessions:write-attribute (name=statistics-enabled, value=${env.CACHE_STATISTICS:true})
./distributed-cache=offlineClientSessions:write-attribute (name=statistics-enabled, value=${env.CACHE_STATISTICS:true})
./distributed-cache=loginFailures:write-attribute (name=statistics-enabled, value=${env.CACHE_STATISTICS:true})
./distributed-cache=actionTokens:write-attribute (name=statistics-enabled, value=${env.CACHE_STATISTICS:true})

@@ -1,13 +0,0 @@
embed-server --server-config=standalone-ha.xml
# increase number of "owners" for distributed keycloak caches to support failover
cd /subsystem=infinispan/cache-container=keycloak/
./distributed-cache=sessions:write-attribute(name=owners, value=${distributed.cache.owners:2})
./distributed-cache=offlineSessions:write-attribute(name=owners, value=${distributed.cache.owners:2})
./distributed-cache=clientSessions:write-attribute(name=owners, value=${distributed.cache.owners:2})
./distributed-cache=offlineClientSessions:write-attribute(name=owners, value=${distributed.cache.owners:2})
./distributed-cache=loginFailures:write-attribute(name=owners, value=${distributed.cache.owners:2})
./distributed-cache=actionTokens:write-attribute(name=owners, value=${distributed.cache.owners:2})

@@ -1,8 +0,0 @@
embed-server --server-config=standalone-ha.xml
cd subsystem=logging
./logger=org.keycloak.cluster.infinispan:add(level=DEBUG)
./logger=org.keycloak.connections.infinispan:add(level=DEBUG)
./logger=org.keycloak.models.cache.infinispan:add(level=DEBUG)
./logger=org.keycloak.models.sessions.infinispan:add(level=DEBUG)

@@ -1,5 +0,0 @@
# default is cpuCount * 2
/subsystem=io/worker=default:write-attribute(name=io-threads,value=${env.WORKER_IO_THREADS:2})
# default is cpuCount * 16
/subsystem=io/worker=default:write-attribute(name=task-max-threads,value=${env.WORKER_TASK_MAX_THREADS:16})
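
The comments above state the IO worker defaults in terms of CPU count; as a tiny illustration of those formulas (the 8-core count is an arbitrary example, not a testsuite value):

```shell
# Defaults per the comments: io-threads = cpuCount * 2, task-max-threads = cpuCount * 16.
cpu_count=8                            # example core count
io_threads=$((cpu_count * 2))          # 16 on an 8-core box
task_max_threads=$((cpu_count * 16))   # 128 on an 8-core box
echo "io-threads=$io_threads task-max-threads=$task_max_threads"
```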

@@ -1,7 +0,0 @@
embed-server --server-config=standalone.xml
run-batch --file=io-worker-threads-batch.cli
stop-embedded-server
embed-server --server-config=standalone-ha.xml
run-batch --file=io-worker-threads-batch.cli

@@ -1,7 +0,0 @@
embed-server --server-config=standalone-ha.xml
# remove dynamic-load-provider
/subsystem=modcluster/mod-cluster-config=configuration/dynamic-load-provider=configuration:remove
# add simple-load-provider with factor=1 to assure round-robin balancing
/subsystem=modcluster/mod-cluster-config=configuration:write-attribute(name=simple-load-provider, value=1)

@@ -1,20 +0,0 @@
/subsystem=datasources/jdbc-driver=mariadb:add(driver-name=mariadb, driver-module-name=org.mariadb.jdbc, driver-xa-datasource-class-name=org.mariadb.jdbc.MySQLDataSource)
cd /subsystem=datasources/data-source=KeycloakDS
:write-attribute(name=connection-url, value=jdbc:mariadb:${env.MARIADB_HA_MODE:}//${env.MARIADB_HOSTS:mariadb:3306}/${env.MARIADB_DATABASE:keycloak}${env.MARIADB_OPTIONS:})
:write-attribute(name=driver-name, value=mariadb)
:write-attribute(name=user-name, value=${env.MARIADB_USER:keycloak})
:write-attribute(name=password, value=${env.MARIADB_PASSWORD:keycloak})
:write-attribute(name=check-valid-connection-sql, value="SELECT 1")
:write-attribute(name=background-validation, value=true)
:write-attribute(name=background-validation-millis, value=60000)
:write-attribute(name=min-pool-size, value=${env.DS_MIN_POOL_SIZE:10})
:write-attribute(name=max-pool-size, value=${env.DS_MAX_POOL_SIZE:100})
:write-attribute(name=pool-prefill, value=${env.DS_POOL_PREFILL:true})
:write-attribute(name=prepared-statements-cache-size, value=${env.DS_PS_CACHE_SIZE:100})
:write-attribute(name=flush-strategy, value=IdleConnections)
:write-attribute(name=use-ccm, value=true)

@@ -1,7 +0,0 @@
embed-server --server-config=standalone.xml
run-batch --file=set-keycloak-ds-batch.cli
stop-embedded-server
embed-server --server-config=standalone-ha.xml
run-batch --file=set-keycloak-ds-batch.cli

@@ -1,2 +0,0 @@
/subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=proxy-address-forwarding, value=${env.PROXY_ADDRESS_FORWARDING})
/subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=max-connections,value=${env.HTTP_MAX_CONNECTIONS:50000})

@@ -1,9 +0,0 @@
embed-server --server-config=standalone.xml
run-batch --file=undertow-batch.cli
stop-embedded-server
embed-server --server-config=standalone-ha.xml
run-batch --file=undertow-batch.cli
/subsystem=undertow/server=default-server/ajp-listener=ajp:write-attribute(name=max-connections,value=${env.AJP_MAX_CONNECTIONS:50000})

@@ -1,11 +0,0 @@
#!/bin/bash
. get-ips.sh
CODE=`curl -s -o /dev/null -w "%{http_code}" http://$PUBLIC_IP:8080/auth/realms/master`
if [ "$CODE" -eq "200" ]; then
exit 0
fi
exit 1

@@ -1,75 +0,0 @@
/*
* Copyright(c) 2010 Red Hat Middleware, LLC,
* and individual contributors as indicated by the @authors tag.
* See the copyright.txt in the distribution for a
* full listing of individual contributors.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library in the file COPYING.LIB;
* if not, write to the Free Software Foundation, Inc.,
* 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA
*
* @author Jean-Frederic Clere
* @version $Revision: 420067 $, $Date: 2006-07-08 09:16:58 +0200 (sub, 08 srp 2006) $
*/
import java.net.MulticastSocket;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.DatagramPacket;
public class Advertise
{
/*
 * Client to test Advertise... Run it on node boxes to test for firewall.
*/
public static void main(String[] args) throws Exception
{
if (args.length != 2 && args.length != 3) {
System.out.println("Usage: Advertise multicastaddress port [bindaddress]");
System.out.println("java Advertise 224.0.1.105 23364");
System.out.println("or");
System.out.println("java Advertise 224.0.1.105 23364 10.33.144.3");
System.out.println("receive from 224.0.1.105:23364");
System.exit(1);
}
InetAddress group = InetAddress.getByName(args[0]);
int port = Integer.parseInt(args[1]);
InetAddress socketInterface = null;
if (args.length == 3)
socketInterface = InetAddress.getByName(args[2]);
MulticastSocket s = null;
String value = System.getProperty("os.name");
if ((value != null) && (value.toLowerCase().startsWith("linux") || value.toLowerCase().startsWith("mac") || value.toLowerCase().startsWith("hp"))) {
System.out.println("Linux like OS");
s = new MulticastSocket(new InetSocketAddress(group, port));
} else
s = new MulticastSocket(port);
s.setTimeToLive(0);
if (socketInterface != null) {
s.setInterface(socketInterface);
}
s.joinGroup(group);
boolean ok = true;
System.out.println("ready waiting...");
while (ok) {
byte[] buf = new byte[1000];
DatagramPacket recv = new DatagramPacket(buf, buf.length);
s.receive(recv);
String data = new String(buf);
System.out.println("received: " + data);
System.out.println("received from " + recv.getSocketAddress());
}
s.leaveGroup(group);
}
}

@@ -1,7 +0,0 @@
FROM jboss/base-jdk:8
ADD SAdvertise.java .
RUN javac SAdvertise.java
ENTRYPOINT ["java", "SAdvertise"]
CMD ["0.0.0.0", "224.0.1.105", "23364"]

@@ -1,7 +0,0 @@
FROM jboss/base-jdk:8
ADD Advertise.java .
RUN javac Advertise.java
ENTRYPOINT ["java", "Advertise"]
CMD ["224.0.1.105", "23364"]

@@ -1,4 +0,0 @@
# mod_cluster advertise
Docker container for testing mod_cluster "advertise" functionality.

@@ -1,60 +0,0 @@
/*
* Copyright(c) 2010 Red Hat Middleware, LLC,
* and individual contributors as indicated by the @authors tag.
* See the copyright.txt in the distribution for a
* full listing of individual contributors.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library in the file COPYING.LIB;
* if not, write to the Free Software Foundation, Inc.,
* 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA
*
* @author Jean-Frederic Clere
* @version $Revision: 420067 $, $Date: 2006-07-08 09:16:58 +0200 (sub, 08 srp 2006) $
*/
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.DatagramPacket;
import java.net.MulticastSocket;
public class SAdvertise {
/*
 * Server to test Advertise... Run it on httpd box to test for firewall.
*/
public static void main(String[] args) throws Exception {
if (args.length != 3) {
System.out.println("Usage: SAdvertise localaddress multicastaddress port");
System.out.println("java SAdvertise 10.16.88.178 224.0.1.105 23364");
System.out.println("send from 10.16.88.178:23364 to 224.0.1.105:23364");
System.exit(1);
}
InetAddress group = InetAddress.getByName(args[1]);
InetAddress addr = InetAddress.getByName(args[0]);
int port = Integer.parseInt(args[2]);
InetSocketAddress addrs = new InetSocketAddress(addr, port);
MulticastSocket s = new MulticastSocket(addrs);
s.setTimeToLive(29);
s.joinGroup(group);
boolean ok = true;
while (ok) {
byte[] buf = new byte[1000];
DatagramPacket recv = new DatagramPacket(buf, buf.length, group, port);
System.out.println("sending from: " + addr);
s.send(recv);
Thread.sleep(2000);
}
s.leaveGroup(group);
}
}

@@ -1,71 +0,0 @@
<project name="keycloak-server-configuration" basedir="." >
<target name="check-configuration-state">
<available property="configured" file="${project.build.directory}/configured"/>
<available property="management.configured" file="${project.build.directory}/management-configured"/>
<echo>configured: ${configured}</echo>
<echo>management.configured: ${management.configured}</echo>
</target>
<target name="configure-modcluster" unless="configured" depends="check-configuration-state">
<chmod perm="ug+x">
<fileset dir="${server.unpacked.home}/bin">
<include name="*.sh"/>
</fileset>
</chmod>
<copy todir="${server.unpacked.home}/bin" >
<fileset dir="${scripts.dir}/jboss-cli"/>
</copy>
<exec executable="./${jboss.cli.script}" dir="${server.unpacked.home}/bin" failonerror="true">
<arg value="--file=mod-cluster-balancer.cli"/>
</exec>
<exec executable="./${jboss.cli.script}" dir="${server.unpacked.home}/bin" failonerror="true">
<arg value="--file=undertow.cli"/>
</exec>
<exec executable="./${jboss.cli.script}" dir="${server.unpacked.home}/bin" failonerror="true">
<arg value="--file=io-worker-threads.cli"/>
</exec>
<delete dir="${server.unpacked.home}/standalone/configuration/standalone_xml_history"/>
<delete dir="${server.unpacked.home}/standalone/log"/>
<delete dir="${server.unpacked.home}/standalone/data"/>
<delete dir="${server.unpacked.home}/standalone/tmp"/>
<replace file="${server.unpacked.home}/bin/standalone.sh">
<replacetoken><![CDATA[JBOSS_PID=$!]]></replacetoken>
<replacevalue><![CDATA[JBOSS_PID=$!
if [ "$JSTAT" = "true" ] ; then
echo "Starting jstat"
mkdir -p $JBOSS_LOG_DIR
jstat -gc -t $JBOSS_PID 1000 > $JBOSS_LOG_DIR/jstat-gc.log &
fi
]]></replacevalue>
</replace>
<touch file="${project.build.directory}/configured"/>
</target>
<target name="add-management-user" unless="management.configured" depends="check-configuration-state">
<echo>Adding management user: `${management.user}`</echo>
<exec executable="./${add.user.script}" dir="${server.unpacked.home}/bin" failonerror="true">
<arg value="-u"/>
<arg value="${management.user}"/>
<arg value="-p"/>
<arg value="${management.user.password}"/>
</exec>
<touch file="${project.build.directory}/management-configured"/>
</target>
<target name="prepare-docker-config">
<copy todir="${project.build.directory}/docker" overwrite="false">
<fileset dir="${scripts.dir}">
<include name="Dockerfile"/>
<include name="*.sh"/>
</fileset>
</copy>
<copy todir="${project.build.directory}/docker/wildfly" overwrite="false">
<fileset dir="${server.unpacked.home}">
<exclude name="bin/*.cli"/>
</fileset>
</copy>
</target>
</project>

@@ -1,155 +0,0 @@
<?xml version="1.0"?>
<!--
~ Copyright 2016 Red Hat, Inc. and/or its affiliates
~ and other contributors as indicated by the @author tags.
~
~ Licensed under the Apache License, Version 2.0 (the "License");
~ you may not use this file except in compliance with the License.
~ You may obtain a copy of the License at
~
~ http://www.apache.org/licenses/LICENSE-2.0
~
~ Unless required by applicable law or agreed to in writing, software
~ distributed under the License is distributed on an "AS IS" BASIS,
~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
~ See the License for the specific language governing permissions and
~ limitations under the License.
-->
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<parent>
<groupId>org.keycloak.testsuite</groupId>
<artifactId>performance</artifactId>
<version>999-SNAPSHOT</version>
<relativePath>../../pom.xml</relativePath>
</parent>
<modelVersion>4.0.0</modelVersion>
<artifactId>performance-keycloak-wildfly-modcluster</artifactId>
<name>Keycloak Performance TestSuite - Wildfly ModCluster Load Balancer</name>
<packaging>pom</packaging>
<properties>
<server.groupId>org.wildfly</server.groupId>
<server.artifactId>wildfly-server-dist</server.artifactId>
<server.unpacked.home>${project.build.directory}/wildfly-${wildfly.version}</server.unpacked.home>
<script.extension>sh</script.extension>
<jboss.cli.script>jboss-cli.${script.extension}</jboss.cli.script>
<add.user.script>add-user.${script.extension}</add.user.script>
<scripts.dir>${project.build.scriptSourceDirectory}</scripts.dir>
<resources.dir>${project.basedir}/src/main/resources</resources.dir>
<skip.add.management.user>true</skip.add.management.user>
<skip.docker.config>false</skip.docker.config>
</properties>
<build>
<plugins>
<plugin>
<artifactId>maven-dependency-plugin</artifactId>
<executions>
<execution>
<id>unpack-wildfly</id>
<phase>generate-resources</phase>
<goals>
<goal>unpack</goal>
</goals>
<configuration>
<overWriteIfNewer>true</overWriteIfNewer>
<artifactItems>
<artifactItem>
<groupId>org.wildfly</groupId>
<artifactId>wildfly-dist</artifactId>
<version>${wildfly.version}</version>
<type>zip</type>
<outputDirectory>${project.build.directory}</outputDirectory>
</artifactItem>
</artifactItems>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-antrun-plugin</artifactId>
<executions>
<execution>
<id>configure-modcluster</id>
<phase>process-resources</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<target>
<ant antfile="configure.xml" target="configure-modcluster" />
</target>
</configuration>
</execution>
<execution>
<id>add-management-user</id>
<phase>process-resources</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<skip>${skip.add.management.user}</skip>
<target>
<ant antfile="configure.xml" target="add-management-user" />
</target>
</configuration>
</execution>
<execution>
<id>prepare-docker-config</id>
<phase>process-resources</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<skip>${skip.docker.config}</skip>
<target>
<ant antfile="configure.xml" target="prepare-docker-config" />
</target>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
<profiles>
<profile>
<id>add-management-user</id>
<activation>
<property>
<name>management.user</name>
</property>
</activation>
<properties>
<skip.add.management.user>false</skip.add.management.user>
<!--it seems to be necessary to explicitly re-set these properties here
otherwise the antrun plugin won't pick them up-->
<management.user>${management.user}</management.user>
<management.user.password>${management.user.password}</management.user.password>
</properties>
</profile>
<profile>
<id>windows</id>
<activation>
<os>
<family>windows</family>
</os>
</activation>
<properties>
<script.extension>ps1</script.extension>
</properties>
</profile>
</profiles>
</project>


@ -1,28 +0,0 @@
FROM jboss/base-jdk:8
ENV JBOSS_HOME /opt/jboss/wildfly
WORKDIR $JBOSS_HOME
ENV CONFIGURATION standalone.xml
# Ensure signals are forwarded to the JVM process correctly for graceful shutdown
ENV LAUNCH_JBOSS_IN_BACKGROUND 1
ENV JSTAT false
USER root
RUN yum -y install iproute
ADD wildfly ./
ADD *.sh /usr/local/bin/
RUN chown -R jboss . ;\
chgrp -R jboss . ;\
chmod -R -v ug+x bin/*.sh ;\
chmod -R -v ug+x /usr/local/bin/
USER jboss
EXPOSE 8080
EXPOSE 9990
HEALTHCHECK --interval=5s --timeout=5s --retries=12 CMD ["wildfly-healthcheck.sh"]
ENTRYPOINT ["docker-entrypoint.sh"]


@ -1,14 +0,0 @@
#!/bin/bash
cat $JBOSS_HOME/standalone/configuration/standalone.xml
. get-ips.sh
PARAMS="-b $PUBLIC_IP -bmanagement $PUBLIC_IP -bprivate $PRIVATE_IP $@"
echo "Server startup params: $PARAMS"
# Note: External container connectivity is always provided by eth0 -- irrespective of which is considered public/private by KC.
# In case the container needs to be accessible on the host computer override -b $PUBLIC_IP by adding: `-b 0.0.0.0` to the docker command.
exec /opt/jboss/wildfly/bin/standalone.sh $PARAMS
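The `PARAMS` line above splices extra container arguments into a flat string, which can mangle arguments that contain spaces. A minimal sketch of an array-based alternative (stub IP values stand in for what `get-ips.sh` exports; this is a hedged variant, not the script's actual behavior):

```shell
#!/bin/bash
# Stub values standing in for what get-ips.sh exports in the real script.
PUBLIC_IP=0.0.0.0
PRIVATE_IP=127.0.0.1
# An array keeps each argument as its own word, even with embedded spaces.
PARAMS=(-b "$PUBLIC_IP" -bmanagement "$PUBLIC_IP" -bprivate "$PRIVATE_IP" "$@")
echo "Server startup params: ${PARAMS[*]}"
```

Invoking the server would then use `"${PARAMS[@]}"` instead of an unquoted string expansion.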


@ -1,37 +0,0 @@
#!/bin/bash
# This script:
# - assumes there are 2 network interfaces connected to $PUBLIC_SUBNET and $PRIVATE_SUBNET respectively
# - finds IP addresses from those interfaces using `ip` command
# - exports the IPs as variables
function getIpForSubnet {
if [ -z "$1" ]; then
echo Subnet parameter undefined
exit 1
else
ip r | grep $1 | sed 's/.*src\ \([^\ ]*\).*/\1/'
fi
}
if [ -z "$PUBLIC_SUBNET" ]; then
PUBLIC_IP=0.0.0.0
echo "PUBLIC_SUBNET undefined. PUBLIC_IP defaults to $PUBLIC_IP"
else
export PUBLIC_IP=$(getIpForSubnet "$PUBLIC_SUBNET")
fi
if [ -z "$PRIVATE_SUBNET" ]; then
PRIVATE_IP=127.0.0.1
echo "PRIVATE_SUBNET undefined. PRIVATE_IP defaults to $PRIVATE_IP"
else
export PRIVATE_IP=$(getIpForSubnet "$PRIVATE_SUBNET")
fi
echo Routing table:
ip r
echo PUBLIC_SUBNET=$PUBLIC_SUBNET
echo PRIVATE_SUBNET=$PRIVATE_SUBNET
echo PUBLIC_IP=$PUBLIC_IP
echo PRIVATE_IP=$PRIVATE_IP
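The `sed` expression inside `getIpForSubnet` extracts the `src` address from a matching `ip r` route line. A standalone sketch of that extraction on a sample line (hypothetical addresses, no `ip` command required):

```shell
#!/bin/bash
# A sample routing-table line in the format `ip r` prints.
line="10.0.1.0/24 dev eth0 proto kernel scope link src 10.0.1.5 metric 100"
# Same substitution as get-ips.sh: capture the word following "src ".
ip=$(echo "$line" | sed 's/.*src\ \([^\ ]*\).*/\1/')
echo "$ip"
```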


@ -1,7 +0,0 @@
embed-server --server-config=standalone.xml
# default is cpuCount * 2
/subsystem=io/worker=default:write-attribute(name=io-threads,value=${env.WORKER_IO_THREADS:2})
# default is cpuCount * 16
/subsystem=io/worker=default:write-attribute(name=task-max-threads,value=${env.WORKER_TASK_MAX_THREADS:16})
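The `${env.WORKER_IO_THREADS:2}` expressions above read an environment variable, falling back to the value after the colon when it is unset. Assuming the image built by this testsuite (the image name below is a placeholder), the worker pools could be tuned per container along these lines:

```shell
# Hypothetical invocation; -e sets the variables the CLI expressions read.
docker run -e WORKER_IO_THREADS=4 -e WORKER_TASK_MAX_THREADS=64 performance-keycloak
```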


@ -1,12 +0,0 @@
embed-server --server-config=standalone.xml
/interface=private:add(inet-address=${jboss.bind.address.private:127.0.0.1})
/socket-binding-group=standard-sockets/socket-binding=modcluster-advertise:add(interface=private, multicast-port=23364, multicast-address=${jboss.default.multicast.address:224.0.1.105})
/socket-binding-group=standard-sockets/socket-binding=modcluster-management:add(interface=private, port=6666)
/extension=org.jboss.as.modcluster/:add
/subsystem=undertow/configuration=filter/mod-cluster=modcluster:add(advertise-socket-binding=modcluster-advertise, advertise-frequency=${modcluster.advertise-frequency:2000}, management-socket-binding=modcluster-management, enable-http2=true)
/subsystem=undertow/server=default-server/host=default-host/filter-ref=modcluster:add
/subsystem=undertow/server=default-server/http-listener=modcluster:add(socket-binding=modcluster-management, enable-http2=true)


@ -1,2 +0,0 @@
embed-server --server-config=standalone.xml
/subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=max-connections,value=${env.HTTP_MAX_CONNECTIONS:500})


@ -1,14 +0,0 @@
#!/bin/bash
#$JBOSS_HOME/bin/jboss-cli.sh -c ":read-attribute(name=server-state)" | grep -q "running"
. get-ips.sh
CODE=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:8080/)
if [ "$CODE" -eq "200" ]; then
exit 0
fi
exit 1


@ -1,5 +0,0 @@
FROM google/cadvisor:v0.26.1
RUN apk add --no-cache bash curl
ADD entrypoint.sh /entrypoint.sh
RUN chmod +x -v /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]


@ -1,10 +0,0 @@
# cAdvisor configured for InfluxDB
The Google cAdvisor tool, configured to export data to an InfluxDB service.
## Customization for InfluxDB
This image adds a custom `entrypoint.sh` on top of the original cAdvisor image which:
- checks if `$INFLUX_DATABASE` exists on `$INFLUX_HOST`
- creates the DB if it doesn't exist
- starts `cadvisor` with the storage driver pointed at the InfluxDB instance


@ -1,18 +0,0 @@
#!/bin/bash
if [ -z "$INFLUX_HOST" ]; then export INFLUX_HOST=influx; fi
if [ -z "$INFLUX_DATABASE" ]; then export INFLUX_DATABASE=cadvisor; fi
# Check if DB exists
curl -s -G "http://$INFLUX_HOST:8086/query?pretty=true" --data-urlencode "q=SHOW DATABASES" | grep "$INFLUX_DATABASE"
DB_EXISTS=$?
if [ $DB_EXISTS -eq 0 ]; then
echo "Database '$INFLUX_DATABASE' already exists on InfluxDB server '$INFLUX_HOST:8086'"
else
echo "Creating database '$INFLUX_DATABASE' on InfluxDB server '$INFLUX_HOST:8086'"
curl -i -XPOST http://$INFLUX_HOST:8086/query --data-urlencode "q=CREATE DATABASE $INFLUX_DATABASE"
fi
exec /usr/bin/cadvisor -logtostderr -docker_only -storage_driver=influxdb -storage_driver_db=$INFLUX_DATABASE -storage_driver_host=$INFLUX_HOST:8086 "$@"
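The create-or-skip branch above keys off `grep`'s exit status: 0 when the database name appears in the `SHOW DATABASES` response. A network-free sketch with a stubbed response body (the real script obtains it via `curl`, and the JSON shape here is only approximated):

```shell
#!/bin/bash
# Stubbed InfluxDB SHOW DATABASES response (shape approximated).
RESPONSE='{"results":[{"series":[{"name":"databases","values":[["_internal"],["cadvisor"]]}]}]}'
echo "$RESPONSE" | grep -q "cadvisor"
DB_EXISTS=$?
if [ $DB_EXISTS -eq 0 ]; then
    echo "Database 'cadvisor' already exists"
else
    echo "Creating database 'cadvisor'"
fi
```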


@ -1,8 +0,0 @@
FROM grafana/grafana:4.4.3
ENV GF_DASHBOARDS_JSON_ENABLED true
ENV GF_DASHBOARDS_JSON_PATH /etc/grafana/dashboards/
COPY resource-usage-per-container.json /etc/grafana/dashboards/
COPY resource-usage-combined.json /etc/grafana/dashboards/
ADD entrypoint.sh /entrypoint.sh
RUN chmod +x -v /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]


@ -1,9 +0,0 @@
# Grafana configured with InfluxDB
## Customization for InfluxDB
This image adds a custom `entrypoint.sh` on top of the original Grafana image which:
- checks if `$INFLUX_DATASOURCE_NAME` exists in Grafana
- creates the datasource if it doesn't exist
- adds custom dashboards configured to query the datasource
- starts Grafana


@ -1,41 +0,0 @@
#!/bin/bash
if [ -z "$INFLUX_DATASOURCE_NAME" ]; then INFLUX_DATASOURCE_NAME=influx_datasource; fi
if [ -z "$INFLUX_HOST" ]; then export INFLUX_HOST=influx; fi
if [ -z "$INFLUX_DATABASE" ]; then export INFLUX_DATABASE=cadvisor; fi
echo Starting Grafana
./run.sh "${@}" &
timeout 10 bash -c "until </dev/tcp/localhost/3000; do sleep 1; done"
echo Checking if datasource '$INFLUX_DATASOURCE_NAME' exists
curl -s 'http://admin:admin@localhost:3000/api/datasources' | grep "$INFLUX_DATASOURCE_NAME"
DS_EXISTS=$?
if [ $DS_EXISTS -eq 0 ]; then
echo "Datasource '$INFLUX_DATASOURCE_NAME' already exists in Grafana."
else
echo "Datasource '$INFLUX_DATASOURCE_NAME' not found in Grafana. Creating..."
curl -s -H "Content-Type: application/json" \
-X POST http://admin:admin@localhost:3000/api/datasources \
-d @- <<EOF
{
"name": "${INFLUX_DATASOURCE_NAME}",
"type": "influxdb",
"isDefault": false,
"access": "proxy",
"url": "http://${INFLUX_HOST}:8086",
"database": "${INFLUX_DATABASE}"
}
EOF
fi
echo Restarting Grafana
pkill grafana-server
timeout 10 bash -c "while </dev/tcp/localhost/3000; do sleep 1; done"
exec ./run.sh "${@}"


@ -1,610 +0,0 @@
{
"__inputs": [
{
"name": "DS_INFLUXDB_CADVISOR",
"label": "influxdb_cadvisor",
"description": "",
"type": "datasource",
"pluginId": "influxdb",
"pluginName": "InfluxDB"
}
],
"__requires": [
{
"type": "grafana",
"id": "grafana",
"name": "Grafana",
"version": "4.4.3"
},
{
"type": "panel",
"id": "graph",
"name": "Graph",
"version": ""
},
{
"type": "datasource",
"id": "influxdb",
"name": "InfluxDB",
"version": "1.0.0"
}
],
"annotations": {
"list": []
},
"editable": true,
"gnetId": null,
"graphTooltip": 0,
"hideControls": true,
"id": null,
"links": [],
"refresh": "5s",
"rows": [
{
"collapse": false,
"height": 250,
"panels": [
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "influxdb_cadvisor",
"fill": 1,
"id": 2,
"legend": {
"alignAsTable": true,
"avg": true,
"current": true,
"max": true,
"min": true,
"show": true,
"total": true,
"values": true
},
"lines": true,
"linewidth": 1,
"links": [],
"nullPointMode": "connected",
"percentage": false,
"pointradius": 5,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"span": 6,
"stack": false,
"steppedLine": false,
"targets": [
{
"alias": "$tag_container_name",
"dsType": "influxdb",
"groupBy": [
{
"params": [
"$interval"
],
"type": "time"
},
{
"params": [
"container_name"
],
"type": "tag"
}
],
"measurement": "cpu_usage_total",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT derivative(mean(\"value\"), 10s) FROM \"cpu_usage_total\" WHERE \"container_name\" =~ /^performance_keycloak.*$*/ AND $timeFilter GROUP BY time($interval), \"container_name\"",
"rawQuery": true,
"refId": "A",
"resultFormat": "time_series",
"select": [
[
{
"params": [
"value"
],
"type": "field"
},
{
"params": [],
"type": "mean"
},
{
"params": [
"10s"
],
"type": "derivative"
}
]
],
"tags": [
{
"key": "container_name",
"operator": "=~",
"value": "/^performance_keycloak.*$*/"
},
{
"condition": "AND",
"key": "machine",
"operator": "=~",
"value": "/^$host$/"
}
]
}
],
"thresholds": [],
"timeFrom": null,
"timeShift": null,
"title": "CPU",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"format": "hertz",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
]
},
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "influxdb_cadvisor",
"fill": 1,
"id": 4,
"legend": {
"alignAsTable": true,
"avg": true,
"current": true,
"max": true,
"min": true,
"show": true,
"total": true,
"values": true
},
"lines": true,
"linewidth": 1,
"links": [],
"nullPointMode": "connected",
"percentage": false,
"pointradius": 5,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"span": 6,
"stack": false,
"steppedLine": false,
"targets": [
{
"alias": "$tag_container_name",
"dsType": "influxdb",
"groupBy": [
{
"params": [
"$interval"
],
"type": "time"
},
{
"params": [
"container_name"
],
"type": "tag"
}
],
"measurement": "rx_bytes",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT derivative(mean(\"value\"), 10s) FROM \"rx_bytes\" WHERE \"container_name\" =~ /^performance_keycloak.*$*/ AND $timeFilter GROUP BY time($interval), \"container_name\"",
"rawQuery": true,
"refId": "A",
"resultFormat": "time_series",
"select": [
[
{
"params": [
"value"
],
"type": "field"
},
{
"params": [],
"type": "mean"
},
{
"params": [
"10s"
],
"type": "derivative"
}
]
],
"tags": [
{
"key": "machine",
"operator": "=~",
"value": "/^$host$/"
},
{
"condition": "AND",
"key": "container_name",
"operator": "=~",
"value": "/^performance_keycloak.*$*/"
}
]
}
],
"thresholds": [],
"timeFrom": null,
"timeShift": null,
"title": "Network IN",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"format": "Bps",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
]
}
],
"repeat": null,
"repeatIteration": null,
"repeatRowId": null,
"showTitle": false,
"title": "Dashboard Row",
"titleSize": "h6"
},
{
"collapse": false,
"height": 250,
"panels": [
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "influxdb_cadvisor",
"fill": 1,
"id": 1,
"interval": "",
"legend": {
"alignAsTable": true,
"avg": true,
"current": true,
"max": true,
"min": true,
"rightSide": false,
"show": true,
"total": true,
"values": true
},
"lines": true,
"linewidth": 1,
"links": [],
"nullPointMode": "connected",
"percentage": false,
"pointradius": 5,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"span": 6,
"stack": false,
"steppedLine": false,
"targets": [
{
"alias": "$tag_container_name",
"dsType": "influxdb",
"groupBy": [
{
"params": [
"$__interval"
],
"type": "time"
},
{
"params": [
"container_name"
],
"type": "tag"
}
],
"measurement": "memory_usage",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT mean(\"value\") FROM \"memory_usage\" WHERE \"container_name\" =~ /^performance_keycloak.*$*/ AND $timeFilter GROUP BY time($__interval), \"container_name\"",
"rawQuery": true,
"refId": "A",
"resultFormat": "time_series",
"select": [
[
{
"params": [
"value"
],
"type": "field"
},
{
"params": [],
"type": "mean"
}
]
],
"tags": [
{
"key": "container_name",
"operator": "=~",
"value": "/^performance_keycloak.*$*/"
},
{
"condition": "AND",
"key": "machine",
"operator": "=~",
"value": "/^$host$/"
}
]
}
],
"thresholds": [],
"timeFrom": null,
"timeShift": null,
"title": "Memory",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"format": "decbytes",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
]
},
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "influxdb_cadvisor",
"fill": 1,
"id": 5,
"legend": {
"alignAsTable": true,
"avg": true,
"current": true,
"max": true,
"min": true,
"show": true,
"total": true,
"values": true
},
"lines": true,
"linewidth": 1,
"links": [],
"nullPointMode": "connected",
"percentage": false,
"pointradius": 5,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"span": 6,
"stack": false,
"steppedLine": false,
"targets": [
{
"alias": "$tag_container_name",
"dsType": "influxdb",
"groupBy": [
{
"params": [
"$interval"
],
"type": "time"
},
{
"params": [
"container_name"
],
"type": "tag"
}
],
"measurement": "tx_bytes",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT derivative(mean(\"value\"), 10s) FROM \"tx_bytes\" WHERE \"container_name\" =~ /^performance_keycloak.*$*/ AND $timeFilter GROUP BY time($interval), \"container_name\"",
"rawQuery": true,
"refId": "B",
"resultFormat": "time_series",
"select": [
[
{
"params": [
"value"
],
"type": "field"
},
{
"params": [],
"type": "mean"
},
{
"params": [
"10s"
],
"type": "derivative"
}
]
],
"tags": [
{
"key": "machine",
"operator": "=~",
"value": "/^$host$/"
},
{
"condition": "AND",
"key": "container_name",
"operator": "=~",
"value": "/^performance_keycloak.*$*/"
}
]
}
],
"thresholds": [],
"timeFrom": null,
"timeShift": null,
"title": "Network OUT",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"format": "Bps",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
]
}
],
"repeat": null,
"repeatIteration": null,
"repeatRowId": null,
"showTitle": false,
"title": "Dashboard Row",
"titleSize": "h6"
}
],
"schemaVersion": 14,
"style": "dark",
"tags": [],
"templating": {
"list": []
},
"time": {
"from": "now-5m",
"to": "now"
},
"timepicker": {
"refresh_intervals": [
"5s",
"10s",
"30s",
"1m",
"5m",
"15m",
"30m",
"1h",
"2h",
"1d"
],
"time_options": [
"5m",
"15m",
"1h",
"6h",
"12h",
"24h",
"2d",
"7d",
"30d"
]
},
"timezone": "browser",
"title": "Resource Usage - Combined",
"version": 0
}


@ -1,669 +0,0 @@
{
"__inputs": [
{
"name": "DS_INFLUXDB_CADVISOR",
"label": "influxdb_cadvisor",
"description": "",
"type": "datasource",
"pluginId": "influxdb",
"pluginName": "InfluxDB"
}
],
"__requires": [
{
"type": "grafana",
"id": "grafana",
"name": "Grafana",
"version": "4.4.3"
},
{
"type": "panel",
"id": "graph",
"name": "Graph",
"version": ""
},
{
"type": "datasource",
"id": "influxdb",
"name": "InfluxDB",
"version": "1.0.0"
}
],
"annotations": {
"list": []
},
"editable": true,
"gnetId": null,
"graphTooltip": 0,
"hideControls": false,
"id": null,
"links": [],
"refresh": "5s",
"rows": [
{
"collapse": false,
"height": 250,
"panels": [
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "influxdb_cadvisor",
"fill": 1,
"id": 2,
"legend": {
"alignAsTable": true,
"avg": true,
"current": true,
"max": true,
"min": true,
"show": true,
"total": true,
"values": true
},
"lines": true,
"linewidth": 1,
"links": [],
"nullPointMode": "connected",
"percentage": false,
"pointradius": 5,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"span": 6,
"stack": false,
"steppedLine": false,
"targets": [
{
"alias": "$tag_container_name",
"dsType": "influxdb",
"groupBy": [
{
"params": [
"$interval"
],
"type": "time"
},
{
"params": [
"machine"
],
"type": "tag"
},
{
"params": [
"container_name"
],
"type": "tag"
}
],
"measurement": "cpu_usage_total",
"orderByTime": "ASC",
"policy": "default",
"refId": "A",
"resultFormat": "time_series",
"select": [
[
{
"params": [
"value"
],
"type": "field"
},
{
"params": [],
"type": "mean"
},
{
"params": [
"10s"
],
"type": "derivative"
}
]
],
"tags": [
{
"key": "container_name",
"operator": "=~",
"value": "/^$container$*/"
},
{
"condition": "AND",
"key": "machine",
"operator": "=~",
"value": "/^$host$/"
}
]
}
],
"thresholds": [],
"timeFrom": null,
"timeShift": null,
"title": "CPU",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"format": "hertz",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
]
},
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "influxdb_cadvisor",
"fill": 1,
"id": 4,
"legend": {
"alignAsTable": true,
"avg": true,
"current": true,
"max": true,
"min": true,
"show": true,
"total": true,
"values": true
},
"lines": true,
"linewidth": 1,
"links": [],
"nullPointMode": "connected",
"percentage": false,
"pointradius": 5,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"span": 6,
"stack": false,
"steppedLine": false,
"targets": [
{
"alias": "$tag_container_name",
"dsType": "influxdb",
"groupBy": [
{
"params": [
"$interval"
],
"type": "time"
},
{
"params": [
"container_name"
],
"type": "tag"
},
{
"params": [
"machine"
],
"type": "tag"
}
],
"measurement": "rx_bytes",
"orderByTime": "ASC",
"policy": "default",
"refId": "A",
"resultFormat": "time_series",
"select": [
[
{
"params": [
"value"
],
"type": "field"
},
{
"params": [],
"type": "mean"
},
{
"params": [
"10s"
],
"type": "derivative"
}
]
],
"tags": [
{
"key": "machine",
"operator": "=~",
"value": "/^$host$/"
},
{
"condition": "AND",
"key": "container_name",
"operator": "=~",
"value": "/^$container$*/"
}
]
}
],
"thresholds": [],
"timeFrom": null,
"timeShift": null,
"title": "Network IN",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"format": "Bps",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
]
}
],
"repeat": null,
"repeatIteration": null,
"repeatRowId": null,
"showTitle": false,
"title": "Dashboard Row",
"titleSize": "h6"
},
{
"collapse": false,
"height": 250,
"panels": [
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "influxdb_cadvisor",
"fill": 1,
"id": 1,
"interval": "",
"legend": {
"alignAsTable": true,
"avg": true,
"current": true,
"max": true,
"min": true,
"rightSide": false,
"show": true,
"total": true,
"values": true
},
"lines": true,
"linewidth": 1,
"links": [],
"nullPointMode": "connected",
"percentage": false,
"pointradius": 5,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"span": 6,
"stack": false,
"steppedLine": false,
"targets": [
{
"alias": "$tag_container_name",
"dsType": "influxdb",
"groupBy": [
{
"params": [
"$__interval"
],
"type": "time"
},
{
"params": [
"machine"
],
"type": "tag"
},
{
"params": [
"container_name"
],
"type": "tag"
}
],
"measurement": "memory_usage",
"orderByTime": "ASC",
"policy": "default",
"query": "SELECT \"value\" FROM \"memory_usage\" WHERE \"container_name\" =~ /^$container$/ AND \"machine\" =~ /^$host$/ AND $timeFilter",
"rawQuery": false,
"refId": "A",
"resultFormat": "time_series",
"select": [
[
{
"params": [
"value"
],
"type": "field"
},
{
"params": [],
"type": "mean"
}
]
],
"tags": [
{
"key": "container_name",
"operator": "=~",
"value": "/^$container$*/"
},
{
"condition": "AND",
"key": "machine",
"operator": "=~",
"value": "/^$host$/"
}
]
}
],
"thresholds": [],
"timeFrom": null,
"timeShift": null,
"title": "Memory",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"format": "decbytes",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
]
},
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "influxdb_cadvisor",
"fill": 1,
"id": 5,
"legend": {
"alignAsTable": true,
"avg": true,
"current": true,
"max": true,
"min": true,
"show": true,
"total": true,
"values": true
},
"lines": true,
"linewidth": 1,
"links": [],
"nullPointMode": "connected",
"percentage": false,
"pointradius": 5,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"span": 6,
"stack": false,
"steppedLine": false,
"targets": [
{
"alias": "$tag_container_name",
"dsType": "influxdb",
"groupBy": [
{
"params": [
"$interval"
],
"type": "time"
},
{
"params": [
"container_name"
],
"type": "tag"
},
{
"params": [
"machine"
],
"type": "tag"
}
],
"measurement": "tx_bytes",
"orderByTime": "ASC",
"policy": "default",
"refId": "B",
"resultFormat": "time_series",
"select": [
[
{
"params": [
"value"
],
"type": "field"
},
{
"params": [],
"type": "mean"
},
{
"params": [
"10s"
],
"type": "derivative"
}
]
],
"tags": [
{
"key": "machine",
"operator": "=~",
"value": "/^$host$/"
},
{
"condition": "AND",
"key": "container_name",
"operator": "=~",
"value": "/^$container$*/"
}
]
}
],
"thresholds": [],
"timeFrom": null,
"timeShift": null,
"title": "Network OUT",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"format": "Bps",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
]
}
],
"repeat": null,
"repeatIteration": null,
"repeatRowId": null,
"showTitle": false,
"title": "Dashboard Row",
"titleSize": "h6"
}
],
"schemaVersion": 14,
"style": "dark",
"tags": [],
"templating": {
"list": [
{
"allValue": "",
"current": {},
"datasource": "influxdb_cadvisor",
"hide": 0,
"includeAll": true,
"label": "Host",
"multi": false,
"name": "host",
"options": [],
"query": "show tag values with key = \"machine\"",
"refresh": 1,
"regex": "",
"sort": 0,
"tagValuesQuery": "",
"tags": [],
"tagsQuery": "",
"type": "query",
"useTags": false
},
{
"allValue": null,
"current": {},
"datasource": "influxdb_cadvisor",
"hide": 0,
"includeAll": false,
"label": "Container",
"multi": false,
"name": "container",
"options": [],
"query": "show tag values with key = \"container_name\" WHERE machine =~ /^$host$/",
"refresh": 1,
"regex": "/([^.]+)/",
"sort": 0,
"tagValuesQuery": "",
"tags": [],
"tagsQuery": "",
"type": "query",
"useTags": false
}
]
},
"time": {
"from": "now-5m",
"to": "now"
},
"timepicker": {
"refresh_intervals": [
"5s",
"10s",
"30s",
"1m",
"5m",
"15m",
"30m",
"1h",
"2h",
"1d"
],
"time_options": [
"5m",
"15m",
"1h",
"6h",
"12h",
"24h",
"2d",
"7d",
"30d"
]
},
"timezone": "browser",
"title": "Resource Usage - Per Container",
"version": 2
}


@ -1,59 +0,0 @@
<?xml version="1.0"?>
<!--
~ Copyright 2016 Red Hat, Inc. and/or its affiliates
~ and other contributors as indicated by the @author tags.
~
~ Licensed under the Apache License, Version 2.0 (the "License");
~ you may not use this file except in compliance with the License.
~ You may obtain a copy of the License at
~
~ http://www.apache.org/licenses/LICENSE-2.0
~
~ Unless required by applicable law or agreed to in writing, software
~ distributed under the License is distributed on an "AS IS" BASIS,
~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
~ See the License for the specific language governing permissions and
~ limitations under the License.
-->
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<parent>
<artifactId>keycloak-testsuite-pom</artifactId>
<groupId>org.keycloak</groupId>
<version>999-SNAPSHOT</version>
<relativePath>../pom.xml</relativePath>
</parent>
<modelVersion>4.0.0</modelVersion>
<groupId>org.keycloak.testsuite</groupId>
<artifactId>performance</artifactId>
<name>Keycloak Performance TestSuite</name>
<packaging>pom</packaging>
<properties>
<server.version>${project.version}</server.version>
<management.user/>
<management.user.password/>
<jstat>false</jstat>
<keycloak.jstat>${jstat}</keycloak.jstat>
<infinispan.jstat>${jstat}</infinispan.jstat>
<lb.jstat>${jstat}</lb.jstat>
</properties>
<modules>
<module>keycloak</module>
<module>load-balancer/wildfly-modcluster</module>
<module>tests</module>
</modules>
<profiles>
<profile>
<id>crossdc</id>
<modules>
<module>infinispan</module>
</modules>
</profile>
</profiles>
</project>


@ -1,179 +0,0 @@
#!/bin/bash
# Execution Parameters
MVN="${MVN:-mvn}"
KEYCLOAK_PROJECT_HOME=${KEYCLOAK_PROJECT_HOME:-$(cd "$(dirname "$0")/../.."; pwd)}
DRY_RUN=${DRY_RUN:-false}
# Performance Testsuite Parameters
DATASET=${DATASET:-1r_10c_100u}
WARMUP_PERIOD=${WARMUP_PERIOD:-120}
RAMPUP_PERIOD=${RAMPUP_PERIOD:-60}
MEASUREMENT_PERIOD=${MEASUREMENT_PERIOD:-120}
FILTER_RESULTS=${FILTER_RESULTS:-true}
# Stress Test Parameters
STRESS_TEST_ALGORITHM=${STRESS_TEST_ALGORITHM:-incremental}
STRESS_TEST_MAX_ITERATIONS=${STRESS_TEST_MAX_ITERATIONS:-10}
STRESS_TEST_PROVISIONING=${STRESS_TEST_PROVISIONING:-false}
STRESS_TEST_PROVISIONING_GENERATE_DATASET=${STRESS_TEST_PROVISIONING_GENERATE_DATASET:-false}
STRESS_TEST_PROVISIONING_PARAMETERS=${STRESS_TEST_PROVISIONING_PARAMETERS:-}
# Stress Test - Incremental Algorithm Parameters
STRESS_TEST_UPS_FIRST=${STRESS_TEST_UPS_FIRST:-1.000}
STRESS_TEST_UPS_INCREMENT=${STRESS_TEST_UPS_INCREMENT:-1.000}
# Stress Test - Bisection Algorithm Parameters
lower_bound=${STRESS_TEST_UPS_LOWER_BOUND:-0.000}
upper_bound=${STRESS_TEST_UPS_UPPER_BOUND:-10.000}
STRESS_TEST_UPS_RESOLUTION=${STRESS_TEST_UPS_RESOLUTION:-1.000}
if $STRESS_TEST_PROVISIONING_GENERATE_DATASET; then DATASET_PROFILE="generate-data"; else DATASET_PROFILE="import-dump"; fi
PROVISIONING_COMMAND="$MVN -f $KEYCLOAK_PROJECT_HOME/testsuite/performance/pom.xml verify -P provision,$DATASET_PROFILE $STRESS_TEST_PROVISIONING_PARAMETERS -Ddataset=$DATASET"
TEARDOWN_COMMAND="$MVN -f $KEYCLOAK_PROJECT_HOME/testsuite/performance/pom.xml verify -P teardown"
function run_command {
echo " $1"
echo
if ! $DRY_RUN; then eval "$1"; fi
}
function run_test {
if [[ $i == 0 || $STRESS_TEST_PROVISIONING == true ]]; then # use specified WARMUP_PERIOD only in the first iteration, or if STRESS_TEST_PROVISIONING is enabled
WARMUP_PARAMETER="-DwarmUpPeriod=$WARMUP_PERIOD ";
else
WARMUP_PARAMETER="-DwarmUpPeriod=0 ";
fi
test_command="$MVN -f $KEYCLOAK_PROJECT_HOME/testsuite/performance/tests/pom.xml verify -Ptest $@ -Ddataset=$DATASET $WARMUP_PARAMETER -DrampUpPeriod=$RAMPUP_PERIOD -DmeasurementPeriod=$MEASUREMENT_PERIOD -DfilterResults=$FILTER_RESULTS -DusersPerSec=$users_per_sec"
if $STRESS_TEST_PROVISIONING; then
run_command "$PROVISIONING_COMMAND"
if [[ $? != 0 ]]; then
echo "Provisioning failed."
run_command "$TEARDOWN_COMMAND"
# bash 4.4+ does not honor `break` across a function boundary;
# report the failure to the calling loop via test_result instead.
export test_result=1
return
fi
run_command "$test_command"
export test_result=$?
run_command "$TEARDOWN_COMMAND" || exit 1
else
run_command "$test_command"
export test_result=$?
fi
[[ $test_result != 0 ]] && echo "Test exit code: $test_result"
}
cat <<EOM
Stress Test Summary:
Script Execution Parameters:
MVN: $MVN
KEYCLOAK_PROJECT_HOME: $KEYCLOAK_PROJECT_HOME
DRY_RUN: $DRY_RUN
Performance Testsuite Parameters:
DATASET: $DATASET
WARMUP_PERIOD: $WARMUP_PERIOD seconds
RAMPUP_PERIOD: $RAMPUP_PERIOD seconds
MEASUREMENT_PERIOD: $MEASUREMENT_PERIOD seconds
FILTER_RESULTS: $FILTER_RESULTS
Stress Test Parameters:
STRESS_TEST_ALGORITHM: $STRESS_TEST_ALGORITHM
STRESS_TEST_MAX_ITERATIONS: $STRESS_TEST_MAX_ITERATIONS
STRESS_TEST_PROVISIONING: $STRESS_TEST_PROVISIONING
EOM
if $STRESS_TEST_PROVISIONING; then cat <<EOM
STRESS_TEST_PROVISIONING_GENERATE_DATASET: $STRESS_TEST_PROVISIONING_GENERATE_DATASET (MVN -P $DATASET_PROFILE)
STRESS_TEST_PROVISIONING_PARAMETERS: $STRESS_TEST_PROVISIONING_PARAMETERS
EOM
fi
users_per_sec_max=0
case "${STRESS_TEST_ALGORITHM}" in
incremental)
cat <<EOM
Incremental Stress Test Parameters:
STRESS_TEST_UPS_FIRST: $STRESS_TEST_UPS_FIRST users per second
STRESS_TEST_UPS_INCREMENT: $STRESS_TEST_UPS_INCREMENT users per second
EOM
for (( i=0; i < $STRESS_TEST_MAX_ITERATIONS; i++)); do
users_per_sec=`bc <<<"scale=10; $STRESS_TEST_UPS_FIRST + $i * $STRESS_TEST_UPS_INCREMENT"`
echo
echo "STRESS TEST ITERATION: $(( i+1 )) / $STRESS_TEST_MAX_ITERATIONS Load: $users_per_sec users per second"
echo
run_test "$@"
if [[ $test_result == 0 ]]; then
users_per_sec_max=$users_per_sec
else
echo "INFO: Last iteration failed. Stopping the loop."
break
fi
done
;;
bisection)
cat <<EOM
Bisection Stress Test Parameters:
STRESS_TEST_UPS_LOWER_BOUND: $lower_bound users per second
STRESS_TEST_UPS_UPPER_BOUND: $upper_bound users per second
STRESS_TEST_UPS_RESOLUTION: $STRESS_TEST_UPS_RESOLUTION users per second
EOM
for (( i=0; i < $STRESS_TEST_MAX_ITERATIONS; i++)); do
interval_size=`bc<<<"scale=10; $upper_bound - $lower_bound"`
users_per_sec=`bc<<<"scale=10; $lower_bound + $interval_size * 0.5"`
echo
echo "STRESS TEST ITERATION: $(( i+1 )) / $STRESS_TEST_MAX_ITERATIONS Bisection interval: [$lower_bound, $upper_bound], interval_size: $interval_size, STRESS_TEST_UPS_RESOLUTION: $STRESS_TEST_UPS_RESOLUTION, Load: $users_per_sec users per second"
echo
if [[ `bc<<<"scale=10; $interval_size < $STRESS_TEST_UPS_RESOLUTION"` == 1 ]]; then echo "INFO: interval_size < STRESS_TEST_UPS_RESOLUTION. Stopping the loop."; break; fi
if [[ `bc<<<"scale=10; $interval_size < 0"` == 1 ]]; then echo "ERROR: Invalid state: lower_bound > upper_bound. Stopping the loop."; exit 1; fi
run_test "$@"
if [[ $test_result == 0 ]]; then
users_per_sec_max=$users_per_sec
echo "INFO: Last iteration succeeded. Continuing with the upper half of the interval."
lower_bound=$users_per_sec
else
echo "INFO: Last iteration failed. Continuing with the lower half of the interval."
upper_bound=$users_per_sec
fi
done
;;
*)
echo "Algorithm '${STRESS_TEST_ALGORITHM}' not supported."
exit 1
;;
esac
echo "Maximum load with a passing test: $users_per_sec_max users per second"
if ! $DRY_RUN; then # Generate a Jenkins Plot Plugin-compatible data file
mkdir -p "$KEYCLOAK_PROJECT_HOME/testsuite/performance/tests/target"
echo "YVALUE=$users_per_sec_max" > "$KEYCLOAK_PROJECT_HOME/testsuite/performance/tests/target/stress-test-result.properties"
fi
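The incremental and bisection modes above both search for the highest load at which a test run still passes; the bisection variant can be sketched in Python as follows (a minimal illustrative sketch — the function and parameter names are not from the original script, and `passes(load)` stands in for one full Maven/Gatling test run):

```python
def max_passing_load(passes, lower, upper, resolution, max_iterations=100):
    """Bisection search for the highest load (users/sec) at which `passes` holds."""
    best = 0.0
    for _ in range(max_iterations):
        if upper - lower < resolution:      # interval narrower than the resolution: stop
            break
        load = lower + (upper - lower) / 2  # midpoint of the current interval
        if passes(load):
            best = load
            lower = load                    # passed: continue with the upper half
        else:
            upper = load                    # failed: continue with the lower half
    return best
```

With the script's default bounds and resolution (`STRESS_TEST_UPS_LOWER_BOUND`, `STRESS_TEST_UPS_UPPER_BOUND`, `STRESS_TEST_UPS_RESOLUTION`) the interval halves each iteration, so the loop terminates long before `STRESS_TEST_MAX_ITERATIONS`.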


@@ -1,6 +0,0 @@
#!/bin/bash
PROJECT_BASEDIR=${PROJECT_BASEDIR:-$(cd "$(dirname "$0")"; pwd)}
PROJECT_BUILD_DIRECTORY=$PROJECT_BASEDIR/target
export PROVISIONED_SYSTEM_PROPERTIES_FILE="$PROJECT_BUILD_DIRECTORY/provisioned-system.properties"


@@ -1,22 +0,0 @@
#!/bin/bash
cd "$(dirname "$0")"
. ./common.sh
HOST_PORT=${1:-localhost:8443}
TRUSTSTORE_PASSWORD=${2:-password}
#secure-sso-sso-perf-01.apps.summit-aws.sysdeseng.com:443
mkdir -p $PROJECT_BUILD_DIRECTORY
echo "Obtaining certificate from $HOST_PORT"
openssl s_client -showcerts -connect $HOST_PORT </dev/null 2>/dev/null|openssl x509 -outform PEM >$PROJECT_BUILD_DIRECTORY/keycloak.pem
if [ ! -s "$PROJECT_BUILD_DIRECTORY/keycloak.pem" ]; then echo "Obtaining certificate failed."; exit 1; fi
cat $PROJECT_BUILD_DIRECTORY/keycloak.pem
echo "Importing certificate"
rm -f $PROJECT_BUILD_DIRECTORY/truststore.jks
keytool -importcert -file $PROJECT_BUILD_DIRECTORY/keycloak.pem -keystore $PROJECT_BUILD_DIRECTORY/truststore.jks -alias "keycloak" -storepass "$TRUSTSTORE_PASSWORD" -noprompt
echo "Keystore file: $PROJECT_BUILD_DIRECTORY/truststore.jks"


@@ -1,477 +0,0 @@
#!/bin/bash
### FUNCTIONS ###
function runCommand() {
echo "$1"
if ! eval "$1" ; then
echo "Execution of command failed."
echo "Command: \"$1\""
exit 1;
fi
}
function generateDockerComposeFile() {
echo "Generating $( basename "$DOCKER_COMPOSE_FILE" )"
local TEMPLATES_PATH=$DEPLOYMENT
cat $TEMPLATES_PATH/docker-compose-base.yml > $DOCKER_COMPOSE_FILE
case "$DEPLOYMENT" in
cluster)
I=0
for CPUSET in $KEYCLOAK_CPUSETS ; do
I=$((I+1))
sed -e s/%I%/$I/ -e s/%CPUSET%/$CPUSET/ $TEMPLATES_PATH/docker-compose-keycloak.yml >> $DOCKER_COMPOSE_FILE
done
;;
crossdc)
I=0
for CPUSET in $KEYCLOAK_DC1_CPUSETS ; do
I=$((I+1))
sed -e s/%I%/$I/ -e s/%CPUSET%/$CPUSET/ $TEMPLATES_PATH/docker-compose-keycloak_dc1.yml >> $DOCKER_COMPOSE_FILE
done
I=0
for CPUSET in $KEYCLOAK_DC2_CPUSETS ; do
I=$((I+1))
sed -e s/%I%/$I/ -e s/%CPUSET%/$CPUSET/ $TEMPLATES_PATH/docker-compose-keycloak_dc2.yml >> $DOCKER_COMPOSE_FILE
done
;;
esac
}
function inspectDockerPortMapping() {
local PORT="$1"
local CONTAINER="$2"
local INSPECT_COMMAND="docker inspect --format='{{(index (index .NetworkSettings.Ports \"$PORT\") 0).HostPort}}' $CONTAINER"
MAPPED_PORT="$( eval "$INSPECT_COMMAND" )"
if [ -z "$MAPPED_PORT" ]; then
echo "Error finding mapped port for $CONTAINER."
exit 1
fi
}
function generateProvisionedSystemProperties() {
echo "Generating $PROVISIONED_SYSTEM_PROPERTIES_FILE"
echo "deployment=$DEPLOYMENT" > $PROVISIONED_SYSTEM_PROPERTIES_FILE
echo "# Docker Compose" >> $PROVISIONED_SYSTEM_PROPERTIES_FILE
echo "keycloak.docker.services=$KEYCLOAK_SERVICES" >> $PROVISIONED_SYSTEM_PROPERTIES_FILE
case "$DEPLOYMENT" in
singlenode)
echo "# HTTP" >> $PROVISIONED_SYSTEM_PROPERTIES_FILE
inspectDockerPortMapping 8080/tcp ${PROJECT_NAME}_keycloak_1
echo "keycloak.frontend.servers=http://localhost:$MAPPED_PORT/auth" >> $PROVISIONED_SYSTEM_PROPERTIES_FILE
echo "# JMX" >> $PROVISIONED_SYSTEM_PROPERTIES_FILE
inspectDockerPortMapping 9990/tcp ${PROJECT_NAME}_keycloak_1
echo "keycloak.frontend.servers.jmx=service:jmx:remote+http://localhost:$MAPPED_PORT" >> $PROVISIONED_SYSTEM_PROPERTIES_FILE
;;
cluster)
echo "# HTTP" >> $PROVISIONED_SYSTEM_PROPERTIES_FILE
inspectDockerPortMapping 8080/tcp ${PROJECT_NAME}_loadbalancer_1
echo "keycloak.frontend.servers=http://localhost:$MAPPED_PORT/auth" >> $PROVISIONED_SYSTEM_PROPERTIES_FILE
BACKEND_URLS=""
for SERVICE in $KEYCLOAK_SERVICES ; do
inspectDockerPortMapping 8080/tcp ${PROJECT_NAME}_${SERVICE}_1
BACKEND_URLS="$BACKEND_URLS http://localhost:$MAPPED_PORT/auth"
done
echo "keycloak.backend.servers=$BACKEND_URLS" >> $PROVISIONED_SYSTEM_PROPERTIES_FILE
echo "# JMX" >> $PROVISIONED_SYSTEM_PROPERTIES_FILE
inspectDockerPortMapping 9990/tcp ${PROJECT_NAME}_loadbalancer_1
echo "keycloak.frontend.servers.jmx=service:jmx:remote+http://localhost:$MAPPED_PORT" >> $PROVISIONED_SYSTEM_PROPERTIES_FILE
BACKEND_URLS=""
for SERVICE in $KEYCLOAK_SERVICES ; do
inspectDockerPortMapping 9990/tcp ${PROJECT_NAME}_${SERVICE}_1
BACKEND_URLS="$BACKEND_URLS service:jmx:remote+http://localhost:$MAPPED_PORT"
done
echo "keycloak.backend.servers.jmx=$BACKEND_URLS" >> $PROVISIONED_SYSTEM_PROPERTIES_FILE
;;
crossdc)
echo "# HTTP" >> $PROVISIONED_SYSTEM_PROPERTIES_FILE
inspectDockerPortMapping 8080/tcp ${PROJECT_NAME}_loadbalancer_dc1_1
KC_DC1_PORT=$MAPPED_PORT
inspectDockerPortMapping 8080/tcp ${PROJECT_NAME}_loadbalancer_dc2_1
KC_DC2_PORT=$MAPPED_PORT
echo "keycloak.frontend.servers=http://localhost:$KC_DC1_PORT/auth http://localhost:$KC_DC2_PORT/auth" >> $PROVISIONED_SYSTEM_PROPERTIES_FILE
BACKEND_URLS=""
for SERVICE in $KEYCLOAK_SERVICES ; do
inspectDockerPortMapping 8080/tcp ${PROJECT_NAME}_${SERVICE}_1
BACKEND_URLS="$BACKEND_URLS http://localhost:$MAPPED_PORT/auth"
done
echo "keycloak.backend.servers=$BACKEND_URLS" >> $PROVISIONED_SYSTEM_PROPERTIES_FILE
echo "# JMX" >> $PROVISIONED_SYSTEM_PROPERTIES_FILE
inspectDockerPortMapping 9990/tcp ${PROJECT_NAME}_loadbalancer_dc1_1
KC_DC1_PORT=$MAPPED_PORT
inspectDockerPortMapping 9990/tcp ${PROJECT_NAME}_loadbalancer_dc2_1
KC_DC2_PORT=$MAPPED_PORT
echo "keycloak.frontend.servers.jmx=service:jmx:remote+http://localhost:$KC_DC1_PORT service:jmx:remote+http://localhost:$KC_DC2_PORT" >> $PROVISIONED_SYSTEM_PROPERTIES_FILE
BACKEND_URLS=""
for SERVICE in $KEYCLOAK_SERVICES ; do
inspectDockerPortMapping 9990/tcp ${PROJECT_NAME}_${SERVICE}_1
BACKEND_URLS="$BACKEND_URLS service:jmx:remote+http://localhost:$MAPPED_PORT"
done
echo "keycloak.backend.servers.jmx=$BACKEND_URLS" >> $PROVISIONED_SYSTEM_PROPERTIES_FILE
inspectDockerPortMapping 9990/tcp ${PROJECT_NAME}_infinispan_dc1_1
ISPN_DC1_PORT=$MAPPED_PORT
inspectDockerPortMapping 9990/tcp ${PROJECT_NAME}_infinispan_dc2_1
ISPN_DC2_PORT=$MAPPED_PORT
echo "infinispan.servers.jmx=service:jmx:remote+http://localhost:$ISPN_DC1_PORT service:jmx:remote+http://localhost:$ISPN_DC2_PORT" >> $PROVISIONED_SYSTEM_PROPERTIES_FILE
;;
esac
echo "keycloak.admin.user=$KEYCLOAK_ADMIN_USER" >> $PROVISIONED_SYSTEM_PROPERTIES_FILE
echo "keycloak.admin.password=$KEYCLOAK_ADMIN_PASSWORD" >> $PROVISIONED_SYSTEM_PROPERTIES_FILE
}
function loadProvisionedSystemProperties() {
if [ -f $PROVISIONED_SYSTEM_PROPERTIES_FILE ]; then
echo "Loading $PROVISIONED_SYSTEM_PROPERTIES_FILE"
export DEPLOYMENT=$( sed -n -e '/deployment=/ s/.*\= *//p' $PROVISIONED_SYSTEM_PROPERTIES_FILE )
export KEYCLOAK_SERVICES=$( sed -n -e '/keycloak.docker.services=/ s/.*\= *//p' $PROVISIONED_SYSTEM_PROPERTIES_FILE )
export KEYCLOAK_ADMIN_USER=$( sed -n -e '/keycloak.admin.user=/ s/.*\= *//p' $PROVISIONED_SYSTEM_PROPERTIES_FILE )
export KEYCLOAK_ADMIN_PASSWORD=$( sed -n -e '/keycloak.admin.password=/ s/.*\= *//p' $PROVISIONED_SYSTEM_PROPERTIES_FILE )
else
echo "$PROVISIONED_SYSTEM_PROPERTIES_FILE not found."
fi
}
function removeProvisionedSystemProperties() {
rm -f $PROVISIONED_SYSTEM_PROPERTIES_FILE
}
function isTestDeployment() {
IS_TEST_DEPLOYMENT=false
case "$DEPLOYMENT" in
singlenode|cluster|crossdc) IS_TEST_DEPLOYMENT=true ;;
esac
$IS_TEST_DEPLOYMENT
}
function validateNotEmpty() {
VARIABLE_NAME=$1
VARIABLE_VALUE=$2
# echo "$VARIABLE_NAME: $VARIABLE_VALUE"
if [ -z "$VARIABLE_VALUE" ]; then echo "$VARIABLE_NAME must contain at least one item."; exit 1; fi
}
### SCRIPT ###
cd "$(dirname "$0")"
. ./common.sh
cd $PROJECT_BUILD_DIRECTORY/docker-compose
PROJECT_NAME=performance
export DEPLOYMENT="${DEPLOYMENT:-singlenode}"
if isTestDeployment ; then loadProvisionedSystemProperties; fi
export OPERATION="${OPERATION:-provision}"
echo "DEPLOYMENT: $DEPLOYMENT"
case "$DEPLOYMENT" in
singlenode) DOCKER_COMPOSE_FILE=docker-compose.yml ;;
cluster) DOCKER_COMPOSE_FILE=docker-compose-cluster.yml ;;
crossdc) DOCKER_COMPOSE_FILE=docker-compose-crossdc.yml ;;
monitoring) DOCKER_COMPOSE_FILE=docker-compose-monitoring.yml ; DELETE_DATA="${DELETE_DATA:-false}" ;;
*)
echo "Deployment '$DEPLOYMENT' not supported by provisioner '$PROVISIONER'."
exit 1
;;
esac
echo "OPERATION: $OPERATION"
case "$OPERATION" in
provision)
case "$DEPLOYMENT" in
singlenode)
BASE_SERVICES="mariadb"
KEYCLOAK_SERVICES="keycloak"
validateNotEmpty DB_CPUSETS $DB_CPUSETS
DB_CPUSETS_ARRAY=( $DB_CPUSETS )
export DB_CPUSET=${DB_CPUSETS_ARRAY[0]}
validateNotEmpty KEYCLOAK_CPUSETS $KEYCLOAK_CPUSETS
KEYCLOAK_CPUSETS_ARRAY=( $KEYCLOAK_CPUSETS )
export KEYCLOAK_CPUSET=${KEYCLOAK_CPUSETS_ARRAY[0]}
echo "DB_CPUSET: $DB_CPUSET"
echo "KEYCLOAK_CPUSET: $KEYCLOAK_CPUSET"
echo "BASE_SERVICES: $BASE_SERVICES"
echo "KEYCLOAK_SERVICES: $KEYCLOAK_SERVICES"
;;
cluster)
BASE_SERVICES="mariadb loadbalancer"
validateNotEmpty DB_CPUSETS $DB_CPUSETS
DB_CPUSETS_ARRAY=( $DB_CPUSETS )
export DB_CPUSET=${DB_CPUSETS_ARRAY[0]}
validateNotEmpty LB_CPUSETS $LB_CPUSETS
LB_CPUSETS_ARRAY=( $LB_CPUSETS )
export LB_CPUSET=${LB_CPUSETS_ARRAY[0]}
validateNotEmpty KEYCLOAK_CPUSETS $KEYCLOAK_CPUSETS
KEYCLOAK_CPUSETS_ARRAY=( $KEYCLOAK_CPUSETS )
KEYCLOAK_MAX_SCALE=${#KEYCLOAK_CPUSETS_ARRAY[@]}
KEYCLOAK_SCALE="${KEYCLOAK_SCALE:-$KEYCLOAK_MAX_SCALE}"
if [ $KEYCLOAK_SCALE -gt $KEYCLOAK_MAX_SCALE ]; then KEYCLOAK_SCALE=$KEYCLOAK_MAX_SCALE; fi
if [ $KEYCLOAK_SCALE -lt 1 ]; then KEYCLOAK_SCALE=1; fi
echo "DB_CPUSET: $DB_CPUSET"
echo "LB_CPUSET: $LB_CPUSET"
echo "KEYCLOAK_CPUSETS: $KEYCLOAK_CPUSETS"
echo "KEYCLOAK_SCALE: ${KEYCLOAK_SCALE} (max ${KEYCLOAK_MAX_SCALE})"
KEYCLOAK_SERVICES=""
STOPPED_KEYCLOAK_SERVICES=""
for ((i=1; i<=$KEYCLOAK_MAX_SCALE; i++)) ; do
if (( $i <= $KEYCLOAK_SCALE )) ; then
KEYCLOAK_SERVICES="$KEYCLOAK_SERVICES keycloak_$i"
else
STOPPED_KEYCLOAK_SERVICES="$STOPPED_KEYCLOAK_SERVICES keycloak_$i"
fi
done
echo "BASE_SERVICES: $BASE_SERVICES"
echo "KEYCLOAK_SERVICES: $KEYCLOAK_SERVICES"
generateDockerComposeFile
;;
crossdc)
BASE_SERVICES="mariadb_dc1 mariadb_dc2 infinispan_dc1 infinispan_dc2 loadbalancer_dc1 loadbalancer_dc2"
validateNotEmpty DB_DC1_CPUSETS $DB_DC1_CPUSETS
validateNotEmpty DB_DC2_CPUSETS $DB_DC2_CPUSETS
DB_DC1_CPUSETS_ARRAY=( $DB_DC1_CPUSETS )
DB_DC2_CPUSETS_ARRAY=( $DB_DC2_CPUSETS )
export DB_DC1_CPUSET=${DB_DC1_CPUSETS_ARRAY[0]}
export DB_DC2_CPUSET=${DB_DC2_CPUSETS_ARRAY[0]}
echo "DB_DC1_CPUSET: $DB_DC1_CPUSET"
echo "DB_DC2_CPUSET: $DB_DC2_CPUSET"
validateNotEmpty LB_DC1_CPUSETS $LB_DC1_CPUSETS
validateNotEmpty LB_DC2_CPUSETS $LB_DC2_CPUSETS
LB_DC1_CPUSETS_ARRAY=( $LB_DC1_CPUSETS )
LB_DC2_CPUSETS_ARRAY=( $LB_DC2_CPUSETS )
export LB_DC1_CPUSET=${LB_DC1_CPUSETS_ARRAY[0]}
export LB_DC2_CPUSET=${LB_DC2_CPUSETS_ARRAY[0]}
echo "LB_DC1_CPUSET: $LB_DC1_CPUSET"
echo "LB_DC2_CPUSET: $LB_DC2_CPUSET"
validateNotEmpty INFINISPAN_DC1_CPUSETS $INFINISPAN_DC1_CPUSETS
validateNotEmpty INFINISPAN_DC2_CPUSETS $INFINISPAN_DC2_CPUSETS
INFINISPAN_DC1_CPUSETS_ARRAY=( $INFINISPAN_DC1_CPUSETS )
INFINISPAN_DC2_CPUSETS_ARRAY=( $INFINISPAN_DC2_CPUSETS )
export INFINISPAN_DC1_CPUSET=${INFINISPAN_DC1_CPUSETS_ARRAY[0]}
export INFINISPAN_DC2_CPUSET=${INFINISPAN_DC2_CPUSETS_ARRAY[0]}
echo "INFINISPAN_DC1_CPUSET: $INFINISPAN_DC1_CPUSET"
echo "INFINISPAN_DC2_CPUSET: $INFINISPAN_DC2_CPUSET"
validateNotEmpty KEYCLOAK_DC1_CPUSETS $KEYCLOAK_DC1_CPUSETS
validateNotEmpty KEYCLOAK_DC2_CPUSETS $KEYCLOAK_DC2_CPUSETS
KEYCLOAK_DC1_CPUSETS_ARRAY=( $KEYCLOAK_DC1_CPUSETS )
KEYCLOAK_DC2_CPUSETS_ARRAY=( $KEYCLOAK_DC2_CPUSETS )
KEYCLOAK_DC1_MAX_SCALE=${#KEYCLOAK_DC1_CPUSETS_ARRAY[@]}
KEYCLOAK_DC2_MAX_SCALE=${#KEYCLOAK_DC2_CPUSETS_ARRAY[@]}
KEYCLOAK_DC1_SCALE="${KEYCLOAK_DC1_SCALE:-$KEYCLOAK_DC1_MAX_SCALE}"
KEYCLOAK_DC2_SCALE="${KEYCLOAK_DC2_SCALE:-$KEYCLOAK_DC2_MAX_SCALE}"
if [ $KEYCLOAK_DC1_SCALE -gt $KEYCLOAK_DC1_MAX_SCALE ]; then KEYCLOAK_DC1_SCALE=$KEYCLOAK_DC1_MAX_SCALE; fi
if [ $KEYCLOAK_DC1_SCALE -lt 1 ]; then KEYCLOAK_DC1_SCALE=1; fi
if [ $KEYCLOAK_DC2_SCALE -gt $KEYCLOAK_DC2_MAX_SCALE ]; then KEYCLOAK_DC2_SCALE=$KEYCLOAK_DC2_MAX_SCALE; fi
if [ $KEYCLOAK_DC2_SCALE -lt 1 ]; then KEYCLOAK_DC2_SCALE=1; fi
echo "KEYCLOAK_DC1_CPUSETS: ${KEYCLOAK_DC1_CPUSETS}"
echo "KEYCLOAK_DC2_CPUSETS: ${KEYCLOAK_DC2_CPUSETS}"
echo "KEYCLOAK_DC1_SCALE: ${KEYCLOAK_DC1_SCALE} (max ${KEYCLOAK_DC1_MAX_SCALE})"
echo "KEYCLOAK_DC2_SCALE: ${KEYCLOAK_DC2_SCALE} (max ${KEYCLOAK_DC2_MAX_SCALE})"
KEYCLOAK_SERVICES=""
STOPPED_KEYCLOAK_SERVICES=""
for ((i=1; i<=$KEYCLOAK_DC1_MAX_SCALE; i++)) ; do
if (( $i <= $KEYCLOAK_DC1_SCALE )) ; then
KEYCLOAK_SERVICES="$KEYCLOAK_SERVICES keycloak_dc1_$i"
else
STOPPED_KEYCLOAK_SERVICES="$STOPPED_KEYCLOAK_SERVICES keycloak_dc1_$i"
fi
done
for ((i=1; i<=$KEYCLOAK_DC2_MAX_SCALE; i++)) ; do
if (( $i <= $KEYCLOAK_DC2_SCALE )) ; then
KEYCLOAK_SERVICES="$KEYCLOAK_SERVICES keycloak_dc2_$i"
else
STOPPED_KEYCLOAK_SERVICES="$STOPPED_KEYCLOAK_SERVICES keycloak_dc2_$i"
fi
done
echo "BASE_SERVICES: $BASE_SERVICES"
echo "KEYCLOAK_SERVICES: $KEYCLOAK_SERVICES"
generateDockerComposeFile
;;
monitoring)
validateNotEmpty MONITORING_CPUSETS "$MONITORING_CPUSETS"
MONITORING_CPUSETS_ARRAY=( $MONITORING_CPUSETS )
export MONITORING_CPUSET=${MONITORING_CPUSETS_ARRAY[0]}
echo "MONITORING_CPUSET: $MONITORING_CPUSET"
;;
esac
runCommand "docker-compose -f $DOCKER_COMPOSE_FILE -p ${PROJECT_NAME} up -d --build $BASE_SERVICES $KEYCLOAK_SERVICES"
if [ ! -z "$STOPPED_KEYCLOAK_SERVICES" ] ; then
echo "STOPPED_KEYCLOAK_SERVICES: $STOPPED_KEYCLOAK_SERVICES"
runCommand "docker-compose -f $DOCKER_COMPOSE_FILE -p ${PROJECT_NAME} stop $STOPPED_KEYCLOAK_SERVICES"
fi
if isTestDeployment ; then
generateProvisionedSystemProperties;
$PROJECT_BASEDIR/healthcheck.sh
fi
;;
teardown)
DELETE_DATA="${DELETE_DATA:-true}"
echo "DELETE_DATA: $DELETE_DATA"
if "$DELETE_DATA" ; then VOLUMES_ARG="-v"; else VOLUMES_ARG=""; fi
runCommand "docker-compose -f $DOCKER_COMPOSE_FILE -p ${PROJECT_NAME} down $VOLUMES_ARG"
if isTestDeployment ; then removeProvisionedSystemProperties; fi
;;
export-dump|import-dump)
loadProvisionedSystemProperties
echo "KEYCLOAK_SERVICES: $KEYCLOAK_SERVICES"
if [ -z "$KEYCLOAK_SERVICES" ]; then echo "Unable to load KEYCLOAK_SERVICES"; exit 1; fi
case "$DEPLOYMENT" in
singlenode|cluster) export DB_CONTAINER=${PROJECT_NAME}_mariadb_1 ;;
crossdc) export DB_CONTAINER=${PROJECT_NAME}_mariadb_dc1_1 ;;
*) echo "Deployment '$DEPLOYMENT' doesn't support operation '$OPERATION'." ; exit 1 ;;
esac
if [ ! -f "$DATASET_PROPERTIES_FILE" ]; then echo "Operation '$OPERATION' requires a valid DATASET_PROPERTIES_FILE parameter."; exit 1; fi
DATASET_PROPERTIES_FILENAME=`basename $DATASET_PROPERTIES_FILE`
DATASET=${DATASET_PROPERTIES_FILENAME%.properties}
echo "DATASET_PROPERTIES_FILE: $DATASET_PROPERTIES_FILE"
echo "DATASET: $DATASET"
echo "Stopping Keycloak services."
runCommand "docker-compose -f $DOCKER_COMPOSE_FILE -p ${PROJECT_NAME} stop $KEYCLOAK_SERVICES"
cd `dirname $DATASET_PROPERTIES_FILE`
case "$OPERATION" in
export-dump)
echo "Exporting $DATASET.sql."
if docker exec $DB_CONTAINER /usr/bin/mysqldump -u root --password=root keycloak > $DATASET.sql ; then
echo "Compressing $DATASET.sql."
gzip $DATASET.sql
fi
;;
import-dump)
DUMP_DOWNLOAD_SITE=${DUMP_DOWNLOAD_SITE:-https://downloads.jboss.org/keycloak-qe}
if [ ! -f "$DATASET.sql.gz" ]; then
echo "Downloading dump file: $DUMP_DOWNLOAD_SITE/$DATASET.sql.gz"
if ! curl -f -O $DUMP_DOWNLOAD_SITE/$DATASET.properties -O $DUMP_DOWNLOAD_SITE/$DATASET.sql.gz ; then
echo Download failed.
exit 1
fi
fi
echo "Importing $DATASET.sql.gz"
set -o pipefail
if ! gunzip -c $DATASET.sql.gz | docker exec -i $DB_CONTAINER /usr/bin/mysql -u root --password=root keycloak ; then
echo Import failed.
exit 1
fi
;;
esac
cd $PROJECT_BUILD_DIRECTORY/docker-compose
echo "Starting Keycloak services."
runCommand "docker-compose -f $DOCKER_COMPOSE_FILE -p ${PROJECT_NAME} up -d --no-recreate $KEYCLOAK_SERVICES"
# need to update mapped ports
generateProvisionedSystemProperties
$PROJECT_BASEDIR/healthcheck.sh
;;
collect)
TIMESTAMP=`date -u "+%Y-%m-%d_%T_%Z"`
ARTIFACTS_DIR="${PROJECT_BUILD_DIRECTORY}/collected-artifacts/${DEPLOYMENT}_${TIMESTAMP}"
SERVICES=`docker-compose -f $DOCKER_COMPOSE_FILE -p ${PROJECT_NAME} config --services`
GNUPLOT_SCRIPTS_DIR="$PROJECT_BASEDIR/src/main/gnuplot/jstat"
GNUPLOT_COMMON="$GNUPLOT_SCRIPTS_DIR/common.gp"
echo "Collecting service logs."
rm -rf ${ARTIFACTS_DIR}; mkdir -p ${ARTIFACTS_DIR}
for SERVICE in ${SERVICES}; do
mkdir -p "${ARTIFACTS_DIR}/${SERVICE}"
# log files & configs
if [[ $SERVICE =~ .*keycloak.* ]]; then
docker cp "${PROJECT_NAME}_${SERVICE}_1:/opt/jboss/keycloak/standalone/configuration" "${ARTIFACTS_DIR}/${SERVICE}/configuration"
docker cp "${PROJECT_NAME}_${SERVICE}_1:/opt/jboss/keycloak/standalone/log" "${ARTIFACTS_DIR}/${SERVICE}/log"
elif [[ $SERVICE =~ .*infinispan.* ]]; then
docker cp "${PROJECT_NAME}_${SERVICE}_1:/opt/jboss/infinispan-server/standalone/configuration" "${ARTIFACTS_DIR}/${SERVICE}/configuration"
docker cp "${PROJECT_NAME}_${SERVICE}_1:/opt/jboss/infinispan-server/standalone/log" "${ARTIFACTS_DIR}/${SERVICE}/log"
elif [[ $SERVICE =~ .*loadbalancer.* ]]; then
docker cp "${PROJECT_NAME}_${SERVICE}_1:/opt/jboss/wildfly/standalone/configuration" "${ARTIFACTS_DIR}/${SERVICE}/configuration"
docker cp "${PROJECT_NAME}_${SERVICE}_1:/opt/jboss/wildfly/standalone/log" "${ARTIFACTS_DIR}/${SERVICE}/log"
else
docker logs "${PROJECT_NAME}_${SERVICE}_1" > ${ARTIFACTS_DIR}/${SERVICE}/docker.log 2>&1;
if [[ $? != 0 ]]; then echo "ERROR collecting from: ${SERVICE}"; rm ${ARTIFACTS_DIR}/${SERVICE}/docker.log; fi
fi
# jstat charts
if ${JSTAT:-false}; then
JSTAT_DATAFILE="${ARTIFACTS_DIR}/${SERVICE}/log/jstat-gc.log"
if [ -f "$JSTAT_DATAFILE" ] && ${GNUPLOT:-false}; then
mkdir -p "${ARTIFACTS_DIR}/${SERVICE}/jstat-charts"
HTML="${ARTIFACTS_DIR}/${SERVICE}/jstat-charts/index.html"
echo "<html><head><title>JStat Charts for $SERVICE</title>" > "$HTML"
echo "<style>div.box{ display: -webkit-inline-box }</style></head>" >> "$HTML"
echo "<body><h1>JStat Charts for $SERVICE</h1>" >> "$HTML"
for GP_SCRIPT in gc-all gc-s0 gc-s1 gc-e gc-o gc-m gc-cc gc-ev gc-t ; do
gnuplot -e "datafile='$JSTAT_DATAFILE'" "$GNUPLOT_COMMON" "$GNUPLOT_SCRIPTS_DIR/${GP_SCRIPT}.gp" > "${ARTIFACTS_DIR}/${SERVICE}/jstat-charts/${GP_SCRIPT}.png"
if [ $? == 0 ]; then
echo "<div class='box'>" >> "$HTML"
echo "<b>${GP_SCRIPT}</b><br/>" >> "$HTML"
echo "<a href='${GP_SCRIPT}.png'><img src='${GP_SCRIPT}.png' width='400' height='300'/></a>" >> "$HTML"
echo "</div>" >> "$HTML"
fi
done
echo "</body></html>" >> "$HTML"
fi
fi
done
if [ -z "$(ls -A ${ARTIFACTS_DIR})" ]; then echo "No logs were collected."; rm -rf ${ARTIFACTS_DIR}; fi
;;
*)
echo "Unsupported operation: '$OPERATION'"
exit 1
;;
esac
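The `inspectDockerPortMapping` function above uses a Go template to pull the first host port bound to a container port. The same lookup on the raw `docker inspect` JSON can be sketched as follows (the function name is illustrative, not part of the original script):

```python
import json

def mapped_port(inspect_output, container_port):
    """Host port bound to `container_port` (e.g. "8080/tcp") in `docker inspect` JSON."""
    # `docker inspect <container>` prints a one-element JSON array
    ports = json.loads(inspect_output)[0]["NetworkSettings"]["Ports"]
    bindings = ports.get(container_port)
    if not bindings:  # unmapped/unexposed ports appear as null
        raise LookupError(f"no host mapping for {container_port}")
    return bindings[0]["HostPort"]
```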


@@ -1,49 +0,0 @@
#!/bin/bash
cd "$(dirname "$0")"
. ./common.sh
CHECK_TIMEOUT=${CHECK_TIMEOUT:-5}
function isKeycloakServerReady {
CODE=`curl --connect-timeout $CHECK_TIMEOUT -m $CHECK_TIMEOUT -s -o /dev/null -w "%{http_code}" $1/realms/master`
[ "$CODE" -eq "200" ]
}
if [ -f "$PROVISIONED_SYSTEM_PROPERTIES_FILE" ] ; then
FRONTEND_SERVERS=$( sed -n -e '/keycloak.frontend.servers=/ s/.*\= *//p' "$PROVISIONED_SYSTEM_PROPERTIES_FILE" )
BACKEND_SERVERS=$( sed -n -e '/keycloak.backend.servers=/ s/.*\= *//p' "$PROVISIONED_SYSTEM_PROPERTIES_FILE" )
KEYCLOAK_SERVERS="$FRONTEND_SERVERS $BACKEND_SERVERS"
HEALTHCHECK_ITERATIONS=${HEALTHCHECK_ITERATIONS:-20}
HEALTHCHECK_WAIT=${HEALTHCHECK_WAIT:-6s}
echo "Waiting for Keycloak servers to be ready."
echo "Check intervals: $HEALTHCHECK_ITERATIONS x $HEALTHCHECK_WAIT."
C=0
READY=false
while ! $READY; do
C=$((C+1))
if [ $C -gt $HEALTHCHECK_ITERATIONS ]; then
echo System healthcheck failed.
exit 1
fi
echo $( date "+%Y-%m-%d %H:%M:%S" )
READY=true
for SERVER in $KEYCLOAK_SERVERS ; do
if isKeycloakServerReady $SERVER; then
echo -e "Keycloak server: $SERVER\tREADY"
else
echo -e "Keycloak server: $SERVER\tNOT READY"
READY=false
fi
done
if ! $READY; then sleep $HEALTHCHECK_WAIT ; fi
done
echo All servers ready.
else
echo Healthcheck skipped.
fi
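The healthcheck's retry loop amounts to a bounded poll across all frontend and backend servers. A minimal Python sketch with the same iteration/wait semantics (names are illustrative; `is_ready(server)` stands in for the curl check above):

```python
import time

def wait_until_ready(servers, is_ready, iterations=20, wait_seconds=6):
    """Poll every server up to `iterations` times; True once all report ready."""
    for attempt in range(iterations):
        if all(is_ready(server) for server in servers):
            return True
        if attempt < iterations - 1:  # no final sleep once attempts are exhausted
            time.sleep(wait_seconds)
    return False
```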


@@ -1,6 +0,0 @@
#!/bin/bash
cd "$(dirname "$0")"
. ./common.sh
java -cp $PROJECT_BUILD_DIRECTORY/classes org.keycloak.performance.log.LogProcessor "$@"


@@ -1,25 +0,0 @@
#provisioner=docker-compose
#deployment=singlenode
# Keycloak Settings
keycloak.scale=1
keycloak.docker.cpusets=1
keycloak.docker.memlimit=2500m
keycloak.jvm.memory=-Xms64m -Xmx2g -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m
keycloak.http.max-connections=50000
keycloak.worker.io-threads=2
keycloak.worker.task-max-threads=16
keycloak.ds.min-pool-size=10
keycloak.ds.max-pool-size=100
keycloak.ds.pool-prefill=true
keycloak.ds.ps-cache-size=100
keycloak.admin.user=admin
keycloak.admin.password=admin
# Database Settings
db.docker.cpusets=0
db.docker.memlimit=2g
db.max.connections=100
# Monitoring Settings
monitoring.docker.cpusets=0


@@ -1,34 +0,0 @@
#provisioner=docker-compose
#deployment=cluster
# Keycloak Settings
keycloak.scale=1
keycloak.docker.cpusets=2 3
keycloak.docker.memlimit=2500m
keycloak.jvm.memory=-Xms64m -Xmx2g -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m
keycloak.http.max-connections=50000
keycloak.ajp.max-connections=50000
keycloak.worker.io-threads=2
keycloak.worker.task-max-threads=16
keycloak.ds.min-pool-size=10
keycloak.ds.max-pool-size=100
keycloak.ds.pool-prefill=true
keycloak.ds.ps-cache-size=100
keycloak.admin.user=admin
keycloak.admin.password=admin
# Database Settings
db.docker.cpusets=1
db.docker.memlimit=2g
db.max.connections=100
# Load Balancer Settings
lb.docker.cpusets=1
lb.docker.memlimit=1500m
lb.jvm.memory=-Xms64m -Xmx1024m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m
lb.http.max-connections=50000
lb.worker.io-threads=2
lb.worker.task-max-threads=16
# Monitoring Settings
monitoring.docker.cpusets=0


@@ -1,44 +0,0 @@
#provisioner=docker-compose
#deployment=crossdc
# Keycloak Settings
keycloak.dc1.scale=1
keycloak.dc2.scale=1
keycloak.dc1.docker.cpusets=2
keycloak.dc2.docker.cpusets=3
keycloak.docker.memlimit=2500m
keycloak.jvm.memory=-Xms64m -Xmx2g -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m
keycloak.http.max-connections=50000
keycloak.ajp.max-connections=50000
keycloak.worker.io-threads=2
keycloak.worker.task-max-threads=16
keycloak.ds.min-pool-size=10
keycloak.ds.max-pool-size=100
keycloak.ds.pool-prefill=true
keycloak.ds.ps-cache-size=100
keycloak.admin.user=admin
keycloak.admin.password=admin
# Database Settings
db.dc1.docker.cpusets=1
db.dc2.docker.cpusets=1
db.docker.memlimit=2g
db.max.connections=100
# Load Balancer Settings
lb.dc1.docker.cpusets=1
lb.dc2.docker.cpusets=1
lb.docker.memlimit=1500m
lb.jvm.memory=-Xms64m -Xmx1024m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m
lb.http.max-connections=50000
lb.worker.io-threads=2
lb.worker.task-max-threads=16
# Infinispan Settings
infinispan.dc1.docker.cpusets=1
infinispan.dc2.docker.cpusets=1
infinispan.docker.memlimit=1500m
infinispan.jvm.memory=-Xms64m -Xmx1g -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -XX:+DisableExplicitGC
# Monitoring Settings
monitoring.docker.cpusets=0


@@ -1,25 +0,0 @@
#provisioner=docker-compose
#deployment=singlenode
# Keycloak Settings
keycloak.scale=1
keycloak.docker.cpusets=2-3
keycloak.docker.memlimit=2500m
keycloak.jvm.memory=-Xms64m -Xmx2g -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m
keycloak.http.max-connections=50000
keycloak.worker.io-threads=2
keycloak.worker.task-max-threads=16
keycloak.ds.min-pool-size=10
keycloak.ds.max-pool-size=100
keycloak.ds.pool-prefill=true
keycloak.ds.ps-cache-size=100
keycloak.admin.user=admin
keycloak.admin.password=admin
# Database Settings
db.docker.cpusets=1
db.docker.memlimit=2g
db.max.connections=100
# Monitoring Settings
monitoring.docker.cpusets=0


@@ -1,8 +0,0 @@
gatling.simulationClass=keycloak.AdminConsoleSimulation
usersPerSec=0.2
rampUpPeriod=15
warmUpPeriod=15
measurementPeriod=30
filterResults=false
userThinkTime=0
refreshTokenPeriod=0


@@ -1,10 +0,0 @@
gatling.simulationClass=keycloak.OIDCLoginAndLogoutSimulation
usersPerSec=1.0
rampUpPeriod=15
warmUpPeriod=15
measurementPeriod=30
filterResults=false
userThinkTime=0
refreshTokenPeriod=0
refreshTokenCount=1
badLoginAttempts=1


@@ -1,10 +0,0 @@
gatling.simulationClass=keycloak.OIDCRegisterAndLogoutSimulation
usersPerSec=1.0
rampUpPeriod=15
warmUpPeriod=15
measurementPeriod=30
filterResults=false
userThinkTime=0
refreshTokenPeriod=0
refreshTokenCount=1
badLoginAttempts=1


@@ -1,854 +0,0 @@
<?xml version="1.0"?>
<!--
~ Copyright 2016 Red Hat, Inc. and/or its affiliates
~ and other contributors as indicated by the @author tags.
~
~ Licensed under the Apache License, Version 2.0 (the "License");
~ you may not use this file except in compliance with the License.
~ You may obtain a copy of the License at
~
~ http://www.apache.org/licenses/LICENSE-2.0
~
~ Unless required by applicable law or agreed to in writing, software
~ distributed under the License is distributed on an "AS IS" BASIS,
~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
~ See the License for the specific language governing permissions and
~ limitations under the License.
-->
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<parent>
<groupId>org.keycloak.testsuite</groupId>
<artifactId>performance</artifactId>
<version>999-SNAPSHOT</version>
<relativePath>../pom.xml</relativePath>
</parent>
<modelVersion>4.0.0</modelVersion>
<artifactId>performance-tests</artifactId>
<name>Keycloak Performance TestSuite - Tests</name>
<properties>
<provisioner>docker-compose</provisioner>
<deployment>singlenode</deployment>
<provisioning.properties>${provisioner}/4cpus/${deployment}</provisioning.properties>
<dataset>default</dataset>
<test.properties>oidc-login-logout</test.properties>
<provisioning.properties.file>${project.basedir}/parameters/provisioning/${provisioning.properties}.properties</provisioning.properties.file>
<dataset.properties.file>${project.basedir}/src/test/resources/dataset/${dataset}.properties</dataset.properties.file>
<test.properties.file>${project.basedir}/parameters/test/${test.properties}.properties</test.properties.file>
<provisioned.system.properties.file>${project.build.directory}/provisioned-system.properties</provisioned.system.properties.file>
<!--other-->
<db.dump.download.site>https://downloads.jboss.org/keycloak-qe/${server.version}</db.dump.download.site>
<numOfWorkers>1</numOfWorkers>
<scala.version>2.11.7</scala.version>
<gatling.version>2.1.7</gatling.version>
<gatling-plugin.version>2.2.1</gatling-plugin.version>
<scala-maven-plugin.version>3.2.2</scala-maven-plugin.version>
<jboss-logging.version>3.3.0.Final</jboss-logging.version>
<gatling.simulationClass>keycloak.OIDCLoginAndLogoutSimulation</gatling.simulationClass>
<gatling.skip.run>true</gatling.skip.run>
<surefire.skip.run>true</surefire.skip.run>
<trustStoreArg/>
<trustStorePasswordArg/>
</properties>
<dependencies>
<dependency>
<groupId>org.keycloak</groupId>
<artifactId>keycloak-adapter-core</artifactId>
<version>${server.version}</version>
</dependency>
<dependency>
<groupId>org.keycloak</groupId>
<artifactId>keycloak-adapter-spi</artifactId>
<version>${server.version}</version>
</dependency>
<dependency>
<groupId>org.keycloak</groupId>
<artifactId>keycloak-core</artifactId>
<version>${server.version}</version>
</dependency>
<dependency>
<groupId>org.keycloak</groupId>
<artifactId>keycloak-common</artifactId>
<version>${server.version}</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-core</artifactId>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-annotations</artifactId>
<version>${jackson.annotations.version}</version>
</dependency>
<dependency>
<groupId>org.keycloak</groupId>
<artifactId>keycloak-admin-client</artifactId>
<version>${server.version}</version>
</dependency>
<dependency>
<groupId>commons-configuration</groupId>
<artifactId>commons-configuration</artifactId>
<version>1.10</version>
</dependency>
<dependency>
<groupId>commons-validator</groupId>
<artifactId>commons-validator</artifactId>
<version>1.6</version>
</dependency>
<dependency>
<groupId>org.freemarker</groupId>
<artifactId>freemarker</artifactId>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.jboss.spec.javax.ws.rs</groupId>
<artifactId>jboss-jaxrs-api_2.1_spec</artifactId>
</dependency>
<dependency>
<groupId>org.jboss.logging</groupId>
<artifactId>jboss-logging</artifactId>
</dependency>
<dependency>
<groupId>org.jboss.resteasy</groupId>
<artifactId>resteasy-core</artifactId>
</dependency>
<dependency>
<groupId>org.jboss.resteasy</groupId>
<artifactId>resteasy-client</artifactId>
</dependency>
<dependency>
<groupId>org.jboss.resteasy</groupId>
<artifactId>resteasy-jackson2-provider</artifactId>
</dependency>
<dependency>
<groupId>org.apache.httpcomponents</groupId>
<artifactId>httpclient</artifactId>
</dependency>
<dependency>
<groupId>org.scala-lang</groupId>
<artifactId>scala-library</artifactId>
<version>${scala.version}</version>
</dependency>
<dependency>
<groupId>io.gatling.highcharts</groupId>
<artifactId>gatling-charts-highcharts</artifactId>
<version>${gatling.version}</version>
</dependency>
</dependencies>
<build>
<testResources>
<testResource>
<filtering>true</filtering>
<directory>src/test/resources</directory>
<excludes>
<exclude>**/*.gz</exclude>
<exclude>**/*.properties</exclude>
</excludes>
</testResource>
</testResources>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-enforcer-plugin</artifactId>
<executions>
<execution>
<id>enforce-teardown-before-clean</id>
<goals>
<goal>enforce</goal>
</goals>
<phase>pre-clean</phase>
<configuration>
<rules>
<requireFilesDontExist>
<message><![CDATA[
WARNING: A previously provisioned system still appears to be running.
Please tear it down with `mvn verify -P teardown [-Pcluster|crossdc]` before running `mvn clean`,
or delete the `provisioned-system.properties` file manually.
]]></message>
<files>
<file>${provisioned.system.properties.file}</file>
</files>
</requireFilesDontExist>
</rules>
<fail>true</fail>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>properties-maven-plugin</artifactId>
<version>1.0.0</version>
<executions>
<execution>
<id>read-parameters</id>
<phase>initialize</phase>
<goals>
<goal>read-project-properties</goal>
</goals>
<configuration>
<quiet>true</quiet>
<files>
<file>${provisioning.properties.file}</file>
<file>${test.properties.file}</file>
</files>
</configuration>
</execution>
<execution>
<id>read-existing-provisioned-system-properties</id>
<phase>initialize</phase>
<goals>
<goal>read-project-properties</goal>
</goals>
<configuration>
<files>
<file>${project.build.directory}/provisioned-system.properties</file>
</files>
<quiet>true</quiet>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>net.alchim31.maven</groupId>
<artifactId>scala-maven-plugin</artifactId>
<version>${scala-maven-plugin.version}</version>
<executions>
<execution>
<id>add-source</id>
<!--phase>process-resources</phase-->
<goals>
<goal>add-source</goal>
<!--goal>compile</goal-->
</goals>
</execution>
<execution>
<id>compile</id>
<!--phase>process-test-resources</phase-->
<goals>
<goal>compile</goal>
<goal>testCompile</goal>
</goals>
<configuration>
<args>
<arg>-target:jvm-1.8</arg>
<arg>-deprecation</arg>
<arg>-feature</arg>
<arg>-unchecked</arg>
<arg>-language:implicitConversions</arg>
<arg>-language:postfixOps</arg>
</args>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<!--
Execute test directly by using:
mvn gatling:execute -Ptest -f testsuite/performance/gatling -Dgatling.simulationClass=keycloak.DemoSimulation2
For more usage info see: http://gatling.io/docs/current/extensions/maven_plugin/
-->
<groupId>io.gatling</groupId>
<artifactId>gatling-maven-plugin</artifactId>
<version>${gatling-plugin.version}</version>
<configuration>
<configFolder>${project.build.testOutputDirectory}</configFolder>
<skip>${gatling.skip.run}</skip>
<disableCompiler>true</disableCompiler>
<runMultipleSimulations>true</runMultipleSimulations>
<propagateSystemProperties>true</propagateSystemProperties>
<jvmArgs>
<!--common params-->
<param>-Dproject.build.directory=${project.build.directory}</param>
<param>-Dkeycloak.server.uris=${keycloak.frontend.servers}</param>
<param>-DauthUser=${keycloak.admin.user}</param>
<param>-DauthPassword=${keycloak.admin.password}</param>
<!--dataset params-->
<param>-Ddataset.properties.file=${dataset.properties.file}</param>
<!-- <param>-DnumOfRealms=${numOfRealms}</param>
<param>-DusersPerRealm=${usersPerRealm}</param>
<param>-DclientsPerRealm=${clientsPerRealm}</param>
<param>-DrealmRoles=${realmRoles}</param>
<param>-DrealmRolesPerUser=${realmRolesPerUser}</param>
<param>-DclientRolesPerUser=${clientRolesPerUser}</param>
<param>-DclientRolesPerClient=${clientRolesPerClient}</param>
<param>-DhashIterations=${hashIterations}</param>-->
<!--test params-->
<param>-DusersPerSec=${usersPerSec}</param>
<param>-DrampUpPeriod=${rampUpPeriod}</param>
<param>-DwarmUpPeriod=${warmUpPeriod}</param>
<param>-DmeasurementPeriod=${measurementPeriod}</param>
<param>-DfilterResults=${filterResults}</param>
<param>-DuserThinkTime=${userThinkTime}</param>
<param>-DrefreshTokenPeriod=${refreshTokenPeriod}</param>
<param>-DrefreshTokenCount=${refreshTokenCount}</param>
<param>-DbadLoginAttempts=${badLoginAttempts}</param>
<param>${trustStoreArg}</param>
<param>${trustStorePasswordArg}</param>
</jvmArgs>
</configuration>
<executions>
<execution>
<goals>
<goal>integration-test</goal>
</goals>
<phase>integration-test</phase>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<configuration>
<workingDirectory>${project.basedir}</workingDirectory>
</configuration>
</plugin>
<plugin>
<artifactId>maven-surefire-plugin</artifactId>
<configuration>
<systemPropertyVariables>
<dataset.properties.file>${dataset.properties.file}</dataset.properties.file>
</systemPropertyVariables>
<skip>${surefire.skip.run}</skip>
</configuration>
</plugin>
</plugins>
</build>
<profiles>
<profile>
<id>cluster</id>
<properties>
<deployment>cluster</deployment>
</properties>
</profile>
<profile>
<id>crossdc</id>
<properties>
<deployment>crossdc</deployment>
</properties>
</profile>
<profile>
<id>ssl</id>
<activation>
<property>
<name>trustStore</name>
</property>
</activation>
<properties>
<trustStoreArg>-Djavax.net.ssl.trustStore=${trustStore}</trustStoreArg>
<trustStorePasswordArg>-Djavax.net.ssl.trustStorePassword=${trustStorePassword}</trustStorePasswordArg>
</properties>
</profile>
<profile>
<id>provision</id>
<properties>
<project.basedir>${project.basedir}</project.basedir>
<project.build.directory>${project.build.directory}</project.build.directory>
</properties>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-antrun-plugin</artifactId>
<executions>
<execution>
<id>prepare-provisioning</id>
<phase>generate-resources</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<target>
<ant antfile="prepare-provisioning.xml" target="prepare-${provisioner}" />
</target>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<executions>
<execution>
<id>provision</id>
<phase>process-test-resources</phase>
<goals>
<goal>exec</goal>
</goals>
<configuration>
<executable>./${provisioner}.sh</executable>
<environmentVariables>
<PROVISIONER>${provisioner}</PROVISIONER>
<DEPLOYMENT>${deployment}</DEPLOYMENT>
<OPERATION>provision</OPERATION>
<KEYCLOAK_VERSION>${project.version}</KEYCLOAK_VERSION>
<MANAGEMENT_USER>${management.user}</MANAGEMENT_USER>
<MANAGEMENT_USER_PASS>${management.user.password}</MANAGEMENT_USER_PASS>
<KEYCLOAK_JSTAT>${keycloak.jstat}</KEYCLOAK_JSTAT>
<KEYCLOAK_SCALE>${keycloak.scale}</KEYCLOAK_SCALE>
<KEYCLOAK_DC1_SCALE>${keycloak.dc1.scale}</KEYCLOAK_DC1_SCALE>
<KEYCLOAK_DC2_SCALE>${keycloak.dc2.scale}</KEYCLOAK_DC2_SCALE>
<KEYCLOAK_CPUSETS>${keycloak.docker.cpusets}</KEYCLOAK_CPUSETS>
<KEYCLOAK_DC1_CPUSETS>${keycloak.dc1.docker.cpusets}</KEYCLOAK_DC1_CPUSETS>
<KEYCLOAK_DC2_CPUSETS>${keycloak.dc2.docker.cpusets}</KEYCLOAK_DC2_CPUSETS>
<KEYCLOAK_MEMLIMIT>${keycloak.docker.memlimit}</KEYCLOAK_MEMLIMIT>
<KEYCLOAK_JVM_MEMORY>${keycloak.jvm.memory}</KEYCLOAK_JVM_MEMORY>
<KEYCLOAK_HTTP_MAX_CONNECTIONS>${keycloak.http.max-connections}</KEYCLOAK_HTTP_MAX_CONNECTIONS>
<KEYCLOAK_AJP_MAX_CONNECTIONS>${keycloak.ajp.max-connections}</KEYCLOAK_AJP_MAX_CONNECTIONS>
<KEYCLOAK_WORKER_IO_THREADS>${keycloak.worker.io-threads}</KEYCLOAK_WORKER_IO_THREADS>
<KEYCLOAK_WORKER_TASK_MAX_THREADS>${keycloak.worker.task-max-threads}</KEYCLOAK_WORKER_TASK_MAX_THREADS>
<KEYCLOAK_DS_MIN_POOL_SIZE>${keycloak.ds.min-pool-size}</KEYCLOAK_DS_MIN_POOL_SIZE>
<KEYCLOAK_DS_MAX_POOL_SIZE>${keycloak.ds.max-pool-size}</KEYCLOAK_DS_MAX_POOL_SIZE>
<KEYCLOAK_DS_POOL_PREFILL>${keycloak.ds.pool-prefill}</KEYCLOAK_DS_POOL_PREFILL>
<KEYCLOAK_DS_PS_CACHE_SIZE>${keycloak.ds.ps-cache-size}</KEYCLOAK_DS_PS_CACHE_SIZE>
<KEYCLOAK_ADMIN_USER>${keycloak.admin.user}</KEYCLOAK_ADMIN_USER>
<KEYCLOAK_ADMIN_PASSWORD>${keycloak.admin.password}</KEYCLOAK_ADMIN_PASSWORD>
<DB_CPUSETS>${db.docker.cpusets}</DB_CPUSETS>
<DB_DC1_CPUSETS>${db.dc1.docker.cpusets}</DB_DC1_CPUSETS>
<DB_DC2_CPUSETS>${db.dc2.docker.cpusets}</DB_DC2_CPUSETS>
<DB_MEMLIMIT>${db.docker.memlimit}</DB_MEMLIMIT>
<DB_MAX_CONNECTIONS>${db.max.connections}</DB_MAX_CONNECTIONS>
<LB_CPUSETS>${lb.docker.cpusets}</LB_CPUSETS>
<LB_DC1_CPUSETS>${lb.dc1.docker.cpusets}</LB_DC1_CPUSETS>
<LB_DC2_CPUSETS>${lb.dc2.docker.cpusets}</LB_DC2_CPUSETS>
<LB_MEMLIMIT>${lb.docker.memlimit}</LB_MEMLIMIT>
<LB_JVM_MEMORY>${lb.jvm.memory}</LB_JVM_MEMORY>
<LB_HTTP_MAX_CONNECTIONS>${lb.http.max-connections}</LB_HTTP_MAX_CONNECTIONS>
<LB_WORKER_IO_THREADS>${lb.worker.io-threads}</LB_WORKER_IO_THREADS>
<LB_WORKER_TASK_MAX_THREADS>${lb.worker.task-max-threads}</LB_WORKER_TASK_MAX_THREADS>
<LB_JSTAT>${lb.jstat}</LB_JSTAT>
<INFINISPAN_DC1_CPUSETS>${infinispan.dc1.docker.cpusets}</INFINISPAN_DC1_CPUSETS>
<INFINISPAN_DC2_CPUSETS>${infinispan.dc2.docker.cpusets}</INFINISPAN_DC2_CPUSETS>
<INFINISPAN_MEMLIMIT>${infinispan.docker.memlimit}</INFINISPAN_MEMLIMIT>
<INFINISPAN_JVM_MEMORY>${infinispan.jvm.memory}</INFINISPAN_JVM_MEMORY>
<INFINISPAN_JSTAT>${infinispan.jstat}</INFINISPAN_JSTAT>
</environmentVariables>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>properties-maven-plugin</artifactId>
<version>1.0.0</version>
<executions>
<execution>
<id>read-new-provisioned-system-properties</id>
<goals>
<goal>read-project-properties</goal>
</goals>
<phase>pre-integration-test</phase>
<configuration>
<files>
<file>${project.build.directory}/provisioned-system.properties</file>
</files>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>
<profile>
<id>generate-data</id>
<properties>
<delete>false</delete>
<log.every>5</log.every>
<queue.timeout>60</queue.timeout>
<shutdown.timeout>60</shutdown.timeout>
<template.cache.size>10000</template.cache.size>
<randoms.cache.size>10000</randoms.cache.size>
<entity.cache.size>100000</entity.cache.size>
<max.heap>2g</max.heap>
</properties>
<build>
<plugins>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<executions>
<execution>
<id>load-data</id>
<phase>pre-integration-test</phase>
<goals>
<goal>exec</goal>
</goals>
<configuration>
<executable>java</executable>
<workingDirectory>${project.build.directory}</workingDirectory>
<arguments>
<argument>-classpath</argument>
<classpath/>
<argument>-Xms64m</argument>
<argument>-Xmx${max.heap}</argument>
<argument>-XX:MetaspaceSize=96M</argument>
<argument>-XX:MaxMetaspaceSize=256m</argument>
<argument>-Dkeycloak.server.uris=${keycloak.frontend.servers}</argument>
<argument>-DauthUser=${keycloak.admin.user}</argument>
<argument>-DauthPassword=${keycloak.admin.password}</argument>
<argument>-DnumOfWorkers=${numOfWorkers}</argument>
<argument>-Ddataset.properties.file=${dataset.properties.file}</argument>
<argument>${trustStoreArg}</argument>
<argument>${trustStorePasswordArg}</argument>
<argument>-Ddelete=${delete}</argument>
<argument>-Dlog.every=${log.every}</argument>
<argument>-Dqueue.timeout=${queue.timeout}</argument>
<argument>-Dshutdown.timeout=${shutdown.timeout}</argument>
<argument>-Dtemplate.cache.size=${template.cache.size}</argument>
<argument>-Drandoms.cache.size=${randoms.cache.size}</argument>
<argument>-Dentity.cache.size=${entity.cache.size}</argument>
<argument>org.keycloak.performance.dataset.DatasetLoader</argument>
</arguments>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>
<profile>
<id>export-dump</id>
<build>
<plugins>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<executions>
<execution>
<id>export-dump</id>
<phase>pre-integration-test</phase>
<goals>
<goal>exec</goal>
</goals>
<configuration>
<executable>./${provisioner}.sh</executable>
<environmentVariables>
<PROVISIONER>${provisioner}</PROVISIONER>
<DEPLOYMENT>${deployment}</DEPLOYMENT>
<OPERATION>export-dump</OPERATION>
<DATASET_PROPERTIES_FILE>${dataset.properties.file}</DATASET_PROPERTIES_FILE>
</environmentVariables>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>
<profile>
<id>import-dump</id>
<build>
<plugins>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<executions>
<execution>
<id>import-dump</id>
<phase>pre-integration-test</phase>
<goals>
<goal>exec</goal>
</goals>
<configuration>
<executable>./${provisioner}.sh</executable>
<environmentVariables>
<PROVISIONER>${provisioner}</PROVISIONER>
<DEPLOYMENT>${deployment}</DEPLOYMENT>
<OPERATION>import-dump</OPERATION>
<DATASET_PROPERTIES_FILE>${dataset.properties.file}</DATASET_PROPERTIES_FILE>
<DUMP_DOWNLOAD_SITE>${db.dump.download.site}</DUMP_DOWNLOAD_SITE>
</environmentVariables>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>
<profile>
<id>test</id>
<properties>
<gatling.skip.run>false</gatling.skip.run>
</properties>
</profile>
<profile>
<id>junit</id>
<properties>
<surefire.skip.run>false</surefire.skip.run>
</properties>
</profile>
<profile>
<id>collect</id>
<properties>
<gnuplot>false</gnuplot>
</properties>
<build>
<plugins>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<executions>
<execution>
<id>collect-artifacts</id>
<phase>post-integration-test</phase>
<goals>
<goal>exec</goal>
</goals>
<configuration>
<executable>./${provisioner}.sh</executable>
<environmentVariables>
<PROVISIONER>${provisioner}</PROVISIONER>
<DEPLOYMENT>${deployment}</DEPLOYMENT>
<OPERATION>collect</OPERATION>
<JSTAT>${jstat}</JSTAT>
<GNUPLOT>${gnuplot}</GNUPLOT>
</environmentVariables>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>
<profile>
<id>teardown</id>
<properties>
<delete.data>true</delete.data>
</properties>
<build>
<plugins>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<executions>
<execution>
<id>teardown</id>
<phase>post-integration-test</phase>
<goals>
<goal>exec</goal>
</goals>
<configuration>
<executable>./${provisioner}.sh</executable>
<environmentVariables>
<PROVISIONER>${provisioner}</PROVISIONER>
<DEPLOYMENT>${deployment}</DEPLOYMENT>
<OPERATION>teardown</OPERATION>
<DELETE_DATA>${delete.data}</DELETE_DATA>
</environmentVariables>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>
<profile>
<id>keep-data</id>
<properties>
<delete.data>false</delete.data>
</properties>
</profile>
<profile>
<id>monitoring</id>
<build>
<plugins>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<executions>
<execution>
<id>monitoring-on</id>
<phase>pre-integration-test</phase>
<goals>
<goal>exec</goal>
</goals>
<configuration>
<executable>./${provisioner}.sh</executable>
<environmentVariables>
<PROVISIONER>${provisioner}</PROVISIONER>
<DEPLOYMENT>monitoring</DEPLOYMENT>
<OPERATION>provision</OPERATION>
<MONITORING_CPUSETS>${monitoring.docker.cpusets}</MONITORING_CPUSETS>
</environmentVariables>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>
<profile>
<id>monitoring-off</id>
<properties>
<delete.monitoring.data>false</delete.monitoring.data>
</properties>
<build>
<plugins>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<executions>
<execution>
<id>monitoring-off</id>
<phase>post-integration-test</phase>
<goals>
<goal>exec</goal>
</goals>
<configuration>
<executable>./${provisioner}.sh</executable>
<environmentVariables>
<PROVISIONER>${provisioner}</PROVISIONER>
<DEPLOYMENT>monitoring</DEPLOYMENT>
<OPERATION>teardown</OPERATION>
<DELETE_DATA>${delete.monitoring.data}</DELETE_DATA>
</environmentVariables>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>
<profile>
<id>delete-monitoring-data</id>
<properties>
<delete.monitoring.data>true</delete.monitoring.data>
</properties>
</profile>
<profile>
<id>start-sar</id>
<build>
<plugins>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<executions>
<execution>
<id>start-sar</id>
<phase>pre-integration-test</phase>
<goals>
<goal>exec</goal>
</goals>
<configuration>
<executable>./sar.sh</executable>
<environmentVariables>
<SAR_OPERATION>start</SAR_OPERATION>
</environmentVariables>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>
<profile>
<id>stop-sar</id>
<properties>
<gnuplot>false</gnuplot>
<bzip>false</bzip>
</properties>
<build>
<plugins>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<executions>
<execution>
<id>stop-sar</id>
<phase>post-integration-test</phase>
<goals>
<goal>exec</goal>
</goals>
<configuration>
<executable>./sar.sh</executable>
<environmentVariables>
<SAR_OPERATION>stop</SAR_OPERATION>
<GNUPLOT>${gnuplot}</GNUPLOT>
<BZIP>${bzip}</BZIP>
</environmentVariables>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>
<profile>
<id>gnuplot</id>
<properties>
<gnuplot>true</gnuplot>
</properties>
</profile>
<profile>
<id>jstat</id>
<properties>
<jstat>true</jstat>
</properties>
</profile>
</profiles>
</project>

@@ -1,24 +0,0 @@
<project name="prepare-provisioning" basedir="." >
<target name="prepare-docker-compose">
<copy todir="${project.build.directory}/docker-compose" overwrite="false" >
<fileset dir="${project.basedir}/src/main/docker-compose"/>
</copy>
<copy todir="${project.build.directory}/docker-compose" overwrite="false" failonerror="true">
<fileset dir="${project.basedir}/..">
<include name="db/**"/>
<include name="monitoring/**"/>
</fileset>
</copy>
<copy todir="${project.build.directory}/docker-compose/infinispan" overwrite="false" failonerror="false">
<fileset dir="${project.basedir}/../infinispan/target/docker"/>
</copy>
<copy todir="${project.build.directory}/docker-compose/load-balancer/wildfly-modcluster" overwrite="false" failonerror="false">
<fileset dir="${project.basedir}/../load-balancer/wildfly-modcluster/target/docker"/>
</copy>
<copy todir="${project.build.directory}/docker-compose/keycloak" overwrite="false" failonerror="true">
<fileset dir="${project.basedir}/../keycloak/target/docker"/>
</copy>
</target>
</project>

@@ -1,132 +0,0 @@
#!/bin/bash
cd "$(dirname "$0")"
. ./common.sh
SAR_OPERATION=${SAR_OPERATION:-stop}
SAR_FOLDER="$PROJECT_BUILD_DIRECTORY/sar"
PID_FILE="$SAR_FOLDER/sar.pid"
TIMESTAMP_FILE="$SAR_FOLDER/sar.timestamp"
if [[ -f "$TIMESTAMP_FILE" ]]; then
TIMESTAMP=`cat $TIMESTAMP_FILE`
else
TIMESTAMP=`date +%s`
fi
SAR_RESULTS_FOLDER="$SAR_FOLDER/$TIMESTAMP"
SAR_OUTPUT_FILE="$SAR_RESULTS_FOLDER/sar-output.bin"
BZIP=${BZIP:-false}
CPU_COUNT=${CPU_COUNT:-`grep -c ^processor /proc/cpuinfo`}
GNUPLOT=${GNUPLOT:-false}
GNUPLOT_SCRIPTS_DIR="$PROJECT_BASEDIR/src/main/gnuplot/sar"
GNUPLOT_COMMON="$GNUPLOT_SCRIPTS_DIR/common.gplot"
function process_cpu_results() {
RESULTS_FOLDER="$SAR_RESULTS_FOLDER/cpu"; mkdir -p "$RESULTS_FOLDER"
CPU=${1:-ALL}
if [ "$CPU" == "ALL" ]; then SAR_PARAMS="-u"; else SAR_PARAMS="-u -P $CPU"; fi
TXT_FILE="$RESULTS_FOLDER/cpu-$CPU.txt"
CSV_FILE="${TXT_FILE%.txt}.csv"
PNG_FILE="${TXT_FILE%.txt}.png"
sar $SAR_PARAMS -f $SAR_OUTPUT_FILE > "$TXT_FILE"
sadf -d -- $SAR_PARAMS $SAR_OUTPUT_FILE > "$CSV_FILE"
if $GNUPLOT; then
gnuplot -e "datafile='$CSV_FILE'" "$GNUPLOT_COMMON" "$GNUPLOT_SCRIPTS_DIR/cpu.gplot" > "$PNG_FILE"
fi
}
#function process_net_results() {
# IFACE=${IFACE:-docker0}
# RESULTS_FOLDER="$SAR_RESULTS_FOLDER/net"; mkdir -p "$RESULTS_FOLDER"
# TXT_FILE="$RESULTS_FOLDER/net-$IFACE.txt"
# CSV_FILE="${TXT_FILE%.txt}.csv"
# PNG_FILE="${TXT_FILE%.txt}.png"
# sar -n DEV -f $SAR_OUTPUT_FILE > "${TXT_FILE}.tmp"
# sadf -d -- -n DEV $SAR_OUTPUT_FILE > "${CSV_FILE}.tmp"
# head -n 3 "${TXT_FILE}.tmp" > "$TXT_FILE"; grep "$IFACE" "${TXT_FILE}.tmp" >> "$TXT_FILE"; rm "${TXT_FILE}.tmp"
# head -n 1 "${CSV_FILE}.tmp" > "$CSV_FILE"; grep "$IFACE" "${CSV_FILE}.tmp" >> "$CSV_FILE"; rm "${CSV_FILE}.tmp"
# if $GNUPLOT; then
# gnuplot -e "datafile='$CSV_FILE'" "$GNUPLOT_COMMON" "$GNUPLOT_SCRIPTS_DIR/net.gplot" > "$PNG_FILE"
# fi
#}
function process_io_results() {
RESULTS_FOLDER="$SAR_RESULTS_FOLDER"
TXT_FILE="$RESULTS_FOLDER/io.txt"
CSV_FILE="${TXT_FILE%.txt}.csv"
sar -b -f $SAR_OUTPUT_FILE > "${TXT_FILE}"
sadf -d -- -b $SAR_OUTPUT_FILE > "${CSV_FILE}"
if $GNUPLOT; then
gnuplot -e "datafile='$CSV_FILE'" "$GNUPLOT_COMMON" "$GNUPLOT_SCRIPTS_DIR/io-requests.gplot" > "${TXT_FILE%.txt}-requests.png"
gnuplot -e "datafile='$CSV_FILE'" "$GNUPLOT_COMMON" "$GNUPLOT_SCRIPTS_DIR/io-data.gplot" > "${TXT_FILE%.txt}-data.png"
fi
}
function process_mem_results() {
RESULTS_FOLDER="$SAR_RESULTS_FOLDER"
TXT_FILE="$RESULTS_FOLDER/mem.txt"
CSV_FILE="${TXT_FILE%.txt}.csv"
PNG_FILE="${TXT_FILE%.txt}.png"
sar -r -f $SAR_OUTPUT_FILE > "${TXT_FILE}"
sadf -d -- -r $SAR_OUTPUT_FILE > "${CSV_FILE}"
if $GNUPLOT; then
gnuplot -e "datafile='$CSV_FILE'" "$GNUPLOT_COMMON" "$GNUPLOT_SCRIPTS_DIR/mem.gplot" > "$PNG_FILE"
fi
}
function process_cswch_results() {
RESULTS_FOLDER="$SAR_RESULTS_FOLDER"
TXT_FILE="$RESULTS_FOLDER/cswch.txt"
CSV_FILE="${TXT_FILE%.txt}.csv"
PNG_FILE="${TXT_FILE%.txt}.png"
sar -w -f $SAR_OUTPUT_FILE > "${TXT_FILE}"
sadf -d -- -w $SAR_OUTPUT_FILE > "${CSV_FILE}"
if $GNUPLOT; then
gnuplot -e "datafile='$CSV_FILE'" "$GNUPLOT_COMMON" "$GNUPLOT_SCRIPTS_DIR/cswch.gplot" > "$PNG_FILE"
fi
}
case "$SAR_OPERATION" in
start)
if [[ ! -f "$PID_FILE" ]]; then
echo "Starting sar command."
mkdir -p $SAR_RESULTS_FOLDER
echo $TIMESTAMP > $TIMESTAMP_FILE
sar -A -o "$SAR_OUTPUT_FILE" 2 &>/dev/null & SAR_PID=$! && echo $SAR_PID > $PID_FILE
fi
;;
stop)
if [[ -f "$PID_FILE" ]]; then
echo "Stopping sar command."
SAR_PID=`cat $PID_FILE`
kill $SAR_PID && rm $PID_FILE && rm $TIMESTAMP_FILE
echo "Processing sar output. GNUPLOT: $GNUPLOT"
# CPU
mkdir $SAR_RESULTS_FOLDER/cpu
process_cpu_results
for CPU in $(seq -f "%02g" 0 $(( CPU_COUNT-1 )) ); do
process_cpu_results $CPU
done
# for IFACE in $(ls /sys/class/net); do
# process_net_results $IFACE
# done
process_io_results
process_mem_results
process_cswch_results
if $BZIP; then bzip2 "$SAR_OUTPUT_FILE"; fi
echo "Done."
fi
;;
esac

@@ -1,55 +0,0 @@
version: "2.2"
networks:
keycloak:
ipam:
config:
- subnet: 10.0.1.0/24
# loadbalancing:
# ipam:
# config:
# - subnet: 10.0.2.0/24
services:
mariadb:
build:
context: db/mariadb
args:
MAX_CONNECTIONS: ${DB_MAX_CONNECTIONS:-100}
image: keycloak_test_mariadb:${KEYCLOAK_VERSION:-latest}
cpuset: ${DB_CPUSET:-1}
mem_limit: ${DB_MEMLIMIT:-2g}
networks:
- keycloak
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: keycloak
MYSQL_USER: keycloak
MYSQL_PASSWORD: keycloak
ports:
- "3306:3306"
loadbalancer:
build: load-balancer/wildfly-modcluster
image: keycloak_test_loadbalancer:${KEYCLOAK_VERSION:-latest}
# depends_on:
# keycloak:
# condition: service_healthy
cpuset: ${LB_CPUSET:-1}
mem_limit: ${LB_MEMLIMIT:-1500m}
networks:
- keycloak
# - loadbalancing
environment:
PRIVATE_SUBNET: 10.0.1.0/24
# PUBLIC_SUBNET: 10.0.2.0/24
JAVA_OPTS: ${LB_JVM_MEMORY:--Xms64m -Xmx1024m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m} -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true
HTTP_MAX_CONNECTIONS: ${LB_HTTP_MAX_CONNECTIONS:-50000}
WORKER_IO_THREADS: ${LB_WORKER_IO_THREADS:-2}
WORKER_TASK_MAX_THREADS: ${LB_WORKER_TASK_MAX_THREADS:-16}
JSTAT: "${LB_JSTAT:-false}"
ports:
- "8080:8080"
- "9990:9990"

@@ -1,35 +0,0 @@
keycloak_%I%:
build: keycloak
image: keycloak_test_keycloak:${KEYCLOAK_VERSION:-latest}
depends_on:
mariadb:
condition: service_healthy
cpuset: "%CPUSET%"
mem_limit: ${KEYCLOAK_MEMLIMIT:-2500m}
networks:
- keycloak
environment:
CONFIGURATION: standalone-ha.xml
PUBLIC_SUBNET: 10.0.1.0/24
PRIVATE_SUBNET: 10.0.1.0/24
MARIADB_HOSTS: mariadb:3306
MARIADB_DATABASE: keycloak
MARIADB_USER: keycloak
MARIADB_PASSWORD: keycloak
KEYCLOAK_ADMIN_USER: ${KEYCLOAK_ADMIN_USER:-admin}
KEYCLOAK_ADMIN_PASSWORD: ${KEYCLOAK_ADMIN_PASSWORD:-admin}
JSTAT: "${KEYCLOAK_JSTAT:-false}"
JAVA_OPTS: ${KEYCLOAK_JVM_MEMORY:--Xms64m -Xmx2g -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m} -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true
HTTP_MAX_CONNECTIONS: ${KEYCLOAK_HTTP_MAX_CONNECTIONS:-50000}
AJP_MAX_CONNECTIONS: ${KEYCLOAK_AJP_MAX_CONNECTIONS:-50000}
WORKER_IO_THREADS: ${KEYCLOAK_WORKER_IO_THREADS:-2}
WORKER_TASK_MAX_THREADS: ${KEYCLOAK_WORKER_TASK_MAX_THREADS:-16}
DS_MIN_POOL_SIZE: ${KEYCLOAK_DS_MIN_POOL_SIZE:-10}
DS_MAX_POOL_SIZE: ${KEYCLOAK_DS_MAX_POOL_SIZE:-100}
DS_POOL_PREFILL: "${KEYCLOAK_DS_POOL_PREFILL:-true}"
DS_PS_CACHE_SIZE: ${KEYCLOAK_DS_PS_CACHE_SIZE:-100}
ports:
- "8080"
- "9990"

@@ -1,160 +0,0 @@
version: "2.2"
networks:
# DC 1
dc1_keycloak:
ipam:
config:
- subnet: 10.1.1.0/24
# DC 2
dc2_keycloak:
ipam:
config:
- subnet: 10.2.1.0/24
# # cross-DC
# loadbalancing:
# ipam:
# config:
# - subnet: 10.0.2.0/24
# cross-DC
db_replication:
ipam:
config:
- subnet: 10.0.3.0/24
# cross-DC
ispn_replication:
ipam:
config:
- subnet: 10.0.4.0/24
services:
infinispan_dc1:
build: infinispan
image: keycloak_test_infinispan:${KEYCLOAK_VERSION:-latest}
cpuset: ${INFINISPAN_DC1_CPUSET:-1}
mem_limit: ${INFINISPAN_MEMLIMIT:-1500m}
networks:
- ispn_replication
- dc1_keycloak
environment:
CONFIGURATION: clustered-dc1.xml
PUBLIC_SUBNET: 10.1.1.0/24
PRIVATE_SUBNET: 10.0.4.0/24
MGMT_USER: admin
MGMT_USER_PASSWORD: admin
TCP_PING_INITIAL_HOSTS: infinispan_dc1[7600]
JAVA_OPTS: ${INFINISPAN_JVM_MEMORY:--Xms64m -Xmx1g -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -XX:+DisableExplicitGC} -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true
JSTAT: "${INFINISPAN_JSTAT:-false}"
ports:
- "9990"
infinispan_dc2:
build: infinispan
image: keycloak_test_infinispan:${KEYCLOAK_VERSION:-latest}
depends_on:
infinispan_dc1:
condition: service_healthy
cpuset: ${INFINISPAN_DC2_CPUSET:-1}
mem_limit: ${INFINISPAN_MEMLIMIT:-1500m}
networks:
- ispn_replication
- dc2_keycloak
environment:
CONFIGURATION: clustered-dc2.xml
PUBLIC_SUBNET: 10.2.1.0/24
PRIVATE_SUBNET: 10.0.4.0/24
MGMT_USER: admin
MGMT_USER_PASSWORD: admin
TCP_PING_INITIAL_HOSTS: infinispan_dc1[7600],infinispan_dc2[7600]
JAVA_OPTS: ${INFINISPAN_JVM_MEMORY:--Xms64m -Xmx1g -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -XX:+DisableExplicitGC} -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true
JSTAT: "${INFINISPAN_JSTAT:-false}"
ports:
- "9990"
mariadb_dc1:
build:
context: db/mariadb
args:
MAX_CONNECTIONS: ${DB_MAX_CONNECTIONS:-100}
image: keycloak_test_mariadb:${KEYCLOAK_VERSION:-latest}
cpuset: ${DB_DC1_CPUSET:-1}
mem_limit: ${DB_MEMLIMIT:-2g}
networks:
- db_replication
- dc1_keycloak
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_INITDB_SKIP_TZINFO: foo
MYSQL_DATABASE: keycloak
MYSQL_USER: keycloak
MYSQL_PASSWORD: keycloak
entrypoint: docker-entrypoint-wsrep.sh
command: --wsrep-new-cluster
ports:
- "3307:3306"
mariadb_dc2:
build:
context: db/mariadb
args:
MAX_CONNECTIONS: ${DB_MAX_CONNECTIONS:-100}
image: keycloak_test_mariadb:${KEYCLOAK_VERSION:-latest}
depends_on:
mariadb_dc1:
condition: service_healthy
cpuset: ${DB_DC2_CPUSET:-1}
mem_limit: ${DB_MEMLIMIT:-2g}
networks:
- db_replication
- dc2_keycloak
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_INITDB_SKIP_TZINFO: foo
entrypoint: docker-entrypoint-wsrep.sh
command: --wsrep_cluster_address=gcomm://mariadb_dc1
ports:
- "3308:3306"
loadbalancer_dc1:
build: load-balancer/wildfly-modcluster
image: keycloak_test_loadbalancer:${KEYCLOAK_VERSION:-latest}
cpuset: ${LB_DC1_CPUSET:-1}
mem_limit: ${LB_MEMLIMIT:-1500m}
networks:
- dc1_keycloak
# - loadbalancing
environment:
PRIVATE_SUBNET: 10.1.1.0/24
# PUBLIC_SUBNET: 10.0.2.0/24
JAVA_OPTS: ${LB_JVM_MEMORY:--Xms64m -Xmx1024m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m} -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true
HTTP_MAX_CONNECTIONS: ${LB_HTTP_MAX_CONNECTIONS:-50000}
WORKER_IO_THREADS: ${LB_WORKER_IO_THREADS:-2}
WORKER_TASK_MAX_THREADS: ${LB_WORKER_TASK_MAX_THREADS:-16}
JSTAT: "${LB_JSTAT:-false}"
ports:
- "8081:8080"
- "9991:9990"
loadbalancer_dc2:
build: load-balancer/wildfly-modcluster
image: keycloak_test_loadbalancer:${KEYCLOAK_VERSION:-latest}
cpuset: ${LB_DC2_CPUSET:-1}
mem_limit: ${LB_MEMLIMIT:-1500m}
networks:
- dc2_keycloak
# - loadbalancing
environment:
PRIVATE_SUBNET: 10.2.1.0/24
# PUBLIC_SUBNET: 10.0.2.0/24
JAVA_OPTS: ${LB_JVM_MEMORY:--Xms64m -Xmx1024m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m} -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true
HTTP_MAX_CONNECTIONS: ${LB_HTTP_MAX_CONNECTIONS:-50000}
WORKER_IO_THREADS: ${LB_WORKER_IO_THREADS:-2}
WORKER_TASK_MAX_THREADS: ${LB_WORKER_TASK_MAX_THREADS:-16}
JSTAT: "${LB_JSTAT:-false}"
ports:
- "8082:8080"
- "9992:9990"

@@ -1,43 +0,0 @@
keycloak_dc1_%I%:
build: ./keycloak
image: keycloak_test_keycloak_dc1:${KEYCLOAK_VERSION:-latest}
depends_on:
# wait for the db cluster to be ready before starting keycloak
mariadb_dc2:
condition: service_healthy
# wait for the Infinispan cluster to be ready before starting Keycloak
infinispan_dc2:
condition: service_healthy
cpuset: "%CPUSET%"
mem_limit: ${KEYCLOAK_MEMLIMIT:-2500m}
networks:
- dc1_keycloak
environment:
CONFIGURATION: standalone-ha.xml
PUBLIC_SUBNET: 10.1.1.0/24
PRIVATE_SUBNET: 10.1.1.0/24
MARIADB_HOSTS: mariadb_dc1:3306
MARIADB_DATABASE: keycloak
MARIADB_USER: keycloak
MARIADB_PASSWORD: keycloak
KEYCLOAK_ADMIN_USER: ${KEYCLOAK_ADMIN_USER:-admin}
KEYCLOAK_ADMIN_PASSWORD: ${KEYCLOAK_ADMIN_PASSWORD:-admin}
JSTAT: "${KEYCLOAK_JSTAT:-false}"
INFINISPAN_HOST: infinispan_dc1
SITE: dc1
HOTROD_VERSION: 2.8
CACHE_STATISTICS: "true"
JAVA_OPTS: ${KEYCLOAK_JVM_MEMORY:--Xms64m -Xmx2g -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m} -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true
HTTP_MAX_CONNECTIONS: ${KEYCLOAK_HTTP_MAX_CONNECTIONS:-50000}
AJP_MAX_CONNECTIONS: ${KEYCLOAK_AJP_MAX_CONNECTIONS:-50000}
WORKER_IO_THREADS: ${KEYCLOAK_WORKER_IO_THREADS:-2}
WORKER_TASK_MAX_THREADS: ${KEYCLOAK_WORKER_TASK_MAX_THREADS:-16}
DS_MIN_POOL_SIZE: ${KEYCLOAK_DS_MIN_POOL_SIZE:-10}
DS_MAX_POOL_SIZE: ${KEYCLOAK_DS_MAX_POOL_SIZE:-100}
DS_POOL_PREFILL: "${KEYCLOAK_DS_POOL_PREFILL:-true}"
DS_PS_CACHE_SIZE: ${KEYCLOAK_DS_PS_CACHE_SIZE:-100}
ports:
- "8080"
- "9990"

@@ -1,40 +0,0 @@
keycloak_dc2_%I%:
build: ./keycloak
image: keycloak_test_keycloak_dc2:${KEYCLOAK_VERSION:-latest}
depends_on:
# wait for the first DC1 Keycloak instance to be ready before starting the DC2 instances
keycloak_dc1_1:
condition: service_healthy
cpuset: "%CPUSET%"
mem_limit: ${KEYCLOAK_MEMLIMIT:-2500m}
networks:
- dc2_keycloak
environment:
CONFIGURATION: standalone-ha.xml
PUBLIC_SUBNET: 10.2.1.0/24
PRIVATE_SUBNET: 10.2.1.0/24
MARIADB_HOSTS: mariadb_dc2:3306
MARIADB_DATABASE: keycloak
MARIADB_USER: keycloak
MARIADB_PASSWORD: keycloak
KEYCLOAK_ADMIN_USER: ${KEYCLOAK_ADMIN_USER:-admin}
KEYCLOAK_ADMIN_PASSWORD: ${KEYCLOAK_ADMIN_PASSWORD:-admin}
JSTAT: "${KEYCLOAK_JSTAT:-false}"
INFINISPAN_HOST: infinispan_dc2
SITE: dc2
HOTROD_VERSION: 2.8
CACHE_STATISTICS: "true"
JAVA_OPTS: ${KEYCLOAK_JVM_MEMORY:--Xms64m -Xmx2g -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m} -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true
HTTP_MAX_CONNECTIONS: ${KEYCLOAK_HTTP_MAX_CONNECTIONS:-50000}
AJP_MAX_CONNECTIONS: ${KEYCLOAK_AJP_MAX_CONNECTIONS:-50000}
WORKER_IO_THREADS: ${KEYCLOAK_WORKER_IO_THREADS:-2}
WORKER_TASK_MAX_THREADS: ${KEYCLOAK_WORKER_TASK_MAX_THREADS:-16}
DS_MIN_POOL_SIZE: ${KEYCLOAK_DS_MIN_POOL_SIZE:-10}
DS_MAX_POOL_SIZE: ${KEYCLOAK_DS_MAX_POOL_SIZE:-100}
DS_POOL_PREFILL: "${KEYCLOAK_DS_POOL_PREFILL:-true}"
DS_PS_CACHE_SIZE: ${KEYCLOAK_DS_PS_CACHE_SIZE:-100}
ports:
- "8080"
- "9990"


@@ -1,79 +0,0 @@
#version: '3'
version: '2.2'
networks:
monitoring:
ipam:
config:
- subnet: 10.0.5.0/24
services:
monitoring_influxdb:
image: influxdb
cpuset: ${MONITORING_CPUSET:-0}
volumes:
- influx:/var/lib/influxdb
networks:
- monitoring
ports:
- "8086:8086"
# deploy:
# replicas: 1
# placement:
# constraints:
# - node.role == manager
monitoring_cadvisor:
build: monitoring/cadvisor
image: monitoring_cadvisor
cpuset: ${MONITORING_CPUSET:-0}
hostname: '{{.Node.ID}}'
volumes:
- /:/rootfs:ro
- /var/run:/var/run:rw
- /sys:/sys:ro
- /var/lib/docker/:/var/lib/docker:ro
networks:
- monitoring
environment:
INFLUX_HOST: monitoring_influxdb
INFLUX_DATABASE: cadvisor
privileged: true
depends_on:
- monitoring_influxdb
command: --storage_driver_buffer_duration="5s"
ports:
- "8087:8080"
# deploy:
# mode: global
monitoring_grafana:
build: monitoring/grafana
image: monitoring_grafana
cpuset: ${MONITORING_CPUSET:-0}
depends_on:
- monitoring_influxdb
volumes:
- grafana:/var/lib/grafana
networks:
- monitoring
environment:
INFLUX_DATASOURCE_NAME: influxdb_cadvisor
INFLUX_HOST: monitoring_influxdb
INFLUX_DATABASE: cadvisor
ports:
- "3000:3000"
# deploy:
# replicas: 1
# placement:
# constraints:
# - node.role == manager
volumes:
influx:
driver: local
grafana:
driver: local


@@ -1,59 +0,0 @@
version: "2.2"
networks:
keycloak:
ipam:
config:
- subnet: 10.0.1.0/24
services:
mariadb:
build:
context: db/mariadb
args:
MAX_CONNECTIONS: ${DB_MAX_CONNECTIONS:-100}
image: keycloak_test_mariadb:${KEYCLOAK_VERSION:-latest}
cpuset: ${DB_CPUSET:-1}
mem_limit: ${DB_MEMLIMIT:-2g}
networks:
- keycloak
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: keycloak
MYSQL_USER: keycloak
MYSQL_PASSWORD: keycloak
MYSQL_INITDB_SKIP_TZINFO: 1
ports:
- "3306:3306"
keycloak:
build: keycloak
image: keycloak_test_keycloak:${KEYCLOAK_VERSION:-latest}
depends_on:
mariadb:
condition: service_healthy
cpuset: ${KEYCLOAK_CPUSET:-2-3}
mem_limit: ${KEYCLOAK_MEMLIMIT:-2500m}
networks:
- keycloak
environment:
MARIADB_HOSTS: mariadb:3306
MARIADB_DATABASE: keycloak
MARIADB_USER: keycloak
MARIADB_PASSWORD: keycloak
KEYCLOAK_ADMIN_USER: ${KEYCLOAK_ADMIN_USER:-admin}
KEYCLOAK_ADMIN_PASSWORD: ${KEYCLOAK_ADMIN_PASSWORD:-admin}
# docker-compose syntax note: ${ENV_VAR:-<DEFAULT_VALUE>}
JAVA_OPTS: ${KEYCLOAK_JVM_MEMORY:--Xms64m -Xmx2g -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m} -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true
HTTP_MAX_CONNECTIONS: ${KEYCLOAK_HTTP_MAX_CONNECTIONS:-50000}
WORKER_IO_THREADS: ${KEYCLOAK_WORKER_IO_THREADS:-2}
WORKER_TASK_MAX_THREADS: ${KEYCLOAK_WORKER_TASK_MAX_THREADS:-16}
DS_MIN_POOL_SIZE: ${KEYCLOAK_DS_MIN_POOL_SIZE:-10}
DS_MAX_POOL_SIZE: ${KEYCLOAK_DS_MAX_POOL_SIZE:-100}
DS_POOL_PREFILL: "${KEYCLOAK_DS_POOL_PREFILL:-true}"
DS_PS_CACHE_SIZE: ${KEYCLOAK_DS_PS_CACHE_SIZE:-100}
JSTAT: "${KEYCLOAK_JSTAT:-false}"
ports:
- "8080:8080"
- "9990:9990"


@@ -1,17 +0,0 @@
set datafile separator whitespace
set datafile commentschar ""
set xlabel "Runtime (s)"
set ylabel "Memory (kB)"
set terminal pngcairo size 1280,800
set xtics rotate
set yrange [0:*]
set key below
set grid
set style fill solid 1.0 border -1
set linetype 1 lc rgb '#ff0000'
set linetype 2 lc rgb '#00ff39'
set linetype 3 lc rgb '#4d00ff'
set linetype 4 lc rgb '#ff00fb'
set linetype 5 lc rgb '#00ffff'
set linetype 6 lc rgb '#f7ff00'


@@ -1,8 +0,0 @@
set title "Heap + Non-heap Memory"
plot\
datafile using 1:(column('S0C')+column('S1C')+column('EC')+column('OC')+column('MC')+column('CCSC')) title 'CCSC' with filledcurves x1, \
datafile using 1:(column('S0C')+column('S1C')+column('EC')+column('OC')+column('MC')) title 'MC' with filledcurves x1, \
datafile using 1:(column('S0C')+column('S1C')+column('EC')+column('OC')) title 'OC' with filledcurves x1, \
datafile using 1:(column('S0C')+column('S1C')+column('EC')) title 'EC' with filledcurves x1, \
datafile using 1:(column('S0C')+column('S1C')) title 'S1C' with filledcurves x1, \
datafile using 1:'S0C' title 'S0C' with filledcurves x1


@@ -1,2 +0,0 @@
set title "Utilisation of Compressed Classes space"
plot for [i in "CCSU CCSC"] datafile using 1:i title columnheader(i) with lines


@@ -1,2 +0,0 @@
set title "Utilisation of Eden space"
plot for [i in "EU EC"] datafile using 1:i title columnheader(i) with lines


@@ -1,3 +0,0 @@
set ylabel "Number of events"
set title "Utilisation of Garbage collection events (young, full)"
plot for [i in "YGC FGC"] datafile using 1:i title columnheader(i) with lines


@@ -1,2 +0,0 @@
set title "Utilisation of Meta space"
plot for [i in "MU MC"] datafile using 1:i title columnheader(i) with lines


@@ -1,2 +0,0 @@
set title "Utilisation of Old space"
plot for [i in "OU OC"] datafile using 1:i title columnheader(i) with lines

Some files were not shown because too many files have changed in this diff.