Performance Testsuite
parent fd025ae76b
commit f0ce4d4236
106 changed files with 7767 additions and 15 deletions

50 testsuite/performance/README.docker-compose.md Normal file
@@ -0,0 +1,50 @@
# Keycloak Performance Testsuite - Docker Compose

## Requirements:
- Maven 3.1.1+
- Unpacked Keycloak server distribution in the `keycloak/target` folder.
- Docker 1.13+
- Docker Compose 1.14+

### Keycloak Server Distribution

To unpack the current Keycloak server distribution into the `keycloak/target` folder:
1. Build and install the distribution by running `mvn install -Pdistribution` from the root of the Keycloak project.
2. Unpack the installed artifact by running `mvn process-resources` from the Performance Testsuite module (both steps are sketched below).
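
A hedged sketch combining both steps (assuming the testsuite module lives at `testsuite/performance`, as used in the main README):

```
# 1. Build and install the server distribution into the local Maven repository
mvn install -Pdistribution

# 2. Unpack the installed artifact into keycloak/target
cd testsuite/performance
mvn process-resources
```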

## Deployments

### Singlenode Deployment
- Build / rebuild: `docker-compose build`
- Start services: `docker-compose up -d --build`
  Note: The `--build` parameter triggers a rebuild/restart of changed services if they are already running.
- Stop services: `docker-compose down -v`. If you wish to keep the container volumes, skip the `-v` option.

### Keycloak Cluster Deployment
- Build / rebuild: `docker-compose -f docker-compose-cluster.yml build`
- Start services: `docker-compose -f docker-compose-cluster.yml up -d --build`
- Scaling KC nodes: `docker-compose -f docker-compose-cluster.yml up -d --build --scale keycloak=2`
- Stop services: `docker-compose -f docker-compose-cluster.yml down -v`. If you wish to keep the container volumes, skip the `-v` option.

### Cross-DC Deployment
- Build / rebuild: `docker-compose -f docker-compose-crossdc.yml build`
- Start services: `docker-compose -f docker-compose-crossdc.yml up -d --build`
- Scaling KC nodes: `docker-compose -f docker-compose-crossdc.yml up -d --build --scale keycloak_dc1=2 --scale keycloak_dc2=3`
- Stop services: `docker-compose -f docker-compose-crossdc.yml down -v`. If you wish to keep the container volumes, skip the `-v` option.

## Debugging docker containers:
- List started containers: `docker ps`. It's useful to watch continuously: `watch docker ps`.
  To list compose-only containers use `docker-compose ps`, but this doesn't show container health status (see the sketch after this list).
- Watch logs of a specific container: `docker logs -f crossdc_mariadb_dc1_1`.
- Watch logs of all containers managed by docker-compose: `docker-compose logs -f`.
- List networks: `docker network ls`
- Inspect network: `docker network inspect NETWORK_NAME`. Shows network configuration and currently connected containers.
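
Since `docker-compose ps` omits health status, one workaround (standard Docker CLI, not part of this testsuite) is to read it via `docker inspect`:

```
# Print "name: health status" for all running containers;
# containers without a HEALTHCHECK report "no healthcheck".
docker ps -q | xargs docker inspect \
  --format '{{.Name}}: {{if .State.Health}}{{.State.Health.Status}}{{else}}no healthcheck{{end}}'
```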

## Network Addresses

### KC
`10.i.1.0/24` One network per DC. For single-DC deployments `i = 0`; for cross-DC deployments `i ∈ ℕ` is the index of the particular DC.

### Load Balancing
`10.0.2.0/24` Network spans all DCs.

### DB Replication
`10.0.3.0/24` Network spans all DCs.

### ISPN Replication
`10.0.4.0/24` Network spans all DCs.

78 testsuite/performance/README.log-tool.md Normal file
@@ -0,0 +1,78 @@
Example of using log-tool.sh
----------------------------

Perform the usual test run:

```
mvn verify -Pteardown
mvn verify -Pprovision
mvn verify -Pimport-data -Ddataset=100users -Dimport.workers=10 -DhashIterations=100
mvn verify -Ptest -Ddataset=100users -DrunUsers=200 -DrampUpPeriod=10 -DuserThinkTime=0 -DbadLoginAttempts=1 -DrefreshTokenCount=1 -DnumOfIterations=3
```

Now analyze the generated simulation.log (adjust LOG_DIR, FROM, and TO):

```
LOG_DIR=$HOME/devel/keycloak/keycloak/testsuite/performance/tests/target/gatling/keycloaksimulation-1502735555123
```

Get general statistics about the run to help with deciding about the interval to extract:
```
./log-tool.sh -s -f $LOG_DIR/simulation.log
./log-tool.sh -s -f $LOG_DIR/simulation.log --lastRequest "Browser logout"
```

Set start and end times for the extraction, and create a new directory for the results:
```
FROM=1502735573285
TO=1502735581063

RESULT_DIR=tests/target/gatling/keycloaksimulation-${FROM}_${TO}

mkdir $RESULT_DIR
```

Extract a portion of the original log, and inspect statistics of the resulting log:
```
./log-tool.sh -f $LOG_DIR/simulation.log -o $RESULT_DIR/simulation-${FROM}_${TO}.log -e --start $FROM --end $TO

./log-tool.sh -f $RESULT_DIR/simulation-${FROM}_${TO}.log -s
```

Generate another set of reports from the extracted log (note: since this changes into `$GATLING_HOME`, make sure `$RESULT_DIR` resolves from there, e.g. by using an absolute path):
```
GATLING_HOME=$HOME/devel/gatling-charts-highcharts-bundle-2.1.7

cd $GATLING_HOME
bin/gatling.sh -ro $RESULT_DIR
```


Installing Gatling Highcharts 2.1.7
-----------------------------------

```
git clone http://github.com/gatling/gatling
cd gatling
git checkout v2.1.7
git checkout -b v2.1.7
sbt clean compile
sbt publishLocal publishM2
cd ..

git clone http://github.com/gatling/gatling-highcharts
cd gatling-highcharts/
git checkout v2.1.7
git checkout -b v2.1.7
sbt clean compile
sbt publishLocal publishM2
cd ..

unzip ~/.ivy2/local/io.gatling.highcharts/gatling-charts-highcharts-bundle/2.1.7/zips/gatling-charts-highcharts-bundle-bundle.zip
cd gatling-charts-highcharts-bundle-2.1.7

bin/gatling.sh
```

199 testsuite/performance/README.md Normal file
@@ -0,0 +1,199 @@
# Keycloak Performance Testsuite

## Requirements:
- Maven 3.1.1+
- Keycloak server distribution installed in the local Maven repository. To do this run `mvn install -Pdistribution` from the root of the Keycloak project.
- Docker 1.13+
- Docker Compose 1.14+
- Bash

## Getting started for the impatient

Here's how to perform a simple test run:

```
# Clone the keycloak repository if you don't have it yet
# git clone https://github.com/keycloak/keycloak.git

# Build the Keycloak distribution - needed to build the docker image with the latest Keycloak server
mvn clean install -DskipTests -Pdistribution

# Now build, provision and run the test
cd testsuite/performance
mvn clean install

# Make sure your Docker daemon is running, THEN:
mvn verify -Pprovision
mvn verify -Pimport-data -Ddataset=100u -DnumOfWorkers=10 -DhashIterations=100
mvn verify -Ptest -Ddataset=100u -DrunUsers=200 -DrampUpPeriod=10 -DuserThinkTime=0 -DbadLoginAttempts=1 -DrefreshTokenCount=1 -DnumOfIterations=3
```

Now open the generated report in a browser - the link to the .html file is displayed at the end of the test.

After the test run you may want to tear down the docker instances so that the next run is able to import data:
```
mvn verify -Pteardown
```

You can perform all phases in a single run:
```
mvn verify -Pprovision,import-data,test,teardown -Ddataset=100u -DnumOfWorkers=10 -DhashIterations=100 -DrunUsers=200 -DrampUpPeriod=10
```
Note: The order in which the maven profiles are listed does not determine the order in which the profile-related plugins are executed. The `teardown` profile always executes last.

Keep reading for more information.


## Provisioning

### Provision

Usage: `mvn verify -Pprovision[,cluster] [-D<PARAM>=<VALUE> ...]`.

- Single node deployment: `mvn verify -Pprovision`
- Cluster deployment: `mvn verify -Pprovision,cluster [-Dkeycloak.scale=N]`. Default `N=1`.

Available parameters are described in [README.provisioning-parameters](README.provisioning-parameters.md).

### Teardown

Usage: `mvn verify -Pteardown[,cluster]`

- Single node deployment: `mvn verify -Pteardown`
- Cluster deployment: `mvn verify -Pteardown,cluster`

Provisioning/teardown is performed via the `docker-compose` tool. More details in [README.docker-compose](README.docker-compose.md).


## Testing

### Import Data

Usage: `mvn verify -Pimport-data[,cluster] [-Ddataset=DATASET] [-D<dataset.property>=<value>]`.

Dataset properties are loaded from the `datasets/${dataset}.properties` file. Individual properties can be overridden by specifying `-D` params.

The dataset is first generated as a .json file, and then imported into Keycloak via the Admin Client REST API.

#### Examples:
- `mvn verify -Pimport-data` - import the default dataset
- `mvn verify -Pimport-data -DusersPerRealm=5` - import the default dataset, overriding the `usersPerRealm` property
- `mvn verify -Pimport-data -Ddataset=100u` - import the `100u` dataset
- `mvn verify -Pimport-data -Ddataset=100r/default` - import the dataset from `datasets/100r/default.properties`

The data can also be exported from the database, and stored locally as `datasets/${dataset}.sql.gz`:
`DATASET=100u ./prepare-dump.sh`

If there is a data dump file available then `-Pimport-dump` can be used to import the data directly into the database, bypassing the Keycloak server completely.

Usage: `mvn verify -Pimport-dump [-Ddataset=DATASET]`

#### Example:
- `mvn verify -Pimport-dump -Ddataset=100u` - import the `datasets/100u.sql.gz` dump file created using `prepare-dump.sh`.


### Run Tests

Usage: `mvn verify -Ptest[,cluster] [-DrunUsers=N] [-DrampUpPeriod=SECONDS] [-DnumOfIterations=N] [-Ddataset=DATASET] [-D<dataset.property>=<value>]* [-D<test.property>=<value>]*`.

_*Note:* The same dataset properties that were used for data import should be supplied to the `test` phase._

The default test `keycloak.DefaultSimulation` takes the following additional properties:

`[-DuserThinkTime=SECONDS] [-DbadLoginAttempts=N] [-DrefreshTokenCount=N] [-DrefreshTokenPeriod=SECONDS]`

If you want to run a different test you need to specify the test class name using `[-Dgatling.simulationClass=CLASSNAME]`.

For example:

`mvn verify -Ptest -DrunUsers=1 -DnumOfIterations=10 -DuserThinkTime=0 -Ddataset=100u -DrefreshTokenPeriod=10 -Dgatling.simulationClass=keycloak.AdminSimulation`


## Debugging & Profiling

The Keycloak docker container exposes a JMX management interface on port `9990`.

### JVisualVM

- Start JVisualVM with `jboss-client.jar` on the classpath: `./jvisualvm --cp:a $JBOSS_HOME/bin/client/jboss-client.jar`.
- Add a local JMX connection: `service:jmx:remote+http://localhost:9990`.
- Check "Use security credentials" and set `admin:admin`. (The default credentials can be overridden by providing the environment variables `DEBUG_USER` and `DEBUG_USER_PASSWORD` to the container.)
- Open the added connection.

_Note: The above applies to the singlenode deployment.
In cluster/crossdc deployments there are multiple KC containers running at the same time, so their exposed ports are mapped to random available ports on `0.0.0.0`.
To find the actual mapped ports run: `docker ps | grep performance_keycloak`._
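
A hedged follow-up (the container name `performance_keycloak_1` is an assumption based on the compose project name in the `grep` above):

```
# Show which host port is mapped to the management port 9990 of one KC instance
docker port performance_keycloak_1 9990
```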

## Monitoring

There is a docker-based solution for monitoring CPU, memory, and network usage per container.
(It uses the CAdvisor service to export container metrics into an InfluxDB time-series database, and the Grafana web app to query the DB and present the results as graphs.)

- To enable, run: `mvn verify -Pmonitoring`
- To disable, run: `mvn verify -Pmonitoring-off[,delete-monitoring-data]`.
  By default the monitoring history is preserved. If you wish to delete it, enable the `delete-monitoring-data` profile when turning monitoring off.

To view the monitoring dashboard open the Grafana UI at: `http://localhost:3000/dashboard/file/resource-usage-combined.json`.
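
To verify the monitoring stack is up, one hedged check is InfluxDB's standard `/ping` endpoint on the port published in `docker-compose-monitoring.yml`:

```
# Returns HTTP 204 when InfluxDB is ready to accept metrics
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8086/ping
```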


## Examples

### Single-node

- Provision a single node of KC + DB, import data, run the test, and tear down the provisioned system:

  `mvn verify -Pprovision,import-data,test,teardown -Ddataset=100u -DrunUsers=100`

- Provision a single node of KC + DB, import data, no test, no teardown:

  `mvn verify -Pprovision,import-data -Ddataset=100u`

- Run the test against the provisioned system using 100 concurrent users ramped up over 10 seconds, then tear it down:

  `mvn verify -Ptest,teardown -Ddataset=100u -DrunUsers=100 -DrampUpPeriod=10`

### Cluster

- Provision a 1-node KC cluster + DB, import data, run the test against the provisioned system, then tear it down:

  `mvn verify -Pprovision,cluster,import-data,test,teardown -Ddataset=100u -DrunUsers=100`

- Provision a 2-node KC cluster + DB, import data, run the test against the provisioned system, then tear it down:

  `mvn verify -Pprovision,cluster,import-data,test,teardown -Dkeycloak.scale=2 -DusersPerRealm=200 -DrunUsers=200`


## Developing tests in IntelliJ IDEA

### Add Scala support to IDEA

#### Install the correct Scala SDK

First you need to install the Scala SDK. In the Scala ecosystem it is very important that all libraries used are compatible with the specific version of Scala.
The Gatling version we use depends on Scala 2.11.7. To avoid conflicts between the Scala used by IDEA and the Scala dependencies in pom.xml,
it is important to use that same version of the Scala SDK for development.

Thus, it's best to download and install [this SDK version](http://scala-lang.org/download/2.11.7.html).

#### Install IntelliJ's official Scala plugin

Open Preferences in IntelliJ. Type 'plugins' in the search box. In the right pane click on 'Install JetBrains plugin'.
Type 'scala' in the search box, and click the Install button of the Scala plugin.

#### Run DefaultSimulation from IntelliJ

In the Project Explorer find the Engine object (you can use Ctrl-N / Cmd-O). Right-click on the class name and select Run or Debug, as for
JUnit tests.

You'll have to create a test profile, and set 'VM options' with `-Dkey=value` pairs to override the default configuration values in the TestConfig class.
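
For example, the 'VM options' field might contain (a hypothetical combination; the parameter names come from the test sections above):

```
-Ddataset=100u -DrunUsers=1 -DnumOfIterations=1 -DuserThinkTime=0
```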

Make sure to set 'Use classpath of module' to 'performance-test'.

When tests are executed via maven, the Engine object is not used. It exists only for running tests in the IDE.

If test startup fails due to not being able to find the test classes, try reimporting the 'performance' module from pom.xml (right-click on the 'performance' directory, select 'Maven' at the bottom of the context menu, then 'Reimport').

63 testsuite/performance/README.provisioning-parameters.md Normal file
@@ -0,0 +1,63 @@
# Keycloak Performance Testsuite - Provisioning Parameters

## Keycloak Server Settings:

| Category    | Setting                       | Property                           | Default value                                                    |
|-------------|-------------------------------|------------------------------------|------------------------------------------------------------------|
| JVM         | Memory settings               | `keycloak.jvm.memory`              | -Xms64m -Xmx512m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m |
| Undertow    | HTTP Listener max connections | `keycloak.http.max-connections`    | 500                                                              |
|             | AJP Listener max connections  | `keycloak.ajp.max-connections`     | 500                                                              |
| IO          | Worker IO thread pool         | `keycloak.worker.io-threads`       | 2                                                                |
|             | Worker Task thread pool       | `keycloak.worker.task-max-threads` | 16                                                               |
| Datasources | Connection pool min size      | `keycloak.ds.min-pool-size`        | 10                                                               |
|             | Connection pool max size      | `keycloak.ds.max-pool-size`        | 100                                                              |
|             | Connection pool prefill       | `keycloak.ds.pool-prefill`         | true                                                             |
|             | Prepared statement cache size | `keycloak.ds.ps-cache-size`        | 100                                                              |
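
These properties map to the `-D<PARAM>=<VALUE>` parameters of the `provision` profile described in [README.md](README.md). A hedged example, with illustrative values only:

```
# Provision a single node with a 1 GB KC heap and a larger datasource pool
mvn verify -Pprovision \
  -Dkeycloak.jvm.memory="-Xms64m -Xmx1g -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m" \
  -Dkeycloak.ds.max-pool-size=200
```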

## Load Balancer Settings:

| Category | Setting                       | Property                              | Default value                                                    |
|----------|-------------------------------|---------------------------------------|------------------------------------------------------------------|
| JVM      | Memory settings               | `keycloak-lb.jvm.memory`              | -Xms64m -Xmx512m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m |
| Undertow | HTTP Listener max connections | `keycloak-lb.http.max-connections`    | 500                                                              |
| IO       | Worker IO thread pool         | `keycloak-lb.worker.io-threads`       | 2                                                                |
|          | Worker Task thread pool       | `keycloak-lb.worker.task-max-threads` | 16                                                               |

## Infinispan Server Settings

| Category | Setting         | Property                | Default value                                                                            |
|----------|-----------------|-------------------------|------------------------------------------------------------------------------------------|
| JVM      | Memory settings | `infinispan.jvm.memory` | -Xms64m -Xmx512m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -XX:+DisableExplicitGC  |

## CPUs

At the moment it is not possible to dynamically parametrize the number of CPUs for a service via Maven properties or environment variables.

To change the default value (`cpus: 1`) it is necessary to edit the Docker Compose file.


### Example: Keycloak service using 2 CPU cores

`docker-compose.yml` and `docker-compose-cluster.yml`:
```
services:
    ...
    keycloak:
        ...
        cpus: 2
        ...
```

`docker-compose-crossdc.yml`:
```
services:
    ...
    keycloak_dc1:
        ...
        cpus: 2
        ...
    keycloak_dc2:
        ...
        cpus: 2
        ...
```

11 testsuite/performance/db/mariadb/Dockerfile Normal file
@@ -0,0 +1,11 @@
FROM mariadb:10.3

ADD wsrep.cnf /etc/mysql/conf.d/

ADD mariadb-healthcheck.sh /usr/local/bin/
RUN chmod -v +x /usr/local/bin/mariadb-healthcheck.sh
HEALTHCHECK --interval=5s --timeout=5s --retries=12 CMD ["mariadb-healthcheck.sh"]

ENV DATADIR /var/lib/mysql
ADD docker-entrypoint-wsrep.sh /usr/local/bin/
RUN chmod -v +x /usr/local/bin/docker-entrypoint-wsrep.sh

1 testsuite/performance/db/mariadb/README.md Normal file
@@ -0,0 +1 @@

22 testsuite/performance/db/mariadb/docker-entrypoint-wsrep.sh Normal file
@@ -0,0 +1,22 @@
#!/bin/bash

echo "docker-entrypoint-wsrep.sh: $1"

if [ "$1" == "--wsrep-new-cluster" ]; then

    echo "docker-entrypoint-wsrep.sh: Cluster 'bootstrap' node."

else

    echo "docker-entrypoint-wsrep.sh: Cluster 'member' node."

    if [ ! -d "$DATADIR/mysql" ]; then
        echo "docker-entrypoint-wsrep.sh: Creating empty datadir to be populated from the cluster 'seed' node."
        mkdir -p "$DATADIR/mysql"
        chown -R mysql:mysql "$DATADIR"
    fi

fi

echo "docker-entrypoint-wsrep.sh: Delegating to original mariadb docker-entrypoint.sh"
docker-entrypoint.sh "$@"
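
For reference, this is how `docker-compose-crossdc.yml` (later in this commit) drives the two modes:

```
# Bootstrap node (mariadb_dc1):
docker-entrypoint-wsrep.sh --wsrep-new-cluster

# Member node joining the bootstrap node (mariadb_dc2):
docker-entrypoint-wsrep.sh --wsrep_cluster_address=gcomm://mariadb_dc1
```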

25 testsuite/performance/db/mariadb/mariadb-healthcheck.sh Normal file
@@ -0,0 +1,25 @@
#!/bin/bash
set -eo pipefail

if [ "$MYSQL_RANDOM_ROOT_PASSWORD" ] && [ -z "$MYSQL_USER" ] && [ -z "$MYSQL_PASSWORD" ]; then
    # there's no way we can guess what the random MySQL password was
    echo >&2 'healthcheck error: cannot determine random root password (and MYSQL_USER and MYSQL_PASSWORD were not set)'
    exit 0
fi

host="$(hostname --ip-address || echo '127.0.0.1')"
user="${MYSQL_USER:-root}"
export MYSQL_PWD="${MYSQL_PASSWORD:-$MYSQL_ROOT_PASSWORD}"

args=(
    # force mysql to not use the local "mysqld.sock" (test "external" connectivity)
    -h"$host"
    -u"$user"
    --silent
)

if select="$(echo 'SELECT 1' | mysql "${args[@]}")" && [ "$select" = '1' ]; then
    exit 0
fi

exit 1
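
A hedged way to exercise this check by hand against a running container (the container name is taken from the debugging example in README.docker-compose.md):

```
docker exec crossdc_mariadb_dc1_1 mariadb-healthcheck.sh && echo healthy
```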

23 testsuite/performance/db/mariadb/wsrep.cnf Normal file
@@ -0,0 +1,23 @@
[mysqld]
general_log=OFF
bind-address=0.0.0.0

innodb_flush_log_at_trx_commit=0
query_cache_size=0
query_cache_type=0

binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
innodb_doublewrite=1

innodb_buffer_pool_size=122M

wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_provider_options="gcache.size=300M; gcache.page_size=300M"
wsrep_cluster_address="gcomm://"
wsrep_cluster_name="galera_cluster_keycloak"
wsrep_sst_method=rsync
#wsrep_sst_auth="root:root"
wsrep_slave_threads=16

98 testsuite/performance/docker-compose-cluster.yml Normal file
@@ -0,0 +1,98 @@
version: "2.2"

networks:
  keycloak:
    ipam:
      config:
      - subnet: 10.0.1.0/24
  loadbalancing:
    ipam:
      config:
      - subnet: 10.0.2.0/24

services:

  mariadb:
    build: db/mariadb
    image: keycloak_test_mariadb:${KEYCLOAK_VERSION:-latest}
    cpus: 1
    networks:
      - keycloak
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: keycloak
      MYSQL_USER: keycloak
      MYSQL_PASSWORD: keycloak
    ports:
      - "3306:3306"

  keycloak:
    build: keycloak
    image: keycloak_test_keycloak:${KEYCLOAK_VERSION:-latest}
    depends_on:
      mariadb:
        condition: service_healthy
    cpus: 1
    networks:
      - keycloak
    environment:
      CONFIGURATION: standalone-ha.xml
      PUBLIC_SUBNET: 10.0.1.0/24
      PRIVATE_SUBNET: 10.0.1.0/24
      MARIADB_HOST: mariadb
      MARIADB_DATABASE: keycloak
      MARIADB_USER: keycloak
      MARIADB_PASSWORD: keycloak
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: admin

      JAVA_OPTS: ${KEYCLOAK_JVM_MEMORY:--Xms64m -Xmx512m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m} -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true
      HTTP_MAX_CONNECTIONS: ${KEYCLOAK_HTTP_MAX_CONNECTIONS:-500}
      AJP_MAX_CONNECTIONS: ${KEYCLOAK_AJP_MAX_CONNECTIONS:-500}
      WORKER_IO_THREADS: ${KEYCLOAK_WORKER_IO_THREADS:-2}
      WORKER_TASK_MAX_THREADS: ${KEYCLOAK_WORKER_TASK_MAX_THREADS:-16}
      DS_MIN_POOL_SIZE: ${KEYCLOAK_DS_MIN_POOL_SIZE:-10}
      DS_MAX_POOL_SIZE: ${KEYCLOAK_DS_MAX_POOL_SIZE:-100}
      DS_POOL_PREFILL: "${KEYCLOAK_DS_POOL_PREFILL:-true}"
      DS_PS_CACHE_SIZE: ${KEYCLOAK_DS_PS_CACHE_SIZE:-100}
    ports:
      - "8080"
      - "9990"

  keycloak_lb:
    build: load-balancer/wildfly-modcluster
    image: keycloak_test_wildfly_modcluster:${KEYCLOAK_VERSION:-latest}
    depends_on:
      keycloak:
        condition: service_healthy
    cpus: 1
    networks:
      - keycloak
#      - loadbalancing
    environment:
      PRIVATE_SUBNET: 10.0.1.0/24
#      PUBLIC_SUBNET: 10.0.2.0/24
      JAVA_OPTS: ${KEYCLOAK_LB_JVM_MEMORY:--Xms64m -Xmx512m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m} -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true
      HTTP_MAX_CONNECTIONS: ${KEYCLOAK_LB_HTTP_MAX_CONNECTIONS:-500}
      WORKER_IO_THREADS: ${KEYCLOAK_LB_WORKER_IO_THREADS:-2}
      WORKER_TASK_MAX_THREADS: ${KEYCLOAK_LB_WORKER_TASK_MAX_THREADS:-16}
    ports:
      - "8080:8080"

#  advertize_server:
#    build: load-balancer/debug-advertise
#    image: keycloak_test_modcluster_advertize_server:${KEYCLOAK_VERSION:-latest}
#    networks:
#      - keycloak
#      - loadbalancing
#    command: 0.0.0.0 224.0.1.105 23365
#
#  advertize_client:
#    build:
#      context: load-balancer/debug-advertise
#      dockerfile: Dockerfile-client
#    image: keycloak_test_modcluster_advertize_client:${KEYCLOAK_VERSION:-latest}
#    networks:
#      - keycloak
#      - loadbalancing
#    command: 224.0.1.105 23364

229 testsuite/performance/docker-compose-crossdc.yml Normal file
@@ -0,0 +1,229 @@
version: "2.2"

networks:
  # DC 1
  dc1_keycloak:
    ipam:
      config:
      - subnet: 10.1.1.0/24
  # DC 2
  dc2_keycloak:
    ipam:
      config:
      - subnet: 10.2.1.0/24
  # cross-DC
  loadbalancing:
    ipam:
      config:
      - subnet: 10.0.2.0/24
  # cross-DC
  db_replication:
    ipam:
      config:
      - subnet: 10.0.3.0/24
  # cross-DC
  ispn_replication:
    ipam:
      config:
      - subnet: 10.0.4.0/24

services:

  infinispan_dc1:
    build: infinispan
    image: keycloak_test_infinispan:${KEYCLOAK_VERSION:-latest}
    cpus: 1
    networks:
      - ispn_replication
      - dc1_keycloak
    environment:
      PUBLIC_SUBNET: 10.1.1.0/24
      PRIVATE_SUBNET: 10.0.4.0/24
      MGMT_USER: admin
      MGMT_USER_PASSWORD: admin
#      APP_USER: keycloak
#      APP_USER_PASSWORD: keycloak
#      APP_USER_GROUPS: keycloak
      JAVA_OPTS: ${INFINISPAN_JVM_MEMORY:--Xms64m -Xmx512m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -XX:+DisableExplicitGC} -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true
    ports:
      - "9991:9990"

  infinispan_dc2:
    build: infinispan
    image: keycloak_test_infinispan:${KEYCLOAK_VERSION:-latest}
    depends_on:
      infinispan_dc1:
        condition: service_healthy
    cpus: 1
    networks:
      - ispn_replication
      - dc2_keycloak
    environment:
      PUBLIC_SUBNET: 10.2.1.0/24
      PRIVATE_SUBNET: 10.0.4.0/24
      MGMT_USER: admin
      MGMT_USER_PASSWORD: admin
#      APP_USER: keycloak
#      APP_USER_PASSWORD: keycloak
#      APP_USER_GROUPS: keycloak
      JAVA_OPTS: ${INFINISPAN_JVM_MEMORY:--Xms64m -Xmx512m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -XX:+DisableExplicitGC} -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true
    ports:
      - "9992:9990"


  mariadb_dc1:
    build: db/mariadb
    image: keycloak_test_mariadb:${KEYCLOAK_VERSION:-latest}
    cpus: 1
    networks:
      - db_replication
      - dc1_keycloak
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_INITDB_SKIP_TZINFO: foo
      MYSQL_DATABASE: keycloak
      MYSQL_USER: keycloak
      MYSQL_PASSWORD: keycloak
    entrypoint: docker-entrypoint-wsrep.sh
    command: --wsrep-new-cluster
    ports:
      - "3306:3306"

  mariadb_dc2:
    build: db/mariadb
    image: keycloak_test_mariadb:${KEYCLOAK_VERSION:-latest}
    depends_on:
      mariadb_dc1:
        condition: service_healthy
    cpus: 1
    networks:
      - db_replication
      - dc2_keycloak
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_INITDB_SKIP_TZINFO: foo
    entrypoint: docker-entrypoint-wsrep.sh
    command: --wsrep_cluster_address=gcomm://mariadb_dc1
    ports:
      - "3307:3306"


  keycloak_dc1:
    build:
      context: ./keycloak
      args:
        REMOTE_CACHES: "true"
    image: keycloak_test_keycloak:${KEYCLOAK_VERSION:-latest}
    depends_on:
      # wait for the db cluster to be ready before starting keycloak
      mariadb_dc2:
        condition: service_healthy
      # wait for the ispn cluster to be ready before starting keycloak
      infinispan_dc2:
        condition: service_healthy
    cpus: 1
    networks:
      - dc1_keycloak
    environment:
      CONFIGURATION: standalone-ha.xml
      PUBLIC_SUBNET: 10.1.1.0/24
      PRIVATE_SUBNET: 10.1.1.0/24
      MARIADB_HOST: mariadb_dc1
      MARIADB_DATABASE: keycloak
      MARIADB_USER: keycloak
      MARIADB_PASSWORD: keycloak
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: admin
      INFINISPAN_HOST: infinispan_dc1
      SITE: dc1

      JAVA_OPTS: ${KEYCLOAK_JVM_MEMORY:--Xms64m -Xmx512m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m} -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true
      HTTP_MAX_CONNECTIONS: ${KEYCLOAK_HTTP_MAX_CONNECTIONS:-500}
      AJP_MAX_CONNECTIONS: ${KEYCLOAK_AJP_MAX_CONNECTIONS:-500}
      WORKER_IO_THREADS: ${KEYCLOAK_WORKER_IO_THREADS:-2}
      WORKER_TASK_MAX_THREADS: ${KEYCLOAK_WORKER_TASK_MAX_THREADS:-16}
      DS_MIN_POOL_SIZE: ${KEYCLOAK_DS_MIN_POOL_SIZE:-10}
      DS_MAX_POOL_SIZE: ${KEYCLOAK_DS_MAX_POOL_SIZE:-100}
      DS_POOL_PREFILL: "${KEYCLOAK_DS_POOL_PREFILL:-true}"
      DS_PS_CACHE_SIZE: ${KEYCLOAK_DS_PS_CACHE_SIZE:-100}
    ports:
      - "8080"
      - "9990"


  keycloak_dc2:
    build:
      context: ./keycloak
      args:
        REMOTE_CACHES: "true"
    image: keycloak_test_keycloak:${KEYCLOAK_VERSION:-latest}
    depends_on:
      # wait for the first kc instance to be ready before starting another
      keycloak_dc1:
        condition: service_healthy
    cpus: 1
    networks:
      - dc2_keycloak
    environment:
      CONFIGURATION: standalone-ha.xml
      PUBLIC_SUBNET: 10.2.1.0/24
      PRIVATE_SUBNET: 10.2.1.0/24
      MARIADB_HOST: mariadb_dc2
      MARIADB_DATABASE: keycloak
      MARIADB_USER: keycloak
      MARIADB_PASSWORD: keycloak
      INFINISPAN_HOST: infinispan_dc2
      SITE: dc2

      JAVA_OPTS: ${KEYCLOAK_JVM_MEMORY:--Xms64m -Xmx512m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m} -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true
      HTTP_MAX_CONNECTIONS: ${KEYCLOAK_HTTP_MAX_CONNECTIONS:-500}
      AJP_MAX_CONNECTIONS: ${KEYCLOAK_AJP_MAX_CONNECTIONS:-500}
      WORKER_IO_THREADS: ${KEYCLOAK_WORKER_IO_THREADS:-2}
      WORKER_TASK_MAX_THREADS: ${KEYCLOAK_WORKER_TASK_MAX_THREADS:-16}
      DS_MIN_POOL_SIZE: ${KEYCLOAK_DS_MIN_POOL_SIZE:-10}
      DS_MAX_POOL_SIZE: ${KEYCLOAK_DS_MAX_POOL_SIZE:-100}
      DS_POOL_PREFILL: "${KEYCLOAK_DS_POOL_PREFILL:-true}"
      DS_PS_CACHE_SIZE: ${KEYCLOAK_DS_PS_CACHE_SIZE:-100}
    ports:
      - "8080"
      - "9990"


  keycloak_lb_dc1:
    build: load-balancer/wildfly-modcluster
    image: keycloak_test_keycloak_lb:${KEYCLOAK_VERSION:-latest}
    cpus: 1
    networks:
      - dc1_keycloak
#      - loadbalancing
    environment:
      PRIVATE_SUBNET: 10.1.1.0/24
#      PUBLIC_SUBNET: 10.0.2.0/24
      JAVA_OPTS: ${KEYCLOAK_LB_JVM_MEMORY:--Xms64m -Xmx512m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m} -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true
      HTTP_MAX_CONNECTIONS: ${KEYCLOAK_LB_HTTP_MAX_CONNECTIONS:-500}
      WORKER_IO_THREADS: ${KEYCLOAK_LB_WORKER_IO_THREADS:-2}
      WORKER_TASK_MAX_THREADS: ${KEYCLOAK_LB_WORKER_TASK_MAX_THREADS:-16}
    ports:
      - "8081:8080"

  keycloak_lb_dc2:
    build: load-balancer/wildfly-modcluster
    image: keycloak_test_keycloak_lb:${KEYCLOAK_VERSION:-latest}
    cpus: 1
    networks:
      - dc2_keycloak
#      - loadbalancing
    environment:
      PRIVATE_SUBNET: 10.2.1.0/24
#      PUBLIC_SUBNET: 10.0.2.0/24
      JAVA_OPTS: ${KEYCLOAK_LB_JVM_MEMORY:--Xms64m -Xmx512m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m} -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true
      HTTP_MAX_CONNECTIONS: ${KEYCLOAK_LB_HTTP_MAX_CONNECTIONS:-500}
      WORKER_IO_THREADS: ${KEYCLOAK_LB_WORKER_IO_THREADS:-2}
      WORKER_TASK_MAX_THREADS: ${KEYCLOAK_LB_WORKER_TASK_MAX_THREADS:-16}
    ports:
      - "8082:8080"

76 testsuite/performance/docker-compose-monitoring.yml Normal file
@@ -0,0 +1,76 @@
#version: '3'
version: '2.2'

networks:
  monitoring:
    ipam:
      config:
      - subnet: 10.0.5.0/24

services:

  monitoring_influxdb:
    image: influxdb
    volumes:
      - influx:/var/lib/influxdb
    networks:
      - monitoring
    ports:
      - "8086:8086"
#    deploy:
#      replicas: 1
#      placement:
#        constraints:
#          - node.role == manager

  monitoring_cadvisor:
    build: monitoring/cadvisor
    image: monitoring_cadvisor
    hostname: '{{.Node.ID}}'
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
    networks:
      - monitoring
    environment:
      INFLUX_HOST: monitoring_influxdb
      INFLUX_DATABASE: cadvisor
    privileged: true
    depends_on:
      - monitoring_influxdb
    command: --storage_driver_buffer_duration="5s"
    ports:
      - "8087:8080"
#    deploy:
#      mode: global


  monitoring_grafana:
    build: monitoring/grafana
    image: monitoring_grafana
    depends_on:
      - monitoring_influxdb
    volumes:
      - grafana:/var/lib/grafana
    networks:
      - monitoring
    environment:
      INFLUX_DATASOURCE_NAME: influxdb_cadvisor
      INFLUX_HOST: monitoring_influxdb
      INFLUX_DATABASE: cadvisor
    ports:
      - "3000:3000"
#    deploy:
#      replicas: 1
#      placement:
#        constraints:
#          - node.role == manager


volumes:
  influx:
    driver: local
  grafana:
    driver: local

53 testsuite/performance/docker-compose.yml Normal file
@@ -0,0 +1,53 @@
version: "2.2"

networks:
  keycloak:
    ipam:
      config:
      - subnet: 10.0.1.0/24

services:

  mariadb:
    build: db/mariadb
    image: keycloak_test_mariadb:${KEYCLOAK_VERSION:-latest}
    cpus: 1
    networks:
      - keycloak
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: keycloak
      MYSQL_USER: keycloak
      MYSQL_PASSWORD: keycloak
      MYSQL_INITDB_SKIP_TZINFO: 1
    ports:
      - "3306:3306"

  keycloak:
    build: keycloak
    image: keycloak_test_keycloak:${KEYCLOAK_VERSION:-latest}
    depends_on:
      mariadb:
        condition: service_healthy
    cpus: 1
    networks:
      - keycloak
    environment:
      MARIADB_HOST: mariadb
      MARIADB_DATABASE: keycloak
      MARIADB_USER: keycloak
      MARIADB_PASSWORD: keycloak
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: admin
      # docker-compose syntax note: ${ENV_VAR:-<DEFAULT_VALUE>}
      JAVA_OPTS: ${KEYCLOAK_JVM_MEMORY:--Xms64m -Xmx512m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m} -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true
      HTTP_MAX_CONNECTIONS: ${KEYCLOAK_HTTP_MAX_CONNECTIONS:-500}
      WORKER_IO_THREADS: ${KEYCLOAK_WORKER_IO_THREADS:-2}
      WORKER_TASK_MAX_THREADS: ${KEYCLOAK_WORKER_TASK_MAX_THREADS:-16}
      DS_MIN_POOL_SIZE: ${KEYCLOAK_DS_MIN_POOL_SIZE:-10}
      DS_MAX_POOL_SIZE: ${KEYCLOAK_DS_MAX_POOL_SIZE:-100}
      DS_POOL_PREFILL: "${KEYCLOAK_DS_POOL_PREFILL:-true}"
      DS_PS_CACHE_SIZE: ${KEYCLOAK_DS_PS_CACHE_SIZE:-100}
    ports:
      - "8080:8080"
      - "9990:9990"

34 testsuite/performance/healthcheck.sh Executable file
@@ -0,0 +1,34 @@
#!/bin/bash

function isKeycloakHealthy {
    echo Checking instance $1
    CODE=`curl -s -o /dev/null -w "%{http_code}" $1/realms/master`
    [ "$CODE" -eq "200" ]
}

function waitUntilSystemHealthy {
    echo Waiting until all Keycloak instances are healthy...
    if [ -z $1 ]; then ITERATIONS=60; else ITERATIONS=$1; fi
    C=0
    SYSTEM_HEALTHY=false
    while ! $SYSTEM_HEALTHY; do
        C=$((C+1))
        if [ $C -gt $ITERATIONS ]; then
            echo System healthcheck failed.
            exit 1
        fi
        sleep 2
        SYSTEM_HEALTHY=true
        for URI in $KEYCLOAK_SERVER_URIS ; do
            if ! isKeycloakHealthy $URI; then
                echo Instance is not healthy.
                SYSTEM_HEALTHY=false
            fi
        done
    done
    echo System is healthy.
}

if [ -z "$KEYCLOAK_SERVER_URIS" ]; then KEYCLOAK_SERVER_URIS=http://localhost:8080/auth; fi

waitUntilSystemHealthy
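
A hedged usage sketch: in the cross-DC deployment the load balancers are published on host ports 8081 and 8082 (see docker-compose-crossdc.yml), so both can be checked with:

```
KEYCLOAK_SERVER_URIS="http://localhost:8081/auth http://localhost:8082/auth" ./healthcheck.sh
```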

22 testsuite/performance/infinispan/Dockerfile Normal file
@@ -0,0 +1,22 @@
FROM jboss/infinispan-server:8.2.6.Final
#FROM jboss/infinispan-server:9.1.0.Final

USER root
RUN yum -y install iproute
USER jboss

ENV CONFIGURATION clustered.xml

ADD configs/ ./
ADD *.sh /usr/local/bin/

USER root
RUN chmod -v +x /usr/local/bin/*.sh
USER jboss

RUN $INFINISPAN_SERVER_HOME/bin/ispn-cli.sh --file=add-keycloak-caches.cli; \
    $INFINISPAN_SERVER_HOME/bin/ispn-cli.sh --file=private-interface-for-jgroups-socket-bindings.cli; \
    cd $INFINISPAN_SERVER_HOME/standalone; rm -rf configuration/standalone_xml_history log data tmp

HEALTHCHECK --interval=5s --timeout=5s --retries=12 CMD ["infinispan-healthcheck.sh"]
ENTRYPOINT [ "docker-entrypoint-custom.sh" ]

@@ -0,0 +1,15 @@
embed-server --server-config=clustered.xml

cd /subsystem=datagrid-infinispan/cache-container=clustered/configurations=CONFIGURATIONS

#./replicated-cache-configuration=sessions-cfg:add(mode=SYNC, start=EAGER, batching=false)
./replicated-cache-configuration=sessions-cfg:add(mode=ASYNC, start=EAGER, batching=false)
./replicated-cache-configuration=sessions-cfg/transaction=TRANSACTION:add(locking=PESSIMISTIC, mode=NON_XA)

cd /subsystem=datagrid-infinispan/cache-container=clustered

./replicated-cache=work:add(configuration=sessions-cfg)
./replicated-cache=sessions:add(configuration=sessions-cfg)
./replicated-cache=offlineSessions:add(configuration=sessions-cfg)
./replicated-cache=actionTokens:add(configuration=sessions-cfg)
./replicated-cache=loginFailures:add(configuration=sessions-cfg)

@@ -0,0 +1,9 @@
embed-server --server-config=clustered.xml

/interface=private:add(inet-address=${jboss.bind.address.private:127.0.0.1})

/socket-binding-group=standard-sockets/socket-binding=jgroups-mping:write-attribute(name=interface, value=private)
/socket-binding-group=standard-sockets/socket-binding=jgroups-tcp:write-attribute(name=interface, value=private)
/socket-binding-group=standard-sockets/socket-binding=jgroups-tcp-fd:write-attribute(name=interface, value=private)
/socket-binding-group=standard-sockets/socket-binding=jgroups-udp:write-attribute(name=interface, value=private)
/socket-binding-group=standard-sockets/socket-binding=jgroups-udp-fd:write-attribute(name=interface, value=private)

28 testsuite/performance/infinispan/docker-entrypoint-custom.sh Executable file
@@ -0,0 +1,28 @@
#!/bin/bash

cat $INFINISPAN_SERVER_HOME/standalone/configuration/$CONFIGURATION

. get-ips.sh

PARAMS="-b $PUBLIC_IP -bmanagement $PUBLIC_IP -bprivate $PRIVATE_IP -Djgroups.bind_addr=$PRIVATE_IP -c $CONFIGURATION $@"
echo "Server startup params: $PARAMS"

# Note: External container connectivity is always provided by eth0 -- irrespective of which is considered public/private by KC.
# In case the container needs to be accessible on the host computer, override -b $PUBLIC_IP by adding `-b 0.0.0.0` to the docker command.

if [ $MGMT_USER ] && [ $MGMT_USER_PASSWORD ]; then
    echo Adding mgmt user: $MGMT_USER
    $INFINISPAN_SERVER_HOME/bin/add-user.sh -u $MGMT_USER -p $MGMT_USER_PASSWORD
fi

if [ $APP_USER ] && [ $APP_USER_PASSWORD ]; then
    echo Adding app user: $APP_USER
    if [ -z $APP_USER_GROUPS ]; then
        $INFINISPAN_SERVER_HOME/bin/add-user.sh -a --user $APP_USER --password $APP_USER_PASSWORD
    else
        $INFINISPAN_SERVER_HOME/bin/add-user.sh -a --user $APP_USER --password $APP_USER_PASSWORD --group $APP_USER_GROUPS
    fi
fi

exec $INFINISPAN_SERVER_HOME/bin/standalone.sh $PARAMS
exit $?

37 testsuite/performance/infinispan/get-ips.sh Executable file
@@ -0,0 +1,37 @@
#!/bin/bash

# This script:
# - assumes there are 2 network interfaces connected to $PUBLIC_SUBNET and $PRIVATE_SUBNET respectively
# - finds IP addresses from those interfaces using `ip` command
# - exports the IPs as variables

function getIpForSubnet {
    if [ -z $1 ]; then
        echo Subnet parameter undefined
        exit 1
    else
        ip r | grep $1 | sed 's/.*src\ \([^\ ]*\).*/\1/'
    fi
}

if [ -z $PUBLIC_SUBNET ]; then
    PUBLIC_IP=0.0.0.0
    echo PUBLIC_SUBNET undefined. PUBLIC_IP defaults to $PUBLIC_IP
else
    export PUBLIC_IP=`getIpForSubnet $PUBLIC_SUBNET`
fi

if [ -z $PRIVATE_SUBNET ]; then
    PRIVATE_IP=127.0.0.1
    echo PRIVATE_SUBNET undefined. PRIVATE_IP defaults to $PRIVATE_IP
else
    export PRIVATE_IP=`getIpForSubnet $PRIVATE_SUBNET`
fi

echo Routing table:
ip r
echo PUBLIC_SUBNET=$PUBLIC_SUBNET
echo PRIVATE_SUBNET=$PRIVATE_SUBNET
echo PUBLIC_IP=$PUBLIC_IP
echo PRIVATE_IP=$PRIVATE_IP
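
A hedged usage sketch, with subnets taken from the cross-DC compose file (inside a container attached to both networks):

```
export PUBLIC_SUBNET=10.1.1.0/24 PRIVATE_SUBNET=10.0.4.0/24
. get-ips.sh
echo "public: $PUBLIC_IP, private: $PRIVATE_IP"
```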

14 testsuite/performance/infinispan/infinispan-healthcheck.sh Executable file
@@ -0,0 +1,14 @@
#!/bin/bash

#$JBOSS_HOME/bin/jboss-cli.sh -c ":read-attribute(name=server-state)" | grep -q "running"

. get-ips.sh

CODE=`curl -s -o /dev/null -w "%{http_code}" http://$PUBLIC_IP:9990/console/index.html`

if [ "$CODE" -eq "200" ]; then
    exit 0
fi

exit 1

44 testsuite/performance/keycloak/Dockerfile Normal file
@@ -0,0 +1,44 @@
FROM jboss/base-jdk:8

ENV JBOSS_HOME /opt/jboss/keycloak
ARG REMOTE_CACHES=false
WORKDIR $JBOSS_HOME

ENV CONFIGURATION standalone.xml
ENV DEBUG_USER admin
ENV DEBUG_USER_PASSWORD admin

# Enables signals getting passed from startup script to JVM,
# ensuring clean shutdown when container is stopped.
ENV LAUNCH_JBOSS_IN_BACKGROUND 1
ENV PROXY_ADDRESS_FORWARDING false

ADD target/keycloak configs/ ./
ADD *.sh /usr/local/bin/

USER root
RUN chown -R jboss .; chgrp -R jboss .;
RUN chmod -R -v +x /usr/local/bin/
RUN yum install -y epel-release jq iproute && yum clean all

USER jboss
# install mariadb JDBC driver
RUN mkdir -p modules/system/layers/base/org/mariadb/jdbc/main; \
    cd modules/system/layers/base/org/mariadb/jdbc/main; \
    curl -O http://central.maven.org/maven2/org/mariadb/jdbc/mariadb-java-client/2.0.3/mariadb-java-client-2.0.3.jar
ADD module.xml modules/system/layers/base/org/mariadb/jdbc/main/
# apply configurations
RUN $JBOSS_HOME/bin/jboss-cli.sh --file=set-keycloak-ds.cli
RUN $JBOSS_HOME/bin/jboss-cli.sh --file=io-worker-threads.cli
RUN $JBOSS_HOME/bin/jboss-cli.sh --file=undertow.cli
RUN $JBOSS_HOME/bin/jboss-cli.sh --file=distributed-cache-owners.cli
RUN $JBOSS_HOME/bin/jboss-cli.sh --file=modcluster-simple-load-provider.cli
RUN if [ "$REMOTE_CACHES" == "true" ]; then $JBOSS_HOME/bin/jboss-cli.sh --file=add-remote-cache-stores.cli; fi
RUN cd $JBOSS_HOME/standalone; rm -rf configuration/standalone_xml_history log data tmp

RUN $JBOSS_HOME/bin/add-user.sh -u $DEBUG_USER -p $DEBUG_USER_PASSWORD

EXPOSE 8080
EXPOSE 9990
HEALTHCHECK --interval=5s --timeout=5s --retries=12 CMD ["keycloak-healthcheck.sh"]
ENTRYPOINT ["docker-entrypoint.sh"]

@@ -0,0 +1,20 @@
embed-server --server-config=standalone-ha.xml

/subsystem=jgroups/stack=udp/transport=UDP:write-attribute(name=site, value=${env.SITE:dc1})
/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=remote-cache:add(host=${env.INFINISPAN_HOST:localhost}, port=${env.INFINISPAN_PORT:11222})


/subsystem=infinispan/cache-container=keycloak:write-attribute(name=module, value=org.keycloak.keycloak-model-infinispan)

/subsystem=infinispan/cache-container=keycloak/replicated-cache=work/store=remote:add(cache=work, fetch-state=false, passivation=false, preload=false, purge=false, remote-servers=["remote-cache"], shared=true)
/subsystem=infinispan/cache-container=keycloak/replicated-cache=work/store=remote:write-attribute(name=properties, value={rawValues=true, marshaller=org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory})

/subsystem=infinispan/cache-container=keycloak/distributed-cache=sessions/store=custom:add(class=org.keycloak.models.sessions.infinispan.remotestore.KeycloakRemoteStoreConfigurationBuilder, fetch-state=false, passivation=false, preload=false, purge=false, shared=true)
/subsystem=infinispan/cache-container=keycloak/distributed-cache=sessions/store=custom:write-attribute(name=properties, value={remoteCacheName=sessions, useConfigTemplateFromCache=work})

/subsystem=infinispan/cache-container=keycloak/distributed-cache=offlineSessions/store=custom:add(class=org.keycloak.models.sessions.infinispan.remotestore.KeycloakRemoteStoreConfigurationBuilder, fetch-state=false, passivation=false, preload=false, purge=false, shared=true)
/subsystem=infinispan/cache-container=keycloak/distributed-cache=offlineSessions/store=custom:write-attribute(name=properties, value={remoteCacheName=offlineSessions, useConfigTemplateFromCache=work})

/subsystem=infinispan/cache-container=keycloak/distributed-cache=loginFailures/store=custom:add(class=org.keycloak.models.sessions.infinispan.remotestore.KeycloakRemoteStoreConfigurationBuilder, fetch-state=false, passivation=false, preload=false, purge=false, shared=true)
/subsystem=infinispan/cache-container=keycloak/distributed-cache=loginFailures/store=custom:write-attribute(name=properties, value={remoteCacheName=loginFailures, useConfigTemplateFromCache=work})

@@ -0,0 +1,7 @@
embed-server --server-config=standalone-ha.xml

# increase number of "owners" for distributed keycloak caches to support failover
/subsystem=infinispan/cache-container=keycloak/distributed-cache=sessions:write-attribute(name=owners, value=${distributed.cache.owners:2})
/subsystem=infinispan/cache-container=keycloak/distributed-cache=authenticationSessions:write-attribute(name=owners, value=${distributed.cache.owners:2})
/subsystem=infinispan/cache-container=keycloak/distributed-cache=offlineSessions:write-attribute(name=owners, value=${distributed.cache.owners:2})
/subsystem=infinispan/cache-container=keycloak/distributed-cache=loginFailures:write-attribute(name=owners, value=${distributed.cache.owners:2})

@@ -0,0 +1,5 @@
# default is cpuCount * 2
/subsystem=io/worker=default:write-attribute(name=io-threads,value=${env.WORKER_IO_THREADS:2})

# default is cpuCount * 16
/subsystem=io/worker=default:write-attribute(name=task-max-threads,value=${env.WORKER_TASK_MAX_THREADS:16})

@@ -0,0 +1,7 @@
embed-server --server-config=standalone.xml
run-batch --file=io-worker-threads-batch.cli

stop-embedded-server

embed-server --server-config=standalone-ha.xml
run-batch --file=io-worker-threads-batch.cli

@@ -0,0 +1,7 @@
embed-server --server-config=standalone-ha.xml

# remove dynamic-load-provider
/subsystem=modcluster/mod-cluster-config=configuration/dynamic-load-provider=configuration:remove

# add simple-load-provider with factor=1 to assure round-robin balancing
/subsystem=modcluster/mod-cluster-config=configuration:write-attribute(name=simple-load-provider, value=1)

@@ -0,0 +1,20 @@
/subsystem=datasources/jdbc-driver=mariadb:add(driver-name=mariadb, driver-module-name=org.mariadb.jdbc, driver-xa-datasource-class-name=org.mariadb.jdbc.Driver)

cd /subsystem=datasources/data-source=KeycloakDS

:write-attribute(name=connection-url, value=jdbc:mariadb://${env.MARIADB_HOST:mariadb}:${env.MARIADB_PORT:3306}/${env.MARIADB_DATABASE:keycloak})
:write-attribute(name=driver-name, value=mariadb)
:write-attribute(name=user-name, value=${env.MARIADB_USER:keycloak})
:write-attribute(name=password, value=${env.MARIADB_PASSWORD:keycloak})

:write-attribute(name=check-valid-connection-sql, value="SELECT 1")
:write-attribute(name=background-validation, value=true)
:write-attribute(name=background-validation-millis, value=60000)

:write-attribute(name=min-pool-size, value=${env.DS_MIN_POOL_SIZE:10})
:write-attribute(name=max-pool-size, value=${env.DS_MAX_POOL_SIZE:100})
:write-attribute(name=pool-prefill, value=${env.DS_POOL_PREFILL:true})
:write-attribute(name=prepared-statements-cache-size, value=${env.DS_PS_CACHE_SIZE:100})

:write-attribute(name=flush-strategy, value=IdleConnections)
:write-attribute(name=use-ccm, value=true)

@@ -0,0 +1,7 @@
embed-server --server-config=standalone.xml
run-batch --file=set-keycloak-ds-batch.cli

stop-embedded-server

embed-server --server-config=standalone-ha.xml
run-batch --file=set-keycloak-ds-batch.cli

@@ -0,0 +1,2 @@
/subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=proxy-address-forwarding, value=${env.PROXY_ADDRESS_FORWARDING})
/subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=max-connections,value=${env.HTTP_MAX_CONNECTIONS:500})

9 testsuite/performance/keycloak/configs/undertow.cli Normal file
@@ -0,0 +1,9 @@
embed-server --server-config=standalone.xml
run-batch --file=undertow-batch.cli

stop-embedded-server

embed-server --server-config=standalone-ha.xml
run-batch --file=undertow-batch.cli

/subsystem=undertow/server=default-server/ajp-listener=ajp:write-attribute(name=max-connections,value=${env.AJP_MAX_CONNECTIONS:500})

18 testsuite/performance/keycloak/docker-entrypoint.sh Normal file
@@ -0,0 +1,18 @@
#!/bin/bash

cat $JBOSS_HOME/standalone/configuration/$CONFIGURATION

. get-ips.sh

PARAMS="-b $PUBLIC_IP -bmanagement $PUBLIC_IP -bprivate $PRIVATE_IP -c $CONFIGURATION $@"
echo "Server startup params: $PARAMS"

# Note: External container connectivity is always provided by eth0 -- irrespective of which is considered public/private by KC.
# In case the container needs to be accessible on the host computer, override -b $PUBLIC_IP by adding `-b 0.0.0.0` to the docker command.

if [ $KEYCLOAK_USER ] && [ $KEYCLOAK_PASSWORD ]; then
    $JBOSS_HOME/bin/add-user-keycloak.sh --user $KEYCLOAK_USER --password $KEYCLOAK_PASSWORD
fi

exec /opt/jboss/keycloak/bin/standalone.sh $PARAMS
exit $?

37 testsuite/performance/keycloak/get-ips.sh Normal file
@@ -0,0 +1,37 @@
#!/bin/bash

# This script:
# - assumes there are 2 network interfaces connected to $PUBLIC_SUBNET and $PRIVATE_SUBNET respectively
# - finds IP addresses from those interfaces using `ip` command
# - exports the IPs as variables

function getIpForSubnet {
    if [ -z $1 ]; then
        echo Subnet parameter undefined
        exit 1
    else
        ip r | grep $1 | sed 's/.*src\ \([^\ ]*\).*/\1/'
    fi
}

if [ -z $PUBLIC_SUBNET ]; then
    PUBLIC_IP=0.0.0.0
    echo PUBLIC_SUBNET undefined. PUBLIC_IP defaults to $PUBLIC_IP
else
    export PUBLIC_IP=`getIpForSubnet $PUBLIC_SUBNET`
fi

if [ -z $PRIVATE_SUBNET ]; then
    PRIVATE_IP=127.0.0.1
    echo PRIVATE_SUBNET undefined. PRIVATE_IP defaults to $PRIVATE_IP
else
    export PRIVATE_IP=`getIpForSubnet $PRIVATE_SUBNET`
fi

echo Routing table:
ip r
echo PUBLIC_SUBNET=$PUBLIC_SUBNET
echo PRIVATE_SUBNET=$PRIVATE_SUBNET
echo PUBLIC_IP=$PUBLIC_IP
echo PRIVATE_IP=$PRIVATE_IP

11 testsuite/performance/keycloak/keycloak-healthcheck.sh Normal file
@@ -0,0 +1,11 @@
#!/bin/bash

. get-ips.sh

CODE=`curl -s -o /dev/null -w "%{http_code}" http://$PUBLIC_IP:8080/auth/realms/master`

if [ "$CODE" -eq "200" ]; then
    exit 0
fi

exit 1

31 testsuite/performance/keycloak/module.xml Normal file
@@ -0,0 +1,31 @@
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<!--
|
||||
~ JBoss, Home of Professional Open Source.
|
||||
~ Copyright 2010, Red Hat, Inc., and individual contributors
|
||||
~ as indicated by the @author tags. See the copyright.txt file in the
|
||||
~ distribution for a full listing of individual contributors.
|
||||
~
|
||||
~ This is free software; you can redistribute it and/or modify it
|
||||
~ under the terms of the GNU Lesser General Public License as
|
||||
~ published by the Free Software Foundation; either version 2.1 of
|
||||
~ the License, or (at your option) any later version.
|
||||
~
|
||||
~ This software is distributed in the hope that it will be useful,
|
||||
~ but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
~ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||
~ Lesser General Public License for more details.
|
||||
~
|
||||
~ You should have received a copy of the GNU Lesser General Public
|
||||
~ License along with this software; if not, write to the Free
|
||||
~ Software Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
|
||||
~ 02110-1301 USA, or see the FSF site: http://www.fsf.org.
|
||||
-->
|
||||
<module xmlns="urn:jboss:module:1.0" name="org.mariadb.jdbc">
|
||||
<resources>
|
||||
<resource-root path="mariadb-java-client-2.0.3.jar"/>
|
||||
</resources>
|
||||
<dependencies>
|
||||
<module name="javax.api"/>
|
||||
<module name="javax.transaction.api"/>
|
||||
</dependencies>
|
||||
</module>
|
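The module only exposes the MariaDB client JAR to the server; a datasource configuration still has to reference it as a JDBC driver. A minimal jboss-cli sketch of such a reference (the driver name and class are assumptions based on the standard MariaDB driver, not taken from this commit):

```
/subsystem=datasources/jdbc-driver=mariadb:add(driver-name=mariadb, driver-module-name=org.mariadb.jdbc, driver-class-name=org.mariadb.jdbc.Driver)
```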
90
testsuite/performance/keycloak/pom.xml
Normal file
@@ -0,0 +1,90 @@
<?xml version="1.0"?>
<!--
  ~ Copyright 2016 Red Hat, Inc. and/or its affiliates
  ~ and other contributors as indicated by the @author tags.
  ~
  ~ Licensed under the Apache License, Version 2.0 (the "License");
  ~ you may not use this file except in compliance with the License.
  ~ You may obtain a copy of the License at
  ~
  ~ http://www.apache.org/licenses/LICENSE-2.0
  ~
  ~ Unless required by applicable law or agreed to in writing, software
  ~ distributed under the License is distributed on an "AS IS" BASIS,
  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  ~ See the License for the specific language governing permissions and
  ~ limitations under the License.
-->

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <parent>
        <groupId>org.keycloak.testsuite</groupId>
        <artifactId>performance</artifactId>
        <version>3.4.0.CR1-SNAPSHOT</version>
        <relativePath>../pom.xml</relativePath>
    </parent>
    <modelVersion>4.0.0</modelVersion>

    <artifactId>performance-keycloak-server</artifactId>
    <name>Keycloak Performance TestSuite - Keycloak Server</name>
    <packaging>pom</packaging>

    <description>
        Uses maven-dependency-plugin to unpack keycloak-server-dist artifact into `target/keycloak` which is then added to Docker image.
    </description>

    <build>
        <plugins>
            <plugin>
                <artifactId>maven-dependency-plugin</artifactId>
                <executions>
                    <execution>
                        <!--=============================-->
                        <id>unpack-keycloak-server-dist</id>
                        <!--=============================-->
                        <phase>generate-resources</phase>
                        <goals>
                            <goal>unpack</goal>
                        </goals>
                        <configuration>
                            <artifactItems>
                                <artifactItem>
                                    <groupId>org.keycloak</groupId>
                                    <artifactId>keycloak-server-dist</artifactId>
                                    <version>${project.version}</version>
                                    <type>zip</type>
                                    <outputDirectory>${project.build.directory}</outputDirectory>
                                </artifactItem>
                            </artifactItems>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-antrun-plugin</artifactId>
                <executions>
                    <execution>
                        <!--=============================-->
                        <id>rename-keycloak-folder</id>
                        <!--=============================-->
                        <phase>process-resources</phase>
                        <goals>
                            <goal>run</goal>
                        </goals>
                        <configuration>
                            <target>
                                <move todir="${project.build.directory}/keycloak" failonerror="false">
                                    <fileset dir="${project.build.directory}/keycloak-${project.version}"/>
                                </move>
                            </target>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

</project>
@@ -0,0 +1,75 @@
/*
 * Copyright(c) 2010 Red Hat Middleware, LLC,
 * and individual contributors as indicated by the @authors tag.
 * See the copyright.txt in the distribution for a
 * full listing of individual contributors.
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2 of the License, or (at your option) any later version.
 *
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with this library in the file COPYING.LIB;
 * if not, write to the Free Software Foundation, Inc.,
 * 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA
 *
 * @author Jean-Frederic Clere
 * @version $Revision: 420067 $, $Date: 2006-07-08 09:16:58 +0200 (sub, 08 srp 2006) $
 */
import java.net.MulticastSocket;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.DatagramPacket;

public class Advertise
{
    /*
     * Client to test Advertize... Run it on node boxes to test for firewall.
     */
    public static void main(String[] args) throws Exception
    {
        if (args.length != 2 && args.length != 3) {
            System.out.println("Usage: Advertize multicastaddress port [bindaddress]");
            System.out.println("java Advertize 224.0.1.105 23364");
            System.out.println("or");
            System.out.println("java Advertize 224.0.1.105 23364 10.33.144.3");
            System.out.println("receive from 224.0.1.105:23364");
            System.exit(1);
        }

        InetAddress group = InetAddress.getByName(args[0]);
        int port = Integer.parseInt(args[1]);
        InetAddress socketInterface = null;
        if (args.length == 3)
            socketInterface = InetAddress.getByName(args[2]);
        MulticastSocket s = null;
        String value = System.getProperty("os.name");
        if ((value != null) && (value.toLowerCase().startsWith("linux") || value.toLowerCase().startsWith("mac") || value.toLowerCase().startsWith("hp"))) {
            System.out.println("Linux like OS");
            s = new MulticastSocket(new InetSocketAddress(group, port));
        } else
            s = new MulticastSocket(port);
        s.setTimeToLive(0);
        if (socketInterface != null) {
            s.setInterface(socketInterface);
        }
        s.joinGroup(group);
        boolean ok = true;
        System.out.println("ready waiting...");
        while (ok) {
            byte[] buf = new byte[1000];
            DatagramPacket recv = new DatagramPacket(buf, buf.length);
            s.receive(recv);
            String data = new String(buf);
            System.out.println("received: " + data);
            System.out.println("received from " + recv.getSocketAddress());
        }
        s.leaveGroup(group);
    }
}
@@ -0,0 +1,7 @@
FROM jboss/base-jdk:8

ADD SAdvertise.java .
RUN javac SAdvertise.java

ENTRYPOINT ["java", "SAdvertise"]
CMD ["0.0.0.0", "224.0.1.105", "23364"]
@@ -0,0 +1,7 @@
FROM jboss/base-jdk:8

ADD Advertise.java .
RUN javac Advertise.java

ENTRYPOINT ["java", "Advertise"]
CMD ["224.0.1.105", "23364"]
@@ -0,0 +1,4 @@
# mod_cluster advertize

Docker container for testing mod_cluster "advertise" functionality. See the example run below.
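One possible way to exercise the receiver built by the Dockerfiles above (the image tag is made up for the example; `--net=host` is assumed so multicast traffic reaches the container):

```
docker build -t advertise-receiver .                        # image wrapping Advertise.java
docker run --net=host advertise-receiver 224.0.1.105 23364  # listen for advertise packets
```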
@@ -0,0 +1,60 @@
/*
 * Copyright(c) 2010 Red Hat Middleware, LLC,
 * and individual contributors as indicated by the @authors tag.
 * See the copyright.txt in the distribution for a
 * full listing of individual contributors.
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2 of the License, or (at your option) any later version.
 *
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with this library in the file COPYING.LIB;
 * if not, write to the Free Software Foundation, Inc.,
 * 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA
 *
 * @author Jean-Frederic Clere
 * @version $Revision: 420067 $, $Date: 2006-07-08 09:16:58 +0200 (sub, 08 srp 2006) $
 */
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.DatagramPacket;
import java.net.MulticastSocket;

public class SAdvertise {

    /*
     * Server to test Advertize... Run it on httpd box to test for firewall.
     */
    public static void main(String[] args) throws Exception {
        if (args.length != 3) {
            System.out.println("Usage: SAdvertize localaddress multicastaddress port");
            System.out.println("java SAdvertize 10.16.88.178 224.0.1.105 23364");
            System.out.println("send from 10.16.88.178:23364 to 224.0.1.105:23364");
            System.exit(1);
        }
        InetAddress group = InetAddress.getByName(args[1]);
        InetAddress addr = InetAddress.getByName(args[0]);
        int port = Integer.parseInt(args[2]);
        InetSocketAddress addrs = new InetSocketAddress(addr, port);

        MulticastSocket s = new MulticastSocket(addrs);
        s.setTimeToLive(29);
        s.joinGroup(group);
        boolean ok = true;
        while (ok) {
            byte[] buf = new byte[1000];
            DatagramPacket recv = new DatagramPacket(buf, buf.length, group, port);
            System.out.println("sending from: " + addr);
            s.send(recv);
            Thread.sleep(2000);
        }
        s.leaveGroup(group);
    }
}
@@ -0,0 +1,17 @@
FROM jboss/wildfly

ADD configs/ ./
ADD *.sh /usr/local/bin/

USER root
RUN chmod -v +x /usr/local/bin/*.sh
RUN yum -y install iproute
USER jboss

RUN $JBOSS_HOME/bin/jboss-cli.sh --file=mod-cluster-balancer.cli
RUN $JBOSS_HOME/bin/jboss-cli.sh --file=undertow.cli
RUN $JBOSS_HOME/bin/jboss-cli.sh --file=io-worker-threads.cli; \
    cd $JBOSS_HOME/standalone; rm -rf configuration/standalone_xml_history log data tmp

HEALTHCHECK --interval=5s --timeout=5s --retries=12 CMD ["wildfly-healthcheck.sh"]
ENTRYPOINT [ "docker-entrypoint.sh" ]
@@ -0,0 +1,7 @@
embed-server --server-config=standalone.xml

# default is cpuCount * 2
/subsystem=io/worker=default:write-attribute(name=io-threads,value=${env.WORKER_IO_THREADS:2})

# default is cpuCount * 16
/subsystem=io/worker=default:write-attribute(name=task-max-threads,value=${env.WORKER_TASK_MAX_THREADS:16})
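Because the CLI script uses `${env.…:default}` expressions, the worker pools can be tuned when the container is started rather than at image build time, e.g. (values and image name are illustrative):

```
docker run -e WORKER_IO_THREADS=4 -e WORKER_TASK_MAX_THREADS=32 loadbalancer
```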
@@ -0,0 +1,12 @@
embed-server --server-config=standalone.xml

/interface=private:add(inet-address=${jboss.bind.address.private:127.0.0.1})

/socket-binding-group=standard-sockets/socket-binding=modcluster-advertise:add(interface=private, multicast-port=23364, multicast-address=${jboss.default.multicast.address:224.0.1.105})
/socket-binding-group=standard-sockets/socket-binding=modcluster-management:add(interface=private, port=6666)

/extension=org.jboss.as.modcluster/:add
/subsystem=undertow/configuration=filter/mod-cluster=modcluster:add(advertise-socket-binding=modcluster-advertise, advertise-frequency=${modcluster.advertise-frequency:2000}, management-socket-binding=modcluster-management, enable-http2=true)
/subsystem=undertow/server=default-server/host=default-host/filter-ref=modcluster:add
/subsystem=undertow/server=default-server/http-listener=modcluster:add(socket-binding=modcluster-management, enable-http2=true)
@@ -0,0 +1,2 @@
embed-server --server-config=standalone.xml
/subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=max-connections,value=${env.HTTP_MAX_CONNECTIONS:500})
@@ -0,0 +1,14 @@
#!/bin/bash

cat $JBOSS_HOME/standalone/configuration/standalone.xml

. get-ips.sh

PARAMS="-b $PUBLIC_IP -bprivate $PRIVATE_IP $@"
echo "Server startup params: $PARAMS"

# Note: External container connectivity is always provided by eth0 -- irrespective of which is considered public/private by KC.
# In case the container needs to be accessible on the host computer override -b $PUBLIC_IP by adding: `-b 0.0.0.0` to the docker command.

exec /opt/jboss/wildfly/bin/standalone.sh $PARAMS
exit $?
@@ -0,0 +1,37 @@
#!/bin/bash

# This script:
# - assumes there are 2 network interfaces connected to $PUBLIC_SUBNET and $PRIVATE_SUBNET respectively
# - finds IP addresses from those interfaces using `ip` command
# - exports the IPs as variables

function getIpForSubnet {
    if [ -z $1 ]; then
        echo Subnet parameter undefined
        exit 1
    else
        ip r | grep $1 | sed 's/.*src\ \([^\ ]*\).*/\1/'
    fi
}

if [ -z $PUBLIC_SUBNET ]; then
    PUBLIC_IP=0.0.0.0
    echo PUBLIC_SUBNET undefined. PUBLIC_IP defaults to $PUBLIC_IP
else
    export PUBLIC_IP=`getIpForSubnet $PUBLIC_SUBNET`
fi

if [ -z $PRIVATE_SUBNET ]; then
    PRIVATE_IP=127.0.0.1
    echo PRIVATE_SUBNET undefined. PRIVATE_IP defaults to $PRIVATE_IP
else
    export PRIVATE_IP=`getIpForSubnet $PRIVATE_SUBNET`
fi

echo Routing table:
ip r
echo PUBLIC_SUBNET=$PUBLIC_SUBNET
echo PRIVATE_SUBNET=$PRIVATE_SUBNET
echo PUBLIC_IP=$PUBLIC_IP
echo PRIVATE_IP=$PRIVATE_IP
@@ -0,0 +1,14 @@
#!/bin/bash

#$JBOSS_HOME/bin/jboss-cli.sh -c ":read-attribute(name=server-state)" | grep -q "running"

. get-ips.sh

CODE=`curl -s -o /dev/null -w "%{http_code}" http://localhost:8080/`

if [ "$CODE" -eq "200" ]; then
    exit 0
fi

exit 1
26
testsuite/performance/load-dump.sh
Executable file
@@ -0,0 +1,26 @@
#!/bin/bash
DIRNAME=`dirname "$0"`
GATLING_HOME=$DIRNAME/tests

if [ -z "$DATASET" ]; then
    echo "Please specify DATASET env variable.";
    exit 1;
fi

if [ ! -f $GATLING_HOME/datasets/$DATASET.sql ]; then
    if [ ! -f $GATLING_HOME/datasets/$DATASET.sql.gz ]; then
        echo "Data dump file does not exist: $GATLING_HOME/datasets/$DATASET.sql(.gz)";
        echo "Usage: DATASET=[name] $0"
        exit 1;
    else
        FILE=$GATLING_HOME/datasets/$DATASET.sql.gz
        CMD="zcat $FILE"
    fi
else
    FILE=$GATLING_HOME/datasets/$DATASET.sql
    CMD="cat $FILE"
fi

echo "Importing dump file: $FILE"
echo "\$ $CMD | docker exec -i performance_mariadb_1 /usr/bin/mysql -u root --password=root keycloak"
$CMD | docker exec -i performance_mariadb_1 /usr/bin/mysql -u root --password=root keycloak
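Example invocation, assuming a dump named `100u.sql` or `100u.sql.gz` is already present under `tests/datasets/` (the dataset name is illustrative):

```
DATASET=100u ./load-dump.sh
```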
5
testsuite/performance/log-tool.sh
Executable file
@@ -0,0 +1,5 @@
#!/bin/bash
DIRNAME=`dirname "$0"`
GATLING_HOME=$DIRNAME/tests

java -cp $GATLING_HOME/target/classes org.keycloak.performance.log.LogProcessor "$@"
4
testsuite/performance/monitoring/cadvisor/Dockerfile
Normal file
@@ -0,0 +1,4 @@
FROM google/cadvisor:v0.26.1
RUN apk add --no-cache bash curl
ADD entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
10
testsuite/performance/monitoring/cadvisor/README.md
Normal file
@@ -0,0 +1,10 @@
# CAdvisor configured for InfluxDB

The Google cAdvisor tool, configured to export data into an InfluxDB service.

## Customization for InfluxDB

This image adds a custom `entrypoint.sh` on top of the original CAdvisor image which:
- checks if `$INFLUX_DATABASE` exists on `$INFLUX_HOST`
- creates the DB if it doesn't exist
- starts `cadvisor` with the storage driver set to the Influx DB
18
testsuite/performance/monitoring/cadvisor/entrypoint.sh
Executable file
@@ -0,0 +1,18 @@
#!/bin/bash

if [ -z $INFLUX_HOST ]; then export INFLUX_HOST=influx; fi
if [ -z $INFLUX_DATABASE ]; then export INFLUX_DATABASE=cadvisor; fi

# Check if DB exists
curl -s -G "http://$INFLUX_HOST:8086/query?pretty=true" --data-urlencode "q=SHOW DATABASES" | grep $INFLUX_DATABASE
DB_EXISTS=$?

if [ $DB_EXISTS -eq 0 ]; then
    echo "Database '$INFLUX_DATABASE' already exists on InfluxDB server '$INFLUX_HOST:8086'"
else
    echo "Creating database '$INFLUX_DATABASE' on InfluxDB server '$INFLUX_HOST:8086'"
    curl -i -XPOST http://$INFLUX_HOST:8086/query --data-urlencode "q=CREATE DATABASE $INFLUX_DATABASE"
fi

/usr/bin/cadvisor -logtostderr -docker_only -storage_driver=influxdb -storage_driver_db=cadvisor -storage_driver_host=$INFLUX_HOST:8086 $@
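Once cAdvisor is running, the same InfluxDB HTTP query API used above can confirm that container metrics are arriving (host and database are the defaults assumed by this entrypoint):

```
curl -s -G "http://influx:8086/query" --data-urlencode "db=cadvisor" --data-urlencode "q=SHOW MEASUREMENTS"
```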
7
testsuite/performance/monitoring/grafana/Dockerfile
Normal file
@@ -0,0 +1,7 @@
FROM grafana/grafana:4.4.3
ENV GF_DASHBOARDS_JSON_ENABLED true
ENV GF_DASHBOARDS_JSON_PATH /etc/grafana/dashboards/
COPY resource-usage-per-container.json /etc/grafana/dashboards/
COPY resource-usage-combined.json /etc/grafana/dashboards/
ADD entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
9
testsuite/performance/monitoring/grafana/README.md
Normal file
@@ -0,0 +1,9 @@
# Grafana configured with InfluxDB

## Customization for InfluxDB

This image adds a custom `entrypoint.sh` on top of the original Grafana image which:
- checks if `$INFLUX_DATASOURCE_NAME` exists in Grafana
- creates the datasource if it doesn't exist
- adds custom dashboards configured to query the datasource
- starts Grafana
41
testsuite/performance/monitoring/grafana/entrypoint.sh
Executable file
@@ -0,0 +1,41 @@
#!/bin/bash

if [ -z $INFLUX_DATASOURCE_NAME ]; then INFLUX_DATASOURCE_NAME=influx_datasource; fi

if [ -z $INFLUX_HOST ]; then export INFLUX_HOST=influx; fi
if [ -z $INFLUX_DATABASE ]; then export INFLUX_DATABASE=cadvisor; fi

echo Starting Grafana
./run.sh "${@}" &
timeout 10 bash -c "until </dev/tcp/localhost/3000; do sleep 1; done"

echo Checking if datasource '$INFLUX_DATASOURCE_NAME' exists
curl -s 'http://admin:admin@localhost:3000/api/datasources' | grep $INFLUX_DATASOURCE_NAME
DS_EXISTS=$?

if [ $DS_EXISTS -eq 0 ]; then
    echo "Datasource '$INFLUX_DATASOURCE_NAME' already exists in Grafana."
else
    echo "Datasource '$INFLUX_DATASOURCE_NAME' not found in Grafana. Creating..."

    curl -s -H "Content-Type: application/json" \
        -X POST http://admin:admin@localhost:3000/api/datasources \
        -d @- <<EOF
{
    "name": "${INFLUX_DATASOURCE_NAME}",
    "type": "influxdb",
    "isDefault": false,
    "access": "proxy",
    "url": "http://${INFLUX_HOST}:8086",
    "database": "${INFLUX_DATABASE}"
}
EOF

fi

echo Restarting Grafana
pkill grafana-server
timeout 10 bash -c "while </dev/tcp/localhost/3000; do sleep 1; done"

exec ./run.sh "${@}"
@@ -0,0 +1,610 @@
{
  "__inputs": [
    { "name": "DS_INFLUXDB_CADVISOR", "label": "influxdb_cadvisor", "description": "", "type": "datasource", "pluginId": "influxdb", "pluginName": "InfluxDB" }
  ],
  "__requires": [
    { "type": "grafana", "id": "grafana", "name": "Grafana", "version": "4.4.3" },
    { "type": "panel", "id": "graph", "name": "Graph", "version": "" },
    { "type": "datasource", "id": "influxdb", "name": "InfluxDB", "version": "1.0.0" }
  ],
  "annotations": { "list": [] },
  "editable": true,
  "gnetId": null,
  "graphTooltip": 0,
  "hideControls": true,
  "id": null,
  "links": [],
  "refresh": "5s",
  "rows": [
    {
      "collapse": false,
      "height": 250,
      "panels": [
        {
          "aliasColors": {}, "bars": false, "dashLength": 10, "dashes": false, "datasource": "influxdb_cadvisor", "fill": 1, "id": 2,
          "legend": { "alignAsTable": true, "avg": true, "current": true, "max": true, "min": true, "show": true, "total": true, "values": true },
          "lines": true, "linewidth": 1, "links": [], "nullPointMode": "connected", "percentage": false, "pointradius": 5, "points": false,
          "renderer": "flot", "seriesOverrides": [], "spaceLength": 10, "span": 6, "stack": false, "steppedLine": false,
          "targets": [
            {
              "alias": "$tag_container_name", "dsType": "influxdb",
              "groupBy": [ { "params": [ "$interval" ], "type": "time" }, { "params": [ "container_name" ], "type": "tag" } ],
              "measurement": "cpu_usage_total", "orderByTime": "ASC", "policy": "default",
              "query": "SELECT derivative(mean(\"value\"), 10s) FROM \"cpu_usage_total\" WHERE \"container_name\" =~ /^performance_keycloak.*$*/ AND $timeFilter GROUP BY time($interval), \"container_name\"",
              "rawQuery": true, "refId": "A", "resultFormat": "time_series",
              "select": [ [ { "params": [ "value" ], "type": "field" }, { "params": [], "type": "mean" }, { "params": [ "10s" ], "type": "derivative" } ] ],
              "tags": [
                { "key": "container_name", "operator": "=~", "value": "/^performance_keycloak.*$*/" },
                { "condition": "AND", "key": "machine", "operator": "=~", "value": "/^$host$/" }
              ]
            }
          ],
          "thresholds": [], "timeFrom": null, "timeShift": null, "title": "CPU",
          "tooltip": { "shared": true, "sort": 0, "value_type": "individual" },
          "type": "graph",
          "xaxis": { "buckets": null, "mode": "time", "name": null, "show": true, "values": [] },
          "yaxes": [
            { "format": "hertz", "label": null, "logBase": 1, "max": null, "min": null, "show": true },
            { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true }
          ]
        },
        {
          "aliasColors": {}, "bars": false, "dashLength": 10, "dashes": false, "datasource": "influxdb_cadvisor", "fill": 1, "id": 4,
          "legend": { "alignAsTable": true, "avg": true, "current": true, "max": true, "min": true, "show": true, "total": true, "values": true },
          "lines": true, "linewidth": 1, "links": [], "nullPointMode": "connected", "percentage": false, "pointradius": 5, "points": false,
          "renderer": "flot", "seriesOverrides": [], "spaceLength": 10, "span": 6, "stack": false, "steppedLine": false,
          "targets": [
            {
              "alias": "$tag_container_name", "dsType": "influxdb",
              "groupBy": [ { "params": [ "$interval" ], "type": "time" }, { "params": [ "container_name" ], "type": "tag" } ],
              "measurement": "rx_bytes", "orderByTime": "ASC", "policy": "default",
              "query": "SELECT derivative(mean(\"value\"), 10s) FROM \"rx_bytes\" WHERE \"container_name\" =~ /^performance_keycloak.*$*/ AND $timeFilter GROUP BY time($interval), \"container_name\"",
              "rawQuery": true, "refId": "A", "resultFormat": "time_series",
              "select": [ [ { "params": [ "value" ], "type": "field" }, { "params": [], "type": "mean" }, { "params": [ "10s" ], "type": "derivative" } ] ],
              "tags": [
                { "key": "machine", "operator": "=~", "value": "/^$host$/" },
                { "condition": "AND", "key": "container_name", "operator": "=~", "value": "/^performance_keycloak.*$*/" }
              ]
            }
          ],
          "thresholds": [], "timeFrom": null, "timeShift": null, "title": "Network IN",
          "tooltip": { "shared": true, "sort": 0, "value_type": "individual" },
          "type": "graph",
          "xaxis": { "buckets": null, "mode": "time", "name": null, "show": true, "values": [] },
          "yaxes": [
            { "format": "Bps", "label": null, "logBase": 1, "max": null, "min": null, "show": true },
            { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true }
          ]
        }
      ],
      "repeat": null, "repeatIteration": null, "repeatRowId": null, "showTitle": false, "title": "Dashboard Row", "titleSize": "h6"
    },
    {
      "collapse": false,
      "height": 250,
      "panels": [
        {
          "aliasColors": {}, "bars": false, "dashLength": 10, "dashes": false, "datasource": "influxdb_cadvisor", "fill": 1, "id": 1, "interval": "",
          "legend": { "alignAsTable": true, "avg": true, "current": true, "max": true, "min": true, "rightSide": false, "show": true, "total": true, "values": true },
          "lines": true, "linewidth": 1, "links": [], "nullPointMode": "connected", "percentage": false, "pointradius": 5, "points": false,
          "renderer": "flot", "seriesOverrides": [], "spaceLength": 10, "span": 6, "stack": false, "steppedLine": false,
          "targets": [
            {
              "alias": "$tag_container_name", "dsType": "influxdb",
              "groupBy": [ { "params": [ "$__interval" ], "type": "time" }, { "params": [ "container_name" ], "type": "tag" } ],
              "measurement": "memory_usage", "orderByTime": "ASC", "policy": "default",
              "query": "SELECT mean(\"value\") FROM \"memory_usage\" WHERE \"container_name\" =~ /^performance_keycloak.*$*/ AND $timeFilter GROUP BY time($__interval), \"container_name\"",
              "rawQuery": true, "refId": "A", "resultFormat": "time_series",
              "select": [ [ { "params": [ "value" ], "type": "field" }, { "params": [], "type": "mean" } ] ],
              "tags": [
                { "key": "container_name", "operator": "=~", "value": "/^performance_keycloak.*$*/" },
                { "condition": "AND", "key": "machine", "operator": "=~", "value": "/^$host$/" }
              ]
            }
          ],
          "thresholds": [], "timeFrom": null, "timeShift": null, "title": "Memory",
          "tooltip": { "shared": true, "sort": 0, "value_type": "individual" },
          "type": "graph",
          "xaxis": { "buckets": null, "mode": "time", "name": null, "show": true, "values": [] },
          "yaxes": [
            { "format": "decbytes", "label": null, "logBase": 1, "max": null, "min": null, "show": true },
            { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true }
          ]
        },
        {
          "aliasColors": {}, "bars": false, "dashLength": 10, "dashes": false, "datasource": "influxdb_cadvisor", "fill": 1, "id": 5,
          "legend": { "alignAsTable": true, "avg": true, "current": true, "max": true, "min": true, "show": true, "total": true, "values": true },
          "lines": true, "linewidth": 1, "links": [], "nullPointMode": "connected", "percentage": false, "pointradius": 5, "points": false,
          "renderer": "flot", "seriesOverrides": [], "spaceLength": 10, "span": 6, "stack": false, "steppedLine": false,
          "targets": [
            {
              "alias": "$tag_container_name", "dsType": "influxdb",
              "groupBy": [ { "params": [ "$interval" ], "type": "time" }, { "params": [ "container_name" ], "type": "tag" } ],
              "measurement": "tx_bytes", "orderByTime": "ASC", "policy": "default",
              "query": "SELECT derivative(mean(\"value\"), 10s) FROM \"tx_bytes\" WHERE \"container_name\" =~ /^performance_keycloak.*$*/ AND $timeFilter GROUP BY time($interval), \"container_name\"",
              "rawQuery": true, "refId": "B", "resultFormat": "time_series",
              "select": [ [ { "params": [ "value" ], "type": "field" }, { "params": [], "type": "mean" }, { "params": [ "10s" ], "type": "derivative" } ] ],
              "tags": [
                { "key": "machine", "operator": "=~", "value": "/^$host$/" },
                { "condition": "AND", "key": "container_name", "operator": "=~", "value": "/^performance_keycloak.*$*/" }
              ]
            }
          ],
          "thresholds": [], "timeFrom": null, "timeShift": null, "title": "Network OUT",
          "tooltip": { "shared": true, "sort": 0, "value_type": "individual" },
          "type": "graph",
          "xaxis": { "buckets": null, "mode": "time", "name": null, "show": true, "values": [] },
          "yaxes": [
            { "format": "Bps", "label": null, "logBase": 1, "max": null, "min": null, "show": true },
            { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true }
          ]
        }
      ],
      "repeat": null, "repeatIteration": null, "repeatRowId": null, "showTitle": false, "title": "Dashboard Row", "titleSize": "h6"
    }
  ],
  "schemaVersion": 14,
  "style": "dark",
  "tags": [],
  "templating": { "list": [] },
  "time": { "from": "now-5m", "to": "now" },
  "timepicker": {
    "refresh_intervals": [ "5s", "10s", "30s", "1m", "5m", "15m", "30m", "1h", "2h", "1d" ],
    "time_options": [ "5m", "15m", "1h", "6h", "12h", "24h", "2d", "7d", "30d" ]
  },
  "timezone": "browser",
  "title": "Resource Usage - Combined",
  "version": 0
}
@@ -0,0 +1,669 @@
{
  "__inputs": [
    { "name": "DS_INFLUXDB_CADVISOR", "label": "influxdb_cadvisor", "description": "", "type": "datasource", "pluginId": "influxdb", "pluginName": "InfluxDB" }
  ],
  "__requires": [
    { "type": "grafana", "id": "grafana", "name": "Grafana", "version": "4.4.3" },
    { "type": "panel", "id": "graph", "name": "Graph", "version": "" },
    { "type": "datasource", "id": "influxdb", "name": "InfluxDB", "version": "1.0.0" }
  ],
  "annotations": { "list": [] },
  "editable": true,
  "gnetId": null,
  "graphTooltip": 0,
  "hideControls": false,
  "id": null,
  "links": [],
  "refresh": "5s",
  "rows": [
    {
      "collapse": false,
      "height": 250,
      "panels": [
        {
          "aliasColors": {}, "bars": false, "dashLength": 10, "dashes": false, "datasource": "influxdb_cadvisor", "fill": 1, "id": 2,
          "legend": { "alignAsTable": true, "avg": true, "current": true, "max": true, "min": true, "show": true, "total": true, "values": true },
          "lines": true, "linewidth": 1, "links": [], "nullPointMode": "connected", "percentage": false, "pointradius": 5, "points": false,
          "renderer": "flot", "seriesOverrides": [], "spaceLength": 10, "span": 6, "stack": false, "steppedLine": false,
          "targets": [
            {
              "alias": "$tag_container_name", "dsType": "influxdb",
              "groupBy": [ { "params": [ "$interval" ], "type": "time" }, { "params": [ "machine" ], "type": "tag" }, { "params": [ "container_name" ], "type": "tag" } ],
              "measurement": "cpu_usage_total", "orderByTime": "ASC", "policy": "default",
              "refId": "A", "resultFormat": "time_series",
              "select": [ [ { "params": [ "value" ], "type": "field" }, { "params": [], "type": "mean" }, { "params": [ "10s" ], "type": "derivative" } ] ],
              "tags": [
                { "key": "container_name", "operator": "=~", "value": "/^$container$*/" },
                { "condition": "AND", "key": "machine", "operator": "=~", "value": "/^$host$/" }
              ]
            }
          ],
          "thresholds": [], "timeFrom": null, "timeShift": null, "title": "CPU",
          "tooltip": { "shared": true, "sort": 0, "value_type": "individual" },
          "type": "graph",
          "xaxis": { "buckets": null, "mode": "time", "name": null, "show": true, "values": [] },
          "yaxes": [
            { "format": "hertz", "label": null, "logBase": 1, "max": null, "min": null, "show": true },
            { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true }
          ]
        },
        {
          "aliasColors": {}, "bars": false, "dashLength": 10, "dashes": false, "datasource": "influxdb_cadvisor", "fill": 1, "id": 4,
          "legend": { "alignAsTable": true, "avg": true, "current": true, "max": true, "min": true, "show": true, "total": true, "values": true },
          "lines": true, "linewidth": 1, "links": [], "nullPointMode": "connected", "percentage": false, "pointradius": 5, "points": false,
          "renderer": "flot", "seriesOverrides": [], "spaceLength": 10, "span": 6, "stack": false, "steppedLine": false,
          "targets": [
            {
              "alias": "$tag_container_name", "dsType": "influxdb",
              "groupBy": [ { "params": [ "$interval" ], "type": "time" }, { "params": [ "container_name" ], "type": "tag" }, { "params": [ "machine" ], "type": "tag" } ],
              "measurement": "rx_bytes", "orderByTime": "ASC", "policy": "default",
              "refId": "A", "resultFormat": "time_series",
              "select": [ [ { "params": [ "value" ], "type": "field" }, { "params": [], "type": "mean" }, { "params": [ "10s" ], "type": "derivative" } ] ],
              "tags": [
                { "key": "machine", "operator": "=~", "value": "/^$host$/" },
                { "condition": "AND", "key": "container_name", "operator": "=~", "value": "/^$container$*/" }
              ]
            }
          ],
          "thresholds": [], "timeFrom": null, "timeShift": null, "title": "Network IN",
          "tooltip": { "shared": true, "sort": 0, "value_type": "individual" },
          "type": "graph",
          "xaxis": { "buckets": null, "mode": "time", "name": null, "show": true, "values": [] },
          "yaxes": [
            { "format": "Bps", "label": null, "logBase": 1, "max": null, "min": null, "show": true },
            { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true }
          ]
        }
      ],
      "repeat": null, "repeatIteration": null, "repeatRowId": null, "showTitle": false, "title": "Dashboard Row", "titleSize": "h6"
    },
    {
      "collapse": false,
      "height": 250,
      "panels": [
        {
          "aliasColors": {}, "bars": false, "dashLength": 10, "dashes": false, "datasource": "influxdb_cadvisor", "fill": 1, "id": 1, "interval": "",
          "legend": { "alignAsTable": true, "avg": true, "current": true, "max": true, "min": true, "rightSide": false, "show": true, "total": true, "values": true },
          "lines": true, "linewidth": 1, "links": [], "nullPointMode": "connected", "percentage": false, "pointradius": 5, "points": false,
          "renderer": "flot", "seriesOverrides": [], "spaceLength": 10, "span": 6, "stack": false, "steppedLine": false,
          "targets": [
            {
              "alias": "$tag_container_name", "dsType": "influxdb",
              "groupBy": [ { "params": [ "$__interval" ], "type": "time" }, { "params": [ "machine" ], "type": "tag" }, { "params": [ "container_name" ], "type": "tag" } ],
              "measurement": "memory_usage", "orderByTime": "ASC", "policy": "default",
              "query": "SELECT \"value\" FROM \"memory_usage\" WHERE \"container_name\" =~ /^$container$/ AND \"machine\" =~ /^$host$/ AND $timeFilter",
              "rawQuery": false, "refId": "A", "resultFormat": "time_series",
              "select": [ [ { "params": [ "value" ], "type": "field" }, { "params": [], "type": "mean" } ] ],
              "tags": [
                { "key": "container_name", "operator": "=~", "value": "/^$container$*/" },
                { "condition": "AND", "key": "machine", "operator": "=~", "value": "/^$host$/" }
              ]
            }
          ],
          "thresholds": [], "timeFrom": null, "timeShift": null, "title": "Memory",
          "tooltip": { "shared": true, "sort": 0, "value_type": "individual" },
          "type": "graph",
          "xaxis": { "buckets": null, "mode": "time", "name": null, "show": true, "values": [] },
          "yaxes": [
            { "format": "decbytes", "label": null, "logBase": 1, "max": null, "min": null, "show": true },
            { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true }
          ]
        },
        {
          "aliasColors": {}, "bars": false, "dashLength": 10, "dashes": false, "datasource": "influxdb_cadvisor", "fill": 1, "id": 5,
          "legend": { "alignAsTable": true, "avg": true, "current": true, "max": true, "min": true, "show": true, "total": true, "values": true },
          "lines": true, "linewidth": 1, "links": [], "nullPointMode": "connected", "percentage": false, "pointradius": 5, "points": false,
          "renderer": "flot", "seriesOverrides": [], "spaceLength": 10, "span": 6, "stack": false, "steppedLine": false,
          "targets": [
            {
              "alias": "$tag_container_name", "dsType": "influxdb",
              "groupBy": [ { "params": [ "$interval" ], "type": "time" }, { "params": [ "container_name" ], "type": "tag" }, { "params": [ "machine" ], "type": "tag" } ],
              "measurement": "tx_bytes", "orderByTime": "ASC", "policy": "default",
              "refId": "B", "resultFormat": "time_series",
              "select": [ [ { "params": [ "value" ], "type": "field" }, { "params": [], "type": "mean" }, { "params": [ "10s" ], "type": "derivative" } ] ],
              "tags": [
                { "key": "machine", "operator": "=~", "value": "/^$host$/" },
                { "condition": "AND", "key": "container_name", "operator": "=~", "value": "/^$container$*/" }
              ]
            }
          ],
          "thresholds": [], "timeFrom": null, "timeShift": null, "title": "Network OUT",
          "tooltip": { "shared": true, "sort": 0, "value_type": "individual" },
          "type": "graph",
          "xaxis": { "buckets": null, "mode": "time", "name": null, "show": true, "values": [] },
          "yaxes": [
            { "format": "Bps", "label": null, "logBase": 1, "max": null, "min": null, "show": true },
            { "format": "short", "label": null, "logBase": 1, "max": null, "min": null, "show": true }
          ]
        }
      ],
      "repeat": null, "repeatIteration": null, "repeatRowId": null, "showTitle": false, "title": "Dashboard Row", "titleSize": "h6"
    }
  ],
  "schemaVersion": 14,
  "style": "dark",
  "tags": [],
  "templating": {
    "list": [
      {
        "allValue": "", "current": {}, "datasource": "influxdb_cadvisor", "hide": 0, "includeAll": true, "label": "Host", "multi": false, "name": "host", "options": [],
        "query": "show tag values with key = \"machine\"", "refresh": 1, "regex": "", "sort": 0, "tagValuesQuery": "", "tags": [], "tagsQuery": "", "type": "query", "useTags": false
      },
      {
        "allValue": null, "current": {}, "datasource": "influxdb_cadvisor", "hide": 0, "includeAll": false, "label": "Container", "multi": false, "name": "container", "options": [],
        "query": "show tag values with key = \"container_name\" WHERE machine =~ /^$host$/", "refresh": 1, "regex": "/([^.]+)/", "sort": 0, "tagValuesQuery": "", "tags": [], "tagsQuery": "", "type": "query", "useTags": false
      }
    ]
  },
  "time": { "from": "now-5m", "to": "now" },
  "timepicker": {
    "refresh_intervals": [ "5s", "10s", "30s", "1m", "5m", "15m", "30m", "1h", "2h", "1d" ],
    "time_options": [ "5m", "15m", "1h", "6h", "12h", "24h", "2d", "7d", "30d" ]
  },
  "timezone": "browser",
  "title": "Resource Usage - Per Container",
  "version": 2
}
39
testsuite/performance/pom.xml
Normal file
@@ -0,0 +1,39 @@
<?xml version="1.0"?>
<!--
  ~ Copyright 2016 Red Hat, Inc. and/or its affiliates
  ~ and other contributors as indicated by the @author tags.
  ~
  ~ Licensed under the Apache License, Version 2.0 (the "License");
  ~ you may not use this file except in compliance with the License.
  ~ You may obtain a copy of the License at
  ~
  ~ http://www.apache.org/licenses/LICENSE-2.0
  ~
  ~ Unless required by applicable law or agreed to in writing, software
  ~ distributed under the License is distributed on an "AS IS" BASIS,
  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  ~ See the License for the specific language governing permissions and
  ~ limitations under the License.
-->

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <parent>
        <artifactId>keycloak-testsuite-pom</artifactId>
        <groupId>org.keycloak</groupId>
        <version>3.4.0.CR1-SNAPSHOT</version>
        <relativePath>../pom.xml</relativePath>
    </parent>
    <modelVersion>4.0.0</modelVersion>

    <groupId>org.keycloak.testsuite</groupId>
    <artifactId>performance</artifactId>
    <name>Keycloak Performance TestSuite</name>
    <packaging>pom</packaging>

    <modules>
        <module>keycloak</module>
        <module>tests</module>
    </modules>

</project>
24
testsuite/performance/prepare-dump.sh
Executable file
@@ -0,0 +1,24 @@
#!/bin/bash
DIRNAME=`dirname "$0"`
GATLING_HOME=$DIRNAME/tests

if [ -z "$DATASET" ]; then
    echo "This script requires DATASET env variable to be set"
    exit 1
fi

./prepare-data.sh $@
if [ $? -ne 0 ]; then
    echo "Failed! See log file for details."
    exit 1
fi

echo "Exporting dump file"
docker exec performance_mariadb_1 /usr/bin/mysqldump -u root --password=root keycloak > $DATASET.sql
if [ $? -ne 0 ]; then
    echo "Failed!"
    exit 1
fi

gzip $DATASET.sql
mv $DATASET.sql.gz $GATLING_HOME/datasets/
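Example invocation (the dataset name is illustrative; it should match a properties file under `tests/datasets/`):

```
DATASET=100u ./prepare-dump.sh
```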
@@ -0,0 +1,8 @@
numOfRealms=100
usersPerRealm=100
clientsPerRealm=2
realmRoles=2
realmRolesPerUser=2
clientRolesPerUser=2
clientRolesPerClient=2
hashIterations=27500
@@ -0,0 +1,8 @@
numOfRealms=100
usersPerRealm=2
clientsPerRealm=2
realmRoles=2
realmRolesPerUser=2
clientRolesPerUser=2
clientRolesPerClient=2
hashIterations=27500
8
testsuite/performance/tests/datasets/100u.properties
Normal file
@@ -0,0 +1,8 @@
numOfRealms=1
usersPerRealm=100
clientsPerRealm=2
realmRoles=2
realmRolesPerUser=2
clientRolesPerUser=2
clientRolesPerClient=2
hashIterations=27500
8
testsuite/performance/tests/datasets/200ku2kc.properties
Normal file
@@ -0,0 +1,8 @@
numOfRealms=1
usersPerRealm=200000
clientsPerRealm=2000
realmRoles=2
realmRolesPerUser=2
clientRolesPerUser=2
clientRolesPerClient=2
hashIterations=27500
8
testsuite/performance/tests/datasets/default.properties
Normal file
@@ -0,0 +1,8 @@
numOfRealms=1
usersPerRealm=2
clientsPerRealm=2
realmRoles=2
realmRolesPerUser=2
clientRolesPerUser=2
clientRolesPerClient=2
hashIterations=27500
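These files are selected through the `dataset` Maven property; the `tests` POM below reads `datasets/${dataset}.properties` during the `initialize` phase. A run against a different dataset is then just (a sketch following the testsuite's usual `mvn verify` pattern):

```
mvn verify -Ddataset=100u
```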
628
testsuite/performance/tests/pom.xml
Normal file
@@ -0,0 +1,628 @@
<?xml version="1.0"?>
<!--
  ~ Copyright 2016 Red Hat, Inc. and/or its affiliates
  ~ and other contributors as indicated by the @author tags.
  ~
  ~ Licensed under the Apache License, Version 2.0 (the "License");
  ~ you may not use this file except in compliance with the License.
  ~ You may obtain a copy of the License at
  ~
  ~ http://www.apache.org/licenses/LICENSE-2.0
  ~
  ~ Unless required by applicable law or agreed to in writing, software
  ~ distributed under the License is distributed on an "AS IS" BASIS,
  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  ~ See the License for the specific language governing permissions and
  ~ limitations under the License.
-->

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <parent>
        <groupId>org.keycloak.testsuite</groupId>
        <artifactId>performance</artifactId>
        <version>3.4.0.CR1-SNAPSHOT</version>
        <relativePath>../pom.xml</relativePath>
    </parent>
    <modelVersion>4.0.0</modelVersion>

    <artifactId>performance-tests</artifactId>
    <name>Keycloak Performance TestSuite - Tests</name>

    <description>
    </description>

    <properties>
        <compose.file>docker-compose.yml</compose.file>
        <compose.up.params/>
        <compose.restart.params>keycloak</compose.restart.params>
        <keycloak.server.uris>http://localhost:8080/auth</keycloak.server.uris>
        <db.url>jdbc:mariadb://keycloak:keycloak@localhost:3306/keycloak</db.url>

        <keycloak.jvm.memory>-Xms64m -Xmx512m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m</keycloak.jvm.memory>
        <keycloak.http.max-connections>500</keycloak.http.max-connections>
        <keycloak.ajp.max-connections>500</keycloak.ajp.max-connections>
        <keycloak.worker.io-threads>2</keycloak.worker.io-threads>
        <keycloak.worker.task-max-threads>16</keycloak.worker.task-max-threads>
        <keycloak.ds.min-pool-size>10</keycloak.ds.min-pool-size>
        <keycloak.ds.max-pool-size>100</keycloak.ds.max-pool-size>
        <keycloak.ds.pool-prefill>true</keycloak.ds.pool-prefill>
        <keycloak.ds.ps-cache-size>100</keycloak.ds.ps-cache-size>

        <keycloak-lb.jvm.memory>-Xms64m -Xmx512m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m</keycloak-lb.jvm.memory>
        <keycloak-lb.http.max-connections>500</keycloak-lb.http.max-connections>
        <keycloak-lb.worker.io-threads>2</keycloak-lb.worker.io-threads>
        <keycloak-lb.worker.task-max-threads>16</keycloak-lb.worker.task-max-threads>

        <infinispan.jvm.memory>-Xms64m -Xmx512m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -XX:+DisableExplicitGC</infinispan.jvm.memory>

        <dataset>default</dataset>
        <numOfWorkers>1</numOfWorkers>

        <maven.compiler.target>1.8</maven.compiler.target>
        <maven.compiler.source>1.8</maven.compiler.source>

        <scala.version>2.11.7</scala.version>
        <gatling.version>2.1.7</gatling.version>
        <gatling-plugin.version>2.2.1</gatling-plugin.version>
        <scala-maven-plugin.version>3.2.2</scala-maven-plugin.version>
        <jboss-logging.version>3.3.0.Final</jboss-logging.version>

        <gatling.simulationClass>keycloak.DefaultSimulation</gatling.simulationClass>
        <gatling.skip.run>true</gatling.skip.run>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.keycloak</groupId>
            <artifactId>keycloak-adapter-core</artifactId>
        </dependency>
        <dependency>
            <groupId>org.keycloak</groupId>
            <artifactId>keycloak-adapter-spi</artifactId>
        </dependency>
        <dependency>
            <groupId>org.keycloak</groupId>
            <artifactId>keycloak-core</artifactId>
        </dependency>
        <dependency>
            <groupId>org.keycloak</groupId>
            <artifactId>keycloak-common</artifactId>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-core</artifactId>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-databind</artifactId>
        </dependency>
        <dependency>
            <groupId>org.keycloak</groupId>
            <artifactId>keycloak-admin-client</artifactId>
        </dependency>
        <dependency>
            <groupId>org.jboss.spec.javax.ws.rs</groupId>
            <artifactId>jboss-jaxrs-api_2.0_spec</artifactId>
        </dependency>
        <dependency>
            <groupId>org.jboss.logging</groupId>
            <artifactId>jboss-logging</artifactId>
        </dependency>
        <dependency>
            <groupId>org.jboss.resteasy</groupId>
            <artifactId>resteasy-jaxrs</artifactId>
        </dependency>
        <dependency>
            <groupId>org.jboss.resteasy</groupId>
            <artifactId>resteasy-client</artifactId>
        </dependency>
        <dependency>
            <groupId>org.jboss.resteasy</groupId>
            <artifactId>resteasy-jackson2-provider</artifactId>
        </dependency>
        <dependency>
            <groupId>org.apache.httpcomponents</groupId>
            <artifactId>httpclient</artifactId>
        </dependency>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>${scala.version}</version>
        </dependency>
        <dependency>
            <groupId>io.gatling.highcharts</groupId>
            <artifactId>gatling-charts-highcharts</artifactId>
            <version>${gatling.version}</version>
        </dependency>
    </dependencies>

    <build>
        <testResources>
            <testResource>
                <directory>src/test/resources</directory>
                <filtering>true</filtering>
            </testResource>
        </testResources>
        <plugins>
            <plugin>
                <groupId>net.alchim31.maven</groupId>
                <artifactId>scala-maven-plugin</artifactId>
                <version>${scala-maven-plugin.version}</version>

                <executions>
                    <execution>
                        <id>add-source</id>
                        <!--phase>process-resources</phase-->
                        <goals>
                            <goal>add-source</goal>
                            <!--goal>compile</goal-->
                        </goals>
                    </execution>
                    <execution>
                        <id>compile</id>
                        <!--phase>process-test-resources</phase-->
                        <goals>
                            <goal>compile</goal>
                            <goal>testCompile</goal>
                        </goals>
                        <configuration>
                            <args>
                                <arg>-target:jvm-1.8</arg>
                                <arg>-deprecation</arg>
                                <arg>-feature</arg>
                                <arg>-unchecked</arg>
                                <arg>-language:implicitConversions</arg>
                                <arg>-language:postfixOps</arg>
                            </args>
                        </configuration>
                    </execution>
                </executions>
            </plugin>

            <plugin>
                <!--
                    Execute test directly by using:

                    mvn gatling:execute -f testsuite/performance/gatling -Dgatling.simulationClass=keycloak.DemoSimulation2

                    For more usage info see: http://gatling.io/docs/current/extensions/maven_plugin/
                -->
                <groupId>io.gatling</groupId>
                <artifactId>gatling-maven-plugin</artifactId>
                <version>${gatling-plugin.version}</version>

                <configuration>
                    <configFolder>${project.build.testOutputDirectory}</configFolder>
                    <skip>${gatling.skip.run}</skip>
                    <disableCompiler>true</disableCompiler>
                    <runMultipleSimulations>true</runMultipleSimulations>
                    <!--includes>
                        <include>keycloak.DemoSimulation2</include>
                    </includes-->
                    <jvmArgs>
                        <param>-DnumOfRealms=${numOfRealms}</param>
                        <param>-DusersPerRealm=${usersPerRealm}</param>
                        <param>-DclientsPerRealm=${clientsPerRealm}</param>
                        <param>-DrealmRoles=${realmRoles}</param>
                        <param>-DrealmRolesPerUser=${realmRolesPerUser}</param>
                        <param>-DclientRolesPerUser=${clientRolesPerUser}</param>
                        <param>-DclientRolesPerClient=${clientRolesPerClient}</param>
                        <param>-DhashIterations=${hashIterations}</param>
                    </jvmArgs>
                </configuration>

                <executions>
                    <execution>
                        <goals>
                            <goal>integration-test</goal>
                        </goals>
                        <phase>integration-test</phase>
                    </execution>
                </executions>
            </plugin>

            <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>properties-maven-plugin</artifactId>
                <version>1.0.0</version>
                <executions>
                    <execution>
                        <id>read-dataset-properties</id>
                        <phase>initialize</phase>
                        <goals>
                            <goal>read-project-properties</goal>
                        </goals>
                        <configuration>
                            <files>
                                <file>${project.basedir}/datasets/${dataset}.properties</file>
                            </files>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

    <profiles>
        <profile>
            <id>initialize-dataset-properties</id>
            <activation>
                <property>
                    <name>dataset</name>
                </property>
            </activation>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.codehaus.mojo</groupId>
                        <artifactId>properties-maven-plugin</artifactId>
                        <version>1.0.0</version>
                        <executions>
                            <execution>
                                <id>initialize-dataset-properties</id>
                                <phase>initialize</phase>
                                <goals>
                                    <goal>read-project-properties</goal>
                                </goals>
                                <configuration>
                                    <files>
                                        <file>${project.basedir}/datasets/${dataset}.properties</file>
                                    </files>
                                </configuration>
                            </execution>
                        </executions>
                    </plugin>

                </plugins>
            </build>
        </profile>

        <profile>
|
||||
<id>cluster</id>
|
||||
<properties>
|
||||
<compose.file>docker-compose-cluster.yml</compose.file>
|
||||
<keycloak.scale>1</keycloak.scale>
|
||||
<compose.up.params>--scale keycloak=${keycloak.scale}</compose.up.params>
|
||||
<keycloak.server.uris>http://localhost:8080/auth</keycloak.server.uris>
|
||||
</properties>
|
||||
</profile>
|
||||
<profile>
|
||||
<id>crossdc</id>
|
||||
<properties>
|
||||
<compose.file>docker-compose-crossdc.yml</compose.file>
|
||||
<keycloak.dc1.scale>1</keycloak.dc1.scale>
|
||||
<keycloak.dc2.scale>1</keycloak.dc2.scale>
|
||||
<compose.up.params>--scale keycloak_dc1=${keycloak.dc1.scale} --scale keycloak_dc2=${keycloak.dc2.scale}</compose.up.params>
|
||||
<compose.restart.params>keycloak_dc1 keycloak_dc2</compose.restart.params>
|
||||
<keycloak.server.uris>http://localhost:8081/auth http://localhost:8082/auth</keycloak.server.uris>
|
||||
</properties>
|
||||
</profile>
|
||||
|
||||
|
||||
<profile>
|
||||
<id>provision</id>
|
||||
<build>
|
||||
<plugins>
|
||||
<plugin>
|
||||
<groupId>org.codehaus.mojo</groupId>
|
||||
<artifactId>exec-maven-plugin</artifactId>
|
||||
<executions>
|
||||
<execution>
|
||||
<id>docker-compose-up</id>
|
||||
<phase>pre-integration-test</phase>
|
||||
<goals>
|
||||
<goal>exec</goal>
|
||||
</goals>
|
||||
<configuration>
|
||||
<workingDirectory>${project.basedir}/..</workingDirectory>
|
||||
<executable>docker-compose</executable>
|
||||
<commandlineArgs>-f ${compose.file} up -d --build ${compose.up.params}</commandlineArgs>
|
||||
<environmentVariables>
|
||||
<KEYCLOAK_VERSION>${project.version}</KEYCLOAK_VERSION>
|
||||
|
||||
<KEYCLOAK_JVM_MEMORY>${keycloak.jvm.memory}</KEYCLOAK_JVM_MEMORY>
|
||||
<KEYCLOAK_HTTP_MAX_CONNECTIONS>${keycloak.http.max-connections}</KEYCLOAK_HTTP_MAX_CONNECTIONS>
|
||||
<KEYCLOAK_AJP_MAX_CONNECTIONS>${keycloak.ajp.max-connections}</KEYCLOAK_AJP_MAX_CONNECTIONS>
|
||||
<KEYCLOAK_WORKER_IO_THREADS>${keycloak.worker.io-threads}</KEYCLOAK_WORKER_IO_THREADS>
|
||||
<KEYCLOAK_WORKER_TASK_MAX_THREADS>${keycloak.worker.task-max-threads}</KEYCLOAK_WORKER_TASK_MAX_THREADS>
|
||||
<KEYCLOAK_DS_MIN_POOL_SIZE>${keycloak.ds.min-pool-size}</KEYCLOAK_DS_MIN_POOL_SIZE>
|
||||
<KEYCLOAK_DS_MAX_POOL_SIZE>${keycloak.ds.max-pool-size}</KEYCLOAK_DS_MAX_POOL_SIZE>
|
||||
<KEYCLOAK_DS_POOL_PREFILL>${keycloak.ds.pool-prefill}</KEYCLOAK_DS_POOL_PREFILL>
|
||||
<KEYCLOAK_DS_PS_CACHE_SIZE>${keycloak.ds.ps-cache-size}</KEYCLOAK_DS_PS_CACHE_SIZE>
|
||||
|
||||
<KEYCLOAK_LB_JVM_MEMORY>${keycloak-lb.jvm.memory}</KEYCLOAK_LB_JVM_MEMORY>
|
||||
<KEYCLOAK_LB_HTTP_MAX_CONNECTIONS>${keycloak-lb.http.max-connections}</KEYCLOAK_LB_HTTP_MAX_CONNECTIONS>
|
||||
<KEYCLOAK_LB_WORKER_IO_THREADS>${keycloak-lb.worker.io-threads}</KEYCLOAK_LB_WORKER_IO_THREADS>
|
||||
<KEYCLOAK_LB_WORKER_TASK_MAX_THREADS>${keycloak-lb.worker.task-max-threads}</KEYCLOAK_LB_WORKER_TASK_MAX_THREADS>
|
||||
|
||||
<INFINISPAN_JVM_MEMORY>${infinispan.jvm.memory}</INFINISPAN_JVM_MEMORY>
|
||||
</environmentVariables>
|
||||
</configuration>
|
||||
</execution>
|
||||
<execution>
|
||||
<id>healthcheck</id>
|
||||
<phase>pre-integration-test</phase>
|
||||
<goals>
|
||||
<goal>exec</goal>
|
||||
</goals>
|
||||
<configuration>
|
||||
<executable>./healthcheck.sh</executable>
|
||||
<workingDirectory>${project.basedir}/..</workingDirectory>
|
||||
<environmentVariables>
|
||||
<KEYCLOAK_SERVER_URIS>${keycloak.server.uris}</KEYCLOAK_SERVER_URIS>
|
||||
</environmentVariables>
|
||||
</configuration>
|
||||
</execution>
|
||||
</executions>
|
||||
</plugin>
|
||||
<plugin>
|
||||
<artifactId>maven-antrun-plugin</artifactId>
|
||||
<executions>
|
||||
<execution>
|
||||
<id>write-provisioned-system-properties</id>
|
||||
<phase>pre-integration-test</phase>
|
||||
<goals>
|
||||
<goal>run</goal>
|
||||
</goals>
|
||||
<configuration>
|
||||
<target>
|
||||
<propertyfile file="${project.build.directory}/provisioned-system.properties">
|
||||
<entry key="keycloak.server.uris" value="${keycloak.server.uris}"/>
|
||||
</propertyfile>
|
||||
</target>
|
||||
</configuration>
|
||||
</execution>
|
||||
</executions>
|
||||
</plugin>
|
||||
</plugins>
|
||||
</build>
|
||||
</profile>
|
||||
|
||||
<profile>
|
||||
<id>import-data</id>
|
||||
<build>
|
||||
<plugins>
|
||||
<plugin>
|
||||
<groupId>org.codehaus.mojo</groupId>
|
||||
<artifactId>exec-maven-plugin</artifactId>
|
||||
<executions>
|
||||
<execution>
|
||||
<id>generate-data</id>
|
||||
<phase>pre-integration-test</phase>
|
||||
<goals>
|
||||
<goal>exec</goal>
|
||||
</goals>
|
||||
<configuration>
|
||||
<executable>java</executable>
|
||||
<workingDirectory>${project.build.directory}</workingDirectory>
|
||||
<arguments>
|
||||
<argument>-classpath</argument>
|
||||
<classpath/>
|
||||
<argument>-DnumOfRealms=${numOfRealms}</argument>
|
||||
<argument>-DusersPerRealm=${usersPerRealm}</argument>
|
||||
<argument>-DclientsPerRealm=${clientsPerRealm}</argument>
|
||||
<argument>-DrealmRoles=${realmRoles}</argument>
|
||||
<argument>-DrealmRolesPerUser=${realmRolesPerUser}</argument>
|
||||
<argument>-DclientRolesPerUser=${clientRolesPerUser}</argument>
|
||||
<argument>-DclientRolesPerClient=${clientRolesPerClient}</argument>
|
||||
<argument>-DhashIterations=${hashIterations}</argument>
|
||||
<argument>org.keycloak.performance.RealmsConfigurationBuilder</argument>
|
||||
</arguments>
|
||||
</configuration>
|
||||
</execution>
|
||||
<execution>
|
||||
<id>load-data</id>
|
||||
<phase>pre-integration-test</phase>
|
||||
<goals>
|
||||
<goal>exec</goal>
|
||||
</goals>
|
||||
<configuration>
|
||||
<executable>java</executable>
|
||||
<workingDirectory>${project.build.directory}</workingDirectory>
|
||||
<arguments>
|
||||
<argument>-classpath</argument>
|
||||
<classpath/>
|
||||
<argument>-DnumOfWorkers=${numOfWorkers}</argument>
|
||||
<argument>org.keycloak.performance.RealmsConfigurationLoader</argument>
|
||||
<argument>benchmark-realms.json</argument>
|
||||
</arguments>
|
||||
</configuration>
|
||||
</execution>
|
||||
</executions>
|
||||
</plugin>
|
||||
<plugin>
|
||||
<artifactId>maven-antrun-plugin</artifactId>
|
||||
<executions>
|
||||
<execution>
|
||||
<id>write-imported-dataset-properties</id>
|
||||
<phase>pre-integration-test</phase>
|
||||
<goals>
|
||||
<goal>run</goal>
|
||||
</goals>
|
||||
<configuration>
|
||||
<target>
|
||||
<propertyfile file="${project.build.directory}/imported-dataset.properties">
|
||||
<entry key="numOfRealms" value="${numOfRealms}"/>
|
||||
<entry key="usersPerRealm" value="${usersPerRealm}"/>
|
||||
<entry key="clientsPerRealm" value="${clientsPerRealm}"/>
|
||||
<entry key="realmRoles" value="${realmRoles}"/>
|
||||
<entry key="realmRolesPerUser" value="${realmRolesPerUser}"/>
|
||||
<entry key="clientRolesPerUser" value="${clientRolesPerUser}"/>
|
||||
<entry key="clientRolesPerClient" value="${clientRolesPerClient}"/>
|
||||
<entry key="hashIterations" value="${hashIterations}"/>
|
||||
</propertyfile>
|
||||
</target>
|
||||
</configuration>
|
||||
</execution>
|
||||
</executions>
|
||||
</plugin>
|
||||
</plugins>
|
||||
</build>
|
||||
</profile>
|
||||
|
||||
<profile>
|
||||
<id>import-dump</id>
|
||||
<build>
|
||||
<plugins>
|
||||
<plugin>
|
||||
<groupId>org.codehaus.mojo</groupId>
|
||||
<artifactId>exec-maven-plugin</artifactId>
|
||||
<executions>
|
||||
<execution>
|
||||
<id>load-dump</id>
|
||||
<phase>pre-integration-test</phase>
|
||||
<goals>
|
||||
<goal>exec</goal>
|
||||
</goals>
|
||||
<configuration>
|
||||
<workingDirectory>${project.basedir}/..</workingDirectory>
|
||||
<executable>./load-dump.sh</executable>
|
||||
|
||||
<environmentVariables>
|
||||
<DATASET>${dataset}</DATASET>
|
||||
</environmentVariables>
|
||||
</configuration>
|
||||
</execution>
|
||||
<execution>
|
||||
<id>restart-keycloak</id>
|
||||
<phase>pre-integration-test</phase>
|
||||
<goals>
|
||||
<goal>exec</goal>
|
||||
</goals>
|
||||
<configuration>
|
||||
<workingDirectory>${project.basedir}/..</workingDirectory>
|
||||
<executable>docker-compose</executable>
|
||||
<commandlineArgs>-f ${compose.file} restart ${compose.restart.params}</commandlineArgs>
|
||||
</configuration>
|
||||
</execution>
|
||||
<execution>
|
||||
<id>healthcheck</id>
|
||||
<phase>pre-integration-test</phase>
|
||||
<goals>
|
||||
<goal>exec</goal>
|
||||
</goals>
|
||||
<configuration>
|
||||
<executable>./healthcheck.sh</executable>
|
||||
<workingDirectory>${project.basedir}/..</workingDirectory>
|
||||
<environmentVariables>
|
||||
<KEYCLOAK_SERVER_URIS>${keycloak.server.uris}</KEYCLOAK_SERVER_URIS>
|
||||
</environmentVariables>
|
||||
</configuration>
|
||||
</execution>
|
||||
</executions>
|
||||
</plugin>
|
||||
</plugins>
|
||||
</build>
|
||||
</profile>
|
||||
|
||||
<profile>
|
||||
<id>test</id>
|
||||
<properties>
|
||||
<gatling.skip.run>false</gatling.skip.run>
|
||||
</properties>
|
||||
</profile>
|
||||
|
||||
<profile>
|
||||
<id>teardown</id>
|
||||
<properties>
|
||||
<volumes.arg>-v</volumes.arg>
|
||||
</properties>
|
||||
<build>
|
||||
<plugins>
|
||||
<plugin>
|
||||
<groupId>org.codehaus.mojo</groupId>
|
||||
<artifactId>exec-maven-plugin</artifactId>
|
||||
<executions>
|
||||
<execution>
|
||||
<id>docker-compose-down</id>
|
||||
<phase>post-integration-test</phase>
|
||||
<goals>
|
||||
<goal>exec</goal>
|
||||
</goals>
|
||||
<configuration>
|
||||
<workingDirectory>${project.basedir}/..</workingDirectory>
|
||||
<executable>docker-compose</executable>
|
||||
<commandlineArgs>-f ${compose.file} down ${volumes.arg}</commandlineArgs>
|
||||
<environmentVariables>
|
||||
<KEYCLOAK_VERSION>${project.version}</KEYCLOAK_VERSION>
|
||||
</environmentVariables>
|
||||
</configuration>
|
||||
</execution>
|
||||
</executions>
|
||||
</plugin>
|
||||
</plugins>
|
||||
</build>
|
||||
</profile>
|
||||
<profile>
|
||||
<id>keep-data</id>
|
||||
<properties>
|
||||
<volumes.arg/>
|
||||
</properties>
|
||||
</profile>
|
||||
|
||||
|
||||
<profile>
|
||||
<id>monitoring</id>
|
||||
<build>
|
||||
<plugins>
|
||||
<plugin>
|
||||
<groupId>org.codehaus.mojo</groupId>
|
||||
<artifactId>exec-maven-plugin</artifactId>
|
||||
<executions>
|
||||
<execution>
|
||||
<id>monitoring-docker-compose-up</id>
|
||||
<phase>pre-integration-test</phase>
|
||||
<goals>
|
||||
<goal>exec</goal>
|
||||
</goals>
|
||||
<configuration>
|
||||
<workingDirectory>${project.basedir}/..</workingDirectory>
|
||||
<executable>docker-compose</executable>
|
||||
<commandlineArgs>-f docker-compose-monitoring.yml up -d --build</commandlineArgs>
|
||||
</configuration>
|
||||
</execution>
|
||||
</executions>
|
||||
</plugin>
|
||||
</plugins>
|
||||
</build>
|
||||
</profile>
|
||||
<profile>
|
||||
<id>monitoring-off</id>
|
||||
<properties>
|
||||
<monitoring.volumes.arg/>
|
||||
</properties>
|
||||
<build>
|
||||
<plugins>
|
||||
<plugin>
|
||||
<groupId>org.codehaus.mojo</groupId>
|
||||
<artifactId>exec-maven-plugin</artifactId>
|
||||
<executions>
|
||||
<execution>
|
||||
<id>monitoring-docker-compose-down</id>
|
||||
<phase>post-integration-test</phase>
|
||||
<goals>
|
||||
<goal>exec</goal>
|
||||
</goals>
|
||||
<configuration>
|
||||
<workingDirectory>${project.basedir}/..</workingDirectory>
|
||||
<executable>docker-compose</executable>
|
||||
<commandlineArgs>-f docker-compose-monitoring.yml down ${monitoring.volumes.arg}</commandlineArgs>
|
||||
</configuration>
|
||||
</execution>
|
||||
</executions>
|
||||
</plugin>
|
||||
</plugins>
|
||||
</build>
|
||||
</profile>
|
||||
<profile>
|
||||
<id>delete-monitoring-data</id>
|
||||
<properties>
|
||||
<monitoring.volumes.arg>-v</monitoring.volumes.arg>
|
||||
</properties>
|
||||
</profile>
|
||||
|
||||
</profiles>
|
||||
|
||||
</project>
|
@@ -0,0 +1,166 @@
package org.keycloak.gatling;

import org.keycloak.adapters.spi.AuthenticationError;
import org.keycloak.adapters.spi.HttpFacade;
import org.keycloak.adapters.spi.LogoutError;

import javax.security.cert.X509Certificate;

import java.io.InputStream;
import java.io.OutputStream;
import java.net.URI;
import java.net.URL;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

/**
 * In-memory {@link HttpFacade} used to drive the Keycloak adapter without a servlet container.
 * Only the methods the simulations need are implemented; the rest throw UnsupportedOperationException.
 *
 * @author Radim Vansa <rvansa@redhat.com>
 */
public class MockHttpFacade implements HttpFacade {
    final Request request = new Request();
    final Response response = new Response();

    @Override
    public Request getRequest() {
        return request;
    }

    @Override
    public Response getResponse() {
        return response;
    }

    @Override
    public X509Certificate[] getCertificateChain() {
        throw new UnsupportedOperationException();
    }

    static class Request implements HttpFacade.Request {
        private String uri;
        private String relativePath;
        private Map<String, String> queryParams;
        private Map<String, Cookie> cookies;

        @Override
        public String getMethod() {
            throw new UnsupportedOperationException();
        }

        @Override
        public String getURI() {
            return uri;
        }

        public void setURI(String uri) {
            this.uri = uri;
            this.relativePath = URI.create(uri).getPath();
            // Naive query-string parsing: assumes the URI contains a '?' and values are not URL-encoded.
            List<String> params = Arrays.asList(uri.substring(uri.indexOf('?') + 1).split("&"));
            queryParams = params.stream().map(p -> p.split("="))
                .collect(Collectors.toMap(a -> a[0], a -> a.length > 1 ? a[1] : ""));
        }

        @Override
        public String getRelativePath() {
            return relativePath;
        }

        @Override
        public boolean isSecure() {
            return false;
        }

        @Override
        public String getFirstParam(String param) {
            throw new UnsupportedOperationException();
        }

        @Override
        public String getQueryParamValue(String param) {
            return queryParams.get(param);
        }

        @Override
        public HttpFacade.Cookie getCookie(String cookieName) {
            return cookies.get(cookieName);
        }

        public void setCookies(Map<String, HttpFacade.Cookie> cookies) {
            this.cookies = cookies;
        }

        @Override
        public String getHeader(String name) {
            throw new UnsupportedOperationException();
        }

        @Override
        public List<String> getHeaders(String name) {
            return Collections.emptyList();
        }

        @Override
        public InputStream getInputStream() {
            throw new UnsupportedOperationException();
        }

        @Override
        public String getRemoteAddr() {
            return "localhost"; // TODO
        }

        @Override
        public void setError(AuthenticationError error) {
            throw new UnsupportedOperationException();
        }

        @Override
        public void setError(LogoutError error) {
            throw new UnsupportedOperationException();
        }
    }

    static class Response implements HttpFacade.Response {
        @Override
        public void setStatus(int status) {
        }

        @Override
        public void addHeader(String name, String value) {
        }

        @Override
        public void setHeader(String name, String value) {
        }

        @Override
        public void resetCookie(String name, String path) {
        }

        @Override
        public void setCookie(String name, String value, String path, String domain, int maxAge, boolean secure, boolean httpOnly) {
            throw new UnsupportedOperationException();
        }

        @Override
        public OutputStream getOutputStream() {
            throw new UnsupportedOperationException();
        }

        @Override
        public void sendError(int code) {
            throw new UnsupportedOperationException();
        }

        @Override
        public void sendError(int code, String message) {
            throw new UnsupportedOperationException();
        }

        @Override
        public void end() {
        }
    }
}
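For orientation, a minimal sketch (not part of this commit; it assumes same-package access, since Request is package-private) of what setURI derives from a redirect URI:

// Hypothetical same-package usage sketch of MockHttpFacade.Request.
MockHttpFacade facade = new MockHttpFacade();
facade.request.setURI("http://localhost:8080/sso/login?code=abc&state=xyz");
// URI.create(uri).getPath() becomes the relative path:
assert "/sso/login".equals(facade.request.getRelativePath());
// the query string is split into a simple name -> value map:
assert "abc".equals(facade.request.getQueryParamValue("code"));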
@@ -0,0 +1,49 @@
package org.keycloak.gatling;

import org.keycloak.KeycloakPrincipal;
import org.keycloak.adapters.AdapterTokenStore;
import org.keycloak.adapters.KeycloakDeployment;
import org.keycloak.adapters.OAuthRequestAuthenticator;
import org.keycloak.adapters.RefreshableKeycloakSecurityContext;
import org.keycloak.adapters.RequestAuthenticator;
import org.keycloak.adapters.spi.HttpFacade;

/**
 * @author Radim Vansa <rvansa@redhat.com>
 */
public class MockRequestAuthenticator extends RequestAuthenticator {
    public static String KEY = MockRequestAuthenticator.class.getName();

    private RefreshableKeycloakSecurityContext keycloakSecurityContext;
    // This is the application-specific user session, used for backchannel operations
    private final String sessionId;

    public MockRequestAuthenticator(HttpFacade facade, KeycloakDeployment deployment, AdapterTokenStore tokenStore, int sslRedirectPort, String sessionId) {
        super(facade, deployment, tokenStore, sslRedirectPort);
        this.sessionId = sessionId;
    }

    @Override
    protected OAuthRequestAuthenticator createOAuthAuthenticator() {
        return new OAuthRequestAuthenticator(this, facade, deployment, sslRedirectPort, tokenStore);
    }

    @Override
    protected void completeOAuthAuthentication(KeycloakPrincipal<RefreshableKeycloakSecurityContext> principal) {
        keycloakSecurityContext = principal.getKeycloakSecurityContext();
    }

    @Override
    protected void completeBearerAuthentication(KeycloakPrincipal<RefreshableKeycloakSecurityContext> principal, String method) {
        throw new UnsupportedOperationException();
    }

    @Override
    protected String changeHttpSessionId(boolean create) {
        return sessionId;
    }

    public RefreshableKeycloakSecurityContext getKeycloakSecurityContext() {
        return keycloakSecurityContext;
    }
}
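A hypothetical wiring sketch (not in the commit) of how these mocks can drive the adapter's code-to-token exchange outside a servlet container; the adapter config path, callback URI, and code/state values are placeholders, and state-cookie handling is omitted:

package org.keycloak.gatling;

import org.keycloak.adapters.KeycloakDeployment;
import org.keycloak.adapters.KeycloakDeploymentBuilder;
import org.keycloak.adapters.RefreshableKeycloakSecurityContext;

import java.io.FileInputStream;
import java.util.Collections;

public class MockAuthSketch {
    public static void main(String[] args) throws Exception {
        // adapter config (keycloak.json) for a confidential client -- placeholder path
        KeycloakDeployment deployment = KeycloakDeploymentBuilder.build(new FileInputStream("keycloak.json"));

        MockHttpFacade facade = new MockHttpFacade();
        // URI the browser would be redirected to after login; code/state are placeholders
        facade.request.setURI("http://app/callback?code=CODE&state=STATE");
        facade.request.setCookies(Collections.emptyMap());

        MockRequestAuthenticator authenticator = new MockRequestAuthenticator(
                facade, deployment, new MockTokenStore(), -1, "session-1");
        authenticator.authenticate(); // runs the adapter's code-to-token exchange path
        RefreshableKeycloakSecurityContext ctx = authenticator.getKeycloakSecurityContext();
        System.out.println(ctx != null ? "authenticated" : "not authenticated");
    }
}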
@@ -0,0 +1,43 @@
package org.keycloak.gatling;

import org.keycloak.adapters.AdapterTokenStore;
import org.keycloak.adapters.OidcKeycloakAccount;
import org.keycloak.adapters.RefreshableKeycloakSecurityContext;
import org.keycloak.adapters.RequestAuthenticator;

/**
 * @author Radim Vansa <rvansa@redhat.com>
 */
public class MockTokenStore implements AdapterTokenStore {
    @Override
    public void checkCurrentToken() {
    }

    @Override
    public boolean isCached(RequestAuthenticator authenticator) {
        return false;
    }

    @Override
    public void saveAccountInfo(OidcKeycloakAccount account) {
        throw new UnsupportedOperationException();
    }

    @Override
    public void logout() {
    }

    @Override
    public void refreshCallback(RefreshableKeycloakSecurityContext securityContext) {
    }

    @Override
    public void saveRequest() {
        throw new UnsupportedOperationException();
    }

    @Override
    public boolean restoreRequest() {
        throw new UnsupportedOperationException();
    }
}
@@ -0,0 +1,20 @@
package org.keycloak.performance;

/**
 * @author <a href="mailto:mstrukel@redhat.com">Marko Strukelj</a>
 */
public class ClientInfo {

    public final int index;
    public final String clientId;
    public final String secret;
    public final String appUrl;

    public ClientInfo(int index, String clientId, String secret, String appUrl) {
        this.index = index;
        this.clientId = clientId;
        this.secret = secret;
        this.appUrl = appUrl;
    }
}
@@ -0,0 +1,14 @@
package org.keycloak.performance;

import java.util.Arrays;
import java.util.Collections;
import java.util.List;

/**
 * @author <a href="mailto:mstrukel@redhat.com">Marko Strukelj</a>
 */
public class RealmConfig {
    public int accessTokenLifeSpan = 60;
    public boolean registrationAllowed = false;
    public List<String> requiredCredentials = Collections.unmodifiableList(Arrays.asList("password"));
}
@@ -0,0 +1,325 @@
package org.keycloak.performance;

import com.fasterxml.jackson.annotation.JsonInclude;
import com.fasterxml.jackson.core.JsonEncoding;
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializationFeature;
import org.keycloak.representations.idm.ClientRepresentation;
import org.keycloak.representations.idm.CredentialRepresentation;
import org.keycloak.representations.idm.UserRepresentation;

import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Random;
import java.util.Set;

import static org.keycloak.common.util.ObjectUtil.capitalize;

/**
 * @author <a href="mailto:mstrukel@redhat.com">Marko Strukelj</a>
 */
public class RealmsConfigurationBuilder {

    private Random RANDOM = new Random();

    private JsonGenerator g;

    private String file;

    public static final String EXPORT_FILENAME = "benchmark-realms.json";

    public RealmsConfigurationBuilder(String filename) {
        this.file = filename;
        try {
            JsonFactory f = new JsonFactory();
            g = f.createGenerator(new File(file), JsonEncoding.UTF8);

            ObjectMapper mapper = new ObjectMapper();
            mapper.setSerializationInclusion(JsonInclude.Include.NON_NULL);
            mapper.enable(SerializationFeature.INDENT_OUTPUT);
            g.setCodec(mapper);
        } catch (Exception e) {
            throw new RuntimeException("Failed to create realms export file", e);
        }
    }

    public void build() throws IOException {
        // There are several realms
        startRealms();
        for (int i = 0; i < TestConfig.numOfRealms; i++) {
            RealmConfig realm = new RealmConfig();

            String realmName = "realm_" + i;
            startRealm(realmName, realm);

            // First create clients, then realm roles and client roles, and create users at the end.
            // Reason: each step depends on the availability of the previous one.

            // each realm has some clients
            startClients();
            for (int j = 0; j < TestConfig.clientsPerRealm; j++) {
                ClientRepresentation client = new ClientRepresentation();

                String clientId = computeClientId(realmName, j);
                client.setClientId(clientId);
                client.setEnabled(true);

                String baseDir = computeAppUrl(clientId);
                client.setBaseUrl(baseDir);

                List<String> uris = new ArrayList<>();
                uris.add(baseDir + "/*");

                if (isClientConfidential(j)) {
                    client.setRedirectUris(uris);
                    client.setSecret(computeSecret(clientId));
                } else if (isClientBearerOnly(j)) {
                    client.setBearerOnly(true);
                } else {
                    client.setPublicClient(true);
                    client.setRedirectUris(uris);
                }

                addClient(client);
            }
            completeClients();

            // each realm has some realm roles
            startRoles();
            startRealmRoles();
            for (int j = 0; j < TestConfig.realmRoles; j++) {
                addRole("role_" + j + "_ofRealm_" + i);
            }
            completeRealmRoles();

            // each client has some client roles
            startClientRoles();
            for (int j = 0; j < TestConfig.clientsPerRealm; j++) {
                addClientRoles("client_" + j + "_ofRealm_" + i);
            }
            completeClientRoles();

            completeRoles();

            // each realm has some users
            startUsers();
            for (int j = 0; j < TestConfig.usersPerRealm; j++) {
                UserRepresentation user = new UserRepresentation();
                user.setUsername(computeUsername(realmName, j));
                user.setEnabled(true);
                user.setEmail(user.getUsername() + "@example.com");
                user.setFirstName("User" + j);
                user.setLastName("O'realm" + i);

                CredentialRepresentation creds = new CredentialRepresentation();
                creds.setType("password");
                creds.setValue(computePassword(user.getUsername()));
                user.setCredentials(Arrays.asList(creds));

                // add realm roles - each user gets some random realm roles
                Set<String> realmRoles = new HashSet<String>();
                while (realmRoles.size() < TestConfig.realmRolesPerUser) {
                    realmRoles.add("role_" + random(TestConfig.realmRoles) + "_ofRealm_" + i);
                }
                user.setRealmRoles(new ArrayList<>(realmRoles));

                // add client roles - each user gets some random client roles
                // (a random client plus a random role on that client)
                Set<String> clientRoles = new HashSet<String>();
                while (clientRoles.size() < TestConfig.clientRolesPerUser) {
                    int client = random(TestConfig.clientsPerRealm);
                    int clientRole = random(TestConfig.clientRolesPerClient);
                    clientRoles.add("clientrole_" + clientRole + "_ofClient_" + client + "_ofRealm_" + i);
                }
                Map<String, List<String>> clientRoleMappings = new HashMap<>();
                for (String item : clientRoles) {
                    int s = item.indexOf("_ofClient_");
                    int b = s + "_ofClient_".length();
                    int e = item.indexOf("_", b);
                    String key = "client_" + item.substring(b, e) + "_ofRealm_" + i;
                    List<String> cliRoles = clientRoleMappings.get(key);
                    if (cliRoles == null) {
                        cliRoles = new ArrayList<>();
                        clientRoleMappings.put(key, cliRoles);
                    }
                    cliRoles.add(item);
                }
                user.setClientRoles(clientRoleMappings);
                addUser(user);
            }
            completeUsers();

            completeRealm();
        }
        completeRealms();
        g.close();
    }

    private void addClientRoles(String client) throws IOException {
        g.writeArrayFieldStart(client);
        for (int i = 0; i < TestConfig.clientRolesPerClient; i++) {
            g.writeStartObject();
            String name = "clientrole_" + i + "_of" + capitalize(client);
            g.writeStringField("name", name);
            g.writeStringField("description", "Test client role - " + name);
            g.writeEndObject();
        }
        g.writeEndArray();
    }

    private void addClient(ClientRepresentation client) throws IOException {
        g.writeObject(client);
    }

    private void startClients() throws IOException {
        g.writeArrayFieldStart("clients");
    }

    private void completeClients() throws IOException {
        g.writeEndArray();
    }

    private void startRealms() throws IOException {
        g.writeStartArray();
    }

    private void completeRealms() throws IOException {
        g.writeEndArray();
    }

    private int random(int max) {
        return RANDOM.nextInt(max);
    }

    private void startRealm(String realmName, RealmConfig conf) throws IOException {
        g.writeStartObject();
        g.writeStringField("realm", realmName);
        g.writeBooleanField("enabled", true);
        g.writeNumberField("accessTokenLifespan", conf.accessTokenLifeSpan);
        g.writeBooleanField("registrationAllowed", conf.registrationAllowed);
        g.writeStringField("passwordPolicy", "hashIterations(" + TestConfig.hashIterations + ")");

        /*
        if (conf.requiredCredentials != null) {
            g.writeArrayFieldStart("requiredCredentials");
            for (String item : conf.requiredCredentials) {
                g.writeString(item);
            }
            g.writeEndArray();
        }
        */
    }

    private void completeRealm() throws IOException {
        g.writeEndObject();
    }

    private void startUsers() throws IOException {
        g.writeArrayFieldStart("users");
    }

    private void completeUsers() throws IOException {
        g.writeEndArray();
    }

    private void addUser(UserRepresentation user) throws IOException {
        g.writeObject(user);
    }

    private void startRoles() throws IOException {
        g.writeObjectFieldStart("roles");
    }

    private void startRealmRoles() throws IOException {
        g.writeArrayFieldStart("realm");
    }

    private void startClientRoles() throws IOException {
        g.writeObjectFieldStart("client");
    }

    private void completeClientRoles() throws IOException {
        g.writeEndObject();
    }

    private void addRole(String role) throws IOException {
        g.writeStartObject();
        g.writeStringField("name", role);
        g.writeStringField("description", "Test realm role - " + role);
        g.writeEndObject();
    }

    private void completeRealmRoles() throws IOException {
        g.writeEndArray();
    }

    private void completeRoles() throws IOException {
        g.writeEndObject();
    }

    static boolean isClientConfidential(int index) {
        // every third client starting with 0
        return index % 3 == 0;
    }

    static boolean isClientBearerOnly(int index) {
        // every third client starting with 1
        return index % 3 == 1;
    }

    static boolean isClientPublic(int index) {
        // every third client starting with 2
        return index % 3 == 2;
    }

    static String computeClientId(String realm, int idx) {
        return "client_" + idx + "_of" + capitalize(realm);
    }

    static String computeAppUrl(String clientId) {
        return "http://keycloak-test-" + clientId.toLowerCase();
    }

    static String computeSecret(String clientId) {
        return "secretFor_" + clientId;
    }

    static String computeUsername(String realm, int idx) {
        return "user_" + idx + "_of" + capitalize(realm);
    }

    static String computePassword(String username) {
        return "passOfUser_" + username;
    }

    public static void main(String[] args) throws IOException {
        File exportFile = new File(EXPORT_FILENAME);

        TestConfig.validateConfiguration();
        System.out.println("Generating test dataset with the following parameters: \n" + TestConfig.toStringDatasetProperties());

        new RealmsConfigurationBuilder(exportFile.getAbsolutePath()).build();

        System.out.println("Created " + exportFile.getAbsolutePath());
    }
}
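For orientation, a hypothetical stand-alone driver (not in the commit) that generates a small dataset without Maven; it assumes TestConfig picks these system properties up during class initialization, as the -D arguments in the pom above suggest, and the property values are made up:

package org.keycloak.performance;

// Hypothetical driver; property names mirror the -D args passed by the pom.
public class GenerateSmallDataset {
    public static void main(String[] args) throws java.io.IOException {
        System.setProperty("numOfRealms", "2");
        System.setProperty("usersPerRealm", "10");
        System.setProperty("clientsPerRealm", "4");
        System.setProperty("realmRoles", "5");
        System.setProperty("realmRolesPerUser", "2");
        System.setProperty("clientRolesPerUser", "2");
        System.setProperty("clientRolesPerClient", "2");
        System.setProperty("hashIterations", "100");
        // writes the realms/clients/roles/users structure as a JSON array
        new RealmsConfigurationBuilder("benchmark-realms.json").build();
    }
}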
@@ -0,0 +1,746 @@
package org.keycloak.performance;

import com.fasterxml.jackson.annotation.JsonInclude;
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.keycloak.admin.client.Keycloak;
import org.keycloak.representations.idm.ClientRepresentation;
import org.keycloak.representations.idm.CredentialRepresentation;
import org.keycloak.representations.idm.RealmRepresentation;
import org.keycloak.representations.idm.RoleRepresentation;
import org.keycloak.representations.idm.UserRepresentation;

import javax.ws.rs.core.Response;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.LinkedBlockingQueue;

import static org.keycloak.performance.RealmsConfigurationBuilder.EXPORT_FILENAME;
import static org.keycloak.performance.TestConfig.numOfWorkers;

/**
 * # build
 * mvn -f testsuite/integration-arquillian/tests/performance/gatling-perf clean install
 *
 * # generate a benchmark-realms.json file with generated test data
 * mvn -f testsuite/integration-arquillian/tests/performance/gatling-perf exec:java -Dexec.mainClass=org.keycloak.performance.RealmsConfigurationBuilder -DnumOfRealms=2 -DusersPerRealm=2 -DclientsPerRealm=2 -DrealmRoles=2 -DrealmRolesPerUser=2 -DclientRolesPerUser=2 -DclientRolesPerClient=2
 *
 * # use benchmark-realms.json to load the data into a Keycloak Server listening on localhost:8080
 * mvn -f testsuite/integration-arquillian/tests/performance/gatling-perf exec:java -Dexec.mainClass=org.keycloak.performance.RealmsConfigurationLoader -DnumOfWorkers=5 -Dexec.args=benchmark-realms.json > perf-output.txt
 *
 * @author <a href="mailto:mstrukel@redhat.com">Marko Strukelj</a>
 */
public class RealmsConfigurationLoader {

    static final int ERROR_CHECK_INTERVAL = 10;

    // multi-thread mechanics
    static final BlockingQueue<AdminJob> queue = new LinkedBlockingQueue<>(numOfWorkers);
    static final ArrayList<Worker> workers = new ArrayList<>();
    static final ConcurrentLinkedQueue<PendingResult> pendingResult = new ConcurrentLinkedQueue<>();

    // realm caches - we completely handle one realm before starting the next
    static ConcurrentHashMap<String, String> clientIdMap = new ConcurrentHashMap<>();
    static ConcurrentHashMap<String, String> realmRoleIdMap = new ConcurrentHashMap<>();
    static ConcurrentHashMap<String, Map<String, String>> clientRoleIdMap = new ConcurrentHashMap<>();
    static boolean realmCreated;

    public static void main(String[] args) throws IOException {
        if (args.length == 0) {
            args = new String[] {EXPORT_FILENAME};
        }

        if (args.length != 1) {
            System.out.println("Usage: java " + RealmsConfigurationLoader.class.getName() + " <FILE>");
            return;
        }

        String file = args[0];
        System.out.println("Using file: " + new File(args[0]).getAbsolutePath());
        System.out.println("Number of workers (numOfWorkers): " + numOfWorkers);

        JsonParser p = initParser(file);

        initWorkers();

        try {
            // read the json file using the JSON stream API
            readRealms(p);
        } finally {
            completeWorkers();
        }
    }

    private static void completeWorkers() {
        try {
            // wait for all jobs to finish
            completePending();
        } finally {
            // stop workers
            for (Worker w : workers) {
                w.exit = true;
                try {
                    w.join(5000);
                    if (w.isAlive()) {
                        System.out.println("Worker thread failed to stop: ");
                        dumpThread(w);
                    }
                } catch (InterruptedException e) {
                    throw new RuntimeException("Interrupted");
                }
            }
        }
    }

    private static void readRealms(JsonParser p) throws IOException {
        JsonToken t = p.nextToken();

        while (t != JsonToken.END_OBJECT && t != JsonToken.END_ARRAY) {
            if (t != JsonToken.START_ARRAY) {
                readRealm(p);
            }
            t = p.nextToken();
        }
    }

    private static void initWorkers() {
        // configure job queue and worker threads
        for (int i = 0; i < numOfWorkers; i++) {
            workers.add(new Worker());
        }
    }

    private static JsonParser initParser(String file) {
        JsonParser p;
        try {
            JsonFactory f = new JsonFactory();
            p = f.createParser(new File(file));

            ObjectMapper mapper = new ObjectMapper();
            mapper.setSerializationInclusion(JsonInclude.Include.NON_NULL);
            p.setCodec(mapper);
        } catch (Exception e) {
            throw new RuntimeException("Failed to parse file " + new File(file).getAbsolutePath(), e);
        }
        return p;
    }

    private static void dumpThread(Worker w) {
        StringBuilder b = new StringBuilder();
        for (StackTraceElement e : w.getStackTrace()) {
            b.append(e.toString()).append("\n");
        }
        System.out.print(b);
    }

    private static void readRealm(JsonParser p) throws IOException {
        // as soon as we encounter users, roles, or clients we create a CreateRealmJob
        // TODO: if we encounter a realm attribute after that point, report a warning but continue

        RealmRepresentation r = new RealmRepresentation();
        JsonToken t = p.nextToken();
        while (t != JsonToken.END_OBJECT) {

            //System.out.println(t + ", name: " + p.getCurrentName() + ", text: '" + p.getText() + "', value: " + p.getValueAsString());

            switch (p.getCurrentName()) {
                case "realm":
                    r.setRealm(getStringValue(p));
                    break;
                case "enabled":
                    r.setEnabled(getBooleanValue(p));
                    break;
                case "accessTokenLifespan":
                    r.setAccessTokenLifespan(getIntegerValue(p));
                    break;
                case "registrationAllowed":
                    r.setRegistrationAllowed(getBooleanValue(p));
                    break;
                case "passwordPolicy":
                    r.setPasswordPolicy(getStringValue(p));
                    break;
                case "users":
                    ensureRealm(r);
                    readUsers(r, p);
                    break;
                case "roles":
                    ensureRealm(r);
                    readRoles(r, p);
                    break;
                case "clients":
                    ensureRealm(r);
                    readClients(r, p);
                    break;
                default: {
                    // if we don't understand the field we ignore it - but report that
                    System.out.println("Realm attribute ignored: " + p.getCurrentName());
                    consumeAttribute(p);
                }
            }
            t = p.nextToken();
        }

        // wait for the realm to complete
        completePending();

        // reset the realm-specific caches
        realmCreated = false;
        clientIdMap.clear();
        realmRoleIdMap.clear();
        clientRoleIdMap.clear();
    }

    private static void ensureRealm(RealmRepresentation r) {
        if (!realmCreated) {
            createRealm(r);
            realmCreated = true;
        }
    }

    private static void createRealm(RealmRepresentation r) {
        try {
            queue.put(new CreateRealmJob(r));
        } catch (InterruptedException e) {
            throw new RuntimeException("Interrupted", e);
        }

        // wait for the job to appear
        PendingResult next = pendingResult.poll();
        while (next == null) {
            waitForAwhile();
            next = pendingResult.poll();
        }

        // then wait for the job to complete
        while (!next.isDone()) {
            waitForAwhile();
        }

        try {
            next.get();
        } catch (InterruptedException e) {
            throw new RuntimeException("Interrupted", e);
        } catch (ExecutionException e) {
            throw new RuntimeException("Execution failed", e.getCause());
        }
    }

    private static void enqueueCreateUser(RealmRepresentation r, UserRepresentation u) {
        try {
            queue.put(new CreateUserJob(r, u));
        } catch (InterruptedException e) {
            throw new RuntimeException("Interrupted", e);
        }
    }

    private static void enqueueCreateRealmRole(RealmRepresentation r, RoleRepresentation role) {
        try {
            queue.put(new CreateRealmRoleJob(r, role));
        } catch (InterruptedException e) {
            throw new RuntimeException("Interrupted", e);
        }
    }

    private static void enqueueCreateClientRole(RealmRepresentation r, RoleRepresentation role, String client) {
        try {
            queue.put(new CreateClientRoleJob(r, role, client));
        } catch (InterruptedException e) {
            throw new RuntimeException("Interrupted", e);
        }
    }

    private static void enqueueCreateClient(RealmRepresentation r, ClientRepresentation client) {
        try {
            queue.put(new CreateClientJob(r, client));
        } catch (InterruptedException e) {
            throw new RuntimeException("Interrupted", e);
        }
    }

    private static void waitForAwhile() {
        waitForAwhile(100, "Interrupted");
    }

    private static void waitForAwhile(int millis) {
        waitForAwhile(millis, "Interrupted");
    }

    private static void waitForAwhile(int millis, String interruptMessage) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            throw new RuntimeException(interruptMessage);
        }
    }

    private static void readUsers(RealmRepresentation r, JsonParser p) throws IOException {
        JsonToken t = p.nextToken();
        if (t != JsonToken.START_ARRAY) {
            throw new RuntimeException("Error reading field 'users'. Expected array of users [" + t + "]");
        }
        int count = 0;
        t = p.nextToken();
        while (t == JsonToken.START_OBJECT) {
            UserRepresentation u = p.readValueAs(UserRepresentation.class);
            enqueueCreateUser(r, u);
            t = p.nextToken();
            count += 1;

            // every few users check for pending errors
            // in order to short-circuit if any have occurred
            if (count % ERROR_CHECK_INTERVAL == 0) {
                checkPendingErrors();
            }
        }
    }

    private static void readRoles(RealmRepresentation r, JsonParser p) throws IOException {
        JsonToken t = p.nextToken();
        if (t != JsonToken.START_OBJECT) {
            throw new RuntimeException("Error reading field 'roles'. Expected start of object [" + t + "]");
        }

        t = p.nextToken();
        if (t != JsonToken.FIELD_NAME) {
            throw new RuntimeException("Error reading field 'roles'. Expected field 'realm' or 'client' [" + t + "]");
        }

        while (t != JsonToken.END_OBJECT) {
            switch (p.getCurrentName()) {
                case "realm":
                    readRealmRoles(r, p);
                    break;
                case "client":
                    waitForClientsCompleted();
                    readClientRoles(r, p);
                    break;
                default:
                    throw new RuntimeException("Unexpected field in roles: " + p.getCurrentName());
            }
            t = p.nextToken();
        }
    }

    private static void waitForClientsCompleted() {
        completePending();
    }

    private static void readClientRoles(RealmRepresentation r, JsonParser p) throws IOException {
        JsonToken t = p.nextToken();

        if (t != JsonToken.START_OBJECT) {
            throw new RuntimeException("Expected start_of_object on 'roles/client' [" + t + "]");
        }

        t = p.nextToken();

        int count = 0;
        while (t == JsonToken.FIELD_NAME) {
            String client = p.getCurrentName();

            t = p.nextToken();
            if (t != JsonToken.START_ARRAY) {
                throw new RuntimeException("Expected start_of_array on 'roles/client/" + client + "' [" + t + "]");
            }

            t = p.nextToken();
            while (t != JsonToken.END_ARRAY) {
                RoleRepresentation u = p.readValueAs(RoleRepresentation.class);
                enqueueCreateClientRole(r, u, client);
                t = p.nextToken();
                count += 1;

                // every few roles check for pending errors
                // in order to short-circuit if any have occurred
                if (count % ERROR_CHECK_INTERVAL == 0) {
                    checkPendingErrors();
                }
            }
            t = p.nextToken();
        }
    }

    private static void readRealmRoles(RealmRepresentation r, JsonParser p) throws IOException {
        JsonToken t = p.nextToken();

        if (t != JsonToken.START_ARRAY) {
            throw new RuntimeException("Expected start_of_array on 'roles/realm' [" + t + "]");
        }

        t = p.nextToken();

        int count = 0;
        while (t == JsonToken.START_OBJECT) {
            RoleRepresentation u = p.readValueAs(RoleRepresentation.class);
            enqueueCreateRealmRole(r, u);
            t = p.nextToken();
            count += 1;

            // every few roles check for pending errors
            // in order to short-circuit if any have occurred
            if (count % ERROR_CHECK_INTERVAL == 0) {
                checkPendingErrors();
            }
        }
    }

    private static void readClients(RealmRepresentation r, JsonParser p) throws IOException {
        JsonToken t = p.nextToken();
        if (t != JsonToken.START_ARRAY) {
            throw new RuntimeException("Error reading field 'clients'. Expected array of clients [" + t + "]");
        }
        int count = 0;
        t = p.nextToken();
        while (t == JsonToken.START_OBJECT) {
            ClientRepresentation u = p.readValueAs(ClientRepresentation.class);
            enqueueCreateClient(r, u);
            t = p.nextToken();
            count += 1;

            // every few clients check for pending errors
            if (count % ERROR_CHECK_INTERVAL == 0) {
                checkPendingErrors();
            }
        }
    }

    private static void checkPendingErrors() {
        // wait for at least one job result to appear
        PendingResult next = pendingResult.peek();
        while (next == null) {
            waitForAwhile();
            next = pendingResult.peek();
        }

        // now process them
        Iterator<PendingResult> it = pendingResult.iterator();
        while (it.hasNext()) {
            next = it.next();
            if (next.isDone() && !next.isCompletedExceptionally()) {
                it.remove();
            } else if (next.isCompletedExceptionally()) {
                try {
                    next.get();
                } catch (InterruptedException e) {
                    throw new RuntimeException("Interrupted");
                } catch (ExecutionException e) {
                    throw new RuntimeException("Execution failed", e.getCause());
                }
            }
        }
    }

    private static void completePending() {
        // wait for the queue to drain
        while (!queue.isEmpty()) {
            waitForAwhile();
        }

        PendingResult next;
        while ((next = pendingResult.poll()) != null) {
            try {
                next.get();
            } catch (InterruptedException e) {
                throw new RuntimeException("Interrupted");
            } catch (ExecutionException e) {
                throw new RuntimeException("Execution failed", e.getCause());
            }
        }
    }

    private static Integer getIntegerValue(JsonParser p) throws IOException {
        JsonToken t = p.nextToken();
        if (t != JsonToken.VALUE_NUMBER_INT) {
            throw new RuntimeException("Error while reading field '" + p.getCurrentName() + "'. Expected integer value [" + t + "]");
        }
        return p.getValueAsInt();
    }

    private static void consumeAttribute(JsonParser p) throws IOException {
        JsonToken t = p.nextToken();
        if (t == JsonToken.START_OBJECT || t == JsonToken.START_ARRAY) {
            p.skipChildren();
        }
    }

    private static Boolean getBooleanValue(JsonParser p) throws IOException {
        JsonToken t = p.nextToken();
        if (t != JsonToken.VALUE_TRUE && t != JsonToken.VALUE_FALSE) {
            throw new RuntimeException("Error while reading field '" + p.getCurrentName() + "'. Expected boolean value [" + t + "]");
        }
        return p.getValueAsBoolean();
    }

    private static String getStringValue(JsonParser p) throws IOException {
        JsonToken t = p.nextToken();
        if (t != JsonToken.VALUE_STRING) {
            throw new RuntimeException("Error while reading field '" + p.getCurrentName() + "'. Expected string value [" + t + "]");
        }
        return p.getText();
    }

    static class Worker extends Thread {

        volatile boolean exit = false;

        Worker() {
            start();
        }

        public void run() {
            while (!exit) {
                Job r = queue.poll();
                if (r == null) {
                    waitForAwhile(50, "Worker thread " + this.getName() + " interrupted");
                    continue;
                }
                PendingResult pending = new PendingResult(r);
                pendingResult.add(pending);
                try {
                    r.run();
                    pending.complete(true);
                } catch (Throwable t) {
                    pending.completeExceptionally(t);
                }
            }
        }
    }

    static class CreateRealmJob extends AdminJob {

        private RealmRepresentation realm;

        CreateRealmJob(RealmRepresentation r) {
            this.realm = r;
        }

        @Override
        public void run() {
            admin().realms().create(realm);
        }
    }

    static class CreateUserJob extends AdminJob {

        private RealmRepresentation realm;
        private UserRepresentation user;

        CreateUserJob(RealmRepresentation r, UserRepresentation u) {
            this.realm = r;
            this.user = u;
        }

        @Override
        public void run() {
            Response response = admin().realms().realm(realm.getRealm()).users().create(user);
            response.close();
            if (response.getStatus() != 201) {
                throw new RuntimeException("Failed to create user with status: " + response.getStatusInfo().getReasonPhrase());
            }

            String userId = extractIdFromResponse(response);

            List<CredentialRepresentation> creds = user.getCredentials();
            for (CredentialRepresentation cred : creds) {
                admin().realms().realm(realm.getRealm()).users().get(userId).resetPassword(cred);
            }

            List<String> realmRoles = user.getRealmRoles();
            if (realmRoles != null && !realmRoles.isEmpty()) {
                List<RoleRepresentation> roles = convertRealmRoleNamesToRepresentation(user.getRealmRoles());
                if (!roles.isEmpty()) {
                    admin().realms().realm(realm.getRealm()).users().get(userId).roles().realmLevel().add(roles);
                }
            }

            Map<String, List<String>> clientRoles = user.getClientRoles();
            if (clientRoles != null && !clientRoles.isEmpty()) {
                for (String clientId : clientRoles.keySet()) {
                    List<String> roleNames = clientRoles.get(clientId);
                    if (roleNames != null && !roleNames.isEmpty()) {
                        List<RoleRepresentation> reps = convertClientRoleNamesToRepresentation(clientId, roleNames);
                        if (!reps.isEmpty()) {
                            String idOfClient = clientIdMap.get(clientId);
                            if (idOfClient == null) {
                                throw new RuntimeException("No client created for clientId: " + clientId);
                            }
                            admin().realms().realm(realm.getRealm()).users().get(userId).roles().clientLevel(idOfClient).add(reps);
                        }
                    }
                }
            }
        }

        private List<RoleRepresentation> convertClientRoleNamesToRepresentation(String clientId, List<String> roles) {
            LinkedList<RoleRepresentation> result = new LinkedList<>();
            Map<String, String> roleIdMap = clientRoleIdMap.get(clientId);
            if (roleIdMap == null || roleIdMap.isEmpty()) {
                throw new RuntimeException("No client roles created for clientId: " + clientId);
            }

            for (String role : roles) {
                RoleRepresentation r = new RoleRepresentation();
                String id = roleIdMap.get(role);
                if (id == null) {
                    throw new RuntimeException("No client role created on client '" + clientId + "' for name: " + role);
                }
                r.setId(id);
                r.setName(role);
                result.add(r);
            }
            return result;
        }

        private List<RoleRepresentation> convertRealmRoleNamesToRepresentation(List<String> roles) {
            LinkedList<RoleRepresentation> result = new LinkedList<>();
            for (String role : roles) {
                RoleRepresentation r = new RoleRepresentation();
                String id = realmRoleIdMap.get(role);
                if (id == null) {
                    throw new RuntimeException("No realm role created for name: " + role);
                }
                r.setId(id);
                r.setName(role);
                result.add(r);
            }
            return result;
        }
    }

    static class CreateRealmRoleJob extends AdminJob {

        private RealmRepresentation realm;
        private RoleRepresentation role;

        CreateRealmRoleJob(RealmRepresentation r, RoleRepresentation role) {
            this.realm = r;
            this.role = role;
        }

        @Override
        public void run() {
            admin().realms().realm(realm.getRealm()).roles().create(role);

            // we need the id but it's not returned by the REST API - we have to perform a get on the created role and save the returned id
            RoleRepresentation rr = admin().realms().realm(realm.getRealm()).roles().get(role.getName()).toRepresentation();
            realmRoleIdMap.put(rr.getName(), rr.getId());
        }
    }

    static class CreateClientRoleJob extends AdminJob {

        private RealmRepresentation realm;
        private RoleRepresentation role;
        private String clientId;

        CreateClientRoleJob(RealmRepresentation r, RoleRepresentation role, String clientId) {
            this.realm = r;
            this.role = role;
            this.clientId = clientId;
        }

        @Override
        public void run() {
            String id = clientIdMap.get(clientId);
            if (id == null) {
                throw new RuntimeException("No client created for clientId: " + clientId);
            }
            admin().realms().realm(realm.getRealm()).clients().get(id).roles().create(role);

            // we need the id but it's not returned by the REST API - we have to perform a get on the created role and save the returned id
            RoleRepresentation rr = admin().realms().realm(realm.getRealm()).clients().get(id).roles().get(role.getName()).toRepresentation();

            Map<String, String> roleIdMap = clientRoleIdMap.computeIfAbsent(clientId, k -> new ConcurrentHashMap<>());
            roleIdMap.put(rr.getName(), rr.getId());
        }
    }

    static class CreateClientJob extends AdminJob {

        private ClientRepresentation client;
        private RealmRepresentation realm;

        public CreateClientJob(RealmRepresentation r, ClientRepresentation client) {
            this.realm = r;
            this.client = client;
        }

        @Override
        public void run() {
            Response response = admin().realms().realm(realm.getRealm()).clients().create(client);
            response.close();
            if (response.getStatus() != 201) {
                throw new RuntimeException("Failed to create client with status: " + response.getStatusInfo().getReasonPhrase());
            }
            String id = extractIdFromResponse(response);
            clientIdMap.put(client.getClientId(), id);
        }
    }

    static String extractIdFromResponse(Response response) {
        String location = response.getHeaderString("Location");
        if (location == null) {
            return null;
        }

        int last = location.lastIndexOf("/");
        if (last == -1) {
            return null;
        }
        String id = location.substring(last + 1);
        if (id.isEmpty()) {
            throw new RuntimeException("Failed to extract 'id' of created resource");
        }

        return id;
    }

    static abstract class AdminJob extends Job {

        static Keycloak admin = Keycloak.getInstance(TestConfig.serverUrisList.get(0), TestConfig.authRealm, TestConfig.authUser, TestConfig.authPassword, TestConfig.authClient);

        static Keycloak admin() {
            return admin;
        }
    }

    static abstract class Job implements Runnable {
    }

    static class PendingResult extends CompletableFuture<Boolean> {

        Job job;

        PendingResult(Job job) {
            this.job = job;
        }
    }
}
|
|
@@ -0,0 +1,206 @@
package org.keycloak.performance;

import org.keycloak.performance.util.FilteredIterator;
import org.keycloak.performance.util.LoopingIterator;

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ThreadLocalRandom;

import static org.keycloak.performance.RealmsConfigurationBuilder.computeAppUrl;
import static org.keycloak.performance.RealmsConfigurationBuilder.computeClientId;
import static org.keycloak.performance.RealmsConfigurationBuilder.computePassword;
import static org.keycloak.performance.RealmsConfigurationBuilder.computeSecret;
import static org.keycloak.performance.RealmsConfigurationBuilder.computeUsername;

/**
 * @author <a href="mailto:mstrukel@redhat.com">Marko Strukelj</a>
 */
public class TestConfig {

    //
    // Settings used by RealmsConfigurationBuilder only - when generating the dataset
    //
    public static final int hashIterations = Integer.getInteger("hashIterations", 27500);

    //
    // Settings used by RealmsConfigurationLoader only - when loading data into Keycloak
    //
    public static final int numOfWorkers = Integer.getInteger("numOfWorkers", 1);

    //
    // Settings used by RealmsConfigurationLoader to connect to the Admin REST API
    //
    public static final String authRealm = System.getProperty("authRealm", "master");
    public static final String authUser = System.getProperty("authUser", "admin");
    public static final String authPassword = System.getProperty("authPassword", "admin");
    public static final String authClient = System.getProperty("authClient", "admin-cli");

    //
    // Settings used by RealmsConfigurationBuilder to generate the dataset and by tests to work within constraints of the dataset
    //
    public static final int numOfRealms = Integer.getInteger("numOfRealms", 1);
    public static final int usersPerRealm = Integer.getInteger("usersPerRealm", 2);
    public static final int clientsPerRealm = Integer.getInteger("clientsPerRealm", 2);
    public static final int realmRoles = Integer.getInteger("realmRoles", 2);
    public static final int realmRolesPerUser = Integer.getInteger("realmRolesPerUser", 2);
    public static final int clientRolesPerUser = Integer.getInteger("clientRolesPerUser", 2);
    public static final int clientRolesPerClient = Integer.getInteger("clientRolesPerClient", 2);

    //
    // Settings used by tests to control common test parameters
    //
    public static final int runUsers = Integer.getInteger("runUsers", 1);
    public static final int rampUpPeriod = Integer.getInteger("rampUpPeriod", 0);
    public static final int userThinkTime = Integer.getInteger("userThinkTime", 5);
    public static final int refreshTokenPeriod = Integer.getInteger("refreshTokenPeriod", 10);

    //
    // Settings used by DefaultSimulation to control behavior specific to DefaultSimulation
    //
    public static final int numOfIterations = Integer.getInteger("numOfIterations", 1);
    public static final int badLoginAttempts = Integer.getInteger("badLoginAttempts", 0);
    public static final int refreshTokenCount = Integer.getInteger("refreshTokenCount", 0);

    public static final String serverUris;
    public static final List<String> serverUrisList;

    // Round-robin infinite iterator that directs each next session to the next server
    public static final Iterator<String> serverUrisIterator;

    static {
        // fall back to the KEYCLOAK_SERVER_URIS env var only if the serverUris system property is not set
        String servers = System.getProperty("serverUris");
        if (servers == null) {
            String env = System.getenv("KEYCLOAK_SERVER_URIS");
            serverUris = env != null ? env : "http://localhost:8080/auth";
        } else {
            serverUris = servers;
        }

        // initialize serverUrisList and serverUrisIterator
        ArrayList<String> uris = new ArrayList<>();
        for (String uri: serverUris.split(" ")) {
            uris.add(uri);
        }
        serverUrisList = uris;
        serverUrisIterator = new LoopingIterator<>(uris);
    }

    // Users iterators by realm
    private static final ConcurrentMap<String, Iterator<UserInfo>> usersIteratorMap = new ConcurrentHashMap<>();

    // Clients iterators by realm
    private static final ConcurrentMap<String, Iterator<ClientInfo>> clientsIteratorMap = new ConcurrentHashMap<>();

    public static Iterator<UserInfo> getUsersIterator(String realm) {
        return usersIteratorMap.computeIfAbsent(realm, (k) -> randomUsersIterator(realm));
    }

    public static Iterator<ClientInfo> getClientsIterator(String realm) {
        return clientsIteratorMap.computeIfAbsent(realm, (k) -> randomClientsIterator(realm));
    }

    public static Iterator<ClientInfo> getConfidentialClientsIterator(String realm) {
        Iterator<ClientInfo> clientsIt = getClientsIterator(realm);
        return new FilteredIterator<>(clientsIt, (v) -> RealmsConfigurationBuilder.isClientConfidential(v.index));
    }

    public static String toStringDatasetProperties() {
        return String.format(" numOfRealms: %s\n usersPerRealm: %s\n clientsPerRealm: %s\n realmRoles: %s\n realmRolesPerUser: %s\n clientRolesPerUser: %s\n clientRolesPerClient: %s\n hashIterations: %s",
                numOfRealms, usersPerRealm, clientsPerRealm, realmRoles, realmRolesPerUser, clientRolesPerUser, clientRolesPerClient, hashIterations);
    }

    public static Iterator<UserInfo> sequentialUsersIterator(final String realm) {

        return new Iterator<UserInfo>() {

            int idx = 0;

            @Override
            public boolean hasNext() {
                return true;
            }

            @Override
            public synchronized UserInfo next() {
                if (idx >= usersPerRealm) {
                    idx = 0;
                }

                String user = computeUsername(realm, idx);
                idx += 1;
                return new UserInfo(user, computePassword(user));
            }
        };
    }

    public static Iterator<UserInfo> randomUsersIterator(final String realm) {

        return new Iterator<UserInfo>() {

            @Override
            public boolean hasNext() {
                return true;
            }

            @Override
            public UserInfo next() {
                String user = computeUsername(realm, ThreadLocalRandom.current().nextInt(usersPerRealm));
                return new UserInfo(user, computePassword(user));
            }
        };
    }

    public static Iterator<ClientInfo> randomClientsIterator(final String realm) {

        return new Iterator<ClientInfo>() {

            @Override
            public boolean hasNext() {
                return true;
            }

            @Override
            public ClientInfo next() {
                int idx = ThreadLocalRandom.current().nextInt(clientsPerRealm);
                String clientId = computeClientId(realm, idx);
                String appUrl = computeAppUrl(clientId);
                return new ClientInfo(idx, clientId, computeSecret(clientId), appUrl);
            }
        };
    }

    public static Iterator<String> randomRealmsIterator() {

        return new Iterator<String>() {

            @Override
            public boolean hasNext() {
                return true;
            }

            @Override
            public String next() {
                return "realm_" + ThreadLocalRandom.current().nextInt(numOfRealms);
            }
        };
    }

    static void validateConfiguration() {
        if (realmRolesPerUser > realmRoles) {
            throw new RuntimeException("Can't have more realmRolesPerUser than there are realmRoles");
        }
        if (clientRolesPerUser > clientsPerRealm * clientRolesPerClient) {
            throw new RuntimeException("Can't have more clientRolesPerUser than there are client roles overall (clientsPerRealm * clientRolesPerClient)");
        }
    }
}

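All of the settings above are plain JVM system properties, so a run can reshape the dataset without recompiling, e.g. `-DnumOfRealms=5 -DusersPerRealm=100`. A minimal sketch of how the class is typically consumed; the demo object is hypothetical and assumed to live in the same `org.keycloak.performance` package so it can reach the package-private `validateConfiguration`:

```
package org.keycloak.performance

// Hypothetical demo, not part of the commit. "realm_0" follows the
// naming scheme used by randomRealmsIterator above.
object TestConfigDemo {
  def main(args: Array[String]): Unit = {
    TestConfig.validateConfiguration() // fails fast on an inconsistent dataset
    println(TestConfig.toStringDatasetProperties())

    val users = TestConfig.getUsersIterator("realm_0") // infinite, random order
    for (_ <- 1 to 3) {
      val u = users.next()
      println(s"${u.username} / ${u.password}")
    }
  }
}
```
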
@@ -0,0 +1,15 @@
package org.keycloak.performance;

/**
 * @author <a href="mailto:mstrukel@redhat.com">Marko Strukelj</a>
 */
public class UserInfo {

    public final String username;
    public final String password;

    UserInfo(String username, String password) {
        this.username = username;
        this.password = password;
    }
}

@@ -0,0 +1,492 @@
package org.keycloak.performance.log;

import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.nio.charset.Charset;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

/**
 * To run use the following:
 *
 *   mvn -f testsuite/integration-arquillian/tests/performance/gatling-perf exec:java -Dexec.mainClass=org.keycloak.performance.log.LogProcessor -Dexec.args="ARGUMENTS"
 *
 * @author <a href="mailto:mstrukel@redhat.com">Marko Strukelj</a>
 */
public class LogProcessor {

    static boolean INLAYED_INCLUDED = false;
    static boolean OUTLAYED_INCLUDED = false;

    File simulationLogFile;
    String lastRequestLabel;

    HashMap<String, HashMap<String, Integer>> userIterations = new HashMap<>();
    HashMap<String, Integer> currentIterations = new HashMap<>();

    /**
     * Creates a log processor that knows how to parse the following format:
     *
     *   keycloak.AdminSimulation adminsimulation RUN 1502467483145 null 2.0
     *   AdminSimulation 1829459789004783445-0 USER START 1502467483171 0
     *   AdminSimulation 1829459789004783445-0 REQUEST Console REST - Config 1502467483296 1502467483299 1502467483303 1502467483303 OK
     *   AdminSimulation 1829459789004783445-1 USER START 1502467483328 0
     *   AdminSimulation 1829459789004783445-1 REQUEST Console Home 1502467483331 1502467483335 1502467483340 1502467483340 OK
     *   AdminSimulation 1829459789004783445-2 REQUEST Console REST - realm_0/users/ID/reset-password PUT 1502467578382 1502467578382 1502467578393 1502467578393 KO status.find.is(204), but actually found 401
     *   AdminSimulation 1829459789004783445-40 REQUEST Console REST - realm_0 1502467578386 1502467578386 1502467578397 1502467578397 KO status.find.is(200), but actually found 401
     *   AdminSimulation 1829459789004783445-40 USER END 1502467487280 1502467581383
     *   AdminSimulation 1829459789004783445-43 REQUEST Console REST - realm_0/users/ID/reset-password PUT 1502467581480 1502467581480 1502467581487 1502467581487 KO status.find.is(204), but actually found 401
     *   AdminSimulation 1829459789004783445-43 USER END 1502467487581 1502467581489
     *   AdminSimulation 1829459789004783445-42 REQUEST Console REST - realm_0/users/ID/reset-password PUT 1502467582881 1502467582881 1502467582885 1502467582885 KO status.find.is(204), but actually found 401
     */
    public LogProcessor(String logFilePath) {
        simulationLogFile = new File(logFilePath);
    }

    public Stats stats() throws IOException {

        Stats stats = new Stats();

        LogReader reader = new LogReader(simulationLogFile);
        try {
            LogLine line;
            while ((line = reader.readLine()) != null) {
                if (line.type() == LogLine.Type.RUN) {
                    stats.setStartTime(line.startTime());
                } else if (line.type() == LogLine.Type.USER_START) {
                    stats.setLastUserStart(line.startTime());
                    userStarted(line.scenario(), line.userId());
                } else if (line.type() == LogLine.Type.USER_END) {
                    if (line.ok() && stats.firstUserEnd() == 0) {
                        stats.setFirstUserEnd(line.endTime());
                    }
                    stats.setLastUserEnd(line.endTime());
                    userCompleted(line.scenario(), line.userId());
                } else if (line.type() == LogLine.Type.REQUEST) {
                    String scenario = line.scenario();
                    stats.addRequest(scenario, line.request());
                    if (lastRequestLabel != null && line.request().endsWith(lastRequestLabel)) {
                        iterationCompleted(scenario, line.userId());
                        if (allUsersCompletedIteration(scenario)) {
                            advanceIteration(scenario);
                            stats.addIterationCompletedByAll(scenario, line.endTime());
                        }
                    }
                }
            }
        } finally {
            reader.close();
        }

        return stats;
    }

    private void iterationCompleted(String scenario, String userId) {
        HashMap<String, Integer> userMap = userIterations.computeIfAbsent(scenario, k -> new HashMap<>());
        int count = userMap.getOrDefault(userId, 0);
        userMap.put(userId, count + 1);
    }

    private void userStarted(String scenario, String userId) {
        HashMap<String, Integer> userMap = userIterations.computeIfAbsent(scenario, k -> new HashMap<>());
        userMap.put(userId, 0);
    }

    private void userCompleted(String scenario, String userId) {
        HashMap<String, Integer> userMap = userIterations.computeIfAbsent(scenario, k -> new HashMap<>());
        userMap.remove(userId);
    }

    private boolean allUsersCompletedIteration(String scenario) {
        HashMap<String, Integer> userMap = userIterations.computeIfAbsent(scenario, k -> new HashMap<>());
        // check if all users have reached currentIteration
        for (Integer val: userMap.values()) {
            if (val < currentIterations.getOrDefault(scenario, 1))
                return false;
        }
        return true;
    }

    private void advanceIteration(String scenario) {
        currentIterations.put(scenario, 1 + currentIterations.getOrDefault(scenario, 1));
    }

    public void copyPartialLog(PrintWriter output, long start, long end) throws IOException {

        LogReader reader = new LogReader(simulationLogFile);
        try {
            LogLine line;
            while ((line = reader.readLine()) != null) {

                if (line.type() == LogLine.Type.RUN) {
                    output.println(line.rawLine());
                    continue;
                }

                long startTime = line.startTime();
                long endTime = line.endTime();

                if (startTime >= start && startTime < end) {
                    if (OUTLAYED_INCLUDED) {
                        output.println(line.rawLine());
                    } else if (endTime < end) {
                        output.println(line.rawLine());
                    }
                } else if (INLAYED_INCLUDED && endTime >= start && endTime < end) {
                    output.println(line.rawLine());
                }
            }
        } finally {
            reader.close();
            output.flush();
        }
    }

    public void setLastRequestLabel(String lastRequestLabel) {
        this.lastRequestLabel = lastRequestLabel;
    }

    static class Stats {
        private long startTime;

        // timestamp at which rampUp is complete
        private long lastUserStart;

        // timestamp at which the first user completed the simulation
        private long firstUserEnd;

        // timestamp at which all users completed the simulation
        private long lastUserEnd;

        // timestamps of iteration completions - when all users achieved the last step of the scenario - for each scenario in the log file
        private ConcurrentHashMap<String, ArrayList<Long>> completedIterations = new ConcurrentHashMap<>();

        private LinkedHashMap<String, Set<String>> scenarioRequests = new LinkedHashMap<>();

        private HashMap<String, Integer> requestCounters = new HashMap<>();

        public void setStartTime(long startTime) {
            this.startTime = startTime;
        }

        public void setLastUserStart(long lastUserStart) {
            this.lastUserStart = lastUserStart;
        }

        public void setFirstUserEnd(long firstUserEnd) {
            this.firstUserEnd = firstUserEnd;
        }

        public long firstUserEnd() {
            return firstUserEnd;
        }

        public void setLastUserEnd(long lastUserEnd) {
            this.lastUserEnd = lastUserEnd;
        }

        public void addIterationCompletedByAll(String scenario, long time) {
            this.completedIterations.computeIfAbsent(scenario, k -> new ArrayList<>())
                    .add(time);
        }

        public void addRequest(String scenario, String request) {
            Set<String> requests = scenarioRequests.get(scenario);
            if (requests == null) {
                requests = new LinkedHashSet<>();
                scenarioRequests.put(scenario, requests);
            }
            requests.add(request);
            incrementRequestCounter(scenario, request);
        }

        public Map<String, Set<String>> requestNames() {
            return scenarioRequests;
        }

        private void incrementRequestCounter(String scenario, String requestName) {
            String key = scenario + "." + requestName;
            int count = requestCounters.getOrDefault(key, 0);
            requestCounters.put(key, count + 1);
        }

        public int requestCount(String scenario, String requestName) {
            String key = scenario + "." + requestName;
            return requestCounters.getOrDefault(key, 0);
        }
    }

    static class LogReader {

        private BufferedReader reader;

        LogReader(File file) throws FileNotFoundException {
            reader = new BufferedReader(new InputStreamReader(new FileInputStream(file), Charset.forName("utf-8")));
        }

        LogLine readLine() throws IOException {
            String line = reader.readLine();
            return line != null ? new LogLine(line) : null;
        }

        void close() throws IOException {
            reader.close();
        }
    }

    static class LogLine {

        private String rawLine;
        private Type type;
        private String scenario;
        private String userId;
        private String request;
        private long start = -1;
        private long end = -1;
        private boolean ok;

        LogLine(String line) {
            rawLine = line;
        }

        String rawLine() {
            return rawLine;
        }

        Type type() {
            return type != null ? type : parse().type;
        }

        long startTime() {
            return type != null ? start : parse().start;
        }

        long endTime() {
            return type != null ? end : parse().end;
        }

        String scenario() {
            return type != null ? scenario : parse().scenario;
        }

        String userId() {
            return type != null ? userId : parse().userId;
        }

        String request() {
            return type != null ? request : parse().request;
        }

        long logTime() {
            if (type == null) {
                parse();
            }
            return type == Type.RUN || type == Type.USER_START ? start : end;
        }

        boolean ok() {
            if (type == null) {
                parse();
            }
            return type != null ? ok : parse().ok;
        }

        LogLine parse() {
            String[] cols = rawLine.split("\\t");

            if ("RUN".equals(cols[2])) {
                type = Type.RUN;
                start = Long.parseLong(cols[3]);
            } else if ("REQUEST".equals(cols[2])) {
                type = Type.REQUEST;
                scenario = cols[0];
                userId = cols[1];
                request = cols[4];
                start = Long.parseLong(cols[5]);
                end = Long.parseLong(cols[8]);
                ok = "OK".equals(cols[9]);
            } else if ("USER".equals(cols[2])) {
                if ("START".equals(cols[3])) {
                    type = Type.USER_START;
                } else if ("END".equals(cols[3])) {
                    type = Type.USER_END;
                } else {
                    throw new RuntimeException("Unknown log entry type: USER " + cols[3]);
                }
                scenario = cols[0];
                userId = cols[1];
                start = Long.parseLong(cols[4]);
                end = Long.parseLong(cols[5]);
            } else {
                // note: the unknown token is the type column itself (cols[2])
                throw new RuntimeException("Unknown log entry type: " + cols[2]);
            }

            return this;
        }

        enum Type {
            RUN,
            REQUEST,
            USER_START,
            USER_END
        }
    }

    public static void main(String[] args) {
        if (args == null || args.length == 0) {
            printHelp();
            System.exit(1);
        }

        boolean debug = false;
        boolean help = false;
        String inFile = null;
        boolean performStat = false;
        boolean performExtract = false;
        String outFile = null;
        long startMillis = -1;
        long endMillis = -1;
        String lastRequestLabel = null;

        try {
            // parse arguments
            int i = 0;
            for (i = 0; i < args.length; i++) {
                String arg = args[i];
                switch (arg) {
                    case "-X":
                        debug = true;
                        break;
                    case "-f":
                    case "--file":
                        if (i == args.length - 1) {
                            throw new RuntimeException("Argument " + arg + " requires a FILE");
                        }
                        inFile = args[++i];
                        break;
                    case "-s":
                    case "--stat":
                        performStat = true;
                        break;
                    case "-e":
                    case "--extract":
                        performExtract = true;
                        break;
                    case "-o":
                    case "--out":
                        if (i == args.length - 1) {
                            throw new RuntimeException("Argument " + arg + " requires a FILE");
                        }
                        outFile = args[++i];
                        break;
                    case "--start":
                        if (i == args.length - 1) {
                            throw new RuntimeException("Argument " + arg + " requires a timestamp in milliseconds");
                        }
                        startMillis = Long.valueOf(args[++i]);
                        break;
                    case "--end":
                        if (i == args.length - 1) {
                            throw new RuntimeException("Argument " + arg + " requires a timestamp in milliseconds");
                        }
                        endMillis = Long.valueOf(args[++i]);
                        break;
                    case "--lastRequest":
                        if (i == args.length - 1) {
                            throw new RuntimeException("Argument " + arg + " requires a LABEL");
                        }
                        lastRequestLabel = args[++i];
                        break;
                    case "--help":
                        help = true;
                        break;
                    default:
                        throw new RuntimeException("Unknown argument: " + arg);
                }
            }

            if (help) {
                printHelp();
                System.exit(0);
            }

            if (inFile == null) {
                throw new RuntimeException("No path to simulation.log file specified. Use -f FILE, or --help to see more help.");
            }

            LogProcessor proc = new LogProcessor(inFile);
            proc.setLastRequestLabel(lastRequestLabel);

            if (performStat) {
                Stats stats = proc.stats();
                // print out the results
                System.out.println("Start time: " + stats.startTime);
                System.out.println("End time: " + stats.lastUserEnd);
                System.out.println("Duration (ms): " + (stats.lastUserEnd - stats.startTime));
                System.out.println("Ramping up completes at: " + stats.lastUserStart);
                System.out.println("Ramping down starts at: " + stats.firstUserEnd);
                System.out.println();

                System.out.println("HTTP Requests:");
                for (Map.Entry<String, Set<String>> scenario: stats.requestNames().entrySet()) {
                    for (String name: scenario.getValue()) {
                        System.out.println("  [" + scenario.getKey() + "]\t" + name + "\t" + stats.requestCount(scenario.getKey(), name));
                    }
                }
                System.out.println();

                System.out.println("Times of completed iterations:");
                for (Map.Entry<String, ArrayList<Long>> ent: stats.completedIterations.entrySet()) {
                    System.out.println("  " + ent.getKey() + ": " + ent.getValue());
                }
            }
            if (performExtract) {
                if (outFile == null) {
                    throw new RuntimeException("No output file specified for extraction results. Use -o FILE, or --help to see more help.");
                }
                PrintWriter out = new PrintWriter(new OutputStreamWriter(new FileOutputStream(outFile), "utf-8"));
                proc.copyPartialLog(out, startMillis, endMillis);
            }
            if (!performStat && !performExtract) {
                throw new RuntimeException("Nothing to do. Use -s to analyze simulation log, -e to perform time based extraction, or --help to see more help.");
            }
        } catch (Throwable t) {
            System.out.println(t.getMessage());
            if (debug) {
                t.printStackTrace();
            }
            System.exit(1);
        }
    }

    public static void printHelp() {
        System.out.println("Usage: java org.keycloak.performance.log.LogProcessor ARGUMENTS");
        System.out.println();
        System.out.println("ARGUMENTS:");
        System.out.println("  -f, --file FILE        Path to the simulation.log file");
        System.out.println("  -s, --stat             Perform analysis of the log and output some stats");
        System.out.println("  -e, --extract          Extract a time-delimited portion of the simulation log");
        System.out.println("  -o, --out FILE         Output file that will contain the extracted portion of the log");
        System.out.println("  --start MILLIS         Timestamp at which to start extracting");
        System.out.println("  --end MILLIS           Timestamp at which to stop extracting");
        System.out.println("  --lastRequest LABEL    Label of the last request in the iteration");
        System.out.println("  -X                     Output a detailed error when something goes wrong");
        System.out.println();
    }
}

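The same parsing logic can be driven programmatically rather than through `main`. A hypothetical sketch; the path and the output file are illustrative, and the epoch-millis window reuses the RUN and last USER END timestamps from the sample log in the javadoc above:

```
package org.keycloak.performance.log

import java.io.{FileOutputStream, OutputStreamWriter, PrintWriter}

// Hypothetical driver, not part of the commit: copies a fixed time window
// out of a simulation log into a new file.
object LogProcessorDemo {
  def main(args: Array[String]): Unit = {
    val proc = new LogProcessor("/tmp/simulation.log")   // example path
    proc.setLastRequestLabel("Console Home")             // label of the scenario's last request
    val out = new PrintWriter(new OutputStreamWriter(new FileOutputStream("/tmp/window.log"), "utf-8"))
    try proc.copyPartialLog(out, 1502467483145L, 1502467581489L)
    finally out.close()
  }
}
```
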
@@ -0,0 +1,50 @@
package org.keycloak.performance.util;

import java.util.Iterator;
import java.util.NoSuchElementException;
import java.util.function.Predicate;

/**
 * @author <a href="mailto:mstrukel@redhat.com">Marko Strukelj</a>
 */
public class FilteredIterator<T> implements Iterator<T> {

    private final Iterator<T> delegate;
    private final Predicate<T> filter;
    private T next;

    public FilteredIterator(Iterator<T> delegate, Predicate<T> filter) {
        this.delegate = delegate;
        this.filter = filter;
    }

    @Override
    public synchronized boolean hasNext() {
        // check if a matching element is already buffered
        if (next != null) {
            return true;
        }
        if (delegate.hasNext()) {
            T v = delegate.next();
            if (filter.test(v)) {
                next = v;
                return true;
            } else {
                // for infinite iterators it's up to the provided 'filter' to make sure looping is not infinite
                // otherwise this may result in StackOverflowError
                return hasNext();
            }
        }
        return false;
    }

    @Override
    public synchronized T next() {
        if (hasNext()) {
            T v = next;
            next = null;
            return v;
        }
        throw new NoSuchElementException();
    }
}

@@ -0,0 +1,31 @@
package org.keycloak.performance.util;

import java.util.Collection;
import java.util.Iterator;

/**
 * @author <a href="mailto:mstrukel@redhat.com">Marko Strukelj</a>
 */
public class LoopingIterator<T> implements Iterator<T> {

    private Collection<T> collection;
    private Iterator<T> it;

    public LoopingIterator(Collection<T> collection) {
        this.collection = collection;
        it = collection.iterator();
    }

    @Override
    public synchronized boolean hasNext() {
        return true;
    }

    @Override
    public synchronized T next() {
        if (!it.hasNext()) {
            it = collection.iterator();
        }
        return it.next();
    }
}

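These two iterators are meant to be composed: `LoopingIterator` turns a finite collection into an endless stream and `FilteredIterator` narrows it, which is how `TestConfig.getConfidentialClientsIterator` builds its client stream. A self-contained sketch (hypothetical demo, illustrative data):

```
import java.util.Arrays
import java.util.function.Predicate
import org.keycloak.performance.util.{FilteredIterator, LoopingIterator}

// Hypothetical demo, not part of the commit.
object IteratorDemo {
  def main(args: Array[String]): Unit = {
    val loop = new LoopingIterator[String](Arrays.asList("alice", "bob", "carol"))
    // Keep only 5-letter names. The filter must keep matching eventually,
    // otherwise FilteredIterator.hasNext() recurses forever on an infinite stream.
    val fiveLetter = new FilteredIterator[String](loop, new Predicate[String] {
      override def test(v: String): Boolean = v.length == 5
    })
    for (_ <- 1 to 4) println(fiveLetter.next()) // alice, carol, alice, carol
  }
}
```
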
@@ -0,0 +1,51 @@
package org.jboss.perf.util

trait Invalidatable[T] {
  def invalidate(): Boolean
  def apply(): T
}

/**
 * Extends {@link RandomContainer} by storing {@link Invalidatable} entries -
 * when adding an entry an {@link Invalidatable} reference is returned, which could be
 * later removed from the collection in O(1).
 *
 * @author Radim Vansa <rvansa@redhat.com>
 */
class InvalidatableRandomContainer[T >: Null <: AnyRef : Manifest](cap: Int = 16)
  extends RandomContainer[Invalidatable[T]](IndexedSeq(), cap) {

  def add(elem: T): Invalidatable[T] = {
    val entry = new Entry(elem)
    this += entry
    entry
  }

  private def remove(entry: Entry): Boolean = this.synchronized {
    if (entry.getPos >= 0) {
      removeInternal(entry.getPos)
      true
    } else false
  }

  protected override def move(elem: Invalidatable[T], from: Int, to: Int): Unit = {
    elem.asInstanceOf[Entry].updatePos(to)
  }

  private class Entry(value: T) extends Invalidatable[T] {
    private var pos: Int = 0

    override def invalidate(): Boolean = {
      remove(this)
    }

    override def apply() = value

    private[InvalidatableRandomContainer] def updatePos(pos: Int): Unit = {
      this.pos = pos
    }

    private[InvalidatableRandomContainer] def getPos = pos
  }

}

@@ -0,0 +1,92 @@
package org.jboss.perf.util

import scala.util.Random

/**
 * Allows O(1) random removals and also adding in O(1), though these
 * operations can be blocked by another thread.
 * {@link #size} is non-blocking.
 *
 * Does not implement any collection interface/trait for simplicity reasons.
 *
 * @author Radim Vansa <rvansa@redhat.com>
 */
class RandomContainer[T >: Null <: AnyRef : Manifest](
  seq: IndexedSeq[T],
  cap: Int = 0
) {
  private var data = new Array[T](if (cap > 0) cap else seq.length * 2)
  private var takeIndex = 0
  private var putIndex = seq.length
  @volatile private var count = seq.length // volatile as we want to read size without lock

  {
    var pos = 0
    for (e <- seq) {
      data(pos) = e
      pos = pos + 1
    }
  }

  def +=(elem: T): RandomContainer[T] = this.synchronized {
    if (count == data.length) {
      val tmp = new Array[T](data.length * 2)
      for (i <- 0 until data.length) {
        tmp(i) = data(i)
      }
      tmp(data.length) = elem
      move(elem, -1, data.length)
      putIndex = data.length + 1
      takeIndex = 0
      data = tmp
    } else {
      data(putIndex) = elem
      move(elem, -1, putIndex)
      putIndex = (putIndex + 1) % data.length
    }
    count += 1
    this
  }

  /**
   * Executed under lock, allows tracking the position of an element.
   *
   * @param elem
   * @param from
   * @param to
   */
  protected def move(elem: T, from: Int, to: Int): Unit = {}

  def removeRandom(random: Random): T = this.synchronized {
    if (count == 0) {
      return null
    }
    removeInternal((takeIndex + random.nextInt(count)) % data.length)
  }

  protected def removeInternal(index: Int): T = {
    assert(count > 0)
    count -= 1
    if (index == takeIndex) {
      val elem = data(takeIndex)
      assert(elem != null)
      move(elem, takeIndex, -1)
      data(takeIndex) = null
      takeIndex = (takeIndex + 1) % data.length
      return elem
    } else {
      val elem = data(index)
      assert(elem != null)
      move(elem, index, -1)
      val moved = data(takeIndex)
      assert(moved != null)
      data(index) = moved
      move(moved, takeIndex, index)
      data(takeIndex) = null // unnecessary
      takeIndex = (takeIndex + 1) % data.length
      return elem
    }
  }

  def size() = count
}

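A sketch of how the pair of containers above behaves: `add` hands back an `Invalidatable` that can drop its own entry in O(1), while `removeRandom` picks uniformly among the live entries (hypothetical demo, illustrative values):

```
import org.jboss.perf.util.InvalidatableRandomContainer
import scala.util.Random

// Hypothetical demo, not part of the commit.
object ContainerDemo {
  def main(args: Array[String]): Unit = {
    val container = new InvalidatableRandomContainer[String]()
    val a = container.add("session-a") // keep the handle for O(1) removal
    container.add("session-b")
    container.add("session-c")

    a.invalidate()                                      // removes "session-a" in O(1)
    val picked = container.removeRandom(new Random(42)) // uniform pick among b and c
    println(s"picked ${picked()}, ${container.size()} left")
  }
}
```
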
@@ -0,0 +1,24 @@
package org.jboss.perf.util

import java.util.UUID

import scala.concurrent.forkjoin.ThreadLocalRandom
import scala.util.Random

/**
 * @author Radim Vansa <rvansa@redhat.com>
 */
object Util {
  val random = new Random(1234) // keep fixed seed

  def randomString(length: Int, rand: Random = ThreadLocalRandom.current()): String = {
    val sb = new StringBuilder
    for (i <- 0 until length) {
      sb.append((rand.nextInt(26) + 'a').toChar)
    }
    sb.toString()
  }

  def randomUUID(rand: Random = ThreadLocalRandom.current()): String =
    new UUID(rand.nextLong(), rand.nextLong()).toString
}

@@ -0,0 +1,123 @@
package org.keycloak.gatling

import java.text.SimpleDateFormat
import java.util.{Collections, Date}

import akka.actor.ActorDSL.actor
import akka.actor.ActorRef
import io.gatling.core.action.Interruptable
import io.gatling.core.action.builder.ActionBuilder
import io.gatling.core.config.Protocols
import io.gatling.core.result.writer.DataWriterClient
import io.gatling.core.session._
import io.gatling.core.validation._
import org.jboss.logging.Logger
import org.keycloak.adapters.KeycloakDeploymentBuilder
import org.keycloak.adapters.spi.AuthOutcome
import org.keycloak.adapters.spi.HttpFacade.Cookie
import org.keycloak.common.enums.SslRequired
import org.keycloak.representations.adapters.config.AdapterConfig

import scala.collection.JavaConverters._

case class AuthorizeAttributes(
  requestName: Expression[String],
  uri: Expression[String],
  cookies: Expression[List[Cookie]],
  sslRequired: SslRequired = SslRequired.EXTERNAL,
  resource: Option[Expression[String]] = None,
  secret: Option[Expression[String]] = None,
  isPublic: Option[Expression[Boolean]] = None,
  realm: Option[Expression[String]] = None,
  realmKey: Option[String] = None,
  authServerUrl: Expression[String] = _ => Failure("no server url")
) {
  def toAdapterConfig(session: Session) = {
    val adapterConfig = new AdapterConfig
    adapterConfig.setSslRequired(sslRequired.toString)

    adapterConfig.setResource(resource match {
      case Some(expr) => expr(session).get
      case None => null
    })
    adapterConfig.setPublicClient(isPublic match {
      case Some(expr) => expr(session).get
      case None => false
    })
    adapterConfig.setCredentials(secret match {
      case Some(expr) => Collections.singletonMap("secret", expr(session).get)
      case None => null
    })
    adapterConfig.setRealm(realm match {
      case Some(expr) => expr(session).get
      case None => null
    })
    adapterConfig.setRealmKey(realmKey match {
      case Some(key) => key
      case None => null
    })
    adapterConfig.setAuthServerUrl(authServerUrl(session).get)

    adapterConfig
  }
}

class AuthorizeActionBuilder(attributes: AuthorizeAttributes) extends ActionBuilder {
  def newInstance(attributes: AuthorizeAttributes) = new AuthorizeActionBuilder(attributes)

  def sslRequired(sslRequired: SslRequired) = newInstance(attributes.copy(sslRequired = sslRequired))
  def resource(resource: Expression[String]) = newInstance(attributes.copy(resource = Option(resource)))
  def clientCredentials(secret: Expression[String]) = newInstance(attributes.copy(secret = Option(secret)))
  def publicClient(isPublic: Expression[Boolean]) = newInstance(attributes.copy(isPublic = Option(isPublic)))
  def realm(realm: Expression[String]) = newInstance(attributes.copy(realm = Option(realm)))
  def realmKey(realmKey: String) = newInstance(attributes.copy(realmKey = Option(realmKey)))
  def authServerUrl(authServerUrl: Expression[String]) = newInstance(attributes.copy(authServerUrl = authServerUrl))

  override def build(next: ActorRef, protocols: Protocols): ActorRef = {
    actor(actorName("authorize"))(new AuthorizeAction(attributes, next))
  }
}

object AuthorizeAction {
  val logger = Logger.getLogger(classOf[AuthorizeAction])

  def init(session: Session): Session = {
    session.remove(MockRequestAuthenticator.KEY)
  }
}

class AuthorizeAction(
  attributes: AuthorizeAttributes,
  val next: ActorRef
) extends Interruptable with ExitOnFailure with DataWriterClient {
  override def executeOrFail(session: Session): Validation[_] = {
    val facade = new MockHttpFacade()
    val deployment = KeycloakDeploymentBuilder.build(attributes.toAdapterConfig(session))
    val url = attributes.uri(session).get
    facade.request.setURI(if (attributes.isPublic.isDefined && attributes.isPublic.get(session).get) rewriteFragmentToQuery(url) else url)
    facade.request.setCookies(attributes.cookies(session).get.map(c => (c.getName, c)).toMap.asJava)
    var nextSession = session
    val requestAuth: MockRequestAuthenticator = session(MockRequestAuthenticator.KEY).asOption[MockRequestAuthenticator] match {
      case Some(ra) => ra
      case None =>
        val tmp = new MockRequestAuthenticator(facade, deployment, new MockTokenStore, -1, session.userId)
        nextSession = session.set(MockRequestAuthenticator.KEY, tmp)
        tmp
    }

    Blocking(() => {
      AuthorizeAction.logger.debugf("%s: Authenticating %s%n", new SimpleDateFormat("HH:mm:ss,SSS").format(new Date()).asInstanceOf[Any], session("username").as[Any], Unit)
      Stopwatch(() => requestAuth.authenticate())
        .check(result => result == AuthOutcome.AUTHENTICATED, result => {
          AuthorizeAction.logger.warnf("%s: Failed auth %s%n", new SimpleDateFormat("HH:mm:ss,SSS").format(new Date()).asInstanceOf[Any], session("username").as[Any], Unit)
          "AuthorizeAction: authenticate() failed with status: " + result.toString
        })
        .recordAndContinue(this, nextSession, attributes.requestName(session).get)
    })
  }

  def rewriteFragmentToQuery(str: String): String = {
    str.replaceFirst("#", "?")
  }
}

@@ -0,0 +1,34 @@
package org.keycloak.gatling

import java.util.concurrent.atomic.AtomicInteger
import java.util.concurrent.{Executors, ThreadFactory}

import io.gatling.core.akka.GatlingActorSystem
import io.gatling.core.validation.Success

/**
 * @author Radim Vansa <rvansa@redhat.com>
 */
object Blocking {
  GatlingActorSystem.instance.registerOnTermination(() => shutdown())

  private val threadPool = Executors.newCachedThreadPool(new ThreadFactory {
    val counter = new AtomicInteger()

    override def newThread(r: Runnable): Thread =
      new Thread(r, "blocking-thread-" + counter.incrementAndGet())
  })

  def apply(f: () => Unit) = {
    threadPool.execute(new Runnable() {
      override def run = {
        f()
      }
    })
    Success(())
  }

  def shutdown() = {
    threadPool.shutdownNow()
  }
}

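`Blocking` exists because the adapter calls wrapped by the actions in this package do blocking I/O, which must not run on Gatling's actor threads; `apply` returns `Success(())` immediately and runs the body on a cached pool. A sketch (hypothetical demo; it assumes Gatling's actor system has already been started, since the object's initializer registers a termination hook on it):

```
import org.keycloak.gatling.Blocking

// Hypothetical demo, not part of the commit.
object BlockingDemo {
  def run(): Unit = {
    val validation = Blocking(() => {
      Thread.sleep(100) // stands in for the adapter's blocking HTTP call
      println("done on " + Thread.currentThread().getName) // blocking-thread-N
    })
    println(validation) // Success(()) - returned immediately, work runs async
  }
}
```
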
@@ -0,0 +1,20 @@
package org.keycloak.gatling

import io.gatling.core.action.{Chainable, UserEnd}
import io.gatling.core.session.Session
import io.gatling.core.validation.Validation

/**
 * @author Radim Vansa <rvansa@redhat.com>
 */
trait ExitOnFailure extends Chainable {
  override def execute(session: Session): Unit = {
    executeOrFail(session).onFailure { message =>
      logger.error(s"'${self.path.name}' failed to execute: $message")
      UserEnd.instance ! session.markAsFailed
    }
  }

  def executeOrFail(session: Session): Validation[_]
}

@@ -0,0 +1,12 @@
package org.keycloak.gatling

import io.gatling.core.session._
import org.keycloak.adapters.spi.HttpFacade.Cookie

/**
 * @author Radim Vansa <rvansa@redhat.com>
 */
case class Oauth(requestName: Expression[String]) {
  def authorize(uri: Expression[String], cookies: Expression[List[Cookie]]) = new AuthorizeActionBuilder(new AuthorizeAttributes(requestName, uri, cookies))
  def refresh() = new RefreshTokenActionBuilder(requestName)
}

@@ -0,0 +1,10 @@
package org.keycloak.gatling

import io.gatling.core.session.Expression

/**
 * @author Radim Vansa <rvansa@redhat.com>
 */
object Predef {
  def oauth(requestName: Expression[String]) = new Oauth(requestName)
}

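`Predef.oauth` is the entry point that surfaces the builders above inside a Gatling simulation. A hypothetical fragment (client id, secret, realm, URL, and session keys are all placeholders; the string-to-Expression conversion comes from `io.gatling.core.Predef`):

```
import io.gatling.core.Predef._
import org.keycloak.adapters.spi.HttpFacade.Cookie
import org.keycloak.gatling.Predef._

// Hypothetical sketch, not part of the commit: "${loginUri}" and the
// "cookies" attribute must have been put into the session by earlier steps.
class OauthDemoSimulation extends Simulation {
  val authorizeAndRefresh = exec(oauth("authorize")
      .authorize("${loginUri}", session => session("cookies").validate[List[Cookie]])
      .resource("example-client")
      .clientCredentials("example-secret")
      .realm("example-realm")
      .authServerUrl("http://localhost:8080/auth"))
    .exec(oauth("refresh").refresh())

  setUp(scenario("oauth-demo").exec(authorizeAndRefresh).inject(atOnceUsers(1)))
}
```
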
@@ -0,0 +1,33 @@
package org.keycloak.gatling

import akka.actor.ActorDSL._
import akka.actor.ActorRef
import io.gatling.core.action.Interruptable
import io.gatling.core.action.builder.ActionBuilder
import io.gatling.core.config.Protocols
import io.gatling.core.result.writer.DataWriterClient
import io.gatling.core.session.{Expression, Session}
import io.gatling.core.validation.Validation

/**
 * @author Radim Vansa <rvansa@redhat.com>
 */
class RefreshTokenActionBuilder(requestName: Expression[String]) extends ActionBuilder {
  override def build(next: ActorRef, protocols: Protocols): ActorRef = {
    actor(actorName("refresh-token"))(new RefreshTokenAction(requestName, next))
  }
}

class RefreshTokenAction(
  requestName: Expression[String],
  val next: ActorRef
) extends Interruptable with ExitOnFailure with DataWriterClient {
  override def executeOrFail(session: Session): Validation[_] = {
    val requestAuth: MockRequestAuthenticator = session(MockRequestAuthenticator.KEY).as[MockRequestAuthenticator]
    Blocking(() =>
      Stopwatch(() => requestAuth.getKeycloakSecurityContext.refreshExpiredToken(false))
        .check(identity, _ => "RefreshTokenAction: refreshToken() failed")
        .recordAndContinue(this, session, requestName(session).get)
    )
  }
}

@@ -0,0 +1,106 @@
package org.keycloak.gatling

import com.typesafe.scalalogging.StrictLogging
import io.gatling.core.action.{Chainable, UserEnd}
import io.gatling.core.akka.GatlingActorSystem
import io.gatling.core.result.message.{KO, OK, Status}
import io.gatling.core.result.writer.DataWriterClient
import io.gatling.core.session.Session
import io.gatling.core.util.TimeHelper
import io.gatling.core.validation.{Failure, Success, Validation}

/**
 * @author Radim Vansa <rvansa@redhat.com>
 */
object Stopwatch extends StrictLogging {
  @volatile var recording: Boolean = true
  // stop recording once the actor system shuts down
  GatlingActorSystem.instance.registerOnTermination(() => recording = false)

  def apply[T](f: () => T): Result[T] = {
    val start = TimeHelper.nowMillis
    try {
      val result = f()
      Result(Success(result), OK, start, TimeHelper.nowMillis, false)
    } catch {
      case ie: InterruptedException => {
        Result(Failure("Interrupted"), KO, start, start, true)
      }
      case e: Throwable => {
        Stopwatch.log.error("Operation failed with exception", e)
        Result(Failure(e.toString), KO, start, TimeHelper.nowMillis, false)
      }
    }
  }

  def log = logger
}

case class Result[T](
  value: Validation[T],
  status: Status,
  startTime: Long,
  endTime: Long,
  interrupted: Boolean
) {
  def check(check: T => Boolean, fail: T => String): Result[T] = {
    value match {
      case Success(v) =>
        if (!check(v)) {
          Result(Failure(fail(v)), KO, startTime, endTime, interrupted)
        } else {
          this
        }
      case _ => this
    }
  }

  def isSuccess =
    value match {
      case Success(_) => true
      case _ => false
    }

  private def record(client: DataWriterClient, session: Session, name: String): Validation[T] = {
    if (!interrupted && Stopwatch.recording) {
      val msg = value match {
        case Failure(m) => Some(m)
        case _ => None
      }
      client.writeRequestData(session, name, startTime, startTime, endTime, endTime, status, msg)
    }
    value
  }

  def recordAndStopOnFailure(client: DataWriterClient with Chainable, session: Session, name: String): Validation[T] = {
    val validation = record(client, session, name)
    validation.onFailure(message => {
      Stopwatch.log.error(s"'${client.self.path.name}', ${session.userId} failed to execute: $message")
      UserEnd.instance ! session.markAsFailed
    })
    validation
  }

  def recordAndContinue(client: DataWriterClient with Chainable, session: Session, name: String): Unit = {
    // can't specify the follow function as a default arg since it uses another parameter
    recordAndContinue(client, session, name, _ => session)
  }

  def recordAndContinue(client: DataWriterClient with Chainable, session: Session, name: String, follow: T => Session): Unit = {
    // 'follow' intentionally does not get the session as arg, since the caller site already has the reference
    record(client, session, name) match {
      case Success(value) => try {
        client.next ! follow(value)
      } catch {
        case t: Throwable => {
          Stopwatch.log.error(s"'${client.self.path.name}' failed processing", t)
          UserEnd.instance ! session.markAsFailed
        }
      }
      case Failure(message) => {
        Stopwatch.log.error(s"'${client.self.path.name}' failed to execute: $message")
        UserEnd.instance ! session.markAsFailed
      }
    }
  }
}

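`Result` is a small railway-style wrapper: `check` downgrades a successful measurement to KO without throwing, and the `record*` helpers then feed Gatling's DataWriter. A sketch of the measure-then-check half (hypothetical demo; it assumes a running Gatling actor system, since `Stopwatch`'s initializer registers a termination hook):

```
import org.keycloak.gatling.Stopwatch

// Hypothetical demo, not part of the commit.
object StopwatchDemo {
  def run(): Unit = {
    // Time an arbitrary computation; the check turns a "wrong" value into KO.
    val result = Stopwatch(() => 6 * 7)
      .check(v => v == 42, v => s"unexpected value: $v")
    println(s"ok=${result.isSuccess} took=${result.endTime - result.startTime}ms")
  }
}
```
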
@@ -0,0 +1,18 @@
package org.keycloak.gatling

import java.net.URLEncoder

/**
 * @author <a href="mailto:mstrukel@redhat.com">Marko Strukelj</a>
 */
object Utils {

  def urlencode(url: String) = {
    URLEncoder.encode(url, "utf-8")
  }

  def urlEncodedRoot(url: String) = {
    URLEncoder.encode(url.split("/auth")(0), "utf-8")
  }

}

@@ -0,0 +1,9 @@
####################################
#      Akka Actor Config File      #
####################################

akka {
  scheduler {
    tick-duration = 50ms
  }
}

@@ -0,0 +1,11 @@
username,password
user1,password1
user2,password2
user3,password3
user4,password4
user5,password5
user6,password6
user7,password7
user8,password8
user9,password9
user10,password10

@@ -0,0 +1,11 @@
username,password,account_id
user1,password1,1
user2,password2,6
user3,password3,9
user4,password4,12
user5,password5,15
user6,password6,18
user7,password7,21
user8,password8,24
user9,password9,27
user10,password10,30

161
testsuite/performance/tests/src/test/resources/gatling.conf
Normal file

@@ -0,0 +1,161 @@
#########################
# Gatling Configuration #
#########################

# This file contains all the settings configurable for Gatling with their default values

gatling {
  core {
    #outputDirectoryBaseName = ""   # The prefix for each simulation result folder (then suffixed by the report generation timestamp)
    #runDescription = ""            # The description for this simulation run, displayed in each report
    #encoding = "utf-8"             # Encoding to use throughout Gatling for file and string manipulation
    #simulationClass = ""           # The FQCN of the simulation to run (when used in conjunction with noReports, the simulation for which assertions will be validated)
    #mute = false                   # When set to true, don't ask for simulation name nor run description (currently only used by Gatling SBT plugin)

    extract {
      regex {
        #cacheMaxCapacity = 200     # Cache size for the compiled regexes, set to 0 to disable caching
      }
      xpath {
        #cacheMaxCapacity = 200     # Cache size for the compiled XPath queries, set to 0 to disable caching
      }
      jsonPath {
        #cacheMaxCapacity = 200     # Cache size for the compiled jsonPath queries, set to 0 to disable caching
        #preferJackson = false      # When set to true, prefer Jackson over Boon for JSON-related operations
        jackson {
          #allowComments = false            # Allow comments in JSON files
          #allowUnquotedFieldNames = false  # Allow unquoted JSON field names
          #allowSingleQuotes = false        # Allow single quoted JSON field names
        }

      }
      css {
        #cacheMaxCapacity = 200     # Cache size for the compiled CSS selectors queries, set to 0 to disable caching
      }
    }

    timeOut {
      #simulation = 8640000         # Absolute timeout, in seconds, of a simulation
    }
    directory {
      #data = src/test/resource/data        # Folder where user's data (e.g. files used by Feeders) is located
      #bodies = src/test/resource/bodies    # Folder where bodies are located
      #simulations = user-src/test/java/simulations  # Folder where the bundle's simulations are located
      #reportsOnly = ""             # If set, name of report folder to look for in order to generate its report
      #binaries = ""                # If set, name of the folder where compiled classes are located. Defaults to GATLING_HOME/target.

      # this property will be replaced with the absolute path to "target/test-classes" by the maven resources plugin
      binaries = ${project.build.testOutputDirectory}

      #results = results            # Name of the folder where all reports folder are located
    }
  }
  charting {
    #noReports = false              # When set to true, don't generate HTML reports
    #maxPlotPerSeries = 1000        # Number of points per graph in Gatling reports
    #accuracy = 10                  # Accuracy, in milliseconds, of the report's stats
    indicators {
      #lowerBound = 800             # Lower bound for the requests' response time to track in the reports and the console summary
      #higherBound = 1200           # Higher bound for the requests' response time to track in the reports and the console summary
      #percentile1 = 50             # Value for the 1st percentile to track in the reports, the console summary and GraphiteDataWriter
      #percentile2 = 75             # Value for the 2nd percentile to track in the reports, the console summary and GraphiteDataWriter
      #percentile3 = 95             # Value for the 3rd percentile to track in the reports, the console summary and GraphiteDataWriter
      #percentile4 = 99             # Value for the 4th percentile to track in the reports, the console summary and GraphiteDataWriter
    }
  }
  http {
    #elFileBodiesCacheMaxCapacity = 200        # Cache size for request body EL templates, set to 0 to disable
    #rawFileBodiesCacheMaxCapacity = 200       # Cache size for request body Raw templates, set to 0 to disable
    #fetchedCssCacheMaxCapacity = 200          # Cache size for CSS parsed content, set to 0 to disable
    #fetchedHtmlCacheMaxCapacity = 200         # Cache size for HTML parsed content, set to 0 to disable
    #redirectPerUserCacheMaxCapacity = 200     # Per virtual user cache size for permanent redirects, set to 0 to disable
    #expirePerUserCacheMaxCapacity = 200       # Per virtual user cache size for permanent 'Expire' headers, set to 0 to disable
    #lastModifiedPerUserCacheMaxCapacity = 200 # Per virtual user cache size for permanent 'Last-Modified' headers, set to 0 to disable
    #etagPerUserCacheMaxCapacity = 200         # Per virtual user cache size for permanent ETag headers, set to 0 to disable
    #warmUpUrl = "http://gatling.io"           # The URL to use to warm-up the HTTP stack (blank means disabled)
    #enableGA = true                           # Very light Google Analytics, please support
    ssl {
      trustStore {
        #type = ""                  # Type of SSLContext's TrustManagers store
        #file = ""                  # Location of SSLContext's TrustManagers store
        #password = ""              # Password for SSLContext's TrustManagers store
        #algorithm = ""             # Algorithm used by SSLContext's TrustManagers store
      }
      keyStore {
        #type = ""                  # Type of SSLContext's KeyManagers store
        #file = ""                  # Location of SSLContext's KeyManagers store
        #password = ""              # Password for SSLContext's KeyManagers store
        #algorithm = ""             # Algorithm used by SSLContext's KeyManagers store
      }
    }
    ahc {
      #allowPoolingConnections = true      # Allow pooling HTTP connections (keep-alive header automatically added)
      #allowPoolingSslConnections = true   # Allow pooling HTTPS connections (keep-alive header automatically added)
      #compressionEnforced = false         # Enforce gzip/deflate when Accept-Encoding header is not defined
      #connectTimeout = 60000              # Timeout when establishing a connection
      #pooledConnectionIdleTimeout = 60000 # Timeout when a connection stays unused in the pool
      #readTimeout = 60000                 # Timeout when a used connection stays idle
      #connectionTTL = -1                  # Max duration a connection can stay open (-1 means no limit)
      #ioThreadMultiplier = 2              # Number of Netty worker threads per core
      #maxConnectionsPerHost = -1          # Max number of connections per host (-1 means no limit)
      #maxConnections = -1                 # Max number of connections (-1 means no limit)
      #maxRetry = 2                        # Number of times that a request should be tried again
      #requestTimeout = 60000              # Timeout of the requests
      #useProxyProperties = false          # When set to true, supports standard Proxy System properties
      #webSocketTimeout = 60000            # Timeout when a used websocket connection stays idle
      #useRelativeURIsWithConnectProxies = true # When set to true, use relative URIs when talking with an SSL proxy or a WebSocket proxy
      #acceptAnyCertificate = true         # When set to true, doesn't validate SSL certificates
      #httpClientCodecMaxInitialLineLength = 4096 # Maximum length of the initial line of the response (e.g. "HTTP/1.0 200 OK")
      #httpClientCodecMaxHeaderSize = 8192 # Maximum size, in bytes, of each request's headers
      #httpClientCodecMaxChunkSize = 8192  # Maximum length of the content or each chunk
      #keepEncodingHeader = true           # Don't drop Encoding response header after decoding
      #webSocketMaxFrameSize = 10240       # Maximum frame payload size
      #httpsEnabledProtocols = ""          # Comma separated enabled protocols for HTTPS, if empty use the JDK defaults
      #httpsEnabledCipherSuites = ""       # Comma separated enabled cipher suites for HTTPS, if empty use the JDK defaults
      #sslSessionCacheSize = 20000         # SSLSession cache size (set to 0 to disable)
      #sslSessionTimeout = 86400           # SSLSession timeout (default is 24h, like Hotspot)
    }
  }
  data {
    #writers = "console, file"      # The list of DataWriters to which Gatling writes simulation data (currently supported: "console", "file", "graphite", "jdbc")
    #reader = file                  # The DataReader used by the charting engine for reading simulation results
    console {
      #light = false                # When set to true, displays a light version without detailed request stats
    }
    file {
      #bufferSize = 8192            # FileDataWriter's internal data buffer size, in bytes
    }
    leak {
      #noActivityTimeout = 30       # Period, in seconds, for which Gatling may have no activity before considering a leak may be happening
    }
    jdbc {
      db {
        #url = "jdbc:mysql://localhost:3306/temp" # The JDBC URL used by the JDBC DataWriter
        #username = "root"          # The database user used by the JDBC DataWriter
        #password = "123123q"       # The password for the specified user
      }
      #bufferSize = 20              # The size for each batch of SQL inserts to send to the database
      create {
        #createRunRecordTable = "CREATE TABLE IF NOT EXISTS `RunRecords` ( `id` INT NOT NULL AUTO_INCREMENT , `runDate` DATETIME NULL , `simulationId` VARCHAR(45) NULL , `runDescription` VARCHAR(45) NULL , PRIMARY KEY (`id`) )"
        #createRequestRecordTable = "CREATE TABLE IF NOT EXISTS `RequestRecords` (`id` int(11) NOT NULL AUTO_INCREMENT, `runId` int DEFAULT NULL, `scenario` varchar(45) DEFAULT NULL, `userId` VARCHAR(30) NULL, `name` varchar(50) DEFAULT NULL, `requestStartDate` bigint DEFAULT NULL, `requestEndDate` bigint DEFAULT NULL, `responseStartDate` bigint DEFAULT NULL, `responseEndDate` bigint DEFAULT NULL, `status` varchar(2) DEFAULT NULL, `message` varchar(4500) DEFAULT NULL, `responseTime` bigint DEFAULT NULL, PRIMARY KEY (`id`) )"
|
||||
#createScenarioRecordTable = "CREATE TABLE IF NOT EXISTS `ScenarioRecords` (`id` int(11) NOT NULL AUTO_INCREMENT, `runId` int DEFAULT NULL, `scenarioName` varchar(45) DEFAULT NULL, `userId` VARCHAR(30) NULL, `event` varchar(50) DEFAULT NULL, `startDate` bigint DEFAULT NULL, `endDate` bigint DEFAULT NULL, PRIMARY KEY (`id`) )"
|
||||
#createGroupRecordTable = "CREATE TABLE IF NOT EXISTS `GroupRecords` (`id` int(11) NOT NULL AUTO_INCREMENT, `runId` int DEFAULT NULL, `scenarioName` varchar(45) DEFAULT NULL, `userId` VARCHAR(30) NULL, `entryDate` bigint DEFAULT NULL, `exitDate` bigint DEFAULT NULL, `status` varchar(2) DEFAULT NULL, PRIMARY KEY (`id`) )"
|
||||
}
|
||||
insert {
|
||||
#insertRunRecord = "INSERT INTO RunRecords (runDate, simulationId, runDescription) VALUES (?,?,?)"
|
||||
#insertRequestRecord = "INSERT INTO RequestRecords (runId, scenario, userId, name, requestStartDate, requestEndDate, responseStartDate, responseEndDate, status, message, responseTime) VALUES (?,?,?,?,?,?,?,?,?,?,?)"
|
||||
#insertScenarioRecord = "INSERT INTO ScenarioRecords (runId, scenarioName, userId, event, startDate, endDate) VALUES (?,?,?,?,?,?)"
|
||||
#insertGroupRecord = "INSERT INTO GroupRecords (runId, scenarioName, userId, entryDate, exitDate, status) VALUES (?,?,?,?,?,?)"
|
||||
}
|
||||
}
|
||||
graphite {
|
||||
#light = false # only send the all* stats
|
||||
#host = "localhost" # The host where the Carbon server is located
|
||||
#port = 2003 # The port to which the Carbon server listens to
|
||||
#protocol = "tcp" # The protocol used to send data to Carbon (currently supported : "tcp", "udp")
|
||||
#rootPathPrefix = "gatling" # The common prefix of all metrics sent to Graphite
|
||||
#bufferSize = 8192 # GraphiteDataWriter's internal data buffer size, in bytes
|
||||
#writeInterval = 1 # GraphiteDataWriter's write interval, in seconds
|
||||
}
|
||||
}
|
||||
}
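Apart from the `binaries` override, the entries above are Gatling's commented-out defaults, so tuning a run is a matter of uncommenting and overriding. As a minimal sketch (not part of this commit, and assuming a Carbon/Graphite instance reachable on localhost), sending live stats to Graphite would only need the `data` section overridden:

```
gatling {
  data {
    writers = "console, file, graphite"  # add graphite to the default writers
    graphite {
      host = "localhost"                 # assumed local Carbon endpoint, adjust as needed
      port = 2003
    }
  }
}
```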
@@ -0,0 +1,24 @@
<?xml version="1.0" encoding="UTF-8"?>
<configuration>

    <contextListener class="ch.qos.logback.classic.jul.LevelChangePropagator">
        <resetJUL>true</resetJUL>
    </contextListener>

    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%-5level] %logger{15} - %msg%n%rEx</pattern>
        </encoder>
        <immediateFlush>false</immediateFlush>
    </appender>

    <!-- Uncomment for logging ALL HTTP requests and responses -->
    <!-- <logger name="io.gatling.http" level="TRACE" /> -->
    <!-- Uncomment for logging ONLY FAILED HTTP requests and responses -->
    <!-- <logger name="io.gatling.http" level="DEBUG" /> -->

    <root level="WARN">
        <appender-ref ref="CONSOLE" />
    </root>

</configuration>
19
testsuite/performance/tests/src/test/scala/Engine.scala
Normal file
@@ -0,0 +1,19 @@
import io.gatling.app.Gatling
import io.gatling.core.config.GatlingPropertiesBuilder

object Engine extends App {

  val sim = classOf[keycloak.DefaultSimulation]
  //val sim = classOf[keycloak.AdminSimulation]

  val props = new GatlingPropertiesBuilder
  props.dataDirectory(IDEPathHelper.dataDirectory.toString)
  props.resultsDirectory(IDEPathHelper.resultsDirectory.toString)
  props.bodiesDirectory(IDEPathHelper.bodiesDirectory.toString)
  props.binariesDirectory(IDEPathHelper.mavenBinariesDirectory.toString)

  props.simulationClass(sim.getName)

  Gatling.fromMap(props.build)
}
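`Engine` hardcodes the simulation under test, so switching to `AdminSimulation` means editing the source. A hedged alternative (sketch only; `simulation.class` is a made-up property name here, not something Gatling defines) would read the class name from a system property:

```scala
// Sketch: pick the simulation via -Dsimulation.class=..., falling back to the default.
val simName = sys.props.getOrElse("simulation.class", classOf[keycloak.DefaultSimulation].getName)
props.simulationClass(simName)
```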
@@ -0,0 +1,22 @@
import java.nio.file.Path

import io.gatling.core.util.PathHelper._

object IDEPathHelper {

  val gatlingConfUrl: Path = getClass.getClassLoader.getResource("gatling.conf").toURI
  val projectRootDir = gatlingConfUrl.ancestor(3)

  val mavenSourcesDirectory = projectRootDir / "src" / "test" / "scala"
  val mavenResourcesDirectory = projectRootDir / "src" / "test" / "resources"
  val mavenTargetDirectory = projectRootDir / "target"
  val mavenBinariesDirectory = mavenTargetDirectory / "test-classes"

  val dataDirectory = mavenResourcesDirectory / "data"
  val bodiesDirectory = mavenResourcesDirectory / "bodies"

  val recorderOutputDirectory = mavenSourcesDirectory
  val resultsDirectory = mavenTargetDirectory / "gatling"

  val recorderConfigFile = mavenResourcesDirectory / "recorder.conf"
}
13
testsuite/performance/tests/src/test/scala/Recorder.scala
Normal file

@@ -0,0 +1,13 @@
import io.gatling.recorder.GatlingRecorder
import io.gatling.recorder.config.RecorderPropertiesBuilder

object Recorder extends App {

  val props = new RecorderPropertiesBuilder
  props.simulationOutputFolder(IDEPathHelper.recorderOutputDirectory.toString)
  props.simulationPackage("computerdatabase")
  props.bodiesFolder(IDEPathHelper.bodiesDirectory.toString)

  GatlingRecorder.fromMap(props.build, Some(IDEPathHelper.recorderConfigFile))
}
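Running `Recorder` (typically from the IDE) launches the Gatling Recorder preconfigured by `IDEPathHelper`: captured simulations are written to `src/test/scala` under the `computerdatabase` package, and recorded request bodies to `src/test/resources/bodies`.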
@@ -0,0 +1,27 @@
package examples

import io.gatling.core.Predef._
import io.gatling.http.Predef._

/**
  * @author <a href="mailto:mstrukel@redhat.com">Marko Strukelj</a>
  */
class SimpleExample1 extends Simulation {

  // Create a scenario with three steps:
  // - first perform an HTTP GET
  // - then pause for 10 seconds
  // - then perform a different HTTP GET

  val scn = scenario("Simple")
    .exec(http("Home")
      .get("http://localhost:8080")
      .check(status is 200))
    .pause(10)
    .exec(http("Auth Home")
      .get("http://localhost:8080/auth")
      .check(status is 200))

  // Run the scenario with 100 parallel users, all starting at the same time
  setUp(scn.inject(atOnceUsers(100)))
}
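To run just this example instead of the default simulation, the class can be passed to the gatling-maven-plugin (assuming the plugin is wired up in this module; `gatling.simulationClass` is the plugin's selection property in the 2.x line):

```
mvn gatling:execute -Dgatling.simulationClass=examples.SimpleExample1
```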
@@ -0,0 +1,43 @@
package examples

import io.gatling.core.Predef._
import io.gatling.http.Predef._

/**
  * @author <a href="mailto:mstrukel@redhat.com">Marko Strukelj</a>
  */
class SimpleExample2 extends Simulation {

  // Create two scenarios.
  // The first one, called Simple, has three steps:
  // - first perform an HTTP GET
  // - then pause for 10 seconds
  // - then perform a different HTTP GET

  val scn = scenario("Simple")
    .exec(http("Home")
      .get("http://localhost:8080")
      .check(status is 200))
    .pause(10)
    .exec(http("Auth Home")
      .get("http://localhost:8080/auth")
      .check(status is 200))

  // The second scenario, called Account, has only one step:
  // - perform an HTTP GET

  val scn2 = scenario("Account")
    .exec(http("Account")
      .get("http://localhost:8080/auth/realms/master/account")
      .check(status is 200))

  // Run both scenarios:
  // - first scenario with 100 parallel users, all starting at the same time
  // - second scenario with 50 parallel users, all starting at the same time

  setUp(
    scn.inject(atOnceUsers(100)),
    scn2.inject(atOnceUsers(50))
  )
}
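`atOnceUsers` starts every virtual user at the same instant, which is effectively a thundering-herd test. A gentler profile (sketch only; needs `import scala.concurrent.duration._` for the `seconds` syntax) would ramp the first scenario's users instead:

```scala
// Ramp the 100 "Simple" users over 30 seconds instead of starting them all at once.
setUp(
  scn.inject(rampUsers(100) over (30 seconds)),
  scn2.inject(atOnceUsers(50))
)
```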
@@ -0,0 +1,25 @@
package examples

import io.gatling.core.Predef._
import io.gatling.http.Predef._

/**
  * @author <a href="mailto:mstrukel@redhat.com">Marko Strukelj</a>
  */
class SimpleExample3 extends Simulation {

  // Create a scenario where the user performs the same operation in a loop without any pause.
  // Each loop iteration will be displayed as an individual request in the report.
  val rapidlyRefreshAccount = repeat(10, "i") {
    exec(http("Account ${i}")
      .get("http://localhost:8080/auth/realms/master/account")
      .check(status is 200))
  }

  val scn = scenario("Account Refresh")
    .exec(rapidlyRefreshAccount)

  setUp(
    scn.inject(atOnceUsers(100))
  )
}
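Because the request name embeds the loop counter `${i}`, the report shows ten separate entries. If aggregated stats per request are preferred, a sketch of the alternative simply drops the counter from the name:

```scala
// Sketch: a single aggregated "Account" entry instead of ten "Account i" entries.
val refreshAccount = repeat(10) {
  exec(http("Account")
    .get("http://localhost:8080/auth/realms/master/account")
    .check(status is 200))
}
```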
Some files were not shown because too many files have changed in this diff.