== Get Started
=== Using the {project_openshift_product_name} Image Streams and Application Templates
[IMPORTANT]
====
Red Hat JBoss Middleware for OpenShift images are pulled on demand from the secured Red Hat Registry: link:https://access.redhat.com/containers/[registry.redhat.io], which requires authentication. To retrieve content, you will need to log in to the registry using your Red Hat account.

To consume container images from *_registry.redhat.io_* in shared environments such as OpenShift, it is recommended that an administrator use a Registry Service Account (also referred to as an authentication token) instead of an individual's Red Hat Customer Portal credentials.

To create a Registry Service Account, navigate to the link:https://access.redhat.com/terms-based-registry/[Registry Service Account Management Application], and log in if necessary.

. From the *_Registry Service Accounts_* page, click *_Create Service Account_*.
. Provide a name for the Service Account, for example *_registry.redhat.io-sa_*. A fixed, random string will be prepended to the name.
.. Enter a description for the Service Account, for example *_Service account to consume container images from registry.redhat.io._*.
.. Click *_Create_*.
. After the Service Account has been created, click the *_registry.redhat.io-sa_* link in the *_Account name_* column of the table presented on the *_Registry Service Accounts_* page.
. Finally, click the *_OpenShift Secret_* tab, and perform all steps listed on that page.

See the link:https://access.redhat.com/RegistryAuthentication[Red Hat Container Registry Authentication] article for more information.
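
The *_OpenShift Secret_* tab typically provides a downloadable secret definition together with the commands to apply it. As a rough sketch, the workflow looks like the following; the file and secret names are illustrative examples, so use the ones displayed for your own Service Account:

[source,bash,subs="attributes+,macros+"]
----
# Create the pull secret from the file downloaded on the OpenShift Secret tab
# (file and secret names below are examples).
$ oc create -f 1234567-registry.redhat.io-sa-secret.yaml

# Allow the default service account to use the secret when pulling images.
$ oc secrets link default 1234567-registry.redhat.io-sa-pull-secret --for=pull
----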
====
. Ensure that you are logged in as a cluster administrator or a user with project administrator access to the global `openshift` project.
. Choose a command based on your version of OpenShift Container Platform.
.. If you are running an OpenShift Container Platform v3 based cluster instance, perform the following on (one of) your master host(s):
+
[source,bash,subs="attributes+,macros+"]
----
$ oc login -u system:admin
----
.. If you are running an OpenShift Container Platform v4 based cluster instance, link:https://docs.openshift.com/container-platform/latest/cli_reference/openshift_cli/getting-started-cli.html#cli-logging-in_cli-developer-commands[log in to the CLI] as the link:https://docs.openshift.com/container-platform/latest/authentication/remove-kubeadmin.html#understanding-kubeadmin_removing-kubeadmin[kubeadmin] user:
+
[source,bash,subs="attributes+,macros+"]
----
$ oc login -u kubeadmin -p password \https://openshift.example.com:6443
----
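+
If you no longer have the kubeadmin password at hand, it is generated during installation; assuming the installation directory is still available, it can typically be read from the installer's `auth` output directory (the path below is illustrative):
+
[source,bash,subs="attributes+,macros+"]
----
$ cat <installation_directory>/auth/kubeadmin-password
----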
. Run the following commands to update the core set of {project_name} {project_version} resources for OpenShift in the `openshift` project:
+
[source,bash,subs="attributes+,macros+"]
----
$ for resource in {project_templates_version}-image-stream.json \
{project_templates_version}-https.json \
{project_templates_version}-postgresql.json \
{project_templates_version}-postgresql-persistent.json \
{project_templates_version}-x509-https.json \
{project_templates_version}-x509-postgresql-persistent.json
do
oc replace -n openshift --force -f \
{project_templates_url}/$\{resource}
done
----
. Run the following command to install the {project_name} {project_version} OpenShift image streams in the `openshift` project:
+
[source,bash,subs="attributes+,macros+"]
----
$ oc -n openshift import-image redhat-{project_templates_version}-openshift:{project_latest_image_tag}
----
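+
To confirm that the import succeeded, you can, for example, display the imported image stream and its tags:
+
[source,bash,subs="attributes+,macros+"]
----
$ oc -n openshift get imagestream redhat-{project_templates_version}-openshift
----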
[[Example-Deploying-SSO]]
=== Deploying the {project_name} Image
[[Preparing-SSO-Authentication-for-OpenShift-Deployment]]
==== Preparing the Deployment
Log in to the OpenShift CLI with a user that holds the _cluster-admin_ role.

. Create a new project:
+
[source,bash,subs="attributes+,macros+"]
----
$ oc new-project sso-app-demo
----
. Add the `view` role to the link:{ocpdocs_default_service_accounts_link}[`default`] service account. This enables the service account to view all the resources in the `sso-app-demo` namespace.
+
[source,bash,subs="attributes+,macros+"]
----
$ oc policy add-role-to-user view system:serviceaccount:$(oc project -q):default
----
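+
Optionally, you can verify that the role was granted, for example by listing the role bindings in the project (the exact binding name can differ between OpenShift versions):
+
[source,bash,subs="attributes+,macros+"]
----
$ oc get rolebindings -n $(oc project -q) | grep view
----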
==== Deploying the {project_name} Image using the Application Template
===== Deploying the Template via OpenShift CLI
. List the available {project_name} application templates:
+
[source,bash,subs="attributes+,macros+"]
----
$ oc get templates -n openshift -o name | grep -o '{project_templates_version}.\+'
{project_templates_version}-https
{project_templates_version}-postgresql
{project_templates_version}-postgresql-persistent
{project_templates_version}-x509-https
{project_templates_version}-x509-postgresql-persistent
----
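+
To review a template's description and available parameters before deploying it, you can, for example, run:
+
[source,bash,subs="attributes+,macros+"]
----
$ oc describe template {project_templates_version}-x509-https -n openshift
----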
. Deploy the selected template:
+
[source,bash,subs="attributes+,macros+"]
----
$ oc new-app --template={project_templates_version}-x509-https
--> Deploying template "openshift/{project_templates_version}-x509-https" to project sso-app-demo

     {project_name} {project_versionDoc} (Ephemeral)
     ---------
     An example {project_name} 7 application. For more information about using this template, see https://github.com/jboss-openshift/application-templates.

     A new {project_name} service has been created in your project. The admin username/password for accessing the master realm via the {project_name} console is IACfQO8v/nR7llVSVb4Dye3TNRbXoXhRpAKTmiCRc. The HTTPS keystore used for serving secure content, the JGroups keystore used for securing JGroups communications, and server truststore used for securing {project_name} requests were automatically created via OpenShift's service serving x509 certificate secrets.

     * With parameters:
        * Application Name=sso
        * JGroups Cluster Password=jg0Rssom0gmHBnooDF3Ww7V4Mu5RymmB # generated
        * Datasource Minimum Pool Size=
        * Datasource Maximum Pool Size=
        * Datasource Transaction Isolation=
        * ImageStream Namespace=openshift
        * {project_name} Administrator Username=IACfQO8v # generated
        * {project_name} Administrator Password=nR7llVSVb4Dye3TNRbXoXhRpAKTmiCRc # generated
        * {project_name} Realm=
        * {project_name} Service Username=
        * {project_name} Service Password=
        * Container Memory Limit=1Gi

--> Creating resources ...
    service "sso" created
    service "secure-sso" created
    service "sso-ping" created
    route "sso" created
    route "secure-sso" created
    deploymentconfig "sso" created
--> Success
    Run 'oc status' to view your app.
----
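+
The values shown above, including the generated administrator credentials, come from the template parameters. If you prefer to set values explicitly instead of accepting generated ones, you can list the parameters and pass overrides to `oc new-app`. The parameter names in the second command below are illustrative and can differ between template versions, so check them with the first command:
+
[source,bash,subs="attributes+,macros+"]
----
# List the parameters accepted by the template.
$ oc process --parameters -n openshift {project_templates_version}-x509-https

# Example deployment with explicit values (parameter names are illustrative).
$ oc new-app --template={project_templates_version}-x509-https \
  -p APPLICATION_NAME=sso \
  -p SSO_ADMIN_USERNAME=admin \
  -p SSO_ADMIN_PASSWORD=change-me
----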
===== Deploying the Template via the OpenShift Web Console
Alternatively, perform the following steps to deploy the {project_name} template via the OpenShift web console:

. Log in to the OpenShift web console and select the _sso-app-demo_ project space.
. Click *Add to Project*, then *Browse Catalog* to list the default image streams and templates.
. Use the *Filter by Keyword* search bar to limit the list to those that match _sso_. You may need to click *Middleware*, then *Integration* to show the desired application template.
. Select an {project_name} application template. This example uses *_{project_name} {project_versionDoc} (Ephemeral)_*.
. Click *Next* in the *Information* step.
. From the *Add to Project* drop-down menu, select the _sso-app-demo_ project space. Then click *Next*.
. Select the *Do not bind at this time* radio button in the *Binding* step. Click *Create* to continue.
. In the *Results* step, click the *Continue to the project overview* link to verify the status of the deployment.

==== Accessing the Administrator Console of the {project_name} Pod

After the template has been deployed, identify the available routes:

[source,bash,subs="attributes+,macros+"]
----
$ oc get routes
----
[cols="7",options="header"]
|===
|NAME |HOST/PORT |PATH |SERVICES |PORT |TERMINATION |WILDCARD
|sso
|sso-sso-app-demo.openshift.example.com
|
|sso
|<all>
|reencrypt
|None
|===
and access the {project_name} administrator console at:

* *\https://sso-sso-app-demo.openshift.example.com/auth/admin*

using the xref:sso-administrator-setup[administrator account].
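
If you did not record the automatically generated administrator credentials from the `oc new-app` output, you can usually read them back from the environment of the deployment configuration (assuming the application name `sso`; the variable names may differ between template versions):

[source,bash,subs="attributes+,macros+"]
----
$ oc set env dc/sso --list | grep SSO_ADMIN
----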