--Task--
name: Deploy_all_forgerock_components
enabled: True
class_name: DeployComponentsTask
source_name: controller
source_namespace: >default<
target_name: controller
target_namespace: >default<
start: 0
stop: None
timeout: no timeout
loop: False
interval: None
dependencies: ['Enable_prometheus_admin_api']
wait_for: []
options: {}
group_name: None
Current dir: /mnt/disks/data/xslou/lodestar-fork/pyrock
________________________________________________________________________________
[04/Feb/2023 03:35:48] Deploy_all_forgerock_components pre : Initialising task parameters
________________________________________________________________________________
task will be executed on controller (localhost)
________________________________________________________________________________
[04/Feb/2023 03:35:48] Deploy_all_forgerock_components step1 : Deploy components
________________________________________________________________________________
******************************** Cleaning up existing namespace ********************************
------------------ Deleting secret agent controller ------------------
[loop_until]: kubectl --namespace=xlou delete sac --all
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secretagentconfiguration.secret-agent.secrets.forgerock.io "forgerock-sac" deleted
--- stderr ---
----------------------- Deleting all resources -----------------------
[loop_until]: kubectl --namespace=xlou delete all --all --grace-period=0 --force
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
pod "admin-ui-5d95c47d49-ngwv8" force deleted
pod "am-685d4f4864-k8tkh" force deleted
pod "am-685d4f4864-ntjs4" force deleted
pod "am-685d4f4864-xcmws" force deleted
pod "amster-wfw9z" force deleted
pod "ds-cts-0" force deleted
pod "ds-cts-1" force deleted
pod "ds-cts-2" force deleted
pod "ds-idrepo-0" force deleted
pod "ds-idrepo-1" force deleted
pod "ds-idrepo-2" force deleted
pod "end-user-ui-54cd6f9459-bkjc9" force deleted
pod "idm-6ddf478c88-kztcz" force deleted
pod "idm-6ddf478c88-tsh2p" force deleted
pod "ldif-importer-bskk8" force deleted
pod "login-ui-7678f6977f-wtdt4" force deleted
pod "overseer-0-88449c8-rqpl2" force deleted
service "admin-ui" force deleted
service "am" force deleted
service "ds-cts" force deleted
service "ds-idrepo" force deleted
service "end-user-ui" force deleted
service "idm" force deleted
service "login-ui" force deleted
service "overseer-0" force deleted
deployment.apps "admin-ui" force deleted
deployment.apps "am" force deleted
deployment.apps "end-user-ui" force deleted
deployment.apps "idm" force deleted
deployment.apps "login-ui" force deleted
deployment.apps "overseer-0" force deleted
statefulset.apps "ds-cts" force deleted
statefulset.apps "ds-idrepo" force deleted
job.batch "amster" force deleted
job.batch "ldif-importer" force deleted
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
[loop_until]: kubectl -n xlou get pods | grep "No resources found"
[loop_until]: (max_time=360, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 10s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 21s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 31s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 42s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 52s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 1m 02s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 1m 13s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 1m 23s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 1m 34s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 1m 44s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 1m 54s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 2m 05s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
No resources found in xlou namespace.
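The `[loop_until]` entries above poll a command until both the return code and an expected output pattern match, sleeping for `interval` seconds between attempts up to `max_time`. A minimal Python sketch of such a retry helper (the name, signature, and the fake polled command are illustrative assumptions, not pyrock's actual API):

```python
import time

def loop_until(fn, max_time=360, interval=10, expected_rc=(0,), pattern=None):
    """Poll fn() -> (rc, stdout) until rc is expected and pattern (if any)
    appears in stdout. Raises TimeoutError once max_time elapses."""
    start = time.monotonic()
    while True:
        rc, out = fn()
        if rc in expected_rc and (pattern is None or pattern in out):
            return rc, out
        if time.monotonic() - start >= max_time:
            raise TimeoutError(f"no success within {max_time}s (last rc={rc})")
        time.sleep(interval)

# Hypothetical "kubectl get pods" that still shows pods for two polls, then none.
results = iter([
    (0, "pod-a Running"),
    (0, "pod-a Terminating"),
    (0, "No resources found"),
])
rc, out = loop_until(lambda: next(results), max_time=30, interval=0,
                     pattern="No resources found")
```

Note that grep-style checks make "command ran but pattern absent" a retry rather than a failure, which matches the "Function succeeded ... failed to find expected output ... retry" lines in the log.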
------------------------- Deleting configmap -------------------------
[loop_until]: kubectl --namespace=xlou get configmap -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
amster-files amster-retain dev-utils idm idm-logging-properties kube-root-ca.crt overseer-config-0 platform-config
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap amster-files --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "amster-files" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap amster-retain --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "amster-retain" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap dev-utils --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "dev-utils" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap idm --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "idm" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap idm-logging-properties --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "idm-logging-properties" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap kube-root-ca.crt --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "kube-root-ca.crt" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap overseer-config-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "overseer-config-0" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap platform-config --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "platform-config" deleted
--- stderr ---
--------------------------- Deleting secret ---------------------------
[loop_until]: kubectl --namespace=xlou get secret -o jsonpath='{.items[?(@.type=="Opaque")].metadata.name}'
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
cloud-storage-credentials-cts cloud-storage-credentials-idrepo
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete secret cloud-storage-credentials-cts --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secret "cloud-storage-credentials-cts" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete secret cloud-storage-credentials-idrepo --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secret "cloud-storage-credentials-idrepo" deleted
--- stderr ---
-------------------------- Deleting ingress --------------------------
[loop_until]: kubectl --namespace=xlou get ingress -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
forgerock ig overseer-0
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete ingress forgerock --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ingress.networking.k8s.io "forgerock" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete ingress ig --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ingress.networking.k8s.io "ig" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete ingress overseer-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ingress.networking.k8s.io "overseer-0" deleted
--- stderr ---
---------------------------- Deleting pvc ----------------------------
[loop_until]: kubectl --namespace=xlou get pvc -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
data-ds-cts-0 data-ds-cts-1 data-ds-cts-2 data-ds-idrepo-0 data-ds-idrepo-1 data-ds-idrepo-2 overseer-0
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-cts-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-cts-0" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-cts-1 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-cts-1" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-cts-2 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-cts-2" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-idrepo-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-idrepo-0" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-idrepo-1 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-idrepo-1" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-idrepo-2 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-idrepo-2" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc overseer-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "overseer-0" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pv ds-backup-xlou --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
warning: deleting cluster-scoped resources, not scoped to the provided namespace
----------------- Deleting admin clusterrolebindings -----------------
[loop_until]: kubectl get clusterrolebinding -o jsonpath="{range .items[?(@.subjects[0].namespace=='xlou')]}{.metadata.name} {end}"
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
k8s-svc-acct-crb-xlou-0
--- stderr ---
Deleting clusterrolebinding k8s-svc-acct-crb-xlou-0 associated with xlou namespace
[loop_until]: kubectl delete clusterrolebinding k8s-svc-acct-crb-xlou-0
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
clusterrolebinding.rbac.authorization.k8s.io "k8s-svc-acct-crb-xlou-0" deleted
--- stderr ---
------------------------- Deleting namespace -------------------------
[loop_until]: kubectl delete namespaces xlou --ignore-not-found --grace-period=0 --force
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
namespace "xlou" force deleted
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
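The clusterrolebinding cleanup above relies on the jsonpath filter `.items[?(@.subjects[0].namespace=='xlou')]` to select only bindings whose first subject lives in the target namespace. The same selection can be expressed by parsing `kubectl get clusterrolebinding -o json` output; the payload below is a hypothetical, trimmed example for illustration:

```python
import json

# Hypothetical, trimmed-down `kubectl get clusterrolebinding -o json` payload.
payload = json.loads("""
{"items": [
  {"metadata": {"name": "k8s-svc-acct-crb-xlou-0"},
   "subjects": [{"kind": "ServiceAccount", "namespace": "xlou"}]},
  {"metadata": {"name": "cluster-admin"},
   "subjects": [{"kind": "Group", "namespace": "kube-system"}]}
]}
""")

# Same selection as the jsonpath expression:
# {range .items[?(@.subjects[0].namespace=='xlou')]}{.metadata.name} {end}
names = [item["metadata"]["name"]
         for item in payload["items"]
         if item.get("subjects")
         and item["subjects"][0].get("namespace") == "xlou"]
```

Guarding with `item.get("subjects")` matters in practice: clusterrolebindings without subjects are legal, and the bare `subjects[0]` index would otherwise fail on them.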
[loop_until]: awk -F" " "{print NF}" <<< `kubectl get namespace xlou --ignore-not-found` | grep 0
[loop_until]: (max_time=600, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
************************************* Creating deployment *************************************
Creating normal (forgeops) type deployment for deployment: stack
------- Custom component configuration present. Loading values -------
------------------ Deleting secret agent controller ------------------
[loop_until]: kubectl --namespace=xlou delete sac --all
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
No resources found
--- stderr ---
----------------------- Deleting all resources -----------------------
[loop_until]: kubectl --namespace=xlou delete all --all --grace-period=0 --force
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
No resources found
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
[loop_until]: kubectl -n xlou get pods | grep "No resources found"
[loop_until]: (max_time=360, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
No resources found in xlou namespace.
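The awk one-liner above verifies that the namespace is really gone by counting whitespace-separated fields in the output of `kubectl get namespace xlou --ignore-not-found`: with `--ignore-not-found` the command prints nothing once the namespace is deleted, so `grep 0` on the field count succeeds only then. The same check in Python (the function name and sample output are illustrative):

```python
def namespace_gone(kubectl_output: str) -> bool:
    # `kubectl get namespace xlou --ignore-not-found` prints a header and a
    # status row while the namespace exists, and nothing once it is deleted,
    # so zero whitespace-separated fields means the namespace is gone.
    return len(kubectl_output.split()) == 0

# Hypothetical outputs for the two states:
existing = "NAME   STATUS   AGE\nxlou   Active   5m"
gone = ""
```

This avoids parsing status columns entirely; any non-empty output (including `Terminating`) counts as "still present", which is exactly what a deletion wait loop wants.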
------------------------- Deleting configmap -------------------------
[loop_until]: kubectl --namespace=xlou get configmap -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
--------------------------- Deleting secret ---------------------------
[loop_until]: kubectl --namespace=xlou get secret -o jsonpath='{.items[?(@.type=="Opaque")].metadata.name}'
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
-------------------------- Deleting ingress --------------------------
[loop_until]: kubectl --namespace=xlou get ingress -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
---------------------------- Deleting pvc ----------------------------
[loop_until]: kubectl --namespace=xlou get pvc -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pv ds-backup-xlou --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
warning: deleting cluster-scoped resources, not scoped to the provided namespace
----------------- Deleting admin clusterrolebindings -----------------
[loop_until]: kubectl get clusterrolebinding -o jsonpath="{range .items[?(@.subjects[0].namespace=='xlou')]}{.metadata.name} {end}"
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
------------------------- Deleting namespace -------------------------
[loop_until]: kubectl delete namespaces xlou --ignore-not-found --grace-period=0 --force
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
[loop_until]: awk -F" " "{print NF}" <<< `kubectl get namespace xlou --ignore-not-found` | grep 0
[loop_until]: (max_time=600, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
[loop_until]: kubectl create namespace xlou
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
namespace/xlou created
--- stderr ---
[loop_until]: kubectl label namespace xlou self-service=false timeout=48
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
namespace/xlou labeled
--- stderr ---
************************************ Configuring components ************************************
No custom config provided. Nothing to do.
No custom features provided. Nothing to do.
---- Updating components image tag/repo from platform-images repo ----
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --clean
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Cleaning up.
[WARNING] Found nothing to clean.
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref master-ready-for-dev-pipelines --products ds
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images.
[INFO] Repo is at 42e61ae351cd6d528f018f3a965cb73bd5343992 on branch HEAD
[INFO] Updating products ds
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/dsutil/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-35060cde0333c76dfba29a4e88896b885a0deba7
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/ds-new/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-35060cde0333c76dfba29a4e88896b885a0deba7
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/proxy/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-35060cde0333c76dfba29a4e88896b885a0deba7
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-35060cde0333c76dfba29a4e88896b885a0deba7
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-35060cde0333c76dfba29a4e88896b885a0deba7
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref master-ready-for-dev-pipelines --products am
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images.
[INFO] Found existing files, attempting to not clone.
[INFO] Repo is at 42e61ae351cd6d528f018f3a965cb73bd5343992 on branch HEAD
[INFO] Updating products am
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am/Dockerfile to use gcr.io/forgerock-io/am-cdk/pit1:7.3.0-00041da0042e1fd3ee67b8103908145322d13e72
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref master-ready-for-dev-pipelines --products amster
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images.
[INFO] Found existing files, attempting to not clone.
[INFO] Repo is at 42e61ae351cd6d528f018f3a965cb73bd5343992 on branch HEAD
[INFO] Updating products amster
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster/Dockerfile to use gcr.io/forgerock-io/amster/pit1:7.3.0-00041da0042e1fd3ee67b8103908145322d13e72
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref master-ready-for-dev-pipelines --products idm
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images.
[INFO] Found existing files, attempting to not clone.
[INFO] Repo is at 42e61ae351cd6d528f018f3a965cb73bd5343992 on branch HEAD
[INFO] Updating products idm
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm/Dockerfile to use gcr.io/forgerock-io/idm-cdk/pit1:7.3.0-8dbc647204f9868b7551f8eb4aa6dd36aa56f043
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref master-ready-for-dev-pipelines --products ui
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images.
[INFO] Found existing files, attempting to not clone.
[INFO] Repo is at 42e61ae351cd6d528f018f3a965cb73bd5343992 on branch HEAD
[INFO] Updating products ui
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui to use gcr.io/forgerock-io/platform-admin-ui/docker-build:7.2.0-912961a09be5defa681b874c087db33388002ff4
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui to use gcr.io/forgerock-io/platform-enduser-ui/docker-build:7.2.0-912961a09be5defa681b874c087db33388002ff4
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui to use gcr.io/forgerock-io/platform-login-ui/docker-build:7.2.0-912961a09be5defa681b874c087db33388002ff4
--- stderr ---
- Checking if component Dockerfile/kustomize needs additional update -
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker ds cts
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts
--- stderr ---
Checking if ds-cts dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts/Dockerfile: gcr.io/forgerock-io/ds/pit1:7.3.0-35060cde0333c76dfba29a4e88896b885a0deba7
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker ds idrepo
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo
--- stderr ---
Checking if ds-idrepo dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo/Dockerfile: gcr.io/forgerock-io/ds/pit1:7.3.0-35060cde0333c76dfba29a4e88896b885a0deba7
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker am
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am
--- stderr ---
Checking if am dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am/Dockerfile: gcr.io/forgerock-io/am-cdk/pit1:7.3.0-00041da0042e1fd3ee67b8103908145322d13e72
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker amster
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster
--- stderr ---
Checking if amster dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster/Dockerfile: gcr.io/forgerock-io/amster/pit1:7.3.0-00041da0042e1fd3ee67b8103908145322d13e72
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker idm
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm
--- stderr ---
Checking if idm dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm/Dockerfile: gcr.io/forgerock-io/idm-cdk/pit1:7.3.0-8dbc647204f9868b7551f8eb4aa6dd36aa56f043
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path kustomize base end-user-ui
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui
--- stderr ---
Checking if end-user-ui kustomize file /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui/kustomization.yaml needs additional update (custom tag/repo from config.yaml or image name resolution)
Read image line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui/kustomization.yaml: gcr.io/forgerock-io/platform-enduser-ui/docker-build:7.2.0-912961a09be5defa681b874c087db33388002ff4
No need to update the image line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui/kustomization.yaml
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path kustomize base login-ui
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui
--- stderr ---
Checking if login-ui kustomize file /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui/kustomization.yaml needs additional update (custom tag/repo from config.yaml or image name resolution)
Read image line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui/kustomization.yaml: gcr.io/forgerock-io/platform-login-ui/docker-build:7.2.0-912961a09be5defa681b874c087db33388002ff4
No need to update the image line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui/kustomization.yaml
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path kustomize base admin-ui
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui
--- stderr ---
Checking if admin-ui kustomize file /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui/kustomization.yaml needs additional update (custom tag/repo from config.yaml or image name resolution)
Read image line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui/kustomization.yaml: gcr.io/forgerock-io/platform-admin-ui/docker-build:7.2.0-912961a09be5defa681b874c087db33388002ff4
No need to update the image line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui/kustomization.yaml
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path kustomize overlay medium
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/overlay/medium
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete -f /tmp/tmpmbopg4ri
[loop_until]: (max_time=180, interval=5, expected_rc=[0, 1])
[loop_until]: OK (rc = 1)
--- stdout ---
--- stderr ---
Error from server (NotFound): error when deleting "/tmp/tmpmbopg4ri": secrets "sslcert" not found
[loop_until]: kubectl --namespace=xlou apply -f /tmp/tmpmbopg4ri
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secret/sslcert created
--- stderr ---
The following components will be deployed:
- ds-cts (DS)
- ds-idrepo (DS)
- am (AM)
- amster (Amster)
- idm (IDM)
- end-user-ui (EndUserUi)
- login-ui (LoginUi)
- admin-ui (AdminUi)
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/forgeops build all --config-profile=cdk --push-to gcr.io/engineeringpit/lodestar-images --tag=xlou
[run_command]: env={'HOME': '/home/xslou', 'PATH': '/mnt/disks/data/xslou/lodestar-fork/ext/bin:/home/xslou/.local/bin:/home/xslou/bin:~/bin:~/workshop/lodestar-fork/ext/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin', 'DOCKER_SCAN_SUGGEST': 'false'}
Sending build context to Docker daemon 10.24kB
Step 1/6 : FROM gcr.io/forgerock-io/am-cdk/pit1:7.3.0-00041da0042e1fd3ee67b8103908145322d13e72
 ---> b0e38773b9f3
Step 2/6 : ARG CONFIG_PROFILE=cdk
 ---> Using cache
 ---> f10e145e56ec
Step 3/6 : RUN echo "\033[0;36m*** Building '${CONFIG_PROFILE}' profile ***\033[0m"
 ---> Using cache
 ---> 576f6dc56906
Step 4/6 : COPY --chown=forgerock:root config-profiles/${CONFIG_PROFILE}/ /home/forgerock/openam/
 ---> Using cache
 ---> c3bf595268eb
Step 5/6 : COPY --chown=forgerock:root *.sh /home/forgerock/
 ---> Using cache
 ---> 2a4d70fec9fd
Step 6/6 : WORKDIR /home/forgerock
 ---> Using cache
 ---> bec2ada3c406
Successfully built bec2ada3c406
Successfully tagged gcr.io/engineeringpit/lodestar-images/am:xlou
The push refers to repository [gcr.io/engineeringpit/lodestar-images/am]
f52353067bab: Preparing bc4b01e5f4be: Preparing d0c1de767f55: Preparing e34a3a0752f5: Preparing c036916c10fd: Preparing 4034a0bd60f7: Preparing 26bef6226d90: Preparing c284ef5a95ff: Preparing 0b45911b37e2: Preparing edd142d2cfa9: Preparing b407b0a77401: Preparing b35b3224e90e: Preparing 9104152b8b62: Preparing 3f46a076585b: Preparing cb3b2cf1d9d2: Preparing d3ea706f5155: Preparing ec3462f5a675: Preparing c1d9c8946493: Preparing 574a36c3051f: Preparing ff02f409ce0b: Preparing 9bea29b39391: Preparing c40c193af109: Preparing 4ea488ed4421: Preparing 21ad03d1c8b2: Preparing da185b405e69: Preparing 58d43b67d5ad: Preparing 8878ab435c3c: Preparing 4f5f6b573582: Preparing 71b38085acd2: Preparing eb6ee5b9581f: Preparing e3abdc2e9252: Preparing eafe6e032dbd: Preparing 92a4e8a3140f: Preparing
4034a0bd60f7: Waiting 26bef6226d90: Waiting c284ef5a95ff: Waiting 0b45911b37e2: Waiting edd142d2cfa9: Waiting b407b0a77401: Waiting b35b3224e90e: Waiting 9104152b8b62: Waiting 3f46a076585b: Waiting cb3b2cf1d9d2: Waiting d3ea706f5155: Waiting ec3462f5a675: Waiting c1d9c8946493: Waiting 574a36c3051f: Waiting ff02f409ce0b: Waiting 9bea29b39391: Waiting c40c193af109: Waiting 4ea488ed4421: Waiting 21ad03d1c8b2: Waiting da185b405e69: Waiting 58d43b67d5ad: Waiting 8878ab435c3c: Waiting 4f5f6b573582: Waiting 71b38085acd2: Waiting eb6ee5b9581f: Waiting e3abdc2e9252: Waiting eafe6e032dbd: Waiting 92a4e8a3140f: Waiting
d0c1de767f55: Layer already exists e34a3a0752f5: Layer already exists bc4b01e5f4be: Layer already exists f52353067bab: Layer already exists 26bef6226d90: Layer already exists 0b45911b37e2: Layer already exists c284ef5a95ff: Layer already exists c036916c10fd: Layer already exists 4034a0bd60f7: Layer already exists edd142d2cfa9: Layer already exists b407b0a77401: Layer already exists 9104152b8b62: Layer already exists b35b3224e90e: Layer already exists d3ea706f5155: Layer already exists ec3462f5a675: Layer already exists 3f46a076585b: Layer already exists c1d9c8946493: Layer already exists 574a36c3051f: Layer already exists cb3b2cf1d9d2: Layer already exists ff02f409ce0b: Layer already exists c40c193af109: Layer already exists 4ea488ed4421: Layer already exists 21ad03d1c8b2: Layer already exists 9bea29b39391: Layer already exists da185b405e69: Layer already exists 58d43b67d5ad: Layer already exists 8878ab435c3c: Layer already exists 71b38085acd2: Layer already exists e3abdc2e9252: Layer already exists 4f5f6b573582: Layer already exists eb6ee5b9581f: Layer already exists eafe6e032dbd: Layer already exists 92a4e8a3140f: Layer already exists
xlou: digest: sha256:8d40cf580836d9170a2c871b359926d853c240fcc77130ee654e0b7a03ac94c6 size: 7221
Sending build context to Docker daemon 316.4kB
Step 1/8 : FROM gcr.io/forgerock-io/idm-cdk/pit1:7.3.0-8dbc647204f9868b7551f8eb4aa6dd36aa56f043
 ---> fd41311d70e6
Step 2/8 : COPY debian-buster-sources.list /etc/apt/sources.list
 ---> Using cache
 ---> a986d7e35b96
Step 3/8 : RUN rm -f bundle/org.apache.felix.webconsole*.jar && rm -f bundle/openidm-felix-webconsole-*.jar
 ---> Using cache
 ---> a7f9b7c1d912
Step 4/8 : ENV JAVA_OPTS -XX:MaxRAMPercentage=65 -XX:InitialRAMPercentage=65 -XX:MaxTenuringThreshold=1 -Djava.security.egd=file:/dev/urandom -XshowSettings:vm -XX:+PrintFlagsFinal
 ---> Using cache
 ---> 116c48ef6c0d
Step 5/8 : ARG CONFIG_PROFILE=cdk
 ---> Using cache
 ---> 06afa4b9fd15
Step 6/8 : RUN echo "\033[0;36m*** Building '${CONFIG_PROFILE}' profile ***\033[0m"
 ---> Using cache
 ---> d441dfb6cbbf
Step 7/8 : COPY --chown=forgerock:root config-profiles/${CONFIG_PROFILE}/ /opt/openidm
 ---> Using cache
 ---> 190c0d513f62
Step 8/8 : COPY --chown=forgerock:root . /opt/openidm
 ---> Using cache
 ---> eadae1b35e05
Successfully built eadae1b35e05
Successfully tagged gcr.io/engineeringpit/lodestar-images/idm:xlou
The push refers to repository [gcr.io/engineeringpit/lodestar-images/idm]
fb59f4bb15be: Preparing c4512c689cbc: Preparing 9b10f25fea18: Preparing 622d43952c1c: Preparing 27068810ed4e: Preparing 957f5242ce17: Preparing b3fb48bd801b: Preparing e314b2edd023: Preparing d81baefabad0: Preparing 801cd8c51d7a: Preparing
957f5242ce17: Waiting b3fb48bd801b: Waiting e314b2edd023: Waiting d81baefabad0: Waiting 801cd8c51d7a: Waiting
fb59f4bb15be: Layer already exists c4512c689cbc: Layer already exists 9b10f25fea18: Layer already exists 622d43952c1c: Layer already exists 27068810ed4e: Layer already exists b3fb48bd801b: Layer already exists e314b2edd023: Layer already exists 957f5242ce17: Layer already exists 801cd8c51d7a: Layer already exists d81baefabad0: Layer already exists
xlou: digest: sha256:f9cbcfdb8c3d5475a6ffc1109b8d25a42413e14db4aa59c9e27e7bb9bbb7477f size: 2415
Sending build context to Docker daemon 129kB
Step 1/11 : FROM gcr.io/forgerock-io/ds/pit1:7.3.0-35060cde0333c76dfba29a4e88896b885a0deba7
 ---> dbff81292ead
Step 2/11 : USER root
 ---> Using cache
 ---> 78a28946c1d6
Step 3/11 : RUN apt-get update && apt-get install -y --no-install-recommends vim ncat dnsutils
 ---> Using cache
 ---> 9ad62863a236
Step 4/11 : USER forgerock
 ---> Using cache
 ---> ead193672d7d
Step 5/11 : ENV DS_DATA_DIR /opt/opendj/data
 ---> Using cache
 ---> 8afbffa44d20
Step 6/11 : ENV PEM_KEYS_DIRECTORY "/var/run/secrets/keys/ds"
 ---> Using cache
 ---> 74cbb0a63772
Step 7/11 : ENV PEM_TRUSTSTORE_DIRECTORY "/var/run/secrets/keys/truststore"
 ---> Using cache
 ---> 863ad50351b3
Step 8/11 : COPY --chown=forgerock:root default-scripts /opt/opendj/default-scripts
 ---> Using cache
 ---> d61d98b95416
Step 9/11 : COPY --chown=forgerock:root ldif-ext /opt/opendj/ldif-ext
 ---> Using cache
 ---> ce1af0f7fb4c
Step 10/11 : COPY --chown=forgerock:root *.sh /opt/opendj/
 ---> Using cache
 ---> 42def4fae7b8
Step 11/11 : RUN ./ds-setup.sh && rm ./ds-setup.sh && rm -fr ldif-ext
 ---> Using cache
 ---> e1fbdd52a060
[Warning] One or more build-args [CONFIG_PROFILE] were not consumed
Successfully built e1fbdd52a060
Successfully tagged gcr.io/engineeringpit/lodestar-images/ds:xlou
The push refers to repository [gcr.io/engineeringpit/lodestar-images/ds]
687337fc29fa: Preparing e364c2b60dee: Preparing 8b07c6cd0386: Preparing 1c8f942c80eb: Preparing 92ac2adbfe8b: Preparing 19b18afea0d3: Preparing 40a3f3a371eb: Preparing 7cdabc880e86: Preparing 9452f9e977af: Preparing a95541680708: Preparing b0f94f52290c: Preparing 67a4178b7d47: Preparing
40a3f3a371eb: Waiting 7cdabc880e86: Waiting 9452f9e977af: Waiting a95541680708: Waiting b0f94f52290c: Waiting 67a4178b7d47: Waiting 19b18afea0d3: Waiting
1c8f942c80eb: Layer already exists 687337fc29fa: Layer already exists e364c2b60dee: Layer already exists 92ac2adbfe8b: Layer already exists 8b07c6cd0386: Layer already exists 19b18afea0d3: Layer already exists 40a3f3a371eb: Layer
already exists 7cdabc880e86: Layer already exists 9452f9e977af: Layer already exists b0f94f52290c: Layer already exists a95541680708: Layer already exists 67a4178b7d47: Layer already exists xlou: digest: sha256:f380f464f9f8e9ac683214151a127c64c2ed08162fc5aa06653396327efee996 size: 2840 Sending build context to Docker daemon 293.4kB Step 1/10 : FROM gcr.io/forgerock-io/ds/pit1:7.3.0-35060cde0333c76dfba29a4e88896b885a0deba7 ---> dbff81292ead Step 2/10 : COPY debian-buster-sources.list /etc/apt/sources.list ---> Using cache ---> b2ce7aaec48e Step 3/10 : WORKDIR /opt/opendj ---> Using cache ---> 2e0338c1b481 Step 4/10 : COPY --chown=forgerock:root common /opt/opendj/ ---> Using cache ---> 7168ddc5592d Step 5/10 : COPY --chown=forgerock:root idrepo /opt/opendj/ ---> Using cache ---> a775b70d73e5 Step 6/10 : COPY --chown=forgerock:root scripts /opt/opendj/scripts ---> Using cache ---> 9c1b9f8c8033 Step 7/10 : COPY --chown=forgerock:root uma /opt/opendj/uma ---> Using cache ---> a633263fe8a8 Step 8/10 : COPY --chown=forgerock:root idrepo/*.ldif /var/tmp/ ---> Using cache ---> a7d7db7af764 Step 9/10 : RUN chmod +w template/setup-profiles/AM/config/6.5/base-entries.ldif && cat scripts/external-am-datastore.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_audit.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_pending_requests.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_resource_set_labels.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_resource_sets.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat /var/tmp/alpha_bravo.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && chmod +w template/setup-profiles/AM/identity-store/7.0/base-entries.ldif && cat /var/tmp/orgs.ldif >> template/setup-profiles/AM/identity-store/7.0/base-entries.ldif && rm /var/tmp/*ldif ---> Using cache ---> 0befe55ac8bd 
Step 10/10 : RUN bin/setup.sh && bin/relax-security-settings.sh && rm bin/setup.sh bin/relax-security-settings.sh ---> Using cache ---> 0eed5ab39b1f [Warning] One or more build-args [CONFIG_PROFILE] were not consumed Successfully built 0eed5ab39b1f Successfully tagged gcr.io/engineeringpit/lodestar-images/ds-idrepo:xlou The push refers to repository [gcr.io/engineeringpit/lodestar-images/ds-idrepo] 908770149431: Preparing 667f4be61249: Preparing 956ace8d0013: Preparing 54c2c85ce3ce: Preparing ae793690598b: Preparing 00d6d6b975a4: Preparing a80bd4ced8fd: Preparing 68083f01a93b: Preparing 19b18afea0d3: Preparing 40a3f3a371eb: Preparing 7cdabc880e86: Preparing 9452f9e977af: Preparing a95541680708: Preparing b0f94f52290c: Preparing 67a4178b7d47: Preparing 19b18afea0d3: Waiting 40a3f3a371eb: Waiting 7cdabc880e86: Waiting 9452f9e977af: Waiting a95541680708: Waiting b0f94f52290c: Waiting 67a4178b7d47: Waiting 00d6d6b975a4: Waiting a80bd4ced8fd: Waiting 68083f01a93b: Waiting ae793690598b: Layer already exists 667f4be61249: Layer already exists 908770149431: Layer already exists 956ace8d0013: Layer already exists 54c2c85ce3ce: Layer already exists 00d6d6b975a4: Layer already exists 68083f01a93b: Layer already exists 19b18afea0d3: Layer already exists a80bd4ced8fd: Layer already exists 40a3f3a371eb: Layer already exists 7cdabc880e86: Layer already exists 9452f9e977af: Layer already exists b0f94f52290c: Layer already exists a95541680708: Layer already exists 67a4178b7d47: Layer already exists xlou: digest: sha256:eb9677660e9d0db6c9d939f9302ce33af68f126dd9f5a825e4390f8303dd72bd size: 3456 Sending build context to Docker daemon 293.4kB Step 1/10 : FROM gcr.io/forgerock-io/ds/pit1:7.3.0-35060cde0333c76dfba29a4e88896b885a0deba7 ---> dbff81292ead Step 2/10 : USER root ---> Using cache ---> 78a28946c1d6 Step 3/10 : COPY debian-buster-sources.list /etc/apt/sources.list ---> Using cache ---> c06f986f620f Step 4/10 : RUN chown -R forgerock:root /opt/opendj ---> Using cache ---> 
5956b9a61755 Step 5/10 : USER forgerock ---> Using cache ---> 92de484f9e6a Step 6/10 : COPY --chown=forgerock:root common /opt/opendj/ ---> Using cache ---> 57e7548d0b69 Step 7/10 : COPY --chown=forgerock:root cts /opt/opendj/ ---> Using cache ---> 14071486f4ce Step 8/10 : COPY --chown=forgerock:root scripts /opt/opendj/scripts ---> Using cache ---> ab36bdaf08f8 Step 9/10 : ARG profile_version ---> Using cache ---> f3fff374b103 Step 10/10 : RUN bin/setup.sh && bin/relax-security-settings.sh && rm bin/setup.sh bin/relax-security-settings.sh ---> Using cache ---> ba04d80e4b7c [Warning] One or more build-args [CONFIG_PROFILE] were not consumed Successfully built ba04d80e4b7c Successfully tagged gcr.io/engineeringpit/lodestar-images/ds-cts:xlou The push refers to repository [gcr.io/engineeringpit/lodestar-images/ds-cts] 16cc269a4fda: Preparing d889e3ccd4a0: Preparing fcfc5ef85f4c: Preparing bc72c72324e5: Preparing 27204a05559e: Preparing 74bb51abf27d: Preparing 19b18afea0d3: Preparing 40a3f3a371eb: Preparing 7cdabc880e86: Preparing 9452f9e977af: Preparing a95541680708: Preparing b0f94f52290c: Preparing 67a4178b7d47: Preparing 40a3f3a371eb: Waiting 7cdabc880e86: Waiting 9452f9e977af: Waiting a95541680708: Waiting b0f94f52290c: Waiting 67a4178b7d47: Waiting 74bb51abf27d: Waiting 19b18afea0d3: Waiting 27204a05559e: Layer already exists fcfc5ef85f4c: Layer already exists d889e3ccd4a0: Layer already exists bc72c72324e5: Layer already exists 16cc269a4fda: Layer already exists 19b18afea0d3: Layer already exists 40a3f3a371eb: Layer already exists 7cdabc880e86: Layer already exists 74bb51abf27d: Layer already exists 9452f9e977af: Layer already exists a95541680708: Layer already exists b0f94f52290c: Layer already exists 67a4178b7d47: Layer already exists xlou: digest: sha256:2f5e629c9e43cd93416b4e4b620f483c256373ddc4c94061066a733d6cf15ddd size: 3045 Sending build context to Docker daemon 34.3kB Step 1/6 : FROM gcr.io/forgerock-io/ig/pit1:7.3.0-latest-postcommit ---> 7dd93b447159 
Step 2/6 : COPY debian-buster-sources.list /etc/apt/sources.list ---> Using cache ---> a0fd85dd28c0 Step 3/6 : ARG CONFIG_PROFILE=cdk ---> Using cache ---> bf64423643e1 Step 4/6 : RUN echo "\033[0;36m*** Building '${CONFIG_PROFILE}' profile ***\033[0m" ---> Using cache ---> a54dcec9db21 Step 5/6 : COPY --chown=forgerock:root config-profiles/${CONFIG_PROFILE}/ /var/ig ---> Using cache ---> ad84ee6840a1 Step 6/6 : COPY --chown=forgerock:root . /var/ig ---> Using cache ---> 73bb9a6608fb Successfully built 73bb9a6608fb Successfully tagged gcr.io/engineeringpit/lodestar-images/ig:xlou The push refers to repository [gcr.io/engineeringpit/lodestar-images/ig] db18ca515cef: Preparing a75fd5555aea: Preparing 06fb97320911: Preparing ab5d061ff1d2: Preparing b6a26486dd56: Preparing 10f40a6a43c5: Preparing 1810b7217266: Preparing 60267818b020: Preparing 6972c6a4586d: Preparing 5e660b55249c: Preparing a12586ed027f: Preparing 1810b7217266: Waiting 60267818b020: Waiting 6972c6a4586d: Waiting 5e660b55249c: Waiting a12586ed027f: Waiting 10f40a6a43c5: Waiting db18ca515cef: Layer already exists a75fd5555aea: Layer already exists 06fb97320911: Layer already exists b6a26486dd56: Layer already exists 10f40a6a43c5: Layer already exists ab5d061ff1d2: Layer already exists 6972c6a4586d: Layer already exists 1810b7217266: Layer already exists 60267818b020: Layer already exists a12586ed027f: Layer already exists 5e660b55249c: Layer already exists xlou: digest: sha256:2d19c32393ff10bdef053aa4b6a1587ba8c258af792704bf9d83bd0ad1f566d8 size: 2621 Updated the image_defaulter with your new image for am: "gcr.io/engineeringpit/lodestar-images/am:xlou". Updated the image_defaulter with your new image for idm: "gcr.io/engineeringpit/lodestar-images/idm:xlou". Updated the image_defaulter with your new image for ds: "gcr.io/engineeringpit/lodestar-images/ds:xlou". Updated the image_defaulter with your new image for ds-idrepo: "gcr.io/engineeringpit/lodestar-images/ds-idrepo:xlou". 
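The build-and-push cycle above repeats the same pattern for each component image (am, idm, ds, ds-idrepo, ds-cts, ig): build against a ForgeRock base image, tag into the lodestar registry, push, then register the tag with the image_defaulter. A minimal dry-run sketch of that cycle; the registry path, tag, and component list come from the log, but the exact docker flags forgeops passes (CONFIG_PROFILE, build context directories) are assumptions, so the commands are echoed rather than executed:

```shell
#!/bin/sh
# Dry-run sketch of the per-component build/tag/push cycle in the log.
# registry/tag/components come from the log; the docker flags and the
# docker/<component> context paths are assumptions.
print_build_cmds() {
    registry="gcr.io/engineeringpit/lodestar-images"
    tag="xlou"
    for component in am idm ds ds-idrepo ds-cts ig; do
        image="${registry}/${component}:${tag}"
        # echoed, not executed, so the sketch runs without a Docker daemon
        echo "docker build --build-arg CONFIG_PROFILE=cdk -t ${image} docker/${component}"
        echo "docker push ${image}"
    done
}
print_build_cmds
```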
Updated the image_defaulter with your new image for ds-cts: "gcr.io/engineeringpit/lodestar-images/ds-cts:xlou".
Updated the image_defaulter with your new image for ig: "gcr.io/engineeringpit/lodestar-images/ig:xlou".
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/forgeops install --namespace=xlou --fqdn xlou.iam.xlou-bsln.engineeringpit.com --custom /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/overlay/internal-profiles/medium-old --legacy all
[run_command]: env={'HOME': '/home/xslou', 'PATH': '/mnt/disks/data/xslou/lodestar-fork/ext/bin:/home/xslou/.local/bin:/home/xslou/bin:~/bin:~/workshop/lodestar-fork/ext/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin', 'DOCKER_SCAN_SUGGEST': 'false'}
customresourcedefinition.apiextensions.k8s.io/secretagentconfigurations.secret-agent.secrets.forgerock.io condition met
deployment.apps/secret-agent-controller-manager condition met
NAME                                               READY   STATUS    RESTARTS   AGE
secret-agent-controller-manager-75c755487b-k7mgk   2/2     Running   0          6h48m
configmap/dev-utils created
configmap/platform-config created
ingress.networking.k8s.io/forgerock created
ingress.networking.k8s.io/ig created
certificate.cert-manager.io/ds-master-cert created
certificate.cert-manager.io/ds-ssl-cert created
issuer.cert-manager.io/selfsigned-issuer created
secretagentconfiguration.secret-agent.secrets.forgerock.io/forgerock-sac created
secret/cloud-storage-credentials-cts created
secret/cloud-storage-credentials-idrepo created
service/ds-cts created
service/ds-idrepo created
statefulset.apps/ds-cts created
statefulset.apps/ds-idrepo created
job.batch/ldif-importer created
Checking cert-manager and related CRDs: cert-manager CRD found in cluster.
Checking secret-agent operator and related CRDs: secret-agent CRD found in cluster.
Checking secret-agent operator is running...
secret-agent operator is running
Installing component(s): ['all'] platform: "custom-old" in namespace: "xlou".
Deploying base.yaml. This is a one time activity.
Deploying ds.yaml. This includes all directory resources.
Waiting for DS deployment. This can take a few minutes. First installation takes longer.
Waiting for statefulset "ds-idrepo" to exist in the cluster:
Waiting for 3 pods to be ready...
Waiting for 2 pods to be ready...
Waiting for 1 pods to be ready...
statefulset rolling update complete 3 pods at revision ds-idrepo-7b446fff4d...
done
Waiting for Service Account Password Update: done
Waiting for statefulset "ds-cts" to exist in the cluster:
statefulset rolling update complete 3 pods at revision ds-cts-87b85b6bd...
done
Waiting for Service Account Password Update:
configmap/amster-files created
configmap/idm created
configmap/idm-logging-properties created
service/am created
service/idm created
deployment.apps/am created
deployment.apps/idm created
job.batch/amster created
done
Cleaning up amster components.
Deploying apps.
Waiting for AM deployment. This can take a few minutes. First installation takes longer.
Waiting for deployment "am" to exist in the cluster:
deployment.apps/am condition met
configmap/amster-retain created
done
Waiting for amster job to complete. This can take several minutes.
Waiting for job "amster" to exist in the cluster:
job.batch/amster condition met
done
Waiting for IDM deployment. This can take a few minutes. First installation takes longer.
Waiting for deployment "idm" to exist in the cluster:
pod/idm-6ddf478c88-7wfrg condition met
pod/idm-6ddf478c88-cqc29 condition met
service/admin-ui created
service/end-user-ui created
service/login-ui created
deployment.apps/admin-ui created
deployment.apps/end-user-ui created
deployment.apps/login-ui created
done
Deploying UI.
Waiting for K8s secrets.
Waiting for secret "am-env-secrets" to exist in the cluster: done
Waiting for secret "idm-env-secrets" to exist in the cluster: done
Waiting for secret "ds-passwords" to exist in the cluster: done
Waiting for secret "ds-env-secrets" to exist in the cluster: done

Relevant passwords:
WQAeIj6YQPJvtvWMDwZ3R2DR (amadmin user)
k7raQF3XHmafcPfKXqlq3PYJhqhbkC3C (uid=admin user)
nw1tVGqwk7CEGKFHLVzTzXDtoFBhlgOl (App str svc acct (uid=am-config,ou=admins,ou=am-config))
qF1WmbxRuYiZ6YZmTCJHUAzmqjFl6BGw (CTS svc acct (uid=openam_cts,ou=admins,ou=famrecords,ou=openam-session,ou=tokens))
BKJ7tLe3sJK0WeomDvEIdp5zJkn2WTza (ID repo svc acct (uid=am-identity-bind-account,ou=admins,ou=identities))

Relevant URLs:
https://xlou.iam.xlou-bsln.engineeringpit.com/platform
https://xlou.iam.xlou-bsln.engineeringpit.com/admin
https://xlou.iam.xlou-bsln.engineeringpit.com/am
https://xlou.iam.xlou-bsln.engineeringpit.com/enduser

Enjoy your deployment!
**************************** Initializing component pods for DS-CTS ****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-cts -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app=ds-cts -o jsonpath={.items[*].metadata.name}` | grep 3
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
ds-cts-0 ds-cts-1 ds-cts-2
--- stderr ---
-------------------- Check pod ds-cts-0 is running --------------------
[loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-cts-0 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2023-02-04T03:39:58Z
--- stderr ---
------------- Check pod ds-cts-0 filesystem is accessible -------------
[loop_until]: kubectl --namespace=xlou exec ds-cts-0 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
--- stderr ---
------------------ Check pod ds-cts-0 restart count ------------------
[loop_until]: kubectl --namespace=xlou get pod ds-cts-0 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-cts-0 has been restarted 0 times.
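Each `[loop_until]` entry above retries a command every `interval` seconds until its return code matches `expected_rc` or `max_time` elapses. A minimal shell sketch of that contract (the function name and output lines mimic the log format; this is not pyrock's actual implementation):

```shell
#!/bin/sh
# Sketch of the [loop_until] contract: retry "$@" every $interval
# seconds until its rc matches $expected_rc or $max_time elapses.
# Illustrative only - not pyrock's real loop_until.
loop_until() {
    max_time="$1"; interval="$2"; expected_rc="$3"; shift 3
    elapsed=0
    while [ "$elapsed" -le "$max_time" ]; do
        "$@"
        rc=$?
        if [ "$rc" -eq "$expected_rc" ]; then
            echo "[loop_until]: OK (rc = $rc)"
            return 0
        fi
        sleep "$interval"
        elapsed=$((elapsed + interval))
    done
    echo "[loop_until]: FAILED after ${max_time}s (last rc = $rc)"
    return 1
}
```

For example, `loop_until 360 5 0 sh -c 'kubectl get pods ds-cts-0 -o=jsonpath={.status.phase} | grep Running'` reproduces the "is running" check, polling every 5 seconds for up to 6 minutes.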
-------------------- Check pod ds-cts-1 is running --------------------
[loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-cts-1 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2023-02-04T03:40:20Z
--- stderr ---
------------- Check pod ds-cts-1 filesystem is accessible -------------
[loop_until]: kubectl --namespace=xlou exec ds-cts-1 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
--- stderr ---
------------------ Check pod ds-cts-1 restart count ------------------
[loop_until]: kubectl --namespace=xlou get pod ds-cts-1 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-cts-1 has been restarted 0 times.
-------------------- Check pod ds-cts-2 is running --------------------
[loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-cts-2 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2023-02-04T03:40:43Z
--- stderr ---
------------- Check pod ds-cts-2 filesystem is accessible -------------
[loop_until]: kubectl --namespace=xlou exec ds-cts-2 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
--- stderr ---
------------------ Check pod ds-cts-2 restart count ------------------
[loop_until]: kubectl --namespace=xlou get pod ds-cts-2 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-cts-2 has been restarted 0 times.
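The "Get pod list" step above verifies the pod count with an awk trick: the kubectl jsonpath output is a single space-separated line of pod names, so awk's NF (number of fields) equals the number of pods, and piping it into `grep 3` succeeds only when all three replicas exist. In isolation (with a canned string standing in for the kubectl output):

```shell
#!/bin/sh
# The jsonpath output is one space-separated line of pod names, so
# awk's NF (field count) is the pod count; grep for the replica count
# then drives the loop_until pass/fail. The string below stands in
# for `kubectl get pods -l app=ds-cts -o jsonpath=...`.
pods="ds-cts-0 ds-cts-1 ds-cts-2"
count=$(printf '%s\n' "$pods" | awk -F" " '{print NF}')
echo "$count"            # prints 3, matching spec.replicas
echo "$count" | grep 3   # rc 0 -> loop_until reports OK
```

The log uses a bash herestring (`<<< \`kubectl ...\``) instead of the `printf | awk` pipe; the two are equivalent here.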
************************** Initializing component pods for DS-IDREPO **************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-idrepo -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app=ds-idrepo -o jsonpath={.items[*].metadata.name}` | grep 3
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
ds-idrepo-0 ds-idrepo-1 ds-idrepo-2
--- stderr ---
------------------ Check pod ds-idrepo-0 is running ------------------
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2023-02-04T03:39:58Z
--- stderr ---
----------- Check pod ds-idrepo-0 filesystem is accessible -----------
[loop_until]: kubectl --namespace=xlou exec ds-idrepo-0 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
--- stderr ---
----------------- Check pod ds-idrepo-0 restart count -----------------
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-idrepo-0 has been restarted 0 times.
------------------ Check pod ds-idrepo-1 is running ------------------
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2023-02-04T03:40:31Z
--- stderr ---
----------- Check pod ds-idrepo-1 filesystem is accessible -----------
[loop_until]: kubectl --namespace=xlou exec ds-idrepo-1 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
--- stderr ---
----------------- Check pod ds-idrepo-1 restart count -----------------
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-idrepo-1 has been restarted 0 times.
------------------ Check pod ds-idrepo-2 is running ------------------
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2023-02-04T03:41:09Z
--- stderr ---
----------- Check pod ds-idrepo-2 filesystem is accessible -----------
[loop_until]: kubectl --namespace=xlou exec ds-idrepo-2 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
--- stderr ---
----------------- Check pod ds-idrepo-2 restart count -----------------
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-idrepo-2 has been restarted 0 times.
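The same four checks (phase, container readiness, filesystem access, restart count) repeat for every pod in every component. They can be bundled into a single helper; in the sketch below a stub `kubectl` function fakes a healthy pod so it runs without a cluster, and the real checks would issue the kubectl commands exactly as shown in the log:

```shell
#!/bin/sh
# One helper for the four per-pod checks repeated above. The kubectl
# function is a stub returning healthy-pod answers so this sketch runs
# without a cluster; remove it to run against a real namespace.
kubectl() {
    case "$*" in
        *phase*)        echo "Running" ;;          # pod phase
        *ready*)        echo "true" ;;             # container readiness
        *restartCount*) echo "0" ;;                # restart count
        *exec*)         echo "bin boot etc opt usr var" ;;  # ls /
    esac
}

check_pod() {
    pod="$1"; ns="$2"
    kubectl --namespace="$ns" get pod "$pod" -o jsonpath='{.status.phase}' | grep -q Running || return 1
    kubectl --namespace="$ns" get pod "$pod" -o jsonpath='{.status.containerStatuses[*].ready}' | grep -q true || return 1
    kubectl --namespace="$ns" exec "$pod" -c ds -- ls / | grep -q bin || return 1
    restarts=$(kubectl --namespace="$ns" get pod "$pod" -o jsonpath='{.status.containerStatuses[*].restartCount}')
    echo "Pod $pod is healthy (restarted $restarts times)"
}

check_pod ds-idrepo-0 xlou
```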
****************************** Initializing component pods for AM ******************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app=am -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app=am -o jsonpath={.items[*].metadata.name}` | grep 3
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
am-685d4f4864-cqdmp am-685d4f4864-gpjj8 am-685d4f4864-spg8m
--- stderr ---
-------------- Check pod am-685d4f4864-cqdmp is running --------------
[loop_until]: kubectl --namespace=xlou get pods am-685d4f4864-cqdmp -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods am-685d4f4864-cqdmp -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod am-685d4f4864-cqdmp -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2023-02-04T03:41:49Z
--- stderr ---
------- Check pod am-685d4f4864-cqdmp filesystem is accessible -------
[loop_until]: kubectl --namespace=xlou exec am-685d4f4864-cqdmp -c openam -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
--- stderr ---
------------- Check pod am-685d4f4864-cqdmp restart count -------------
[loop_until]: kubectl --namespace=xlou get pod am-685d4f4864-cqdmp -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod am-685d4f4864-cqdmp has been restarted 0 times.
-------------- Check pod am-685d4f4864-gpjj8 is running --------------
[loop_until]: kubectl --namespace=xlou get pods am-685d4f4864-gpjj8 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods am-685d4f4864-gpjj8 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod am-685d4f4864-gpjj8 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2023-02-04T03:41:49Z
--- stderr ---
------- Check pod am-685d4f4864-gpjj8 filesystem is accessible -------
[loop_until]: kubectl --namespace=xlou exec am-685d4f4864-gpjj8 -c openam -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
--- stderr ---
------------- Check pod am-685d4f4864-gpjj8 restart count -------------
[loop_until]: kubectl --namespace=xlou get pod am-685d4f4864-gpjj8 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod am-685d4f4864-gpjj8 has been restarted 0 times.
-------------- Check pod am-685d4f4864-spg8m is running --------------
[loop_until]: kubectl --namespace=xlou get pods am-685d4f4864-spg8m -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods am-685d4f4864-spg8m -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod am-685d4f4864-spg8m -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2023-02-04T03:41:49Z
--- stderr ---
------- Check pod am-685d4f4864-spg8m filesystem is accessible -------
[loop_until]: kubectl --namespace=xlou exec am-685d4f4864-spg8m -c openam -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
--- stderr ---
------------- Check pod am-685d4f4864-spg8m restart count -------------
[loop_until]: kubectl --namespace=xlou get pod am-685d4f4864-spg8m -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout
--- 0 --- stderr --- Pod am-685d4f4864-spg8m has been restarted 0 times. **************************** Initializing component pods for AMSTER **************************** ---------------------------- Get pod list ---------------------------- [loop_until]: kubectl --namespace=xlou --field-selector status.phase!=Failed get pods -l app=amster -o jsonpath={.items[*].metadata.name} [loop_until]: (max_time=180, interval=10, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- amster-6pl8w --- stderr --- ***************************** Initializing component pods for IDM ***************************** --------------------- Get expected number of pods --------------------- [loop_until]: kubectl --namespace=xlou get deployment -l app=idm -o jsonpath={.items[*].spec.replicas} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2 --- stderr --- ---------------------------- Get pod list ---------------------------- [loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app=idm -o jsonpath={.items[*].metadata.name}` | grep 2 [loop_until]: (max_time=180, interval=10, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found [loop_until]: OK (rc = 0) --- stdout --- idm-6ddf478c88-7wfrg idm-6ddf478c88-cqc29 --- stderr --- -------------- Check pod idm-6ddf478c88-7wfrg is running -------------- [loop_until]: kubectl --namespace=xlou get pods idm-6ddf478c88-7wfrg -o=jsonpath={.status.phase} | grep "Running" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Running --- stderr --- [loop_until]: kubectl --namespace=xlou get pods idm-6ddf478c88-7wfrg -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK 
(rc = 0) --- stdout --- true --- stderr --- [loop_until]: kubectl --namespace=xlou get pod idm-6ddf478c88-7wfrg -o jsonpath={.status.startTime} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2023-02-04T03:41:49Z --- stderr --- ------- Check pod idm-6ddf478c88-7wfrg filesystem is accessible ------- [loop_until]: kubectl --namespace=xlou exec idm-6ddf478c88-7wfrg -c openidm -- ls / | grep "bin" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Dockerfile.java-11 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var --- stderr --- ------------ Check pod idm-6ddf478c88-7wfrg restart count ------------ [loop_until]: kubectl --namespace=xlou get pod idm-6ddf478c88-7wfrg -o jsonpath={.status.containerStatuses[*].restartCount} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 0 --- stderr --- Pod idm-6ddf478c88-7wfrg has been restarted 0 times. 
-------------- Check pod idm-6ddf478c88-cqc29 is running -------------- [loop_until]: kubectl --namespace=xlou get pods idm-6ddf478c88-cqc29 -o=jsonpath={.status.phase} | grep "Running" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Running --- stderr --- [loop_until]: kubectl --namespace=xlou get pods idm-6ddf478c88-cqc29 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- true --- stderr --- [loop_until]: kubectl --namespace=xlou get pod idm-6ddf478c88-cqc29 -o jsonpath={.status.startTime} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2023-02-04T03:41:49Z --- stderr --- ------- Check pod idm-6ddf478c88-cqc29 filesystem is accessible ------- [loop_until]: kubectl --namespace=xlou exec idm-6ddf478c88-cqc29 -c openidm -- ls / | grep "bin" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Dockerfile.java-11 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var --- stderr --- ------------ Check pod idm-6ddf478c88-cqc29 restart count ------------ [loop_until]: kubectl --namespace=xlou get pod idm-6ddf478c88-cqc29 -o jsonpath={.status.containerStatuses[*].restartCount} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 0 --- stderr --- Pod idm-6ddf478c88-cqc29 has been restarted 0 times. 
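Every `[loop_until]` entry above is the same retry primitive: re-run a probe every `interval` seconds until it exits 0 or `max_time` elapses. A minimal bash sketch of that behaviour (the function name echoes the log tag; this is an assumed re-implementation, not the harness's actual code):

```shell
# Assumed re-implementation of the harness's retry primitive.
# Usage: loop_until <max_time> <interval> <command...>
loop_until() {
  local max_time=$1 interval=$2 elapsed=0
  shift 2
  until "$@"; do
    elapsed=$((elapsed + interval))
    if [ "$elapsed" -ge "$max_time" ]; then
      echo "loop_until: FAILED after ${max_time}s: $*" >&2
      return 1
    fi
    sleep "$interval"
  done
  echo "loop_until: OK (rc = 0)"
}

# A readiness probe from the log, wrapped in the helper (commented out
# because it needs cluster access):
# loop_until 360 5 sh -c \
#   'kubectl --namespace=xlou get pods am-685d4f4864-cqdmp -o=jsonpath={.status.phase} | grep -q Running'
```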
************************* Initializing component pods for END-USER-UI *************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout --- 1 --- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].metadata.name}` | grep 1
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout --- end-user-ui-54cd6f9459-vkkcz --- stderr ---
---------- Check pod end-user-ui-54cd6f9459-vkkcz is running ----------
[loop_until]: kubectl --namespace=xlou get pods end-user-ui-54cd6f9459-vkkcz -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout --- Running --- stderr ---
[loop_until]: kubectl --namespace=xlou get pods end-user-ui-54cd6f9459-vkkcz -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout --- true --- stderr ---
[loop_until]: kubectl --namespace=xlou get pod end-user-ui-54cd6f9459-vkkcz -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout --- 2023-02-04T03:42:52Z --- stderr ---
--- Check pod end-user-ui-54cd6f9459-vkkcz filesystem is accessible ---
[loop_until]: kubectl --namespace=xlou exec end-user-ui-54cd6f9459-vkkcz -c end-user-ui -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout --- bin boot dev docker-entrypoint.d docker-entrypoint.sh entrypoint.sh etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var variable_replacement.sh --- stderr ---
-------- Check pod end-user-ui-54cd6f9459-vkkcz restart count --------
[loop_until]: kubectl --namespace=xlou get pod end-user-ui-54cd6f9459-vkkcz -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout --- 0 --- stderr ---
Pod end-user-ui-54cd6f9459-vkkcz has been restarted 0 times.
*************************** Initializing component pods for LOGIN-UI ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout --- 1 --- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].metadata.name}` | grep 1
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout --- login-ui-7678f6977f-9swxh --- stderr ---
----------- Check pod login-ui-7678f6977f-9swxh is running -----------
[loop_until]: kubectl --namespace=xlou get pods login-ui-7678f6977f-9swxh -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout --- Running --- stderr ---
[loop_until]: kubectl --namespace=xlou get pods login-ui-7678f6977f-9swxh -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout --- true --- stderr ---
[loop_until]: kubectl --namespace=xlou get pod login-ui-7678f6977f-9swxh -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout --- 2023-02-04T03:42:52Z --- stderr ---
---- Check pod login-ui-7678f6977f-9swxh filesystem is accessible ----
[loop_until]: kubectl --namespace=xlou exec login-ui-7678f6977f-9swxh -c login-ui -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout --- bin boot dev docker-entrypoint.d docker-entrypoint.sh entrypoint.sh etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var variable_replacement.sh --- stderr ---
---------- Check pod login-ui-7678f6977f-9swxh restart count ----------
[loop_until]: kubectl --namespace=xlou get pod login-ui-7678f6977f-9swxh -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout --- 0 --- stderr ---
Pod login-ui-7678f6977f-9swxh has been restarted 0 times.
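The "Get pod list" steps count pod names with `awk -F" " "{print NF}"` and grep for the expected replica count. The counting itself reduces to `wc -w`; a sketch of the same check (the helper name is hypothetical; the cluster-side commands, copied from the log, are commented out because they need kubectl access):

```shell
# Count whitespace-separated pod names, as the awk -F" " '{print NF}'
# pipeline in the log does (wc -w is an equivalent, simpler spelling).
count_pods() { printf '%s' "$1" | wc -w | tr -d '[:space:]'; }

# On the cluster (commands taken from the log):
# expected=$(kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=login-ui \
#   -o jsonpath={.items[*].spec.replicas})
# names=$(kubectl --namespace=xlou get pods -l app.kubernetes.io/name=login-ui \
#   -o jsonpath={.items[*].metadata.name})
# [ "$(count_pods "$names")" -eq "$expected" ]
```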
*************************** Initializing component pods for ADMIN-UI ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout --- 1 --- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].metadata.name}` | grep 1
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout --- admin-ui-5d95c47d49-psg6l --- stderr ---
----------- Check pod admin-ui-5d95c47d49-psg6l is running -----------
[loop_until]: kubectl --namespace=xlou get pods admin-ui-5d95c47d49-psg6l -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout --- Running --- stderr ---
[loop_until]: kubectl --namespace=xlou get pods admin-ui-5d95c47d49-psg6l -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout --- true --- stderr ---
[loop_until]: kubectl --namespace=xlou get pod admin-ui-5d95c47d49-psg6l -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout --- 2023-02-04T03:42:51Z --- stderr ---
---- Check pod admin-ui-5d95c47d49-psg6l filesystem is accessible ----
[loop_until]: kubectl --namespace=xlou exec admin-ui-5d95c47d49-psg6l -c admin-ui -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout --- bin boot dev docker-entrypoint.d docker-entrypoint.sh entrypoint.sh etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var variable_replacement.sh --- stderr ---
---------- Check pod admin-ui-5d95c47d49-psg6l restart count ----------
[loop_until]: kubectl --namespace=xlou get pod admin-ui-5d95c47d49-psg6l -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout --- 0 --- stderr ---
Pod admin-ui-5d95c47d49-psg6l has been restarted 0 times.
***************************** Checking DS-CTS component is running *****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-cts -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout --- 3 --- stderr ---
-------------- Waiting for 3 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get statefulsets ds-cts -o jsonpath="current:{.status.currentReplicas} ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "current:3 ready:3 replicas:3"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout --- current:3 ready:3 replicas:3 --- stderr ---
*************************** Checking DS-IDREPO component is running ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-idrepo -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout --- 3 --- stderr ---
-------------- Waiting for 3 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get statefulsets ds-idrepo -o jsonpath="current:{.status.currentReplicas} ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "current:3 ready:3 replicas:3"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout --- current:3 ready:3 replicas:3 --- stderr ---
******************************* Checking AM component is running *******************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app=am -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout --- 3 --- stderr ---
-------------- Waiting for 3 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get deployments am -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:3 replicas:3"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout --- ready:3 replicas:3 --- stderr ---
***************************** Checking AMSTER component is running *****************************
------------------ Waiting for Amster job to finish ------------------
--------------------- Get expected number of pods ---------------------
-------------- Waiting for 1 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get jobs amster -o jsonpath="{.status.succeeded}" | grep "1"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout --- 1 --- stderr ---
****************************** Checking IDM component is running ******************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployment -l app=idm -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout --- 2 --- stderr ---
-------------- Waiting for 2 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get deployment idm -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:2 replicas:2"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout --- ready:2 replicas:2 --- stderr ---
************************** Checking END-USER-UI component is running **************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout --- 1 --- stderr ---
-------------- Waiting for 1 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get deployments end-user-ui -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:1 replicas:1"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout --- ready:1 replicas:1 --- stderr ---
**************************** Checking LOGIN-UI component is running ****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout --- 1 --- stderr ---
-------------- Waiting for 1 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get deployments login-ui -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:1 replicas:1"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout --- ready:1 replicas:1 --- stderr ---
**************************** Checking ADMIN-UI component is running ****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout --- 1 --- stderr ---
-------------- Waiting for 1 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get deployments admin-ui -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:1 replicas:1"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout --- ready:1 replicas:1 --- stderr ---
**************************** Initializing component pods for DS-CTS ****************************
--------------------- Get DS-CTS software version ---------------------
[loop_until]: kubectl --namespace=xlou exec ds-cts-0 -c ds -- find /opt/opendj -name opendj-core.jar
[loop_until]: (max_time=30, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout --- /opt/opendj/lib/opendj-core.jar --- stderr ---
[loop_until]: kubectl --namespace=xlou cp ds-cts-0:/opt/opendj/lib/opendj-core.jar /tmp/ds-cts_info/opendj-core.jar -c ds
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout --- tar: Removing leading `/'
from member names --- stderr ---
************************** Initializing component pods for DS-IDREPO **************************
------------------- Get DS-IDREPO software version -------------------
[loop_until]: kubectl --namespace=xlou exec ds-idrepo-0 -c ds -- find /opt/opendj -name opendj-core.jar
[loop_until]: (max_time=30, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout --- /opt/opendj/lib/opendj-core.jar --- stderr ---
[loop_until]: kubectl --namespace=xlou cp ds-idrepo-0:/opt/opendj/lib/opendj-core.jar /tmp/ds-idrepo_info/opendj-core.jar -c ds
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout --- tar: Removing leading `/' from member names --- stderr ---
****************************** Initializing component pods for AM ******************************
----------------------- Get AM software version -----------------------
Getting product version from https://xlou.iam.xlou-bsln.engineeringpit.com/am/json/serverinfo/version
- Login amadmin to get token
[loop_until]: kubectl --namespace=xlou get secret am-env-secrets -o jsonpath="{.data.AM_PASSWORDS_AMADMIN_CLEAR}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout --- V1FBZUlqNllRUEp2dHZXTUR3WjNSMkRS --- stderr ---
Authenticate user via REST
[http_cmd]: curl -H "X-OpenAM-Username: amadmin" -H "X-OpenAM-Password: WQAeIj6YQPJvtvWMDwZ3R2DR" -H "Content-Type: application/json" -H "Accept-API-Version: resource=2.0, protocol=1.0" -L -X POST "https://xlou.iam.xlou-bsln.engineeringpit.com/am/json/authenticate?realm=/"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
{ "tokenId": "uXxaURXdcBqjmOlllP-z0Ld_wZE.*AAJTSQACMDIAAlNLABxzMUFXcnVuUitOM3AvbzMrbkpLdGg1ZlRsdmc9AAR0eXBlAANDVFMAAlMxAAIwMQ..*", "successUrl": "/am/console", "realm": "/" }
[http_cmd]: curl -L -X GET --cookie "amlbcookie=01" --cookie "iPlanetDirectoryPro=uXxaURXdcBqjmOlllP-z0Ld_wZE.*AAJTSQACMDIAAlNLABxzMUFXcnVuUitOM3AvbzMrbkpLdGg1ZlRsdmc9AAR0eXBlAANDVFMAAlMxAAIwMQ..*" --cookie "route=1675482218.985.13376.264661|f60edb382037eb2df1e800d563ad78a7" "https://xlou.iam.xlou-bsln.engineeringpit.com/am/json/serverinfo/version"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
{ "_id": "version", "_rev": "438208014", "version": "7.3.0-SNAPSHOT", "fullVersion": "ForgeRock Access Management 7.3.0-SNAPSHOT Build 00041da0042e1fd3ee67b8103908145322d13e72 (2023-February-02 22:01)", "revision": "00041da0042e1fd3ee67b8103908145322d13e72", "date": "2023-February-02 22:01" }
**************************** Initializing component pods for AMSTER ****************************
***************************** Initializing component pods for IDM *****************************
---------------------- Get IDM software version ----------------------
Getting product version from https://xlou.iam.xlou-bsln.engineeringpit.com/openidm/info/version
[http_cmd]: curl -H "X-OpenIDM-Username: anonymous" -H "X-OpenIDM-Password: anonymous" -L -X GET "https://xlou.iam.xlou-bsln.engineeringpit.com/openidm/info/version"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
{ "_id": "version", "productVersion": "7.3.0-SNAPSHOT", "productBuildDate": "20230118074839", "productRevision": "8dbc647" }
************************* Initializing component pods for END-USER-UI *************************
------------------ Get END-USER-UI software version ------------------
[loop_until]: kubectl --namespace=xlou exec end-user-ui-54cd6f9459-vkkcz -c end-user-ui -- find /usr/share/nginx/html -name chunk-vendors.*.js
[loop_until]: (max_time=30, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout --- /usr/share/nginx/html/js/chunk-vendors.392c7102.js --- stderr ---
[loop_until]: kubectl --namespace=xlou cp end-user-ui-54cd6f9459-vkkcz:/usr/share/nginx/html/js/chunk-vendors.392c7102.js /tmp/end-user-ui_info/chunk-vendors.392c7102.js -c end-user-ui
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout --- tar: Removing leading `/' from member names --- stderr ---
*************************** Initializing component pods for LOGIN-UI ***************************
-------------------- Get LOGIN-UI software version --------------------
[loop_until]: kubectl --namespace=xlou exec login-ui-7678f6977f-9swxh -c login-ui -- find /usr/share/nginx/html -name chunk-vendors.*.js
[loop_until]: (max_time=30, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout --- /usr/share/nginx/html/js/chunk-vendors.2eded935.js --- stderr ---
[loop_until]: kubectl --namespace=xlou cp login-ui-7678f6977f-9swxh:/usr/share/nginx/html/js/chunk-vendors.2eded935.js /tmp/login-ui_info/chunk-vendors.2eded935.js -c login-ui
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout --- tar: Removing leading `/' from member names --- stderr ---
*************************** Initializing component pods for ADMIN-UI ***************************
-------------------- Get ADMIN-UI software version --------------------
[loop_until]: kubectl --namespace=xlou exec admin-ui-5d95c47d49-psg6l -c admin-ui -- find /usr/share/nginx/html -name chunk-vendors.*.js
[loop_until]: (max_time=30, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout --- /usr/share/nginx/html/js/chunk-vendors.5b6784f5.js --- stderr ---
[loop_until]: kubectl --namespace=xlou cp admin-ui-5d95c47d49-psg6l:/usr/share/nginx/html/js/chunk-vendors.5b6784f5.js /tmp/admin-ui_info/chunk-vendors.5b6784f5.js -c admin-ui
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout --- tar: Removing leading `/' from member names --- stderr ---
[loop_until]: kubectl --namespace=xlou get secret ds-passwords -o jsonpath="{.data.dirmanager\.pw}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout --- azdyYVFGM1hIbWFmY1BmS1hxbHEzUFlKaHFoYmtDM0M= --- stderr ---
====================================================================================================
================ Admin password for DS-CTS is: k7raQF3XHmafcPfKXqlq3PYJhqhbkC3C ================
====================================================================================================
[loop_until]: kubectl --namespace=xlou get secret ds-passwords -o jsonpath="{.data.dirmanager\.pw}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout --- azdyYVFGM1hIbWFmY1BmS1hxbHEzUFlKaHFoYmtDM0M= --- stderr ---
====================================================================================================
============== Admin password for DS-IDREPO is: k7raQF3XHmafcPfKXqlq3PYJhqhbkC3C ==============
====================================================================================================
====================================================================================================
====================== Admin password for AM is: WQAeIj6YQPJvtvWMDwZ3R2DR ======================
====================================================================================================
[loop_until]: kubectl --namespace=xlou get secret idm-env-secrets -o jsonpath="{.data.OPENIDM_ADMIN_PASSWORD}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout --- UkRCdHZKWnhocDFjOUZialk4RnExUUdE --- stderr ---
====================================================================================================
===================== Admin password for IDM is: RDBtvJZxhp1c9FbjY8Fq1QGD =====================
====================================================================================================
*************************************** Dumping pod list ***************************************
Dumping pod list to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/access_token/pod-logs/stack/20230204-034344-after-deployment/_pod-list.txt
**************************** Initializing component pods for DS-CTS ****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-cts -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app=ds-cts -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
ds-cts-0 ds-cts-1 ds-cts-2
--- stderr ---
************************** Initializing component pods for DS-IDREPO **************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-idrepo -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app=ds-idrepo -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
ds-idrepo-0 ds-idrepo-1 ds-idrepo-2
--- stderr ---
****************************** Initializing component pods for AM ******************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app=am -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app=am -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
am-685d4f4864-cqdmp am-685d4f4864-gpjj8 am-685d4f4864-spg8m
--- stderr ---
**************************** Initializing component pods for AMSTER ****************************
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app=amster -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
amster-6pl8w
--- stderr ---
***************************** Initializing component pods for IDM *****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployment -l app=idm -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app=idm -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
idm-6ddf478c88-7wfrg idm-6ddf478c88-cqc29
--- stderr ---
************************* Initializing component pods for END-USER-UI *************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
end-user-ui-54cd6f9459-vkkcz
--- stderr ---
*************************** Initializing component pods for LOGIN-UI ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
login-ui-7678f6977f-9swxh
--- stderr ---
*************************** Initializing component pods for ADMIN-UI ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
admin-ui-5d95c47d49-psg6l
--- stderr ---
*********************************** Dumping components logs ***********************************
----------------------- Dumping logs for DS-CTS -----------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/access_token/pod-logs/stack/20230204-034344-after-deployment/ds-cts-0.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/access_token/pod-logs/stack/20230204-034344-after-deployment/ds-cts-1.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/access_token/pod-logs/stack/20230204-034344-after-deployment/ds-cts-2.txt
Check pod logs for errors
--------------------- Dumping logs for DS-IDREPO ---------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/access_token/pod-logs/stack/20230204-034344-after-deployment/ds-idrepo-0.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/access_token/pod-logs/stack/20230204-034344-after-deployment/ds-idrepo-1.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/access_token/pod-logs/stack/20230204-034344-after-deployment/ds-idrepo-2.txt
Check pod logs for errors
------------------------- Dumping logs for AM -------------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/access_token/pod-logs/stack/20230204-034344-after-deployment/am-685d4f4864-cqdmp.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/access_token/pod-logs/stack/20230204-034344-after-deployment/am-685d4f4864-gpjj8.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/access_token/pod-logs/stack/20230204-034344-after-deployment/am-685d4f4864-spg8m.txt
Check pod logs for errors
----------------------- Dumping logs for AMSTER -----------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/access_token/pod-logs/stack/20230204-034344-after-deployment/amster-6pl8w.txt
Check pod logs for errors
------------------------ Dumping logs for IDM ------------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/access_token/pod-logs/stack/20230204-034344-after-deployment/idm-6ddf478c88-7wfrg.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/access_token/pod-logs/stack/20230204-034344-after-deployment/idm-6ddf478c88-cqc29.txt
Check pod logs for errors
-------------------- Dumping logs for END-USER-UI --------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/access_token/pod-logs/stack/20230204-034344-after-deployment/end-user-ui-54cd6f9459-vkkcz.txt
Check pod logs for errors
---------------------- Dumping logs for LOGIN-UI ----------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/access_token/pod-logs/stack/20230204-034344-after-deployment/login-ui-7678f6977f-9swxh.txt
Check pod logs for errors
---------------------- Dumping logs for ADMIN-UI ----------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/access_token/pod-logs/stack/20230204-034344-after-deployment/admin-ui-5d95c47d49-psg6l.txt
Check pod logs for errors
[04/Feb/2023 03:44:04] - INFO: Deployment successful
________________________________________________________________________________
[04/Feb/2023 03:44:04] Deploy_all_forgerock_components post : Post method
________________________________________________________________________________
Setting result to PASS
Task has been successfully stopped
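Each "Check pod logs for errors" step above scans the dumped pod output for error markers before the task declares the deployment successful. A minimal sketch of such a scan, assuming a simple pattern match (the marker list and function name are assumptions, not the framework's actual rules):

```python
import re

# Hypothetical error markers; the real task's checks may differ.
ERROR_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (r"\bSEVERE\b", r"\bFATAL\b", r"\bException\b")
]


def find_log_errors(log_text: str) -> list[str]:
    """Return the lines of `log_text` that match any error pattern."""
    return [
        line
        for line in log_text.splitlines()
        if any(p.search(line) for p in ERROR_PATTERNS)
    ]


sample = "INFO: server started\nSEVERE: replication failed\nINFO: done"
print(find_log_errors(sample))  # ['SEVERE: replication failed']
```

In practice the scan would be applied to each dumped `<pod>.txt` file in the `20230204-034344-after-deployment` directory; an empty result for every pod is what allows the task to log "Deployment successful" and set the result to PASS.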