--Task--
name: Deploy_all_forgerock_components
enabled: True
class_name: DeployComponentsTask
source_name: controller
source_namespace: >default<
target_name: controller
target_namespace: >default<
start: 0
stop: None
timeout: no timeout
loop: False
interval: None
dependencies: ['Enable_prometheus_admin_api']
wait_for: []
options: {}
group_name: None
Current dir: /mnt/disks/data/xslou/lodestar-fork/pyrock
________________________________________________________________________________
[06/Apr/2023 21:13:24] Deploy_all_forgerock_components pre : Initialising task parameters
________________________________________________________________________________
task will be executed on controller (localhost)
________________________________________________________________________________
[06/Apr/2023 21:13:24] Deploy_all_forgerock_components step1 : Deploy components
________________________________________________________________________________
******************************** Cleaning up existing namespace ********************************
------------------ Deleting secret agent controller ------------------
[loop_until]: kubectl --namespace=xlou delete sac --all
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secretagentconfiguration.secret-agent.secrets.forgerock.io "forgerock-sac" deleted
--- stderr ---
----------------------- Deleting all resources -----------------------
[loop_until]: kubectl --namespace=xlou delete all --all --grace-period=0 --force
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
pod "admin-ui-56c658bb6-xrw9n" force deleted
pod "am-685d4f4864-86fvx" force deleted
pod "am-685d4f4864-f64vp" force deleted
pod "am-685d4f4864-nwb65" force deleted
pod "amster-m26p4" force deleted
pod "ds-cts-0" force deleted
pod "ds-cts-1" force deleted
pod "ds-cts-2" force deleted
pod "ds-idrepo-0" force deleted
pod "ds-idrepo-1" force deleted
pod "ds-idrepo-2" force deleted
pod "end-user-ui-6f5dbc46f8-l4wxr" force deleted
pod "idm-6b4845dbc5-6tj5s" force deleted
pod "idm-6b4845dbc5-ntbkq" force deleted
pod "ldif-importer-9vd5s" force deleted
pod "login-ui-857ffdc996-rpkk9" force deleted
pod "overseer-0-58bdc499d5-ncn68" force deleted
service "admin-ui" force deleted
service "am" force deleted
service "ds-cts" force deleted
service "ds-idrepo" force deleted
service "end-user-ui" force deleted
service "idm" force deleted
service "login-ui" force deleted
service "overseer-0" force deleted
deployment.apps "admin-ui" force deleted
deployment.apps "am" force deleted
deployment.apps "end-user-ui" force deleted
deployment.apps "idm" force deleted
deployment.apps "login-ui" force deleted
deployment.apps "overseer-0" force deleted
statefulset.apps "ds-cts" force deleted
statefulset.apps "ds-idrepo" force deleted
job.batch "amster" force deleted
job.batch "ldif-importer" force deleted
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
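Each [loop_until] block above records the command, its polling parameters, and the captured stdout/stderr. In Python, the retry pattern those parameters describe looks roughly like the sketch below; the loop_until name and keyword arguments mirror the log output, but this is an illustration, not the framework's actual implementation.

import subprocess
import time

def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,)):
    """Re-run `cmd` until its return code is in expected_rc or max_time elapses."""
    deadline = time.monotonic() + max_time
    while True:
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        if result.returncode in expected_rc:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"{cmd!r} did not return rc in {expected_rc} within {max_time}s")
        time.sleep(interval)

# Example: the first cleanup command above
# loop_until("kubectl --namespace=xlou delete sac --all")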
[loop_until]: kubectl -n xlou get pods | grep "No resources found"
[loop_until]: (max_time=360, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 10s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 21s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 31s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 42s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 52s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 1m 02s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 1m 13s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 1m 23s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 1m 33s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 1m 44s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 1m 54s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 2m 05s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
No resources found in xlou namespace.
------------------------- Deleting configmap -------------------------
[loop_until]: kubectl --namespace=xlou get configmap -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
amster-files amster-retain dev-utils idm idm-logging-properties kube-root-ca.crt overseer-config-0 platform-config
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap amster-files --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "amster-files" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap amster-retain --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "amster-retain" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap dev-utils --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "dev-utils" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap idm --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "idm" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap idm-logging-properties --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "idm-logging-properties" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap kube-root-ca.crt --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "kube-root-ca.crt" deleted
--- stderr ---
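The pods-gone wait above re-runs the whole pipeline until the expected text appears; note that kubectl prints "No resources found in xlou namespace." on stderr, so the matcher has to look at both streams. A sketch of that output-matching variant, under the same illustrative naming as the earlier helper:

import subprocess
import time

def loop_until_output(cmd, pattern, max_time=360, interval=10):
    """Re-run `cmd` until `pattern` appears in its combined stdout/stderr."""
    deadline = time.monotonic() + max_time
    while True:
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        if pattern in result.stdout + result.stderr:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"never saw {pattern!r} in output of {cmd!r}")
        time.sleep(interval)

# loop_until_output("kubectl -n xlou get pods", "No resources found")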
[loop_until]: kubectl --namespace=xlou delete configmap overseer-config-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "overseer-config-0" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap platform-config --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "platform-config" deleted
--- stderr ---
--------------------------- Deleting secret ---------------------------
[loop_until]: kubectl --namespace=xlou get secret -o jsonpath='{.items[?(@.type=="Opaque")].metadata.name}'
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
cloud-storage-credentials-cts cloud-storage-credentials-idrepo
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete secret cloud-storage-credentials-cts --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secret "cloud-storage-credentials-cts" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete secret cloud-storage-credentials-idrepo --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secret "cloud-storage-credentials-idrepo" deleted
--- stderr ---
-------------------------- Deleting ingress --------------------------
[loop_until]: kubectl --namespace=xlou get ingress -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
forgerock ig overseer-0
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete ingress forgerock --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ingress.networking.k8s.io "forgerock" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete ingress ig --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ingress.networking.k8s.io "ig" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete ingress overseer-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ingress.networking.k8s.io "overseer-0" deleted
--- stderr ---
---------------------------- Deleting pvc ----------------------------
[loop_until]: kubectl --namespace=xlou get pvc -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
data-ds-cts-0 data-ds-cts-1 data-ds-cts-2 data-ds-idrepo-0 data-ds-idrepo-1 data-ds-idrepo-2 overseer-0
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-cts-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-cts-0" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-cts-1 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-cts-1" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-cts-2 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-cts-2" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-idrepo-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-idrepo-0" deleted
--- stderr ---
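The configmap, secret, ingress, and pvc sections all follow one shape: list resource names with a jsonpath query, then delete each by name with --ignore-not-found. A compact sketch of that pattern, reusing the illustrative loop_until helper from above:

def delete_all_of_kind(kind, namespace, jsonpath="{.items[*].metadata.name}"):
    """List resources of `kind` by name, then delete them one at a time."""
    names = loop_until(
        f"kubectl --namespace={namespace} get {kind} -o jsonpath={jsonpath}"
    ).stdout.split()
    for name in names:
        loop_until(f"kubectl --namespace={namespace} delete {kind} {name} --ignore-not-found")

# delete_all_of_kind("configmap", "xlou")
# delete_all_of_kind("pvc", "xlou")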
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-idrepo-1 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-idrepo-1" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-idrepo-2 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-idrepo-2" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc overseer-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "overseer-0" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pv ds-backup-xlou --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
warning: deleting cluster-scoped resources, not scoped to the provided namespace
----------------- Deleting admin clusterrolebindings -----------------
[loop_until]: kubectl get clusterrolebinding -o jsonpath="{range .items[?(@.subjects[0].namespace=='xlou')]}{.metadata.name} {end}"
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
k8s-svc-acct-crb-xlou-0
--- stderr ---
Deleting clusterrolebinding k8s-svc-acct-crb-xlou-0 associated with xlou namespace
[loop_until]: kubectl delete clusterrolebinding k8s-svc-acct-crb-xlou-0
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
clusterrolebinding.rbac.authorization.k8s.io "k8s-svc-acct-crb-xlou-0" deleted
--- stderr ---
------------------------- Deleting namespace -------------------------
[loop_until]: kubectl delete namespaces xlou --ignore-not-found --grace-period=0 --force
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
namespace "xlou" force deleted
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
[loop_until]: awk -F" " "{print NF}" <<< `kubectl get namespace xlou --ignore-not-found` | grep 0
[loop_until]: (max_time=600, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
************************************* Creating deployment *************************************
Creating normal (forgeops) type deployment for deployment: stack
------- Custom component configuration present. Loading values -------
------------------ Deleting secret agent controller ------------------
[loop_until]: kubectl --namespace=xlou delete sac --all
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
No resources found
--- stderr ---
----------------------- Deleting all resources -----------------------
[loop_until]: kubectl --namespace=xlou delete all --all --grace-period=0 --force
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
No resources found
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
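The awk line above counts the whitespace-separated fields in the output of `kubectl get namespace xlou --ignore-not-found` and greps for 0, i.e. it waits until the command prints nothing at all. A more direct sketch of the same check (the awk construction is what the framework actually ran; this is just the equivalent logic):

import subprocess
import time

def wait_namespace_gone(namespace, max_time=600, interval=10):
    """Poll until `kubectl get namespace` returns no output for the namespace."""
    deadline = time.monotonic() + max_time
    while time.monotonic() < deadline:
        out = subprocess.run(
            ["kubectl", "get", "namespace", namespace, "--ignore-not-found"],
            capture_output=True, text=True,
        ).stdout.strip()
        if not out:  # empty output means the namespace no longer exists
            return
        time.sleep(interval)
    raise TimeoutError(f"namespace {namespace} still present after {max_time}s")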
[loop_until]: kubectl -n xlou get pods | grep "No resources found"
[loop_until]: (max_time=360, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
No resources found in xlou namespace.
------------------------- Deleting configmap -------------------------
[loop_until]: kubectl --namespace=xlou get configmap -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
--------------------------- Deleting secret ---------------------------
[loop_until]: kubectl --namespace=xlou get secret -o jsonpath='{.items[?(@.type=="Opaque")].metadata.name}'
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
-------------------------- Deleting ingress --------------------------
[loop_until]: kubectl --namespace=xlou get ingress -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
---------------------------- Deleting pvc ----------------------------
[loop_until]: kubectl --namespace=xlou get pvc -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pv ds-backup-xlou --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
warning: deleting cluster-scoped resources, not scoped to the provided namespace
----------------- Deleting admin clusterrolebindings -----------------
[loop_until]: kubectl get clusterrolebinding -o jsonpath="{range .items[?(@.subjects[0].namespace=='xlou')]}{.metadata.name} {end}"
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
------------------------- Deleting namespace -------------------------
[loop_until]: kubectl delete namespaces xlou --ignore-not-found --grace-period=0 --force
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
[loop_until]: awk -F" " "{print NF}" <<< `kubectl get namespace xlou --ignore-not-found` | grep 0
[loop_until]: (max_time=600, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
[loop_until]: kubectl create namespace xlou
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
namespace/xlou created
--- stderr ---
[loop_until]: kubectl label namespace xlou self-service=false timeout=48
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
namespace/xlou labeled
--- stderr ---
************************************ Configuring components ************************************
No custom config provided. Nothing to do.
No custom features provided. Nothing to do.
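With the cluster clean, the namespace is recreated and labeled (self-service=false and timeout=48 here, labels that downstream tooling presumably consumes). A sketch of that step with the same illustrative helper:

def recreate_namespace(namespace, labels):
    """Create a namespace and apply key=value labels to it."""
    loop_until(f"kubectl create namespace {namespace}")
    label_args = " ".join(f"{k}={v}" for k, v in labels.items())
    loop_until(f"kubectl label namespace {namespace} {label_args}")

# recreate_namespace("xlou", {"self-service": "false", "timeout": "48"})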
---- Updating components image tag/repo from platform-images repo ----
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --clean
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Cleaning up.
[WARNING] Found nothing to clean.
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref sustaining/7.3.x --products ds
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images.
[INFO] Repo is at 9777e59ff49e0bfc417da81c0ee2b528cbae330c on branch sustaining/7.3.x
[INFO] Updating products ds
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/dsutil/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-47dd3dc1b26e0d8a982cad26d51b3a91ed1e9309
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/ds-new/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-47dd3dc1b26e0d8a982cad26d51b3a91ed1e9309
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/proxy/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-47dd3dc1b26e0d8a982cad26d51b3a91ed1e9309
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-47dd3dc1b26e0d8a982cad26d51b3a91ed1e9309
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-47dd3dc1b26e0d8a982cad26d51b3a91ed1e9309
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref sustaining/7.3.x --products am
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images.
[INFO] Found existing files, attempting to not clone.
[INFO] Repo is at 9777e59ff49e0bfc417da81c0ee2b528cbae330c on branch sustaining/7.3.x
[INFO] Updating products am
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am/Dockerfile to use gcr.io/forgerock-io/am-cdk/pit1:7.3.0-d0e0f0f08d45c97e1331f11b6b2ae9fc31d5e28d
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref sustaining/7.3.x --products amster
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images.
[INFO] Found existing files, attempting to not clone.
[INFO] Repo is at 9777e59ff49e0bfc417da81c0ee2b528cbae330c on branch sustaining/7.3.x
[INFO] Updating products amster
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster/Dockerfile to use gcr.io/forgerock-io/amster/pit1:7.3.0-d0e0f0f08d45c97e1331f11b6b2ae9fc31d5e28d
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref sustaining/7.3.x --products idm
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images.
[INFO] Found existing files, attempting to not clone.
[INFO] Repo is at 9777e59ff49e0bfc417da81c0ee2b528cbae330c on branch sustaining/7.3.x
[INFO] Updating products idm
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm/Dockerfile to use gcr.io/forgerock-io/idm-cdk/pit1:7.3.0-ed278902ceea12882b9d14775050c5120defecb9
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref sustaining/7.3.x --products ui
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images.
[INFO] Found existing files, attempting to not clone.
[INFO] Repo is at 9777e59ff49e0bfc417da81c0ee2b528cbae330c on branch sustaining/7.3.x
[INFO] Updating products ui
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui to use gcr.io/forgerock-io/platform-admin-ui/docker-build:7.2.0-1dec1ed7c0e3192e8afe46c4c8239c7108414857
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui to use gcr.io/forgerock-io/platform-enduser-ui/docker-build:7.2.0-1dec1ed7c0e3192e8afe46c4c8239c7108414857
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui to use gcr.io/forgerock-io/platform-login-ui/docker-build:7.2.0-1dec1ed7c0e3192e8afe46c4c8239c7108414857
--- stderr ---
- Checking if component Dockerfile/kustomize needs additional update -
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker ds cts
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts
--- stderr ---
Checking if ds-cts dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts/Dockerfile: gcr.io/forgerock-io/ds/pit1:7.3.0-47dd3dc1b26e0d8a982cad26d51b3a91ed1e9309
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker ds idrepo
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo
--- stderr ---
Checking if ds-idrepo dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo/Dockerfile: gcr.io/forgerock-io/ds/pit1:7.3.0-47dd3dc1b26e0d8a982cad26d51b3a91ed1e9309
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker am
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am
--- stderr ---
Checking if am dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am/Dockerfile: gcr.io/forgerock-io/am-cdk/pit1:7.3.0-d0e0f0f08d45c97e1331f11b6b2ae9fc31d5e28d
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker amster
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster
--- stderr ---
Checking if amster dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster/Dockerfile: gcr.io/forgerock-io/amster/pit1:7.3.0-d0e0f0f08d45c97e1331f11b6b2ae9fc31d5e28d
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker idm
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm
--- stderr ---
Checking if idm dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm/Dockerfile: gcr.io/forgerock-io/idm-cdk/pit1:7.3.0-ed278902ceea12882b9d14775050c5120defecb9
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path kustomize base end-user-ui
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui
--- stderr ---
Checking if end-user-ui kustomize file /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui/kustomization.yaml needs additional update (custom tag/repo from config.yaml or image name resolution)
Read image line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui/kustomization.yaml: gcr.io/forgerock-io/platform-enduser-ui/docker-build:7.2.0-1dec1ed7c0e3192e8afe46c4c8239c7108414857
No need to update the image line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui/kustomization.yaml
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path kustomize base login-ui
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui
--- stderr ---
Checking if login-ui kustomize file /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui/kustomization.yaml needs additional update (custom tag/repo from config.yaml or image name resolution)
Read image line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui/kustomization.yaml: gcr.io/forgerock-io/platform-login-ui/docker-build:7.2.0-1dec1ed7c0e3192e8afe46c4c8239c7108414857
No need to update the image line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui/kustomization.yaml
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path kustomize base admin-ui
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui
--- stderr ---
Checking if admin-ui kustomize file /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui/kustomization.yaml needs additional update (custom tag/repo from config.yaml or image name resolution)
Read image line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui/kustomization.yaml: gcr.io/forgerock-io/platform-admin-ui/docker-build:7.2.0-1dec1ed7c0e3192e8afe46c4c8239c7108414857
No need to update the image line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui/kustomization.yaml
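Each check above reads the FROM line (or kustomize image line) and rewrites it only if the resolved image differs, which is why every file reports no update needed on this run. A sketch of the Dockerfile half of that idea (hypothetical helper; the real logic lives in the forgeops_stack/bin scripts):

from pathlib import Path

def ensure_from_line(dockerfile, image):
    """Rewrite the first FROM line of `dockerfile` to `image` if it differs."""
    path = Path(dockerfile)
    lines = path.read_text().splitlines()
    for i, line in enumerate(lines):
        if line.startswith("FROM "):
            if line != f"FROM {image}":
                lines[i] = f"FROM {image}"
                path.write_text("\n".join(lines) + "\n")
            return
    raise ValueError(f"no FROM line in {dockerfile}")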
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path kustomize overlay medium
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/overlay/medium
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete -f /tmp/tmpl7j19n3s
[loop_until]: (max_time=180, interval=5, expected_rc=[0, 1])
[loop_until]: OK (rc = 1)
--- stdout ---
--- stderr ---
Error from server (NotFound): error when deleting "/tmp/tmpl7j19n3s": secrets "sslcert" not found
[loop_until]: kubectl --namespace=xlou apply -f /tmp/tmpl7j19n3s
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secret/sslcert created
--- stderr ---
The following components will be deployed:
- ds-cts (DS)
- ds-idrepo (DS)
- am (AM)
- amster (Amster)
- idm (IDM)
- end-user-ui (EndUserUi)
- login-ui (LoginUi)
- admin-ui (AdminUi)
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/forgeops build all --config-profile=cdk --push-to gcr.io/engineeringpit/lodestar-images --tag=xlou
[run_command]: env={'HOME': '/home/xslou', 'PATH': '/mnt/disks/data/xslou/lodestar-fork/ext/bin:/home/xslou/.local/bin:/home/xslou/bin:~/bin:~/workshop/lodestar-fork/ext/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin', 'DOCKER_SCAN_SUGGEST': 'false'}
Sending build context to Docker daemon 10.24kB
Step 1/6 : FROM gcr.io/forgerock-io/am-cdk/pit1:7.3.0-d0e0f0f08d45c97e1331f11b6b2ae9fc31d5e28d
 ---> 7fde2db0c1a7
Step 2/6 : ARG CONFIG_PROFILE=cdk
 ---> Using cache
 ---> 3b6bafb9dd96
Step 3/6 : RUN echo "\033[0;36m*** Building '${CONFIG_PROFILE}' profile ***\033[0m"
 ---> Using cache
 ---> e061dbf289d4
Step 4/6 : COPY --chown=forgerock:root config-profiles/${CONFIG_PROFILE}/ /home/forgerock/openam/
 ---> Using cache
 ---> a15461410049
Step 5/6 : COPY --chown=forgerock:root *.sh /home/forgerock/
 ---> Using cache
 ---> bdceaa346236
Step 6/6 : WORKDIR /home/forgerock
 ---> Using cache
 ---> 4616b0c0f5b5
Successfully built 4616b0c0f5b5
Successfully tagged gcr.io/engineeringpit/lodestar-images/am:xlou
The push refers to repository [gcr.io/engineeringpit/lodestar-images/am]
0960b22180ad: Preparing b887d792e734: Preparing c57088fdcee3: Preparing d4ad75fa2fdc: Preparing a0c5569fc01c: Preparing a944cfc93389: Preparing 0d41684213f1: Preparing a6bf82e5f30e: Preparing f4e8ea536b52: Preparing 0c36748cae3b: Preparing a58435e83fb9: Preparing dd3634e8d393: Preparing 2d12481c2ea9: Preparing 87be3a900d8b: Preparing 7e80f3ea9489: Preparing b72448ecede0: Preparing 978097307a8b: Preparing 433373b25f31: Preparing 3af12dc1cc36: Preparing f09f443715a2: Preparing 2a26d579527d: Preparing 5477d6949583: Preparing a292c25c2f9a: Preparing 80818bbf4b5d: Preparing 0fa2950fff43: Preparing 691ea1d866ef: Preparing 82f4367b1510: Preparing 1ddb90d48e77: Preparing 16979ca3c8a2: Preparing df948e73a878: Preparing 9af81ef986f7: Preparing 4d02a34d0051: Preparing 749d6cfaa5eb: Preparing 8ac55e7a3c46: Preparing 04d1dcab20cb: Preparing b93c1bd012ab: Preparing a944cfc93389: Waiting 0d41684213f1: Waiting a6bf82e5f30e: Waiting f4e8ea536b52: Waiting 0c36748cae3b: Waiting a58435e83fb9: Waiting dd3634e8d393: Waiting 2d12481c2ea9: Waiting 87be3a900d8b: Waiting 7e80f3ea9489: Waiting b72448ecede0: Waiting 978097307a8b: Waiting 433373b25f31: Waiting 3af12dc1cc36: Waiting f09f443715a2: Waiting 2a26d579527d: Waiting a292c25c2f9a: Waiting 80818bbf4b5d: Waiting 0fa2950fff43: Waiting 691ea1d866ef: Waiting 82f4367b1510: Waiting 1ddb90d48e77: Waiting 16979ca3c8a2: Waiting df948e73a878: Waiting 9af81ef986f7: Waiting 4d02a34d0051: Waiting 749d6cfaa5eb: Waiting 8ac55e7a3c46: Waiting 04d1dcab20cb: Waiting b93c1bd012ab: Waiting c57088fdcee3: Layer already exists d4ad75fa2fdc: Layer already exists b887d792e734: Layer already exists 0960b22180ad: Layer already exists a0c5569fc01c: Layer already exists f4e8ea536b52: Layer already exists a6bf82e5f30e: Layer already exists 0d41684213f1: Layer already exists 0c36748cae3b: Layer already exists a944cfc93389: Layer already exists dd3634e8d393: Layer already exists a58435e83fb9: Layer already exists 2d12481c2ea9: Layer already exists 87be3a900d8b: Layer already exists 7e80f3ea9489: Layer already exists b72448ecede0: Layer already exists 978097307a8b: Layer already exists 433373b25f31: Layer already exists 3af12dc1cc36: Layer already exists f09f443715a2: Layer already exists 2a26d579527d: Layer already exists 80818bbf4b5d: Layer already exists 5477d6949583: Layer already exists 0fa2950fff43: Layer already exists a292c25c2f9a: Layer already exists 691ea1d866ef: Layer already exists 1ddb90d48e77: Layer already exists 82f4367b1510: Layer already exists 16979ca3c8a2: Layer already exists df948e73a878: Layer already exists 9af81ef986f7: Layer already exists 4d02a34d0051: Layer already exists 749d6cfaa5eb: Layer already exists 04d1dcab20cb: Layer already exists 8ac55e7a3c46: Layer already exists b93c1bd012ab: Layer already exists
xlou: digest: sha256:fae86102dd8374e78ac5d98db56845fe98742e7aebebd5445a81ba84bfa5eed5 size: 7843
Sending build context to Docker daemon 316.4kB
Step 1/8 : FROM gcr.io/forgerock-io/idm-cdk/pit1:7.3.0-ed278902ceea12882b9d14775050c5120defecb9
 ---> a65af4c6b1fa
Step 2/8 : COPY debian-buster-sources.list /etc/apt/sources.list
 ---> Using cache
 ---> f35fd9042fbb
Step 3/8 : RUN rm -f bundle/org.apache.felix.webconsole*.jar && rm -f bundle/openidm-felix-webconsole-*.jar
 ---> Using cache
 ---> b3351455c0aa
Step 4/8 : ENV JAVA_OPTS -XX:MaxRAMPercentage=65 -XX:InitialRAMPercentage=65 -XX:MaxTenuringThreshold=1 -Djava.security.egd=file:/dev/urandom -XshowSettings:vm -XX:+PrintFlagsFinal
 ---> Using cache
 ---> 80244a39bd26
Step 5/8 : ARG CONFIG_PROFILE=cdk
 ---> Using cache
 ---> be6b8a08193f
Step 6/8 : RUN echo "\033[0;36m*** Building '${CONFIG_PROFILE}' profile ***\033[0m"
 ---> Using cache
 ---> 45b799b85c39
Step 7/8 : COPY --chown=forgerock:root config-profiles/${CONFIG_PROFILE}/ /opt/openidm
 ---> Using cache
 ---> 70676c2c2db5
Step 8/8 : COPY --chown=forgerock:root . /opt/openidm
 ---> Using cache
 ---> d94f706dc9fe
Successfully built d94f706dc9fe
Successfully tagged gcr.io/engineeringpit/lodestar-images/idm:xlou
The push refers to repository [gcr.io/engineeringpit/lodestar-images/idm]
f0f47c91bb0f: Preparing ed3fa3b9ee05: Preparing 8db55b71b5f9: Preparing 5f70bf18a086: Preparing 38fe551f2463: Preparing 68884240a8b2: Preparing 1ae67d4978a1: Preparing 8d20f850e8e0: Preparing 09308431d152: Preparing fd535567db7f: Preparing c9182c130984: Preparing 1ae67d4978a1: Waiting 8d20f850e8e0: Waiting 09308431d152: Waiting fd535567db7f: Waiting c9182c130984: Waiting 68884240a8b2: Waiting f0f47c91bb0f: Layer already exists 38fe551f2463: Layer already exists 8db55b71b5f9: Layer already exists ed3fa3b9ee05: Layer already exists 5f70bf18a086: Layer already exists 1ae67d4978a1: Layer already exists 68884240a8b2: Layer already exists 09308431d152: Layer already exists 8d20f850e8e0: Layer already exists fd535567db7f: Layer already exists c9182c130984: Layer already exists
xlou: digest: sha256:65ef8394bbe3e47615f5a8f3854cb7d3e012702151c41a59dd937fa232344ec5 size: 2621
Sending build context to Docker daemon 128.5kB
Step 1/11 : FROM gcr.io/forgerock-io/ds/pit1:7.3.0-47dd3dc1b26e0d8a982cad26d51b3a91ed1e9309
 ---> 9deba3244b26
Step 2/11 : USER root
 ---> Using cache
 ---> 940f8554f1f4
Step 3/11 : RUN apt-get update && apt-get install -y --no-install-recommends vim ncat dnsutils
 ---> Using cache
 ---> cf9d5ab64a80
Step 4/11 : USER forgerock
 ---> Using cache
 ---> 809bbce7913d
Step 5/11 : ENV DS_DATA_DIR /opt/opendj/data
 ---> Using cache
 ---> 2ee60fd844e4
Step 6/11 : ENV PEM_KEYS_DIRECTORY "/var/run/secrets/keys/ds"
 ---> Using cache
 ---> 5144146c5a06
Step 7/11 : ENV PEM_TRUSTSTORE_DIRECTORY "/var/run/secrets/keys/truststore"
 ---> Using cache
 ---> 94a62f25db63
Step 8/11 : COPY --chown=forgerock:root default-scripts /opt/opendj/default-scripts
 ---> Using cache
 ---> d3058378d774
Step 9/11 : COPY --chown=forgerock:root ldif-ext /opt/opendj/ldif-ext
 ---> Using cache
 ---> 8325e4c48557
Step 10/11 : COPY --chown=forgerock:root *.sh /opt/opendj/
 ---> Using cache
 ---> 409b1dea0e01
Step 11/11 : RUN ./ds-setup.sh && rm ./ds-setup.sh && rm -fr ldif-ext
 ---> Using cache
 ---> a0fc9ea9dde5
[Warning] One or more build-args [CONFIG_PROFILE] were not consumed
Successfully built a0fc9ea9dde5
Successfully tagged gcr.io/engineeringpit/lodestar-images/ds:xlou
The push refers to repository [gcr.io/engineeringpit/lodestar-images/ds]
18e98dbd20ef: Preparing a336b11b2782: Preparing 4a0d86b33513: Preparing d67ba3f62f1a: Preparing f91128dd9303: Preparing 7e178ceb030b: Preparing 5f70bf18a086: Preparing ebc5f6f12c2b: Preparing 6a900ee656e0: Preparing 2a85b1b0509d: Preparing 5df41b110532: Preparing acc7038f329f: Preparing 3af14c9a24c9: Preparing 7e178ceb030b: Waiting 5f70bf18a086: Waiting ebc5f6f12c2b: Waiting 6a900ee656e0: Waiting 2a85b1b0509d: Waiting 5df41b110532: Waiting acc7038f329f: Waiting 3af14c9a24c9: Waiting f91128dd9303: Layer already exists 4a0d86b33513: Layer already exists d67ba3f62f1a: Layer already exists 18e98dbd20ef: Layer already exists a336b11b2782: Layer already exists 7e178ceb030b: Layer already exists 5f70bf18a086: Layer already exists 2a85b1b0509d: Layer already exists ebc5f6f12c2b: Layer already exists 6a900ee656e0: Layer already exists 5df41b110532: Layer already exists acc7038f329f: Layer already exists 3af14c9a24c9: Layer already exists
xlou: digest: sha256:c681bf08f147ff85097975e25bc4573d10ce1252d197bed26796dcda7f95db6e size: 3046
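forgeops build drives an ordinary docker build/tag/push cycle per component, with CONFIG_PROFILE passed as a build arg, which is what the fully cached steps above reflect. A minimal sketch of one such cycle, under those assumptions (an illustration, not the forgeops implementation):

import subprocess

def build_and_push(component_dir, registry, name, tag, config_profile="cdk"):
    """Build a component image from its Dockerfile, then push it."""
    image = f"{registry}/{name}:{tag}"
    subprocess.run(
        ["docker", "build", "--build-arg", f"CONFIG_PROFILE={config_profile}",
         "-t", image, component_dir],
        check=True,
    )
    subprocess.run(["docker", "push", image], check=True)
    return image

# build_and_push("docker/am", "gcr.io/engineeringpit/lodestar-images", "am", "xlou")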
Sending build context to Docker daemon 292.9kB
Step 1/10 : FROM gcr.io/forgerock-io/ds/pit1:7.3.0-47dd3dc1b26e0d8a982cad26d51b3a91ed1e9309
 ---> 9deba3244b26
Step 2/10 : COPY debian-buster-sources.list /etc/apt/sources.list
 ---> Using cache
 ---> 242b4b870710
Step 3/10 : WORKDIR /opt/opendj
 ---> Using cache
 ---> 59c382afd681
Step 4/10 : COPY --chown=forgerock:root common /opt/opendj/
 ---> Using cache
 ---> 66596d7c68fa
Step 5/10 : COPY --chown=forgerock:root idrepo /opt/opendj/
 ---> Using cache
 ---> 0b2d90998258
Step 6/10 : COPY --chown=forgerock:root scripts /opt/opendj/scripts
 ---> Using cache
 ---> 0f23026554a0
Step 7/10 : COPY --chown=forgerock:root uma /opt/opendj/uma
 ---> Using cache
 ---> 474d385d011b
Step 8/10 : COPY --chown=forgerock:root idrepo/*.ldif /var/tmp/
 ---> Using cache
 ---> 9796698ed556
Step 9/10 : RUN chmod +w template/setup-profiles/AM/config/6.5/base-entries.ldif && cat scripts/external-am-datastore.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_audit.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_pending_requests.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_resource_set_labels.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_resource_sets.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat /var/tmp/alpha_bravo.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && chmod +w template/setup-profiles/AM/identity-store/7.0/base-entries.ldif && cat /var/tmp/orgs.ldif >> template/setup-profiles/AM/identity-store/7.0/base-entries.ldif && rm /var/tmp/*ldif
 ---> Using cache
 ---> dedb019c50f8
Step 10/10 : RUN bin/setup.sh && bin/relax-security-settings.sh && rm bin/setup.sh bin/relax-security-settings.sh
 ---> Using cache
 ---> d38a1d7517ab
[Warning] One or more build-args [CONFIG_PROFILE] were not consumed
Successfully built d38a1d7517ab
Successfully tagged gcr.io/engineeringpit/lodestar-images/ds-idrepo:xlou
The push refers to repository [gcr.io/engineeringpit/lodestar-images/ds-idrepo]
dc9f0ee16c52: Preparing 914325fc3061: Preparing ccdd15aa0213: Preparing fbc23e69881e: Preparing 5b0c208c31b9: Preparing 11178a752757: Preparing 1d7513b2afe6: Preparing d30a64a3168b: Preparing 7e178ceb030b: Preparing 5f70bf18a086: Preparing ebc5f6f12c2b: Preparing 6a900ee656e0: Preparing 2a85b1b0509d: Preparing 5df41b110532: Preparing 11178a752757: Waiting 1d7513b2afe6: Waiting d30a64a3168b: Waiting 7e178ceb030b: Waiting 5f70bf18a086: Waiting ebc5f6f12c2b: Waiting 6a900ee656e0: Waiting 2a85b1b0509d: Waiting 5df41b110532: Waiting acc7038f329f: Preparing 3af14c9a24c9: Preparing acc7038f329f: Waiting 3af14c9a24c9: Waiting 5b0c208c31b9: Layer already exists dc9f0ee16c52: Layer already exists fbc23e69881e: Layer already exists ccdd15aa0213: Layer already exists 914325fc3061: Layer already exists 11178a752757: Layer already exists d30a64a3168b: Layer already exists 1d7513b2afe6: Layer already exists 7e178ceb030b: Layer already exists 5f70bf18a086: Layer already exists ebc5f6f12c2b: Layer already exists 5df41b110532: Layer already exists 6a900ee656e0: Layer already exists 2a85b1b0509d: Layer already exists acc7038f329f: Layer already exists 3af14c9a24c9: Layer already exists
xlou: digest: sha256:77a575f2f59e3ff456b297a22ae08bd765a0146e0929b0dfb78d3b46ed07d3e1 size: 3662
Sending build context to Docker daemon 292.9kB
Step 1/10 : FROM gcr.io/forgerock-io/ds/pit1:7.3.0-47dd3dc1b26e0d8a982cad26d51b3a91ed1e9309
 ---> 9deba3244b26
Step 2/10 : USER root
 ---> Using cache
 ---> 940f8554f1f4
Step 3/10 : COPY debian-buster-sources.list /etc/apt/sources.list
 ---> Using cache
 ---> 2902861e037e
Step 4/10 : RUN chown -R forgerock:root /opt/opendj
 ---> Using cache
 ---> 6a476b8f89ff
Step 5/10 : USER forgerock
 ---> Using cache
 ---> b40a6645c251
Step 6/10 : COPY --chown=forgerock:root common /opt/opendj/
 ---> Using cache
 ---> 0f56c74b8ed2
Step 7/10 : COPY --chown=forgerock:root cts /opt/opendj/
 ---> Using cache
 ---> 6dbb457fc345
Step 8/10 : COPY --chown=forgerock:root scripts /opt/opendj/scripts
 ---> Using cache
 ---> c910481b416c
Step 9/10 : ARG profile_version
 ---> Using cache
 ---> a92240d3dbaa
Step 10/10 : RUN bin/setup.sh && bin/relax-security-settings.sh && rm bin/setup.sh bin/relax-security-settings.sh
 ---> Using cache
 ---> 17e53e0aefca
[Warning] One or more build-args [CONFIG_PROFILE] were not consumed
Successfully built 17e53e0aefca
Successfully tagged gcr.io/engineeringpit/lodestar-images/ds-cts:xlou
The push refers to repository [gcr.io/engineeringpit/lodestar-images/ds-cts]
990692bdb0ac: Preparing 03e5f1c13c2d: Preparing 97cc5908563c: Preparing 1b6ebd141400: Preparing e2da494ad62f: Preparing 0aa196dcfeeb: Preparing 7e178ceb030b: Preparing 5f70bf18a086: Preparing ebc5f6f12c2b: Preparing 6a900ee656e0: Preparing 2a85b1b0509d: Preparing 5df41b110532: Preparing acc7038f329f: Preparing 3af14c9a24c9: Preparing 5f70bf18a086: Waiting ebc5f6f12c2b: Waiting 6a900ee656e0: Waiting 2a85b1b0509d: Waiting 5df41b110532: Waiting acc7038f329f: Waiting 3af14c9a24c9: Waiting 0aa196dcfeeb: Waiting 7e178ceb030b: Waiting 03e5f1c13c2d: Layer already exists 1b6ebd141400: Layer already exists 97cc5908563c: Layer already exists e2da494ad62f: Layer already exists 990692bdb0ac: Layer already exists 0aa196dcfeeb: Layer already exists 5f70bf18a086: Layer already exists 7e178ceb030b: Layer already exists ebc5f6f12c2b: Layer already exists 6a900ee656e0: Layer already exists 2a85b1b0509d: Layer already exists 5df41b110532: Layer already exists acc7038f329f: Layer already exists 3af14c9a24c9: Layer already exists
xlou: digest: sha256:0885ccb41d442b815fe0ec1443124023961234db5b285a663bed37dff7207864 size: 3251
Sending build context to Docker daemon 34.3kB
Step 1/6 : FROM gcr.io/forgerock-io/ig/pit1:7.3.0-latest-postcommit
 ---> b97c2a3010cf
Step 2/6 : COPY debian-buster-sources.list /etc/apt/sources.list
 ---> Using cache
 ---> 6f1f8dfb827f
Step 3/6 : ARG CONFIG_PROFILE=cdk
 ---> Using cache
 ---> 1ea3d9405c1c
Step 4/6 : RUN echo "\033[0;36m*** Building '${CONFIG_PROFILE}' profile ***\033[0m"
 ---> Using cache
 ---> 2c46bb490dfd
Step 5/6 : COPY --chown=forgerock:root config-profiles/${CONFIG_PROFILE}/ /var/ig
 ---> Using cache
 ---> 1fa17c930470
Step 6/6 : COPY --chown=forgerock:root . /var/ig
 ---> Using cache
 ---> f91a15d0eb36
Successfully built f91a15d0eb36
Successfully tagged gcr.io/engineeringpit/lodestar-images/ig:xlou
The push refers to repository [gcr.io/engineeringpit/lodestar-images/ig]
902bdba5abb8: Preparing 9de06495f434: Preparing bf96000f7582: Preparing 4fb17506c7d6: Preparing 5696f243e6cc: Preparing 964c1eecc7f5: Preparing ab8038891451: Preparing c6f8bfcecf05: Preparing 315cd8c5da97: Preparing d456513ae67c: Preparing 67a4178b7d47: Preparing ab8038891451: Waiting c6f8bfcecf05: Waiting 315cd8c5da97: Waiting d456513ae67c: Waiting 67a4178b7d47: Waiting 964c1eecc7f5: Waiting 4fb17506c7d6: Layer already exists bf96000f7582: Layer already exists 5696f243e6cc: Layer already exists 9de06495f434: Layer already exists 902bdba5abb8: Layer already exists 964c1eecc7f5: Layer already exists ab8038891451: Layer already exists c6f8bfcecf05: Layer already exists 315cd8c5da97: Layer already exists d456513ae67c: Layer already exists 67a4178b7d47: Layer already exists
xlou: digest: sha256:91c570f2b277b1f7f2f6dee6d07e94121a7f9703dc7964abd12ed996abf848b5 size: 2621
Updated the image_defaulter with your new image for am: "gcr.io/engineeringpit/lodestar-images/am:xlou".
Updated the image_defaulter with your new image for idm: "gcr.io/engineeringpit/lodestar-images/idm:xlou".
Updated the image_defaulter with your new image for ds: "gcr.io/engineeringpit/lodestar-images/ds:xlou".
Updated the image_defaulter with your new image for ds-idrepo: "gcr.io/engineeringpit/lodestar-images/ds-idrepo:xlou".
Updated the image_defaulter with your new image for ds-cts: "gcr.io/engineeringpit/lodestar-images/ds-cts:xlou".
Updated the image_defaulter with your new image for ig: "gcr.io/engineeringpit/lodestar-images/ig:xlou".
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/forgeops install --namespace=xlou --fqdn xlou.iam.xlou-cdm.engineeringpit.com --custom /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/overlay/internal-profiles/medium-old --legacy all
[run_command]: env={'HOME': '/home/xslou', 'PATH': '/mnt/disks/data/xslou/lodestar-fork/ext/bin:/home/xslou/.local/bin:/home/xslou/bin:~/bin:~/workshop/lodestar-fork/ext/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin', 'DOCKER_SCAN_SUGGEST': 'false'}
customresourcedefinition.apiextensions.k8s.io/secretagentconfigurations.secret-agent.secrets.forgerock.io condition met
deployment.apps/secret-agent-controller-manager condition met
NAME                                               READY   STATUS    RESTARTS   AGE
secret-agent-controller-manager-75c755487b-nrrwf   2/2     Running   0          6h45m
configmap/dev-utils created
configmap/platform-config created
ingress.networking.k8s.io/forgerock created
ingress.networking.k8s.io/ig created
certificate.cert-manager.io/ds-master-cert created
certificate.cert-manager.io/ds-ssl-cert created
issuer.cert-manager.io/selfsigned-issuer created
secretagentconfiguration.secret-agent.secrets.forgerock.io/forgerock-sac created
Checking cert-manager and related CRDs: cert-manager CRD found in cluster.
Checking secret-agent operator and related CRDs: secret-agent CRD found in cluster.

Checking secret-agent operator is running...
secret-agent operator is running
Installing component(s): ['all'] platform: "custom-old" in namespace: "xlou".

Deploying base.yaml. This is a one time activity.

Waiting for K8s secrets.
Waiting for secret "am-env-secrets" to exist in the cluster: ..done
Waiting for secret "idm-env-secrets" to exist in the cluster: ...done
Waiting for secret "ds-passwords" to exist in the cluster: done
Waiting for secret "ds-env-secrets" to exist in the cluster:
secret/cloud-storage-credentials-cts created
secret/cloud-storage-credentials-idrepo created
service/ds-cts created
service/ds-idrepo created
statefulset.apps/ds-cts created
statefulset.apps/ds-idrepo created
job.batch/ldif-importer created
done

Deploying ds.yaml. This includes all directory resources.

Waiting for DS deployment. This can take a few minutes. First installation takes longer.
Waiting for statefulset "ds-idrepo" to exist in the cluster:
Waiting for 3 pods to be ready...
Waiting for 2 pods to be ready...
Waiting for 1 pods to be ready...
statefulset rolling update complete 3 pods at revision ds-idrepo-7b446fff4d...
done
Waiting for Service Account Password Update: done
Waiting for statefulset "ds-cts" to exist in the cluster:
statefulset rolling update complete 3 pods at revision ds-cts-87b85b6bd...
done
Waiting for Service Account Password Update:
configmap/amster-files created
configmap/idm created
configmap/idm-logging-properties created
service/am created
service/idm created
deployment.apps/am created
deployment.apps/idm created
job.batch/amster created
done
Cleaning up amster components.

Deploying apps.

Waiting for AM deployment. This can take a few minutes. First installation takes longer.
Waiting for deployment "am" to exist in the cluster:
deployment.apps/am condition met
configmap/amster-retain created
done

Waiting for amster job to complete. This can take several minutes.
Waiting for job "amster" to exist in the cluster:
job.batch/amster condition met
done

Waiting for IDM deployment. This can take a few minutes. First installation takes longer.
Waiting for deployment "idm" to exist in the cluster:
pod/idm-6b4845dbc5-bkxzm condition met
pod/idm-6b4845dbc5-qq5rw condition met
service/admin-ui created
service/end-user-ui created
service/login-ui created
deployment.apps/admin-ui created
deployment.apps/end-user-ui created
deployment.apps/login-ui created
done

Deploying UI.

Waiting for K8s secrets.
Waiting for secret "am-env-secrets" to exist in the cluster: done
Waiting for secret "idm-env-secrets" to exist in the cluster: done
Waiting for secret "ds-passwords" to exist in the cluster: done
Waiting for secret "ds-env-secrets" to exist in the cluster: done

Relevant passwords:
2TCWKQMVTZBLC7JAuZXYbabh (amadmin user)
4KtjFNaFyWcFx0XEEcWfBzr4Bp8nKShU (uid=admin user)
gzPhx2TFfTxfmtwIcwmvcbY6zx2ZOWNm (App str svc acct (uid=am-config,ou=admins,ou=am-config))
zGyALUEqFMBSV571i4CAIs4R4FY2ExAy (CTS svc acct (uid=openam_cts,ou=admins,ou=famrecords,ou=openam-session,ou=tokens))
nutRP5SIi40dIHmzdINYl0uPCeJ7SciM (ID repo svc acct (uid=am-identity-bind-account,ou=admins,ou=identities))

Relevant URLs:
https://xlou.iam.xlou-cdm.engineeringpit.com/platform
https://xlou.iam.xlou-cdm.engineeringpit.com/admin
https://xlou.iam.xlou-cdm.engineeringpit.com/am
https://xlou.iam.xlou-cdm.engineeringpit.com/enduser

Enjoy your deployment!
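The install stage gates each layer (base, ds, apps, UI) on generated Kubernetes secrets before applying the next manifests. A sketch of the wait-for-secret step it keeps printing (illustrative only; forgeops emits the dotted progress seen above):

import subprocess
import time

def wait_for_secret(name, namespace, interval=2, max_time=300):
    """Poll until a Secret with `name` exists in `namespace`."""
    deadline = time.monotonic() + max_time
    while time.monotonic() < deadline:
        rc = subprocess.run(
            ["kubectl", "-n", namespace, "get", "secret", name],
            capture_output=True,
        ).returncode
        if rc == 0:
            return
        time.sleep(interval)
    raise TimeoutError(f"secret {name} never appeared in {namespace}")

# wait_for_secret("am-env-secrets", "xlou")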
**************************** Initializing component pods for DS-CTS ****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-cts -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app=ds-cts -o jsonpath={.items[*].metadata.name}` | grep 3
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
ds-cts-0 ds-cts-1 ds-cts-2
--- stderr ---
-------------------- Check pod ds-cts-0 is running --------------------
[loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-cts-0 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2023-04-06T21:17:58Z
--- stderr ---
------------- Check pod ds-cts-0 filesystem is accessible -------------
[loop_until]: kubectl --namespace=xlou exec ds-cts-0 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
--- stderr ---
------------------ Check pod ds-cts-0 restart count ------------------
[loop_until]: kubectl --namespace=xlou get pod ds-cts-0 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-cts-0 has been restarted 0 times.
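The same five probes now repeat for every pod in every component: phase is Running, all containers report ready, a start time exists, the filesystem answers an exec, and the restart count is read back. A consolidated sketch of one pass, with the jsonpath expressions taken from the log and the illustrative loop_until helper from earlier:

def check_pod(pod, namespace, container):
    """Run the readiness probes the log applies to each component pod."""
    base = f"kubectl --namespace={namespace}"
    loop_until(f"{base} get pods {pod} -o=jsonpath={{.status.phase}} | grep Running",
               max_time=360)
    loop_until(f"{base} get pods {pod} -o=jsonpath={{.status.containerStatuses[*].ready}} | grep true",
               max_time=360)
    started = loop_until(f"{base} get pod {pod} -o jsonpath={{.status.startTime}}").stdout.strip()
    loop_until(f"{base} exec {pod} -c {container} -- ls / | grep bin", max_time=360)
    restarts = loop_until(
        f"{base} get pod {pod} -o jsonpath={{.status.containerStatuses[*].restartCount}}"
    ).stdout.strip()
    print(f"Pod {pod} started {started}, restarted {restarts} times.")

# check_pod("ds-cts-0", "xlou", "ds")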
-------------------- Check pod ds-cts-1 is running --------------------
[loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-cts-1 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2023-04-06T21:18:25Z
--- stderr ---
------------- Check pod ds-cts-1 filesystem is accessible -------------
[loop_until]: kubectl --namespace=xlou exec ds-cts-1 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
--- stderr ---
------------------ Check pod ds-cts-1 restart count ------------------
[loop_until]: kubectl --namespace=xlou get pod ds-cts-1 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-cts-1 has been restarted 0 times.
-------------------- Check pod ds-cts-2 is running --------------------
[loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-cts-2 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2023-04-06T21:18:48Z
--- stderr ---
------------- Check pod ds-cts-2 filesystem is accessible -------------
[loop_until]: kubectl --namespace=xlou exec ds-cts-2 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
--- stderr ---
------------------ Check pod ds-cts-2 restart count ------------------
[loop_until]: kubectl --namespace=xlou get pod ds-cts-2 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-cts-2 has been restarted 0 times.
************************** Initializing component pods for DS-IDREPO **************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-idrepo -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app=ds-idrepo -o jsonpath={.items[*].metadata.name}` | grep 3
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
ds-idrepo-0 ds-idrepo-1 ds-idrepo-2
--- stderr ---
------------------ Check pod ds-idrepo-0 is running ------------------
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2023-04-06T21:17:59Z
--- stderr ---
----------- Check pod ds-idrepo-0 filesystem is accessible -----------
[loop_until]: kubectl --namespace=xlou exec ds-idrepo-0 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
--- stderr ---
----------------- Check pod ds-idrepo-0 restart count -----------------
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-idrepo-0 has been restarted 0 times.
------------------ Check pod ds-idrepo-0 is running ------------------
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2023-04-06T21:17:59Z
--- stderr ---
----------- Check pod ds-idrepo-0 filesystem is accessible -----------
[loop_until]: kubectl --namespace=xlou exec ds-idrepo-0 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
--- stderr ---
----------------- Check pod ds-idrepo-0 restart count -----------------
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-idrepo-0 has been restarted 0 times.
------------------ Check pod ds-idrepo-1 is running ------------------
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2023-04-06T21:18:31Z
--- stderr ---
----------- Check pod ds-idrepo-1 filesystem is accessible -----------
[loop_until]: kubectl --namespace=xlou exec ds-idrepo-1 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
--- stderr ---
----------------- Check pod ds-idrepo-1 restart count -----------------
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-idrepo-1 has been restarted 0 times.
------------------ Check pod ds-idrepo-2 is running ------------------
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2023-04-06T21:19:03Z
--- stderr ---
----------- Check pod ds-idrepo-2 filesystem is accessible -----------
[loop_until]: kubectl --namespace=xlou exec ds-idrepo-2 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
--- stderr ---
----------------- Check pod ds-idrepo-2 restart count -----------------
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-idrepo-2 has been restarted 0 times.
****************************** Initializing component pods for AM ******************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app=am -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app=am -o jsonpath={.items[*].metadata.name}` | grep 3
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
am-685d4f4864-rjb9b am-685d4f4864-vc455 am-685d4f4864-wjgw4
--- stderr ---
-------------- Check pod am-685d4f4864-rjb9b is running --------------
[loop_until]: kubectl --namespace=xlou get pods am-685d4f4864-rjb9b -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods am-685d4f4864-rjb9b -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod am-685d4f4864-rjb9b -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2023-04-06T21:19:36Z
--- stderr ---
------- Check pod am-685d4f4864-rjb9b filesystem is accessible -------
[loop_until]: kubectl --namespace=xlou exec am-685d4f4864-rjb9b -c openam -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin boot dev etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var
--- stderr ---
------------- Check pod am-685d4f4864-rjb9b restart count -------------
[loop_until]: kubectl --namespace=xlou get pod am-685d4f4864-rjb9b -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod am-685d4f4864-rjb9b has been restarted 0 times.
-------------- Check pod am-685d4f4864-vc455 is running --------------
[loop_until]: kubectl --namespace=xlou get pods am-685d4f4864-vc455 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods am-685d4f4864-vc455 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod am-685d4f4864-vc455 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2023-04-06T21:19:36Z
--- stderr ---
------- Check pod am-685d4f4864-vc455 filesystem is accessible -------
[loop_until]: kubectl --namespace=xlou exec am-685d4f4864-vc455 -c openam -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin boot dev etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var
--- stderr ---
------------- Check pod am-685d4f4864-vc455 restart count -------------
[loop_until]: kubectl --namespace=xlou get pod am-685d4f4864-vc455 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod am-685d4f4864-vc455 has been restarted 0 times.
-------------- Check pod am-685d4f4864-wjgw4 is running --------------
[loop_until]: kubectl --namespace=xlou get pods am-685d4f4864-wjgw4 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods am-685d4f4864-wjgw4 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod am-685d4f4864-wjgw4 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2023-04-06T21:19:36Z
--- stderr ---
------- Check pod am-685d4f4864-wjgw4 filesystem is accessible -------
[loop_until]: kubectl --namespace=xlou exec am-685d4f4864-wjgw4 -c openam -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin boot dev etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var
--- stderr ---
------------- Check pod am-685d4f4864-wjgw4 restart count -------------
[loop_until]: kubectl --namespace=xlou get pod am-685d4f4864-wjgw4 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod am-685d4f4864-wjgw4 has been restarted 0 times.
**************************** Initializing component pods for AMSTER ****************************
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou --field-selector status.phase!=Failed get pods -l app=amster -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
amster-jzrwz
--- stderr ---
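[editor's note] The amster pod list is fetched with --field-selector status.phase!=Failed because pods from failed job retries linger in the namespace; the filter leaves only the surviving attempt. A minimal sketch, assuming namespace xlou and the app=amster label used above:

    # Sketch: list only job pods that have not failed.
    kubectl -n xlou get pods -l app=amster \
      --field-selector 'status.phase!=Failed' \
      -o jsonpath='{.items[*].metadata.name}'
    # Completed job pods report phase Succeeded, so they still match; only
    # pods left over from failed retry attempts are excluded.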
***************************** Initializing component pods for IDM *****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployment -l app=idm -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app=idm -o jsonpath={.items[*].metadata.name}` | grep 2
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
idm-6b4845dbc5-bkxzm idm-6b4845dbc5-qq5rw
--- stderr ---
-------------- Check pod idm-6b4845dbc5-bkxzm is running --------------
[loop_until]: kubectl --namespace=xlou get pods idm-6b4845dbc5-bkxzm -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods idm-6b4845dbc5-bkxzm -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod idm-6b4845dbc5-bkxzm -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2023-04-06T21:19:36Z
--- stderr ---
------- Check pod idm-6b4845dbc5-bkxzm filesystem is accessible -------
[loop_until]: kubectl --namespace=xlou exec idm-6b4845dbc5-bkxzm -c openidm -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-11 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
--- stderr ---
------------ Check pod idm-6b4845dbc5-bkxzm restart count ------------
[loop_until]: kubectl --namespace=xlou get pod idm-6b4845dbc5-bkxzm -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod idm-6b4845dbc5-bkxzm has been restarted 0 times.
-------------- Check pod idm-6b4845dbc5-qq5rw is running --------------
[loop_until]: kubectl --namespace=xlou get pods idm-6b4845dbc5-qq5rw -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods idm-6b4845dbc5-qq5rw -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod idm-6b4845dbc5-qq5rw -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2023-04-06T21:19:36Z
--- stderr ---
------- Check pod idm-6b4845dbc5-qq5rw filesystem is accessible -------
[loop_until]: kubectl --namespace=xlou exec idm-6b4845dbc5-qq5rw -c openidm -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-11 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
--- stderr ---
------------ Check pod idm-6b4845dbc5-qq5rw restart count ------------
[loop_until]: kubectl --namespace=xlou get pod idm-6b4845dbc5-qq5rw -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod idm-6b4845dbc5-qq5rw has been restarted 0 times.
************************* Initializing component pods for END-USER-UI *************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].metadata.name}` | grep 1
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
end-user-ui-6f5dbc46f8-2zs42
--- stderr ---
---------- Check pod end-user-ui-6f5dbc46f8-2zs42 is running ----------
[loop_until]: kubectl --namespace=xlou get pods end-user-ui-6f5dbc46f8-2zs42 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods end-user-ui-6f5dbc46f8-2zs42 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod end-user-ui-6f5dbc46f8-2zs42 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2023-04-06T21:20:39Z
--- stderr ---
--- Check pod end-user-ui-6f5dbc46f8-2zs42 filesystem is accessible ---
[loop_until]: kubectl --namespace=xlou exec end-user-ui-6f5dbc46f8-2zs42 -c end-user-ui -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin boot dev docker-entrypoint.d docker-entrypoint.sh entrypoint.sh etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var variable_replacement.sh
--- stderr ---
-------- Check pod end-user-ui-6f5dbc46f8-2zs42 restart count --------
[loop_until]: kubectl --namespace=xlou get pod end-user-ui-6f5dbc46f8-2zs42 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod end-user-ui-6f5dbc46f8-2zs42 has been restarted 0 times.
*************************** Initializing component pods for LOGIN-UI ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].metadata.name}` | grep 1
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
login-ui-857ffdc996-r2bfs
--- stderr ---
----------- Check pod login-ui-857ffdc996-r2bfs is running -----------
[loop_until]: kubectl --namespace=xlou get pods login-ui-857ffdc996-r2bfs -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods login-ui-857ffdc996-r2bfs -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod login-ui-857ffdc996-r2bfs -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2023-04-06T21:20:39Z
--- stderr ---
---- Check pod login-ui-857ffdc996-r2bfs filesystem is accessible ----
[loop_until]: kubectl --namespace=xlou exec login-ui-857ffdc996-r2bfs -c login-ui -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin boot dev docker-entrypoint.d docker-entrypoint.sh entrypoint.sh etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var variable_replacement.sh
--- stderr ---
---------- Check pod login-ui-857ffdc996-r2bfs restart count ----------
[loop_until]: kubectl --namespace=xlou get pod login-ui-857ffdc996-r2bfs -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod login-ui-857ffdc996-r2bfs has been restarted 0 times.
*************************** Initializing component pods for ADMIN-UI ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].metadata.name}` | grep 1
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
admin-ui-56c658bb6-997qv
--- stderr ---
------------ Check pod admin-ui-56c658bb6-997qv is running ------------
[loop_until]: kubectl --namespace=xlou get pods admin-ui-56c658bb6-997qv -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods admin-ui-56c658bb6-997qv -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod admin-ui-56c658bb6-997qv -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2023-04-06T21:20:39Z
--- stderr ---
----- Check pod admin-ui-56c658bb6-997qv filesystem is accessible -----
[loop_until]: kubectl --namespace=xlou exec admin-ui-56c658bb6-997qv -c admin-ui -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin boot dev docker-entrypoint.d docker-entrypoint.sh entrypoint.sh etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var variable_replacement.sh
--- stderr ---
---------- Check pod admin-ui-56c658bb6-997qv restart count ----------
[loop_until]: kubectl --namespace=xlou get pod admin-ui-56c658bb6-997qv -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod admin-ui-56c658bb6-997qv has been restarted 0 times.
***************************** Checking DS-CTS component is running *****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-cts -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
-------------- Waiting for 3 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get statefulsets ds-cts -o jsonpath="current:{.status.currentReplicas} ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "current:3 ready:3 replicas:3"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
current:3 ready:3 replicas:3
--- stderr ---
*************************** Checking DS-IDREPO component is running ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-idrepo -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
-------------- Waiting for 3 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get statefulsets ds-idrepo -o jsonpath="current:{.status.currentReplicas} ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "current:3 ready:3 replicas:3"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
current:3 ready:3 replicas:3
--- stderr ---
******************************* Checking AM component is running *******************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app=am -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
-------------- Waiting for 3 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get deployments am -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:3 replicas:3"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:3 replicas:3
--- stderr ---
***************************** Checking AMSTER component is running *****************************
--------------------- Get expected number of pods ---------------------
-------------- Waiting for 1 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get jobs amster -o jsonpath="{.status.succeeded}" | grep "1"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
****************************** Checking IDM component is running ******************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployment -l app=idm -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2
--- stderr ---
-------------- Waiting for 2 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get deployment idm -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:2 replicas:2"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:2 replicas:2
--- stderr ---
************************** Checking END-USER-UI component is running **************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
-------------- Waiting for 1 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get deployments end-user-ui -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:1 replicas:1"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:1 replicas:1
--- stderr ---
**************************** Checking LOGIN-UI component is running ****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
-------------- Waiting for 1 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get deployments login-ui -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:1 replicas:1"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:1 replicas:1
--- stderr ---
**************************** Checking ADMIN-UI component is running ****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
-------------- Waiting for 1 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get deployments admin-ui -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:1 replicas:1"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:1 replicas:1
--- stderr ---
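[editor's note] These waits grep a JSONPath rendering of the workload status until ready counts match. Stock kubectl can express the same waits more directly; a sketch, assuming namespace xlou, with timeouts mirroring the 900 s max_time above:

    # Sketch: equivalent readiness waits with built-in kubectl commands.
    kubectl -n xlou rollout status statefulset/ds-cts    --timeout=900s
    kubectl -n xlou rollout status statefulset/ds-idrepo --timeout=900s
    kubectl -n xlou rollout status deployment/am         --timeout=900s
    # Jobs have no rollout; wait on the complete condition instead.
    kubectl -n xlou wait --for=condition=complete job/amster --timeout=900s

The grep-on-JSONPath approach used by the harness has one advantage over rollout status: it also lets the caller assert an exact replica count rather than just rollout completion.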
**************************** Initializing component pods for DS-CTS ****************************
--------------------- Get DS-CTS software version ---------------------
[loop_until]: kubectl --namespace=xlou exec ds-cts-0 -c ds -- find /opt/opendj -name opendj-core.jar
[loop_until]: (max_time=30, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
/opt/opendj/lib/opendj-core.jar
--- stderr ---
[loop_until]: kubectl --namespace=xlou cp ds-cts-0:/opt/opendj/lib/opendj-core.jar /tmp/ds-cts_info/opendj-core.jar -c ds
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
************************** Initializing component pods for DS-IDREPO **************************
------------------- Get DS-IDREPO software version -------------------
[loop_until]: kubectl --namespace=xlou exec ds-idrepo-0 -c ds -- find /opt/opendj -name opendj-core.jar
[loop_until]: (max_time=30, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
/opt/opendj/lib/opendj-core.jar
--- stderr ---
[loop_until]: kubectl --namespace=xlou cp ds-idrepo-0:/opt/opendj/lib/opendj-core.jar /tmp/ds-idrepo_info/opendj-core.jar -c ds
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
****************************** Initializing component pods for AM ******************************
----------------------- Get AM software version -----------------------
Getting product version from https://xlou.iam.xlou-cdm.engineeringpit.com/am/json/serverinfo/version
- Login amadmin to get token
[loop_until]: kubectl --namespace=xlou get secret am-env-secrets -o jsonpath="{.data.AM_PASSWORDS_AMADMIN_CLEAR}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
MlRDV0tRTVZUWkJMQzdKQXVaWFliYWJo
--- stderr ---
Authenticate user via REST
[http_cmd]: curl -H "X-OpenAM-Username: amadmin" -H "X-OpenAM-Password: 2TCWKQMVTZBLC7JAuZXYbabh" -H "Content-Type: application/json" -H "Accept-API-Version: resource=2.0, protocol=1.0" -L -X POST "https://xlou.iam.xlou-cdm.engineeringpit.com/am/json/authenticate?realm=/"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
{
  "tokenId": "3oyhfpU4fyL1S0jeGFhspkDKubo.*AAJTSQACMDIAAlNLABw4VWtEYUhkcjFpRysxNHZNZHFUaTVmN2VqQms9AAR0eXBlAANDVFMAAlMxAAIwMQ..*",
  "successUrl": "/am/console",
  "realm": "/"
}
[http_cmd]: curl -L -X GET --cookie "amlbcookie=01" --cookie "iPlanetDirectoryPro=3oyhfpU4fyL1S0jeGFhspkDKubo.*AAJTSQACMDIAAlNLABw4VWtEYUhkcjFpRysxNHZNZHFUaTVmN2VqQms9AAR0eXBlAANDVFMAAlMxAAIwMQ..*" --cookie "route=1680816087.049.12261.735081|f60edb382037eb2df1e800d563ad78a7" "https://xlou.iam.xlou-cdm.engineeringpit.com/am/json/serverinfo/version"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
{
  "_id": "version",
  "_rev": "-560502796",
  "version": "7.3.0-SNAPSHOT",
  "fullVersion": "ForgeRock Access Management 7.3.0-SNAPSHOT Build d0e0f0f08d45c97e1331f11b6b2ae9fc31d5e28d (2023-March-31 06:52)",
  "revision": "d0e0f0f08d45c97e1331f11b6b2ae9fc31d5e28d",
  "date": "2023-March-31 06:52"
}
**************************** Initializing component pods for AMSTER ****************************
***************************** Initializing component pods for IDM *****************************
---------------------- Get IDM software version ----------------------
Getting product version from https://xlou.iam.xlou-cdm.engineeringpit.com/openidm/info/version
[http_cmd]: curl -H "X-OpenIDM-Username: anonymous" -H "X-OpenIDM-Password: anonymous" -L -X GET "https://xlou.iam.xlou-cdm.engineeringpit.com/openidm/info/version"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
{
  "_id": "version",
  "productVersion": "7.3.0-SNAPSHOT",
  "productBuildDate": "20230330162641",
  "productRevision": "ed278902ce"
}
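[editor's note] The AM version probe above decodes the amadmin password from a secret, authenticates via AM's REST endpoint, and replays the session token as a cookie. A standalone sketch under the assumptions that jq is available and the FQDN matches this deployment:

    # Sketch: fetch the AM version the same way the harness does.
    FQDN=xlou.iam.xlou-cdm.engineeringpit.com
    ADMIN_PW=$(kubectl -n xlou get secret am-env-secrets \
                 -o jsonpath='{.data.AM_PASSWORDS_AMADMIN_CLEAR}' | base64 -d)
    TOKEN=$(curl -s -X POST \
      -H "X-OpenAM-Username: amadmin" -H "X-OpenAM-Password: $ADMIN_PW" \
      -H "Content-Type: application/json" \
      -H "Accept-API-Version: resource=2.0, protocol=1.0" \
      "https://$FQDN/am/json/authenticate?realm=/" | jq -r .tokenId)
    # The session token is passed back as the iPlanetDirectoryPro cookie.
    curl -s --cookie "iPlanetDirectoryPro=$TOKEN" \
      "https://$FQDN/am/json/serverinfo/version" | jq -r .fullVersion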
************************* Initializing component pods for END-USER-UI *************************
------------------ Get END-USER-UI software version ------------------
[loop_until]: kubectl --namespace=xlou exec end-user-ui-6f5dbc46f8-2zs42 -c end-user-ui -- find /usr/share/nginx/html -name chunk-vendors.*.js
[loop_until]: (max_time=30, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
/usr/share/nginx/html/js/chunk-vendors.013bbd1b.js
--- stderr ---
[loop_until]: kubectl --namespace=xlou cp end-user-ui-6f5dbc46f8-2zs42:/usr/share/nginx/html/js/chunk-vendors.013bbd1b.js /tmp/end-user-ui_info/chunk-vendors.013bbd1b.js -c end-user-ui
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
*************************** Initializing component pods for LOGIN-UI ***************************
-------------------- Get LOGIN-UI software version --------------------
[loop_until]: kubectl --namespace=xlou exec login-ui-857ffdc996-r2bfs -c login-ui -- find /usr/share/nginx/html -name chunk-vendors.*.js
[loop_until]: (max_time=30, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
/usr/share/nginx/html/js/chunk-vendors.7c4c4742.js
--- stderr ---
[loop_until]: kubectl --namespace=xlou cp login-ui-857ffdc996-r2bfs:/usr/share/nginx/html/js/chunk-vendors.7c4c4742.js /tmp/login-ui_info/chunk-vendors.7c4c4742.js -c login-ui
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
*************************** Initializing component pods for ADMIN-UI ***************************
-------------------- Get ADMIN-UI software version --------------------
[loop_until]: kubectl --namespace=xlou exec admin-ui-56c658bb6-997qv -c admin-ui -- find /usr/share/nginx/html -name chunk-vendors.*.js
[loop_until]: (max_time=30, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
/usr/share/nginx/html/js/chunk-vendors.25462d15.js
--- stderr ---
[loop_until]: kubectl --namespace=xlou cp admin-ui-56c658bb6-997qv:/usr/share/nginx/html/js/chunk-vendors.25462d15.js /tmp/admin-ui_info/chunk-vendors.25462d15.js -c admin-ui
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
[loop_until]: kubectl --namespace=xlou get secret ds-passwords -o jsonpath="{.data.dirmanager\.pw}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
NEt0akZOYUZ5V2NGeDBYRUVjV2ZCenI0QnA4bktTaFU=
--- stderr ---
====================================================================================================
================ Admin password for DS-CTS is: 4KtjFNaFyWcFx0XEEcWfBzr4Bp8nKShU ================
====================================================================================================
[loop_until]: kubectl --namespace=xlou get secret ds-passwords -o jsonpath="{.data.dirmanager\.pw}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
NEt0akZOYUZ5V2NGeDBYRUVjV2ZCenI0QnA4bktTaFU=
--- stderr ---
====================================================================================================
============== Admin password for DS-IDREPO is: 4KtjFNaFyWcFx0XEEcWfBzr4Bp8nKShU ==============
====================================================================================================
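[editor's note] The admin passwords printed in the banners are the base64-decoded values of the secret data fetched above. A minimal sketch, assuming namespace xlou and the key name shown in the harness command:

    # Sketch: recover a directory-manager password from the secret by hand.
    # kubectl returns secret data base64-encoded; the dot in the key name must
    # be escaped inside the JSONPath expression, exactly as in the log above.
    kubectl -n xlou get secret ds-passwords \
      -o jsonpath='{.data.dirmanager\.pw}' | base64 -d; echo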
====================================================================================================
====================== Admin password for AM is: 2TCWKQMVTZBLC7JAuZXYbabh ======================
====================================================================================================
[loop_until]: kubectl --namespace=xlou get secret idm-env-secrets -o jsonpath="{.data.OPENIDM_ADMIN_PASSWORD}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
Zk9ZcUxxQUwzaXh6RVhKSFR5RXVtWk5x
--- stderr ---
====================================================================================================
===================== Admin password for IDM is: fOYqLqAL3ixzEXJHTyEumZNq =====================
====================================================================================================
*************************************** Dumping pod list ***************************************
Dumping pod list to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/platform/pod-logs/stack/20230406-212133-after-deployment/_pod-list.txt
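[editor's note] The pod-list snapshot above, and the per-pod description and log dumps performed at the end of this task (see the Dumping components logs section below), can be approximated by hand. A rough sketch, assuming namespace xlou; the output layout only imitates the harness results directory:

    # Sketch: capture a pod inventory plus per-pod describe/logs dumps.
    outdir=pod-logs/after-deployment
    mkdir -p "$outdir"
    kubectl -n xlou get pods -o wide > "$outdir/_pod-list.txt"
    for pod in $(kubectl -n xlou get pods -o name); do
      name=${pod#pod/}
      { kubectl -n xlou describe "$pod"
        kubectl -n xlou logs "$pod" --all-containers; } > "$outdir/$name.txt"
    done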
**************************** Initializing component pods for DS-CTS ****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-cts -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app=ds-cts -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ds-cts-0 ds-cts-1 ds-cts-2
--- stderr ---
************************** Initializing component pods for DS-IDREPO **************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-idrepo -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app=ds-idrepo -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ds-idrepo-0 ds-idrepo-1 ds-idrepo-2
--- stderr ---
****************************** Initializing component pods for AM ******************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app=am -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app=am -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
am-685d4f4864-rjb9b am-685d4f4864-vc455 am-685d4f4864-wjgw4
--- stderr ---
**************************** Initializing component pods for AMSTER ****************************
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app=amster -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
amster-jzrwz
--- stderr ---
***************************** Initializing component pods for IDM *****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployment -l app=idm -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app=idm -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
idm-6b4845dbc5-bkxzm idm-6b4845dbc5-qq5rw
--- stderr ---
************************* Initializing component pods for END-USER-UI *************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
end-user-ui-6f5dbc46f8-2zs42
--- stderr ---
*************************** Initializing component pods for LOGIN-UI ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
login-ui-857ffdc996-r2bfs
--- stderr ---
*************************** Initializing component pods for ADMIN-UI ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
admin-ui-56c658bb6-997qv
--- stderr ---
*********************************** Dumping components logs ***********************************
----------------------- Dumping logs for DS-CTS -----------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/platform/pod-logs/stack/20230406-212133-after-deployment/ds-cts-0.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/platform/pod-logs/stack/20230406-212133-after-deployment/ds-cts-1.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/platform/pod-logs/stack/20230406-212133-after-deployment/ds-cts-2.txt
Check pod logs for errors
--------------------- Dumping logs for DS-IDREPO ---------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/platform/pod-logs/stack/20230406-212133-after-deployment/ds-idrepo-0.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/platform/pod-logs/stack/20230406-212133-after-deployment/ds-idrepo-1.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/platform/pod-logs/stack/20230406-212133-after-deployment/ds-idrepo-2.txt
Check pod logs for errors
------------------------- Dumping logs for AM -------------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/platform/pod-logs/stack/20230406-212133-after-deployment/am-685d4f4864-rjb9b.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/platform/pod-logs/stack/20230406-212133-after-deployment/am-685d4f4864-vc455.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/platform/pod-logs/stack/20230406-212133-after-deployment/am-685d4f4864-wjgw4.txt
Check pod logs for errors
----------------------- Dumping logs for AMSTER -----------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/platform/pod-logs/stack/20230406-212133-after-deployment/amster-jzrwz.txt
Check pod logs for errors
------------------------ Dumping logs for IDM ------------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/platform/pod-logs/stack/20230406-212133-after-deployment/idm-6b4845dbc5-bkxzm.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/platform/pod-logs/stack/20230406-212133-after-deployment/idm-6b4845dbc5-qq5rw.txt
Check pod logs for errors
-------------------- Dumping logs for END-USER-UI --------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/platform/pod-logs/stack/20230406-212133-after-deployment/end-user-ui-6f5dbc46f8-2zs42.txt
Check pod logs for errors
---------------------- Dumping logs for LOGIN-UI ----------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/platform/pod-logs/stack/20230406-212133-after-deployment/login-ui-857ffdc996-r2bfs.txt
Check pod logs for errors
---------------------- Dumping logs for ADMIN-UI ----------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/platform/pod-logs/stack/20230406-212133-after-deployment/admin-ui-56c658bb6-997qv.txt
Check pod logs for errors
[06/Apr/2023 21:21:53] - INFO: Deployment successful
________________________________________________________________________________
[06/Apr/2023 21:21:53] Deploy_all_forgerock_components post : Post method
________________________________________________________________________________
Setting result to PASS
Task has been successfully stopped