--Task--
name: Deploy_all_forgerock_components
enabled: True
class_name: DeployComponentsTask
source_name: controller
source_namespace: >default<
target_name: controller
target_namespace: >default<
start: 0
stop: None
timeout: no timeout
loop: False
interval: None
dependencies: ['Enable_prometheus_admin_api']
wait_for: []
options: {}
group_name: None
Current dir: /mnt/disks/data/xslou/lodestar-fork/pyrock
________________________________________________________________________________
[04/Aug/2022 19:26:44] Deploy_all_forgerock_components pre : Initialising task parameters
________________________________________________________________________________
task will be executed on controller (localhost)
________________________________________________________________________________
[04/Aug/2022 19:26:44] Deploy_all_forgerock_components step1 : Deploy components
________________________________________________________________________________
******************************** Cleaning up existing namespace ********************************
------------------ Deleting secret agent controller ------------------
[loop_until]: kubectl --namespace=xlou delete sac --all
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secretagentconfiguration.secret-agent.secrets.forgerock.io "forgerock-sac" deleted
--- stderr ---
----------------------- Deleting all resources -----------------------
[loop_until]: kubectl --namespace=xlou delete all --all --grace-period=0 --force
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
pod "admin-ui-c9b4b8d7d-rnk75" force deleted
pod "am-5b5df85c64-g9g9d" force deleted
pod "am-5b5df85c64-kk5hp" force deleted
pod "am-5b5df85c64-w8kt7" force deleted
pod "amster-jcqkt" force deleted
pod "ds-cts-0" force deleted
pod "ds-cts-1" force deleted
pod "ds-cts-2" force deleted
pod "ds-idrepo-0" force deleted
pod "ds-idrepo-1" force deleted
pod "ds-idrepo-2" force deleted
pod "end-user-ui-6fd8b6648d-m9vgb" force deleted
pod "idm-55dc5c6786-85q68" force deleted
pod "idm-55dc5c6786-wpgk2" force deleted
pod "ldif-importer-9zrj9" force deleted
pod "login-ui-658fcbc7d8-vft5r" force deleted
pod "overseer-0-66fd5d7cfd-gx67z" force deleted
service "admin-ui" force deleted
service "am" force deleted
service "ds-cts" force deleted
service "ds-idrepo" force deleted
service "end-user-ui" force deleted
service "idm" force deleted
service "login-ui" force deleted
service "overseer-0" force deleted
deployment.apps "admin-ui" force deleted
deployment.apps "am" force deleted
deployment.apps "end-user-ui" force deleted
deployment.apps "idm" force deleted
deployment.apps "login-ui" force deleted
deployment.apps "overseer-0" force deleted
statefulset.apps "ds-cts" force deleted
statefulset.apps "ds-idrepo" force deleted
job.batch "amster" force deleted
job.batch "ldif-importer" force deleted
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
[loop_until]: kubectl -n xlou get pods | grep "No resources found"
[loop_until]: (max_time=360, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 10s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 21s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 31s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 41s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
No resources found in xlou namespace.
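The `[loop_until]` entries above all follow one retry primitive: run a command, accept it only when the return code is in `expected_rc` (and, where given, an expected pattern appears in the output), otherwise retry every `interval` seconds until `max_time` is exhausted. A minimal sketch of that pattern in Python; the function and parameter names mirror the log but are assumptions, not lodestar's actual API:

```python
import time
from typing import Callable, Optional, Tuple

def loop_until(run: Callable[[], Tuple[int, str]],
               max_time: float = 180,
               interval: float = 5,
               expected_rc: Tuple[int, ...] = (0,),
               expected_output: Optional[str] = None) -> Tuple[int, str]:
    """Retry `run` until it returns an accepted rc (and pattern, if given)."""
    deadline = time.monotonic() + max_time
    while True:
        rc, out = run()
        rc_ok = rc in expected_rc
        out_ok = expected_output is None or expected_output in out
        if rc_ok and out_ok:
            return rc, out                   # success: both checks passed
        if time.monotonic() >= deadline:
            raise TimeoutError(f"gave up after {max_time}s (last rc={rc})")
        time.sleep(interval)                 # wait before the next attempt
```

Wrapping `kubectl -n xlou get pods` with `expected_output="No resources found"` reproduces the wait-for-empty-namespace poll above, including the "failed to find expected output ... retry" iterations.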
------------------------- Deleting configmap -------------------------
[loop_until]: kubectl --namespace=xlou get configmap -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
idm idm-logging-properties kube-root-ca.crt overseer-config-0 platform-config
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap idm --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "idm" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap idm-logging-properties --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "idm-logging-properties" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap kube-root-ca.crt --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "kube-root-ca.crt" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap overseer-config-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "overseer-config-0" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap platform-config --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "platform-config" deleted
--- stderr ---
--------------------------- Deleting secret ---------------------------
[loop_until]: kubectl --namespace=xlou get secret -o jsonpath='{.items[?(@.type=="Opaque")].metadata.name}'
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
cloud-storage-credentials-cts cloud-storage-credentials-idrepo
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete secret cloud-storage-credentials-cts --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secret "cloud-storage-credentials-cts" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete secret cloud-storage-credentials-idrepo --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secret "cloud-storage-credentials-idrepo" deleted
--- stderr ---
-------------------------- Deleting ingress --------------------------
[loop_until]: kubectl --namespace=xlou get ingress -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
forgerock ig-web overseer-0
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete ingress forgerock --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ingress.networking.k8s.io "forgerock" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete ingress ig-web --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ingress.networking.k8s.io "ig-web" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete ingress overseer-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ingress.networking.k8s.io "overseer-0" deleted
--- stderr ---
---------------------------- Deleting pvc ----------------------------
[loop_until]: kubectl --namespace=xlou get pvc -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
data-ds-cts-0 data-ds-cts-1 data-ds-cts-2 data-ds-idrepo-0 data-ds-idrepo-1 data-ds-idrepo-2 overseer-0
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-cts-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-cts-0" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-cts-1 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-cts-1" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-cts-2 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-cts-2" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-idrepo-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-idrepo-0" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-idrepo-1 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-idrepo-1" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-idrepo-2 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-idrepo-2" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc overseer-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "overseer-0" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pv ds-backup-xlou --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
warning: deleting cluster-scoped resources, not scoped to the provided namespace
------------------------- Deleting namespace -------------------------
[loop_until]: kubectl delete namespaces xlou --ignore-not-found --grace-period=0 --force
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
namespace "xlou" force deleted
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
[loop_until]: awk -F" " "{print NF}" <<< `kubectl get namespace xlou --ignore-not-found` | grep 0
[loop_until]: (max_time=600, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
------------------ Deleting secret agent controller ------------------
[loop_until]: kubectl --namespace=xlou-sp delete sac --all
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secretagentconfiguration.secret-agent.secrets.forgerock.io "forgerock-sac" deleted
--- stderr ---
----------------------- Deleting all resources -----------------------
[loop_until]: kubectl --namespace=xlou-sp delete all --all --grace-period=0 --force
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
pod "admin-ui-75d858f46-l2n4l" force deleted
pod "am-66bb964b54-2q4mx" force deleted
pod "am-66bb964b54-7vkkm" force deleted
pod "amster-5qdlr" force deleted
pod "ds-cts-0" force deleted
pod "ds-cts-1" force deleted
pod "ds-cts-2" force deleted
pod "ds-idrepo-0" force deleted
pod "ds-idrepo-1" force deleted
pod "ds-idrepo-2" force deleted
pod "end-user-ui-75c7c9b7d8-h9847" force deleted
pod "idm-54cf4596bf-h8498" force deleted
pod "idm-54cf4596bf-tdhvp" force deleted
pod "ldif-importer-4gxr2" force deleted
pod "login-ui-95c688884-6xl7p" force deleted
service "admin-ui" force deleted
service "am" force deleted
service "ds-cts" force deleted
service "ds-idrepo" force deleted
service "end-user-ui" force deleted
service "idm" force deleted
service "login-ui" force deleted
deployment.apps "admin-ui" force deleted
deployment.apps "am" force deleted
deployment.apps "end-user-ui" force deleted
deployment.apps "idm" force deleted
deployment.apps "login-ui" force deleted
statefulset.apps "ds-cts" force deleted
statefulset.apps "ds-idrepo" force deleted
job.batch "amster" force deleted
job.batch "ldif-importer" force deleted
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
[loop_until]: kubectl -n xlou-sp get pods | grep "No resources found"
[loop_until]: (max_time=360, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 10s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
No resources found in xlou-sp namespace.
------------------------- Deleting configmap -------------------------
[loop_until]: kubectl --namespace=xlou-sp get configmap -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
idm idm-logging-properties kube-root-ca.crt platform-config
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp delete configmap idm --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "idm" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp delete configmap idm-logging-properties --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "idm-logging-properties" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp delete configmap kube-root-ca.crt --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "kube-root-ca.crt" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp delete configmap platform-config --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "platform-config" deleted
--- stderr ---
--------------------------- Deleting secret ---------------------------
[loop_until]: kubectl --namespace=xlou-sp get secret -o jsonpath='{.items[?(@.type=="Opaque")].metadata.name}'
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
cloud-storage-credentials-cts cloud-storage-credentials-idrepo
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp delete secret cloud-storage-credentials-cts --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secret "cloud-storage-credentials-cts" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp delete secret cloud-storage-credentials-idrepo --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secret "cloud-storage-credentials-idrepo" deleted
--- stderr ---
-------------------------- Deleting ingress --------------------------
[loop_until]: kubectl --namespace=xlou-sp get ingress -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
forgerock ig-web
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp delete ingress forgerock --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ingress.networking.k8s.io "forgerock" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp delete ingress ig-web --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ingress.networking.k8s.io "ig-web" deleted
--- stderr ---
---------------------------- Deleting pvc ----------------------------
[loop_until]: kubectl --namespace=xlou-sp get pvc -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
data-ds-cts-0 data-ds-cts-1 data-ds-cts-2 data-ds-idrepo-0 data-ds-idrepo-1 data-ds-idrepo-2
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp delete pvc data-ds-cts-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-cts-0" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp delete pvc data-ds-cts-1 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-cts-1" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp delete pvc data-ds-cts-2 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-cts-2" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp delete pvc data-ds-idrepo-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-idrepo-0" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp delete pvc data-ds-idrepo-1 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-idrepo-1" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp delete pvc data-ds-idrepo-2 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-idrepo-2" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp delete pv ds-backup-xlou-sp --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
warning: deleting cluster-scoped resources, not scoped to the provided namespace
------------------------- Deleting namespace -------------------------
[loop_until]: kubectl delete namespaces xlou-sp --ignore-not-found --grace-period=0 --force
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
namespace "xlou-sp" force deleted
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
[loop_until]: awk -F" " "{print NF}" <<< `kubectl get namespace xlou-sp --ignore-not-found` | grep 0
[loop_until]: (max_time=600, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
************************************* Creating deployment *************************************
Creating normal (forgeops) type deployment for deployment: stack
------- Custom component configuration present. Loading values -------
------------------ Deleting secret agent controller ------------------
[loop_until]: kubectl --namespace=xlou delete sac --all
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
No resources found
--- stderr ---
----------------------- Deleting all resources -----------------------
[loop_until]: kubectl --namespace=xlou delete all --all --grace-period=0 --force
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
No resources found
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
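The namespace-deletion wait above relies on a shell trick: `kubectl get namespace <ns> --ignore-not-found` prints nothing once the namespace is gone, so piping it through `awk -F" " "{print NF}" | grep 0` succeeds exactly when the field count is zero. The same check expressed directly in Python; the helper name is an assumption for illustration:

```python
def namespace_gone(kubectl_output: str) -> bool:
    """True when `kubectl get namespace <ns> --ignore-not-found` printed nothing.

    Mirrors the `awk -F" " "{print NF}" | grep 0` idiom in the log:
    zero whitespace-separated fields means the namespace no longer exists.
    """
    return len(kubectl_output.split()) == 0
```

Checking the field count rather than the return code matters here because `--ignore-not-found` makes kubectl exit 0 whether or not the namespace still exists.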
[loop_until]: kubectl -n xlou get pods | grep "No resources found"
[loop_until]: (max_time=360, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
No resources found in xlou namespace.
------------------------- Deleting configmap -------------------------
[loop_until]: kubectl --namespace=xlou get configmap -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
--------------------------- Deleting secret ---------------------------
[loop_until]: kubectl --namespace=xlou get secret -o jsonpath='{.items[?(@.type=="Opaque")].metadata.name}'
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
-------------------------- Deleting ingress --------------------------
[loop_until]: kubectl --namespace=xlou get ingress -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
---------------------------- Deleting pvc ----------------------------
[loop_until]: kubectl --namespace=xlou get pvc -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pv ds-backup-xlou --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
warning: deleting cluster-scoped resources, not scoped to the provided namespace
------------------------- Deleting namespace -------------------------
[loop_until]: kubectl delete namespaces xlou --ignore-not-found --grace-period=0 --force
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
[loop_until]: awk -F" " "{print NF}" <<< `kubectl get namespace xlou --ignore-not-found` | grep 0
[loop_until]: (max_time=600, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
[loop_until]: kubectl create namespace xlou
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
namespace/xlou created
--- stderr ---
[loop_until]: kubectl label namespace xlou self-service=false timeout=48
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
namespace/xlou labeled
--- stderr ---
************************************ Configuring components ************************************
Applying custom configuration, dockerfiles to deployment and custom lodestar component configuration
Copying /mnt/disks/data/xslou/lodestar-fork/shared/config/custom/saml2/kustomize/overlay to /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/overlay
No custom features provided. Nothing to do.
---- Updating components image tag/repo from platform-images repo ----
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --clean
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Cleaning up.
[WARNING] Found nothing to clean.
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref master-ready-for-dev-pipelines --products am
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images.
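The recreate step above always runs the same three kubectl calls in order: force-delete the namespace (idempotent via `--ignore-not-found`), recreate it, then label it. A sketch of that sequence as a command builder; the function name and signature are assumptions, not lodestar's API:

```python
from typing import Dict, List

def namespace_setup_commands(ns: str, labels: Dict[str, str]) -> List[List[str]]:
    """Build the kubectl calls used above: force-delete, recreate, label.

    Order matters: the namespace must be fully gone (the log polls for
    this separately) before `create` can succeed.
    """
    label_args = [f"{k}={v}" for k, v in labels.items()]
    return [
        ["kubectl", "delete", "namespaces", ns,
         "--ignore-not-found", "--grace-period=0", "--force"],
        ["kubectl", "create", "namespace", ns],
        ["kubectl", "label", "namespace", ns, *label_args],
    ]
```

For the run above this would yield the `xlou` delete/create calls plus `kubectl label namespace xlou self-service=false timeout=48`.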
[INFO] Repo is at d8db6633b5bc4bd40b6cf81e0ba8a05139852967 on branch HEAD
[INFO] Updating products am
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am/Dockerfile to use gcr.io/forgerock-io/am-cdk/pit1:7.3.0-ef7fed73391311a2849509d1598d404ea1347307
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref master-ready-for-dev-pipelines --products amster
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images.
[INFO] Found existing files, attempting to not clone
[INFO] Repo is at d8db6633b5bc4bd40b6cf81e0ba8a05139852967 on branch HEAD
[INFO] Updating products amster
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster/Dockerfile to use gcr.io/forgerock-io/amster/pit1:7.3.0-ef7fed73391311a2849509d1598d404ea1347307
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref master-ready-for-dev-pipelines --products idm
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images.
[INFO] Found existing files, attempting to not clone
[INFO] Repo is at d8db6633b5bc4bd40b6cf81e0ba8a05139852967 on branch HEAD
[INFO] Updating products idm
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm/Dockerfile to use gcr.io/forgerock-io/idm-cdk/pit1:7.3.0-dcec447a222ffbb44b634a8b852f05d754ceb486
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref master-ready-for-dev-pipelines --products ds
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images.
[INFO] Found existing files, attempting to not clone
[INFO] Repo is at d8db6633b5bc4bd40b6cf81e0ba8a05139852967 on branch HEAD
[INFO] Updating products ds
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/dsutil/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/ds-new/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/proxy/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref master-ready-for-dev-pipelines --products ui
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images.
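set-images pins each component's Dockerfile to a specific image by rewriting its FROM line, and the later "No need to update FROM line" checks re-read that line and rewrite only when the desired image differs. A sketch of that idempotent update in Python; the function name is an assumption, not the script's actual interface:

```python
from typing import Tuple

def pin_base_image(dockerfile_text: str, image: str) -> Tuple[str, bool]:
    """Rewrite the first FROM line to `image`; report whether anything changed.

    Mirrors the set-images / "No need to update FROM line" behaviour above:
    an already-pinned Dockerfile is returned untouched.
    """
    lines = dockerfile_text.splitlines()
    for i, line in enumerate(lines):
        if line.startswith("FROM "):
            if line[5:].strip() == image:
                return dockerfile_text, False   # already pinned: no rewrite
            lines[i] = f"FROM {image}"
            return "\n".join(lines) + "\n", True
    raise ValueError("no FROM line found")
```

The changed-flag is what lets the later pass log "No need to update FROM line" instead of rewriting the file on every run.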
[INFO] Found existing files, attempting to not clone
[INFO] Repo is at d8db6633b5bc4bd40b6cf81e0ba8a05139852967 on branch HEAD
[INFO] Updating products ui
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui to use gcr.io/forgerock-io/platform-admin-ui/docker-build:7.2.0-a96229d99486b694a65a3d11c19e788ee91055a9
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui to use gcr.io/forgerock-io/platform-enduser-ui/docker-build:7.2.0-a96229d99486b694a65a3d11c19e788ee91055a9
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui to use gcr.io/forgerock-io/platform-login-ui/docker-build:7.2.0-a96229d99486b694a65a3d11c19e788ee91055a9
--- stderr ---
- Checking if component Dockerfile/kustomize needs additional update -
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker am
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am
--- stderr ---
Checking if am dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am/Dockerfile: gcr.io/forgerock-io/am-cdk/pit1:7.3.0-ef7fed73391311a2849509d1598d404ea1347307
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker amster
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster
--- stderr ---
Checking if amster dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster/Dockerfile: gcr.io/forgerock-io/amster/pit1:7.3.0-ef7fed73391311a2849509d1598d404ea1347307
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker idm
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm
--- stderr ---
Checking if idm dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm/Dockerfile: gcr.io/forgerock-io/idm-cdk/pit1:7.3.0-dcec447a222ffbb44b634a8b852f05d754ceb486
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker ds cts
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts
--- stderr ---
Checking if ds-cts dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts/Dockerfile: gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker ds idrepo
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo
--- stderr ---
Checking if ds-idrepo dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo/Dockerfile: gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path kustomize base end-user-ui
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui
--- stderr ---
Checking if end-user-ui kustomize file /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui/kustomization.yaml needs additional update (custom tag/repo from config.yaml or image name resolution)
Read image line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui/kustomization.yaml: gcr.io/forgerock-io/platform-enduser-ui/docker-build:7.2.0-a96229d99486b694a65a3d11c19e788ee91055a9
No need to update the image line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui/kustomization.yaml
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path kustomize base login-ui
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui
--- stderr ---
Checking if login-ui kustomize file /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui/kustomization.yaml needs additional update (custom tag/repo from config.yaml or image name resolution)
Read image line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui/kustomization.yaml: gcr.io/forgerock-io/platform-login-ui/docker-build:7.2.0-a96229d99486b694a65a3d11c19e788ee91055a9
No need to update the image line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui/kustomization.yaml
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path kustomize base admin-ui
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui
--- stderr ---
Checking if admin-ui kustomize file /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui/kustomization.yaml needs additional update (custom tag/repo from config.yaml or image name resolution)
Read image line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui/kustomization.yaml: gcr.io/forgerock-io/platform-admin-ui/docker-build:7.2.0-a96229d99486b694a65a3d11c19e788ee91055a9
No need to update the image line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui/kustomization.yaml
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path kustomize overlay medium
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/overlay/medium
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete -f /tmp/tmp6jyt618c
[loop_until]: (max_time=180, interval=5, expected_rc=[0, 1])
[loop_until]: OK (rc = 1)
--- stdout ---
--- stderr ---
Error from server (NotFound): error when deleting "/tmp/tmp6jyt618c": secrets "sslcert" not found
[loop_until]: kubectl --namespace=xlou apply -f /tmp/tmp6jyt618c
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secret/sslcert created
--- stderr ---
************************************* Creating deployment *************************************
Creating normal (forgeops) type deployment for deployment: sp
------- Custom
component configuration present. Loading values ------- ------------------ Deleting secret agent controller ------------------ [loop_until]: kubectl --namespace=xlou-sp delete sac --all [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- No resources found --- stderr --- ----------------------- Deleting all resources ----------------------- [loop_until]: kubectl --namespace=xlou-sp delete all --all --grace-period=0 --force [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- No resources found --- stderr --- warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. [loop_until]: kubectl -n xlou-sp get pods | grep "No resources found" [loop_until]: (max_time=360, interval=10, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- --- stderr --- No resources found in xlou-sp namespace. 
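The `[loop_until]` records above all follow one pattern: re-run a shell command every `interval` seconds until its return code is in `expected_rc` and, where a pipe to `grep` is used, until the expected text appears — for example polling `kubectl -n xlou-sp get pods | grep "No resources found"` every 10 s for up to 360 s. A minimal sketch of such a retry helper (pyrock's real `loop_until` is internal to the framework; the signature and defaults here are assumptions):

```python
import subprocess
import time

def loop_until(cmd, pattern=None, expected_rc=(0,), max_time=180, interval=5):
    """Re-run shell command `cmd` until its return code is in `expected_rc`
    and, if `pattern` is given, the text appears in combined stdout+stderr.
    Returns True on success, False once `max_time` seconds have elapsed."""
    deadline = time.monotonic() + max_time
    while True:
        proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        output = proc.stdout + proc.stderr
        if proc.returncode in expected_rc and (pattern is None or pattern in output):
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)

# Mirrors the poll above (kubectl prints "No resources found" on stderr,
# so the sketch searches combined output rather than piping through grep):
# loop_until("kubectl -n xlou-sp get pods", pattern="No resources found",
#            max_time=360, interval=10)
```

The helper deliberately treats the timeout as a soft deadline checked after each attempt, matching the "Function succeeded after 41s" style of reporting in the log.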
------------------------- Deleting configmap -------------------------
[loop_until]: kubectl --namespace=xlou-sp get configmap -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
--------------------------- Deleting secret ---------------------------
[loop_until]: kubectl --namespace=xlou-sp get secret -o jsonpath='{.items[?(@.type=="Opaque")].metadata.name}'
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
-------------------------- Deleting ingress --------------------------
[loop_until]: kubectl --namespace=xlou-sp get ingress -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
---------------------------- Deleting pvc ----------------------------
[loop_until]: kubectl --namespace=xlou-sp get pvc -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp delete pv ds-backup-xlou-sp --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
warning: deleting cluster-scoped resources, not scoped to the provided namespace
------------------------- Deleting namespace -------------------------
[loop_until]: kubectl delete namespaces xlou-sp --ignore-not-found --grace-period=0 --force
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
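Each cleanup step above uses the same two-command pattern: list resource names with a kubectl JSONPath query, then delete whatever came back (here every query returned nothing, so no deletes were needed). A sketch of how those invocations could be assembled — the helper names are hypothetical, not pyrock's API, and only command construction is exercised directly:

```python
import subprocess

def list_cmd(namespace, kind, jsonpath="{.items[*].metadata.name}"):
    # e.g. kubectl --namespace=xlou-sp get pvc -o jsonpath={.items[*].metadata.name}
    return ["kubectl", f"--namespace={namespace}", "get", kind,
            "-o", f"jsonpath={jsonpath}"]

def delete_cmd(namespace, kind, names):
    # e.g. kubectl --namespace=xlou-sp delete configmap foo bar
    return ["kubectl", f"--namespace={namespace}", "delete", kind, *names]

def cleanup(namespace, kind):
    """List resources of `kind` in `namespace`, then delete any found."""
    out = subprocess.run(list_cmd(namespace, kind),
                         capture_output=True, text=True).stdout
    names = out.split()  # JSONPath over .metadata.name yields space-separated names
    if names:
        subprocess.run(delete_cmd(namespace, kind, names))
    return names
```

Querying names first (rather than `delete kind --all`) lets the caller log exactly what is about to be removed and skip the delete entirely when the namespace is already clean, which is what the empty stdout blocks above show.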
[loop_until]: awk -F" " "{print NF}" <<< `kubectl get namespace xlou-sp --ignore-not-found` | grep 0
[loop_until]: (max_time=600, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
[loop_until]: kubectl create namespace xlou-sp
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
namespace/xlou-sp created
--- stderr ---
[loop_until]: kubectl label namespace xlou-sp self-service=false timeout=48
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
namespace/xlou-sp labeled
--- stderr ---
************************************ Configuring components ************************************
Applying custom configuration, dockerfiles to deployment and custom lodestar component configuration
Copying /mnt/disks/data/xslou/lodestar-fork/shared/config/custom/saml2/kustomize/overlay to /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/kustomize/overlay
No custom features provided. Nothing to do.
---- Updating components image tag/repo from platform-images repo ----
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/bin/set-images --clean
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Cleaning up.
[WARNING] Found nothing to clean.
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/bin/set-images --ref master-ready-for-dev-pipelines --products am
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/build/platform-images.
[INFO] Repo is at d8db6633b5bc4bd40b6cf81e0ba8a05139852967 on branch HEAD
[INFO] Updating products am
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/am/Dockerfile to use gcr.io/forgerock-io/am-cdk/pit1:7.3.0-ef7fed73391311a2849509d1598d404ea1347307
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/bin/set-images --ref master-ready-for-dev-pipelines --products amster
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/build/platform-images.
[INFO] Found existing files, attempting to not clone
[INFO] Repo is at d8db6633b5bc4bd40b6cf81e0ba8a05139852967 on branch HEAD
[INFO] Updating products amster
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/amster/Dockerfile to use gcr.io/forgerock-io/amster/pit1:7.3.0-ef7fed73391311a2849509d1598d404ea1347307
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/bin/set-images --ref master-ready-for-dev-pipelines --products idm
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/build/platform-images.
[INFO] Found existing files, attempting to not clone
[INFO] Repo is at d8db6633b5bc4bd40b6cf81e0ba8a05139852967 on branch HEAD
[INFO] Updating products idm
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/idm/Dockerfile to use gcr.io/forgerock-io/idm-cdk/pit1:7.3.0-dcec447a222ffbb44b634a8b852f05d754ceb486
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/bin/set-images --ref master-ready-for-dev-pipelines --products ds
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/build/platform-images.
[INFO] Found existing files, attempting to not clone
[INFO] Repo is at d8db6633b5bc4bd40b6cf81e0ba8a05139852967 on branch HEAD
[INFO] Updating products ds
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/ds/dsutil/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/ds/ds-new/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/ds/proxy/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/ds/cts/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/ds/idrepo/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/bin/set-images --ref master-ready-for-dev-pipelines --products ui
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/build/platform-images.
[INFO] Found existing files, attempting to not clone
[INFO] Repo is at d8db6633b5bc4bd40b6cf81e0ba8a05139852967 on branch HEAD
[INFO] Updating products ui
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/kustomize/base/admin-ui to use gcr.io/forgerock-io/platform-admin-ui/docker-build:7.2.0-a96229d99486b694a65a3d11c19e788ee91055a9
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/kustomize/base/end-user-ui to use gcr.io/forgerock-io/platform-enduser-ui/docker-build:7.2.0-a96229d99486b694a65a3d11c19e788ee91055a9
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/kustomize/base/login-ui to use gcr.io/forgerock-io/platform-login-ui/docker-build:7.2.0-a96229d99486b694a65a3d11c19e788ee91055a9
--- stderr ---
- Checking if component Dockerfile/kustomize needs additional update -
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/bin/config path docker am
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/am
--- stderr ---
Checking if am dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/am/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/am/Dockerfile: gcr.io/forgerock-io/am-cdk/pit1:7.3.0-ef7fed73391311a2849509d1598d404ea1347307
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/am/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/bin/config path docker amster
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/amster
--- stderr ---
Checking if amster dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/amster/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/amster/Dockerfile: gcr.io/forgerock-io/amster/pit1:7.3.0-ef7fed73391311a2849509d1598d404ea1347307
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/amster/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/bin/config path docker idm
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/idm
--- stderr ---
Checking if idm dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/idm/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/idm/Dockerfile: gcr.io/forgerock-io/idm-cdk/pit1:7.3.0-dcec447a222ffbb44b634a8b852f05d754ceb486
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/idm/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/bin/config path docker ds cts
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/ds/cts
--- stderr ---
Checking if ds-cts dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/ds/cts/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/ds/cts/Dockerfile: gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/ds/cts/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/bin/config path docker ds idrepo
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/ds/idrepo
--- stderr ---
Checking if ds-idrepo dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/ds/idrepo/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/ds/idrepo/Dockerfile: gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/ds/idrepo/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/bin/config path kustomize base end-user-ui
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/kustomize/base/end-user-ui
--- stderr ---
Checking if end-user-ui kustomize file /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/kustomize/base/end-user-ui/kustomization.yaml needs additional update (custom tag/repo from config.yaml or image name resolution)
Read image line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/kustomize/base/end-user-ui/kustomization.yaml: gcr.io/forgerock-io/platform-enduser-ui/docker-build:7.2.0-a96229d99486b694a65a3d11c19e788ee91055a9
No need to update the image line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/kustomize/base/end-user-ui/kustomization.yaml
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/bin/config path kustomize base login-ui
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/kustomize/base/login-ui
--- stderr ---
Checking if login-ui kustomize file /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/kustomize/base/login-ui/kustomization.yaml needs additional update (custom tag/repo from config.yaml or image name resolution)
Read image line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/kustomize/base/login-ui/kustomization.yaml: gcr.io/forgerock-io/platform-login-ui/docker-build:7.2.0-a96229d99486b694a65a3d11c19e788ee91055a9
No need to update the image line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/kustomize/base/login-ui/kustomization.yaml
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/bin/config path kustomize base admin-ui
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/kustomize/base/admin-ui
--- stderr ---
Checking if admin-ui kustomize file /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/kustomize/base/admin-ui/kustomization.yaml needs additional update (custom tag/repo from config.yaml or image name resolution)
Read image line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/kustomize/base/admin-ui/kustomization.yaml: gcr.io/forgerock-io/platform-admin-ui/docker-build:7.2.0-a96229d99486b694a65a3d11c19e788ee91055a9
No need to update the image line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/kustomize/base/admin-ui/kustomization.yaml
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/bin/config path kustomize overlay small
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/kustomize/overlay/small
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp delete -f /tmp/tmpzcwcytwn
[loop_until]: (max_time=180, interval=5, expected_rc=[0, 1])
[loop_until]: OK (rc = 1)
--- stdout ---
--- stderr ---
Error from server (NotFound): error when deleting "/tmp/tmpzcwcytwn": secrets "sslcert" not found
[loop_until]: kubectl --namespace=xlou-sp apply -f /tmp/tmpzcwcytwn
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secret/sslcert created
--- stderr ---
The following components will be deployed:
- am (AM)
- amster (Amster)
- idm (IDM)
- ds-cts (DS)
- ds-idrepo (DS)
- end-user-ui (EndUserUi)
- login-ui (LoginUi)
- admin-ui (AdminUi)
Run create-secrets.sh to create passwords
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/create-secrets.sh xlou
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
certificate.cert-manager.io/ds-master-cert created
certificate.cert-manager.io/ds-ssl-cert created
issuer.cert-manager.io/selfsigned-issuer created
secretagentconfiguration.secret-agent.secrets.forgerock.io/forgerock-sac created
--- stderr ---
[loop_until]: kubectl --namespace=secret-agent-system wait --for=condition=available deployment --all | grep "condition met"
[loop_until]: (max_time=300, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
deployment.apps/secret-agent-controller-manager condition met
--- stderr ---
[loop_until]: kubectl --namespace=secret-agent-system wait --for=condition=ready pod --all | grep "condition met"
[loop_until]: (max_time=300, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
pod/secret-agent-controller-manager-59fcd58bbc-7lq45 condition met
--- stderr ---
[run_command]: skaffold build --file-output=/mnt/disks/data/xslou/lodestar-fork/ext/pre-built-images/medium.json --default-repo gcr.io/engineeringpit/lodestar-images --profile medium --config=/tmp/tmpr0in3eit --cache-artifacts=false --tag xlou --namespace=xlou
[run_command]: env={'PATH': '/mnt/disks/data/xslou/lodestar-fork/ext/bin:/home/xslou/.local/bin:/home/xslou/bin:~/bin:~/workshop/lodestar-fork/ext/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin', 'CONFIG_PROFILE': 'cdk'}
Generating tags...
- am -> gcr.io/engineeringpit/lodestar-images/am:xlou
- amster -> gcr.io/engineeringpit/lodestar-images/amster:xlou
- idm -> gcr.io/engineeringpit/lodestar-images/idm:xlou
- ds-cts -> gcr.io/engineeringpit/lodestar-images/ds-cts:xlou
- ds-idrepo -> gcr.io/engineeringpit/lodestar-images/ds-idrepo:xlou
- ig -> gcr.io/engineeringpit/lodestar-images/ig:xlou
- ds -> gcr.io/engineeringpit/lodestar-images/ds:xlou
Starting build...
Building [ds]...
Sending build context to Docker daemon 115.2kB
Step 1/11 : FROM gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f: Pulling from forgerock-io/ds/pit1
Digest: sha256:101a0a4902e6edf8e70ee5f7a3ba30e498ac523c29eaff69badc5e22f2f9136c
Status: Image is up to date for gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
 ---> ed865decf122
Step 2/11 : USER root
 ---> Using cache
 ---> 4bdd9adb7b38
Step 3/11 : RUN apt-get update && apt-get install -y --no-install-recommends vim ncat dnsutils
 ---> Using cache
 ---> f0868d2db47c
Step 4/11 : USER forgerock
 ---> Using cache
 ---> 7c1d1df3ee67
Step 5/11 : ENV DS_DATA_DIR /opt/opendj/data
 ---> Using cache
 ---> d9edd8b8d899
Step 6/11 : ENV PEM_KEYS_DIRECTORY "/var/run/secrets/keys/ds"
 ---> Using cache
 ---> 06d762222685
Step 7/11 : ENV PEM_TRUSTSTORE_DIRECTORY "/var/run/secrets/keys/truststore"
 ---> Using cache
 ---> c5e4e5b7bc10
Step 8/11 : COPY --chown=forgerock:root default-scripts /opt/opendj/default-scripts
 ---> Using cache
 ---> 5061cd0b5ede
Step 9/11 : COPY --chown=forgerock:root ldif-ext /opt/opendj/ldif-ext
 ---> Using cache
 ---> 447344f14ce2
Step 10/11 : COPY --chown=forgerock:root *.sh /opt/opendj/
 ---> Using cache
 ---> 7ecb22a7bea1
Step 11/11 : RUN ./ds-setup.sh && rm ./ds-setup.sh && rm -fr ldif-ext
 ---> Using cache
 ---> e759a4968271
Successfully built e759a4968271
Successfully tagged gcr.io/engineeringpit/lodestar-images/ds:xlou
Build [ds] succeeded
Building [ds-cts]...
Sending build context to Docker daemon 78.85kB
Step 1/10 : FROM gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f: Pulling from forgerock-io/ds/pit1
Digest: sha256:101a0a4902e6edf8e70ee5f7a3ba30e498ac523c29eaff69badc5e22f2f9136c
Status: Image is up to date for gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
 ---> ed865decf122
Step 2/10 : USER root
 ---> Using cache
 ---> 4bdd9adb7b38
Step 3/10 : COPY debian-buster-sources.list /etc/apt/sources.list
 ---> Using cache
 ---> 3fab72820015
Step 4/10 : RUN chown -R forgerock:root /opt/opendj
 ---> Using cache
 ---> 2207c68564d3
Step 5/10 : USER forgerock
 ---> Using cache
 ---> b789fa9ceb46
Step 6/10 : COPY --chown=forgerock:root common /opt/opendj/
 ---> Using cache
 ---> ce9a4f7c6ef4
Step 7/10 : COPY --chown=forgerock:root cts /opt/opendj/
 ---> Using cache
 ---> 00e1c82ee168
Step 8/10 : COPY --chown=forgerock:root scripts /opt/opendj/scripts
 ---> Using cache
 ---> 22a62108774d
Step 9/10 : ARG profile_version
 ---> Using cache
 ---> 35c38b644b11
Step 10/10 : RUN bin/setup.sh && bin/relax-security-settings.sh && rm bin/setup.sh bin/relax-security-settings.sh
 ---> Using cache
 ---> 0767c5a3865b
Successfully built 0767c5a3865b
Successfully tagged gcr.io/engineeringpit/lodestar-images/ds-cts:xlou
Build [ds-cts] succeeded
Building [am]...
Sending build context to Docker daemon 4.608kB
Step 1/6 : FROM gcr.io/forgerock-io/am-cdk/pit1:7.3.0-ef7fed73391311a2849509d1598d404ea1347307
7.3.0-ef7fed73391311a2849509d1598d404ea1347307: Pulling from forgerock-io/am-cdk/pit1
Digest: sha256:fc18f7964a93c81f81fda90bac5b7f92fa4c4eab374df7f243108fd7297d28a3
Status: Downloaded newer image for gcr.io/forgerock-io/am-cdk/pit1:7.3.0-ef7fed73391311a2849509d1598d404ea1347307
 ---> 7b6390286ddb
Step 2/6 : ARG CONFIG_PROFILE=cdk
 ---> Running in 5fb024d5ff73
Removing intermediate container 5fb024d5ff73
 ---> 4d9f4ef0b7cf
Step 3/6 : RUN echo "\033[0;36m*** Building '${CONFIG_PROFILE}' profile ***\033[0m"
 ---> Running in 1fa928de85b8
*** Building 'cdk' profile ***
Removing intermediate container 1fa928de85b8
 ---> 54e87ae10c88
Step 4/6 : COPY --chown=forgerock:root config-profiles/${CONFIG_PROFILE}/ /home/forgerock/openam/
 ---> 98bc293f9f16
Step 5/6 : COPY --chown=forgerock:root *.sh /home/forgerock/
 ---> 1f40aa47b9b9
Step 6/6 : WORKDIR /home/forgerock
 ---> Running in f428dc079ddb
Removing intermediate container f428dc079ddb
 ---> ffeec671acc0
Successfully built ffeec671acc0
Successfully tagged gcr.io/engineeringpit/lodestar-images/am:xlou
The push refers to repository [gcr.io/engineeringpit/lodestar-images/am]
ed976c114df1: Pushed
74020c762e90: Pushed
xlou: digest: sha256:d1dae5f855f87cdc545f2286abe3fe582de04d9ec0ba87640e41b5dde1631ea8 size: 7221
Build [am] succeeded
Building [amster]...
Sending build context to Docker daemon 54.27kB
Step 1/14 : FROM gcr.io/forgerock-io/amster/pit1:7.3.0-ef7fed73391311a2849509d1598d404ea1347307
7.3.0-ef7fed73391311a2849509d1598d404ea1347307: Pulling from forgerock-io/amster/pit1
Digest: sha256:fe255c34b34f702d121c876885ff61a775fc148844acbb1b4753ffe354723aa2
Status: Downloaded newer image for gcr.io/forgerock-io/amster/pit1:7.3.0-ef7fed73391311a2849509d1598d404ea1347307
 ---> a838c2dce118
Step 2/14 : USER root
 ---> Running in c9bea292003b
Removing intermediate container c9bea292003b
 ---> 941ce4dcc88d
Step 3/14 : COPY debian-buster-sources.list /etc/apt/sources.list
 ---> ecb7bb35e1f5
Step 4/14 : ENV DEBIAN_FRONTEND=noninteractive
 ---> Running in 6e1fd2c3c21c
Removing intermediate container 6e1fd2c3c21c
 ---> a60cf09874e0
Step 5/14 : ENV APT_OPTS="--no-install-recommends --yes"
 ---> Running in ea1247128fdb
Removing intermediate container ea1247128fdb
 ---> 6a6d0423d049
Step 6/14 : RUN apt-get update && apt-get install -y openldap-utils jq inotify-tools && apt-get clean && rm -r /var/lib/apt/lists /var/cache/apt/archives
 ---> Running in 7a6542a1b6be
Hit:1 http://deb.debian.org/debian buster InRelease
Get:2 http://deb.debian.org/debian buster-updates InRelease [56.6 kB]
Get:3 http://security.debian.org/debian-security buster/updates InRelease [34.8 kB]
Get:4 http://security.debian.org/debian-security buster/updates/main amd64 Packages [338 kB]
Fetched 429 kB in 1s (811 kB/s)
Reading package lists...
Building dependency tree...
Reading state information...
The following additional packages will be installed: libinotifytools0 libjq1 libonig5
Suggested packages: libsasl2-modules-gssapi-mit | libsasl2-modules-gssapi-heimdal
The following NEW packages will be installed: inotify-tools jq ldap-utils libinotifytools0 libjq1 libonig5
0 upgraded, 6 newly installed, 0 to remove and 0 not upgraded.
Need to get 598 kB of archives.
After this operation, 1945 kB of additional disk space will be used.
Get:1 http://security.debian.org/debian-security buster/updates/main amd64 ldap-utils amd64 2.4.47+dfsg-3+deb10u7 [199 kB] Get:2 http://deb.debian.org/debian buster/main amd64 libinotifytools0 amd64 3.14-7 [18.7 kB] Get:3 http://deb.debian.org/debian buster/main amd64 inotify-tools amd64 3.14-7 [25.5 kB] Get:4 http://deb.debian.org/debian buster/main amd64 libonig5 amd64 6.9.1-1 [171 kB] Get:5 http://deb.debian.org/debian buster/main amd64 libjq1 amd64 1.5+dfsg-2+b1 [124 kB] Get:6 http://deb.debian.org/debian buster/main amd64 jq amd64 1.5+dfsg-2+b1 [59.4 kB] debconf: delaying package configuration, since apt-utils is not installed Fetched 598 kB in 0s (10.1 MB/s) Selecting previously unselected package libinotifytools0:amd64. (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 9503 files and directories currently installed.) Preparing to unpack .../0-libinotifytools0_3.14-7_amd64.deb ... Unpacking libinotifytools0:amd64 (3.14-7) ... Selecting previously unselected package inotify-tools. Preparing to unpack .../1-inotify-tools_3.14-7_amd64.deb ... Unpacking inotify-tools (3.14-7) ... Selecting previously unselected package libonig5:amd64. Preparing to unpack .../2-libonig5_6.9.1-1_amd64.deb ... Unpacking libonig5:amd64 (6.9.1-1) ... Selecting previously unselected package libjq1:amd64. Preparing to unpack .../3-libjq1_1.5+dfsg-2+b1_amd64.deb ... Unpacking libjq1:amd64 (1.5+dfsg-2+b1) ... Selecting previously unselected package jq. 
Preparing to unpack .../4-jq_1.5+dfsg-2+b1_amd64.deb ... Unpacking jq (1.5+dfsg-2+b1) ... Selecting previously unselected package ldap-utils. Preparing to unpack .../5-ldap-utils_2.4.47+dfsg-3+deb10u7_amd64.deb ... Unpacking ldap-utils (2.4.47+dfsg-3+deb10u7) ... Setting up libinotifytools0:amd64 (3.14-7) ... Setting up ldap-utils (2.4.47+dfsg-3+deb10u7) ... Setting up inotify-tools (3.14-7) ... Setting up libonig5:amd64 (6.9.1-1) ... Setting up libjq1:amd64 (1.5+dfsg-2+b1) ... Setting up jq (1.5+dfsg-2+b1) ... Processing triggers for libc-bin (2.28-10+deb10u1) ... Removing intermediate container 7a6542a1b6be ---> 223ccce3ec41 Step 7/14 : USER forgerock ---> Running in 27b79959de1a Removing intermediate container 27b79959de1a ---> 79bb789a9b1f Step 8/14 : ENV SERVER_URI /am ---> Running in b1cd96aabb82 Removing intermediate container b1cd96aabb82 ---> dec1a702526e Step 9/14 : ARG CONFIG_PROFILE=cdk ---> Running in 29a04a6a6a9d Removing intermediate container 29a04a6a6a9d ---> e58e889f7da2 Step 10/14 : RUN echo "\033[0;36m*** Building '${CONFIG_PROFILE}' profile ***\033[0m" ---> Running in e1be6adab674 *** Building 'cdk' profile *** Removing intermediate container e1be6adab674 ---> 8cdd16e1262c Step 11/14 : COPY --chown=forgerock:root config-profiles/${CONFIG_PROFILE}/ /opt/amster ---> 0f4172b50769 Step 12/14 : COPY --chown=forgerock:root scripts /opt/amster ---> e141f979ed25 Step 13/14 : RUN chmod 777 /opt/amster ---> Running in daf5739ac50d Removing intermediate container daf5739ac50d ---> a164deca8efd Step 14/14 : ENTRYPOINT [ "/opt/amster/docker-entrypoint.sh" ] ---> Running in aac21bce1eec Removing intermediate container aac21bce1eec ---> 32b29cf05210 Successfully built 32b29cf05210 Successfully tagged gcr.io/engineeringpit/lodestar-images/amster:xlou The push refers to repository [gcr.io/engineeringpit/lodestar-images/amster] 7660af58df90: Preparing ac2bc589496c: Preparing f3adea28a657: Preparing 3b028f8409e3: Preparing b3e5e415f868: Preparing 22bb4cd12094: 
Preparing 00bf1426d6cd: Preparing b6a1fd8410a1: Preparing bfb746400e49: Preparing 178d3db39985: Preparing 08cc940b3cb3: Preparing d3bd8301a2f6: Preparing 194cc08cbea2: Preparing 6db889e47719: Preparing 735956b91a18: Preparing 22bb4cd12094: Waiting 00bf1426d6cd: Waiting b6a1fd8410a1: Waiting bfb746400e49: Waiting 178d3db39985: Waiting 08cc940b3cb3: Waiting d3bd8301a2f6: Waiting 194cc08cbea2: Waiting 6db889e47719: Waiting 735956b91a18: Waiting b3e5e415f868: Pushed 7660af58df90: Pushed 22bb4cd12094: Layer already exists ac2bc589496c: Pushed f3adea28a657: Pushed 00bf1426d6cd: Layer already exists b6a1fd8410a1: Layer already exists 08cc940b3cb3: Layer already exists 3b028f8409e3: Pushed bfb746400e49: Layer already exists 178d3db39985: Layer already exists 194cc08cbea2: Layer already exists d3bd8301a2f6: Layer already exists 6db889e47719: Layer already exists 735956b91a18: Layer already exists xlou: digest: sha256:3b5054f14680a4f43b57bc04f169f27bf192921c32620e66da85f5a30d5a6533 size: 3465 Build [amster] succeeded Building [idm]... 
Sending build context to Docker daemon 312.8kB Step 1/8 : FROM gcr.io/forgerock-io/idm-cdk/pit1:7.3.0-dcec447a222ffbb44b634a8b852f05d754ceb486 7.3.0-dcec447a222ffbb44b634a8b852f05d754ceb486: Pulling from forgerock-io/idm-cdk/pit1 Digest: sha256:5aa52d043b5c1d2b135e9a9506298560449856d1b7532645a910ce267f863489 Status: Image is up to date for gcr.io/forgerock-io/idm-cdk/pit1:7.3.0-dcec447a222ffbb44b634a8b852f05d754ceb486 ---> 6ac69b27d8dd Step 2/8 : COPY debian-buster-sources.list /etc/apt/sources.list ---> Using cache ---> 96400f1503eb Step 3/8 : RUN rm -f bundle/org.apache.felix.webconsole*.jar && rm -f bundle/openidm-felix-webconsole-*.jar ---> Using cache ---> 200679f871d2 Step 4/8 : ENV JAVA_OPTS -XX:MaxRAMPercentage=65 -XX:InitialRAMPercentage=65 -XX:MaxTenuringThreshold=1 -Djava.security.egd=file:/dev/urandom -XshowSettings:vm -XX:+PrintFlagsFinal ---> Using cache ---> dc00e22f62e4 Step 5/8 : ARG CONFIG_PROFILE=cdk ---> Using cache ---> dace62f6a608 Step 6/8 : RUN echo "\033[0;36m*** Building '${CONFIG_PROFILE}' profile ***\033[0m" ---> Using cache ---> 023ab9877022 Step 7/8 : COPY --chown=forgerock:root config-profiles/${CONFIG_PROFILE}/ /opt/openidm ---> Using cache ---> fa873ccb6cfd Step 8/8 : COPY --chown=forgerock:root . /opt/openidm ---> Using cache ---> cb5b3348ea34 Successfully built cb5b3348ea34 Successfully tagged gcr.io/engineeringpit/lodestar-images/idm:xlou Build [idm] succeeded Building [ig]... 
Sending build context to Docker daemon 29.18kB Step 1/6 : FROM gcr.io/forgerock-io/ig/pit1:7.3.0-latest-postcommit 7.3.0-latest-postcommit: Pulling from forgerock-io/ig/pit1 Digest: sha256:4818c7cd5c625cc2d0ed7c354ec4ece0a74a0871698207aea51b9146b4aa1998 Status: Image is up to date for gcr.io/forgerock-io/ig/pit1:7.3.0-latest-postcommit ---> 3c4055bd0013 Step 2/6 : COPY debian-buster-sources.list /etc/apt/sources.list ---> Using cache ---> 0396dcc74c88 Step 3/6 : ARG CONFIG_PROFILE=cdk ---> Using cache ---> 23c862ae51c9 Step 4/6 : RUN echo "\033[0;36m*** Building '${CONFIG_PROFILE}' profile ***\033[0m" ---> Using cache ---> 8f2cd79410ee Step 5/6 : COPY --chown=forgerock:root config-profiles/${CONFIG_PROFILE}/ /var/ig ---> Using cache ---> 393b19b8d305 Step 6/6 : COPY --chown=forgerock:root . /var/ig ---> Using cache ---> f3307cdfd563 Successfully built f3307cdfd563 Successfully tagged gcr.io/engineeringpit/lodestar-images/ig:xlou Build [ig] succeeded Building [ds-idrepo]... Sending build context to Docker daemon 117.8kB Step 1/10 : FROM gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f 7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f: Pulling from forgerock-io/ds/pit1 Digest: sha256:101a0a4902e6edf8e70ee5f7a3ba30e498ac523c29eaff69badc5e22f2f9136c Status: Image is up to date for gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f ---> ed865decf122 Step 2/10 : COPY debian-buster-sources.list /etc/apt/sources.list ---> Using cache ---> 726223d8ca2c Step 3/10 : WORKDIR /opt/opendj ---> Using cache ---> 78e0c668d78e Step 4/10 : COPY --chown=forgerock:root common /opt/opendj/ ---> Using cache ---> b8fe4095700e Step 5/10 : COPY --chown=forgerock:root idrepo /opt/opendj/ ---> Using cache ---> 7248f41593d3 Step 6/10 : COPY --chown=forgerock:root scripts /opt/opendj/scripts ---> Using cache ---> 584135303781 Step 7/10 : COPY --chown=forgerock:root uma /opt/opendj/uma ---> Using cache ---> ea4266be4a5b Step 8/10 : COPY 
--chown=forgerock:root idrepo/*.ldif /var/tmp/ ---> Using cache ---> ced241ac3480 Step 9/10 : RUN chmod +w template/setup-profiles/AM/config/6.5/base-entries.ldif && cat scripts/external-am-datastore.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_audit.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_pending_requests.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_resource_set_labels.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_resource_sets.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat /var/tmp/alpha_bravo.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && chmod +w template/setup-profiles/AM/identity-store/7.0/base-entries.ldif && cat /var/tmp/orgs.ldif >> template/setup-profiles/AM/identity-store/7.0/base-entries.ldif && rm /var/tmp/*ldif ---> Using cache ---> 2c2d57ba3888 Step 10/10 : RUN bin/setup.sh && bin/relax-security-settings.sh && rm bin/setup.sh bin/relax-security-settings.sh ---> Using cache ---> a2073c0ed261 Successfully built a2073c0ed261 Successfully tagged gcr.io/engineeringpit/lodestar-images/ds-idrepo:xlou Build [ds-idrepo] succeeded There is a new version (1.39.1) of Skaffold available. Download it from: https://github.com/GoogleContainerTools/skaffold/releases/tag/v1.39.1 Help improve Skaffold with our 2-minute anonymous survey: run 'skaffold survey' To help improve the quality of this product, we collect anonymized usage data for details on what is tracked and how we use this data visit . 
This data is handled in accordance with our privacy policy You may choose to opt out of this collection by running the following command: skaffold config set --global collect-metrics false
[run_command]: skaffold deploy --build-artifacts=/mnt/disks/data/xslou/lodestar-fork/ext/pre-built-images/medium.json --profile medium --config=/tmp/tmpjva5i7yz --label skaffold.dev/profile=medium --label skaffold.dev/run-id=xlou --force=false --status-check=true --namespace=xlou
Tags used in deployment:
 - am -> gcr.io/engineeringpit/lodestar-images/am:xlou@sha256:d1dae5f855f87cdc545f2286abe3fe582de04d9ec0ba87640e41b5dde1631ea8
 - amster -> gcr.io/engineeringpit/lodestar-images/amster:xlou@sha256:3b5054f14680a4f43b57bc04f169f27bf192921c32620e66da85f5a30d5a6533
 - idm -> gcr.io/engineeringpit/lodestar-images/idm:xlou@sha256:d78714e1399885eb05033d8f25c14ac16d867790252775e15b626720c5321d69
 - ds-cts -> gcr.io/engineeringpit/lodestar-images/ds-cts:xlou@sha256:49535d0bf97efec6e4cd2f538bd223e04416c165aa333cf90572655d9202d20a
 - ds-idrepo -> gcr.io/engineeringpit/lodestar-images/ds-idrepo:xlou@sha256:522202e08f884938c837ab58634f5b1b8ff2b77c022b258a0ffbebb943578fc8
 - ig -> gcr.io/engineeringpit/lodestar-images/ig:xlou@sha256:664c2d0f3c9b33bfc0567b4f0bfb1508d7af74a16f859576a5b27eeae7591257
 - ds -> gcr.io/engineeringpit/lodestar-images/ds:xlou@sha256:7452c72b763a7bbfb56911a59dddae303358c001ae1f464a4c9ef9be885a39ac
Starting deploy...
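The medium.json file passed to `--build-artifacts` above is not shown in this log; by the Skaffold v1 convention it is the JSON that `skaffold build --file-output` writes, mapping each artifact to its pushed tag-plus-digest. A minimal hand-written sketch of that assumed format, reusing two of the tags from this run (the /tmp path is a placeholder, and the real file presumably lists all seven images):

```shell
# Sketch of a Skaffold v1 build-artifacts file, as consumed by
# `skaffold deploy --build-artifacts=...` (format assumed from the
# output of `skaffold build --file-output`; path is a placeholder).
cat > /tmp/pre-built-images.json <<'EOF'
{
  "builds": [
    {
      "imageName": "gcr.io/engineeringpit/lodestar-images/am",
      "tag": "gcr.io/engineeringpit/lodestar-images/am:xlou@sha256:d1dae5f855f87cdc545f2286abe3fe582de04d9ec0ba87640e41b5dde1631ea8"
    },
    {
      "imageName": "gcr.io/engineeringpit/lodestar-images/idm",
      "tag": "gcr.io/engineeringpit/lodestar-images/idm:xlou@sha256:d78714e1399885eb05033d8f25c14ac16d867790252775e15b626720c5321d69"
    }
  ]
}
EOF

# The deploy step would then be (needs a live cluster, not run here):
# skaffold deploy --build-artifacts=/tmp/pre-built-images.json --namespace=xlou
```

Passing digests this way lets the deploy pin exactly the images that were just pushed, rather than re-resolving a mutable tag like `:xlou`.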
 - configmap/idm created
 - configmap/idm-logging-properties created
 - configmap/platform-config created
 - secret/cloud-storage-credentials-cts created
 - secret/cloud-storage-credentials-idrepo created
 - service/admin-ui created
 - service/am created
 - service/ds-cts created
 - service/ds-idrepo created
 - service/end-user-ui created
 - service/idm created
 - service/login-ui created
 - deployment.apps/admin-ui created
 - deployment.apps/am created
 - deployment.apps/end-user-ui created
 - deployment.apps/idm created
 - deployment.apps/login-ui created
 - statefulset.apps/ds-cts created
 - statefulset.apps/ds-idrepo created
 - poddisruptionbudget.policy/am created
 - poddisruptionbudget.policy/ds-idrepo created
 - poddisruptionbudget.policy/idm created
 - poddisruptionbudget.policy/ig created
 - Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
 - poddisruptionbudget.policy/ds-cts created
 - job.batch/amster created
 - job.batch/ldif-importer created
 - ingress.networking.k8s.io/forgerock created
 - ingress.networking.k8s.io/ig-web created
Waiting for deployments to stabilize...
 - xlou:deployment/admin-ui is ready. [6/7 deployment(s) still pending]
 - xlou:deployment/am: waiting for init container fbc-init to start
 - xlou:pod/am-59f674c5d4-r5k7h: waiting for init container fbc-init to start
 - xlou:pod/am-59f674c5d4-4jndl: waiting for init container fbc-init to start
 - xlou:pod/am-59f674c5d4-l64fb: waiting for init container fbc-init to start
 - xlou:deployment/end-user-ui: creating container end-user-ui
 - xlou:pod/end-user-ui-6fd8b6648d-6ns7t: creating container end-user-ui
 - xlou:deployment/idm: waiting for rollout to finish: 0 of 2 updated replicas are available...
 - xlou:deployment/login-ui: creating container login-ui
 - xlou:pod/login-ui-658fcbc7d8-5pl8f: creating container login-ui
 - xlou:statefulset/ds-cts: FailedMount: MountVolume.SetUp failed for volume "passwords" : failed to sync secret cache: timed out waiting for the condition
 - xlou:pod/ds-cts-0: FailedMount: MountVolume.SetUp failed for volume "passwords" : failed to sync secret cache: timed out waiting for the condition
 - xlou:statefulset/ds-idrepo: FailedMount: MountVolume.SetUp failed for volume "secrets" : failed to sync secret cache: timed out waiting for the condition
 - xlou:pod/ds-idrepo-0: FailedMount: MountVolume.SetUp failed for volume "secrets" : failed to sync secret cache: timed out waiting for the condition
 - xlou:deployment/login-ui: waiting for rollout to finish: 0 of 1 updated replicas are available...
 - xlou:deployment/login-ui is ready. [5/7 deployment(s) still pending]
 - xlou:deployment/end-user-ui is ready. [4/7 deployment(s) still pending]
 - xlou:deployment/am: Startup probe failed: Get "http://10.0.8.8:8080/am/json/health/live": dial tcp 10.0.8.8:8080: connect: connection refused
 - xlou:pod/am-59f674c5d4-4jndl: Startup probe failed: Get "http://10.0.8.8:8080/am/json/health/live": dial tcp 10.0.8.8:8080: connect: connection refused
 - xlou:pod/am-59f674c5d4-l64fb: Startup probe failed: Get "http://10.0.6.24:8080/am/json/health/live": dial tcp 10.0.6.24:8080: connect: connection refused
 - xlou:statefulset/ds-cts: waiting for init container initialize to start
 - xlou:pod/ds-cts-1: waiting for init container initialize to start
 - xlou:statefulset/ds-idrepo:
 - xlou:statefulset/ds-cts: unable to determine current service state of pod "ds-cts-2"
 - xlou:pod/ds-cts-2: unable to determine current service state of pod "ds-cts-2"
 - xlou:statefulset/ds-idrepo: unable to determine current service state of pod "ds-idrepo-2"
 - xlou:pod/ds-idrepo-2: unable to determine current service state of pod "ds-idrepo-2"
 - xlou:deployment/idm is ready. [3/7 deployment(s) still pending]
 - xlou:deployment/am: Startup probe failed: Get "http://10.0.8.8:8080/am/json/health/live": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
 - xlou:pod/am-59f674c5d4-4jndl: Startup probe failed: Get "http://10.0.8.8:8080/am/json/health/live": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
 - xlou:pod/am-59f674c5d4-l64fb: Startup probe failed: Get "http://10.0.6.24:8080/am/json/health/live": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
 - xlou:pod/am-59f674c5d4-r5k7h: Startup probe failed: Get "http://10.0.7.22:8080/am/json/health/live": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
 - xlou:statefulset/ds-cts is ready. [2/7 deployment(s) still pending]
 - xlou:deployment/am is ready. [1/7 deployment(s) still pending]
 - xlou:statefulset/ds-idrepo is ready.
Deployments stabilized in 2 minutes 2.03 seconds
There is a new version (1.39.1) of Skaffold available. Download it from: https://github.com/GoogleContainerTools/skaffold/releases/tag/v1.39.1
Help improve Skaffold with our 2-minute anonymous survey: run 'skaffold survey'
To help improve the quality of this product, we collect anonymized usage data for details on what is tracked and how we use this data visit . 
This data is handled in accordance with our privacy policy You may choose to opt out of this collection by running the following command: skaffold config set --global collect-metrics false ****************************** Initializing component pods for AM ****************************** --------------------- Get expected number of pods --------------------- [loop_until]: kubectl --namespace=xlou get deployments -l app=am -o jsonpath={.items[*].spec.replicas} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 3 --- stderr --- ---------------------------- Get pod list ---------------------------- [loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app=am -o jsonpath={.items[*].metadata.name}` | grep 3 [loop_until]: (max_time=180, interval=10, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found [loop_until]: OK (rc = 0) --- stdout --- am-59f674c5d4-4jndl am-59f674c5d4-l64fb am-59f674c5d4-r5k7h --- stderr --- -------------- Check pod am-59f674c5d4-4jndl is running -------------- [loop_until]: kubectl --namespace=xlou get pods am-59f674c5d4-4jndl -o=jsonpath={.status.phase} | grep "Running" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Running --- stderr --- [loop_until]: kubectl --namespace=xlou get pods am-59f674c5d4-4jndl -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- true --- stderr --- [loop_until]: kubectl --namespace=xlou get pod am-59f674c5d4-4jndl -o jsonpath={.status.startTime} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2022-08-04T19:32:09Z --- stderr --- ------- Check pod 
am-59f674c5d4-4jndl filesystem is accessible ------- [loop_until]: kubectl --namespace=xlou exec am-59f674c5d4-4jndl -c openam -- ls / | grep "bin" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var --- stderr --- ------------- Check pod am-59f674c5d4-4jndl restart count ------------- [loop_until]: kubectl --namespace=xlou get pod am-59f674c5d4-4jndl -o jsonpath={.status.containerStatuses[*].restartCount} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 0 --- stderr --- Pod am-59f674c5d4-4jndl has been restarted 0 times. -------------- Check pod am-59f674c5d4-l64fb is running -------------- [loop_until]: kubectl --namespace=xlou get pods am-59f674c5d4-l64fb -o=jsonpath={.status.phase} | grep "Running" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Running --- stderr --- [loop_until]: kubectl --namespace=xlou get pods am-59f674c5d4-l64fb -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- true --- stderr --- [loop_until]: kubectl --namespace=xlou get pod am-59f674c5d4-l64fb -o jsonpath={.status.startTime} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2022-08-04T19:32:09Z --- stderr --- ------- Check pod am-59f674c5d4-l64fb filesystem is accessible ------- [loop_until]: kubectl --namespace=xlou exec am-59f674c5d4-l64fb -c openam -- ls / | grep "bin" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected 
pattern found [loop_until]: OK (rc = 0) --- stdout --- bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var --- stderr --- ------------- Check pod am-59f674c5d4-l64fb restart count ------------- [loop_until]: kubectl --namespace=xlou get pod am-59f674c5d4-l64fb -o jsonpath={.status.containerStatuses[*].restartCount} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 0 --- stderr --- Pod am-59f674c5d4-l64fb has been restarted 0 times. -------------- Check pod am-59f674c5d4-r5k7h is running -------------- [loop_until]: kubectl --namespace=xlou get pods am-59f674c5d4-r5k7h -o=jsonpath={.status.phase} | grep "Running" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Running --- stderr --- [loop_until]: kubectl --namespace=xlou get pods am-59f674c5d4-r5k7h -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- true --- stderr --- [loop_until]: kubectl --namespace=xlou get pod am-59f674c5d4-r5k7h -o jsonpath={.status.startTime} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2022-08-04T19:32:09Z --- stderr --- ------- Check pod am-59f674c5d4-r5k7h filesystem is accessible ------- [loop_until]: kubectl --namespace=xlou exec am-59f674c5d4-r5k7h -c openam -- ls / | grep "bin" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var --- stderr --- ------------- Check pod am-59f674c5d4-r5k7h restart count ------------- [loop_until]: kubectl 
--namespace=xlou get pod am-59f674c5d4-r5k7h -o jsonpath={.status.containerStatuses[*].restartCount} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 0 --- stderr --- Pod am-59f674c5d4-r5k7h has been restarted 0 times. **************************** Initializing component pods for AMSTER **************************** ---------------------------- Get pod list ---------------------------- [loop_until]: kubectl --namespace=xlou --field-selector status.phase!=Failed get pods -l app=amster -o jsonpath={.items[*].metadata.name} [loop_until]: (max_time=180, interval=10, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- amster-p6hrs --- stderr --- ***************************** Initializing component pods for IDM ***************************** --------------------- Get expected number of pods --------------------- [loop_until]: kubectl --namespace=xlou get deployment -l app=idm -o jsonpath={.items[*].spec.replicas} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2 --- stderr --- ---------------------------- Get pod list ---------------------------- [loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app=idm -o jsonpath={.items[*].metadata.name}` | grep 2 [loop_until]: (max_time=180, interval=10, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found [loop_until]: OK (rc = 0) --- stdout --- idm-55dc5c6786-lhjcm idm-55dc5c6786-r5895 --- stderr --- -------------- Check pod idm-55dc5c6786-lhjcm is running -------------- [loop_until]: kubectl --namespace=xlou get pods idm-55dc5c6786-lhjcm -o=jsonpath={.status.phase} | grep "Running" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Running --- stderr --- [loop_until]: kubectl --namespace=xlou get pods idm-55dc5c6786-lhjcm 
-o=jsonpath={.status.containerStatuses[*].ready} | grep "true" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- true --- stderr --- [loop_until]: kubectl --namespace=xlou get pod idm-55dc5c6786-lhjcm -o jsonpath={.status.startTime} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2022-08-04T19:32:09Z --- stderr --- ------- Check pod idm-55dc5c6786-lhjcm filesystem is accessible ------- [loop_until]: kubectl --namespace=xlou exec idm-55dc5c6786-lhjcm -c openidm -- ls / | grep "bin" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Dockerfile.java-11 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var --- stderr --- ------------ Check pod idm-55dc5c6786-lhjcm restart count ------------ [loop_until]: kubectl --namespace=xlou get pod idm-55dc5c6786-lhjcm -o jsonpath={.status.containerStatuses[*].restartCount} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 0 --- stderr --- Pod idm-55dc5c6786-lhjcm has been restarted 0 times. 
-------------- Check pod idm-55dc5c6786-r5895 is running -------------- [loop_until]: kubectl --namespace=xlou get pods idm-55dc5c6786-r5895 -o=jsonpath={.status.phase} | grep "Running" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Running --- stderr --- [loop_until]: kubectl --namespace=xlou get pods idm-55dc5c6786-r5895 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- true --- stderr --- [loop_until]: kubectl --namespace=xlou get pod idm-55dc5c6786-r5895 -o jsonpath={.status.startTime} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2022-08-04T19:32:09Z --- stderr --- ------- Check pod idm-55dc5c6786-r5895 filesystem is accessible ------- [loop_until]: kubectl --namespace=xlou exec idm-55dc5c6786-r5895 -c openidm -- ls / | grep "bin" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Dockerfile.java-11 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var --- stderr --- ------------ Check pod idm-55dc5c6786-r5895 restart count ------------ [loop_until]: kubectl --namespace=xlou get pod idm-55dc5c6786-r5895 -o jsonpath={.status.containerStatuses[*].restartCount} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 0 --- stderr --- Pod idm-55dc5c6786-r5895 has been restarted 0 times. 
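Every check in this task goes through the framework's `[loop_until]` helper: run a command, and if it fails (or the expected pattern is missing), retry every `interval` seconds until `max_time` elapses. A hypothetical plain-bash re-implementation of that pattern (`loop_until` here is my own sketch, not pyrock's actual code):

```shell
#!/usr/bin/env bash
# Hypothetical re-implementation of the [loop_until] pattern seen in
# this log: retry a command every $interval seconds until it exits 0
# or $max_time seconds have elapsed.
loop_until() {
  local max_time=$1 interval=$2
  shift 2
  local start=$SECONDS
  until "$@"; do
    if (( SECONDS - start >= max_time )); then
      echo "loop_until: timed out after ${max_time}s: $*" >&2
      return 1
    fi
    sleep "$interval"
  done
}

# Usage mirroring the readiness check above (needs a live cluster,
# shown for illustration only):
# loop_until 360 5 sh -c \
#   'kubectl --namespace=xlou get pods idm-55dc5c6786-r5895 -o=jsonpath={.status.phase} | grep -q Running'
```

The real helper additionally matches the exit code against `expected_rc` and an expected output pattern; this sketch only retries on non-zero exit status.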
**************************** Initializing component pods for DS-CTS ****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-cts -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app=ds-cts -o jsonpath={.items[*].metadata.name}` | grep 3
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
ds-cts-0 ds-cts-1 ds-cts-2
--- stderr ---
-------------------- Check pod ds-cts-0 is running --------------------
[loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-cts-0 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2022-08-04T19:32:14Z
--- stderr ---
------------- Check pod ds-cts-0 filesystem is accessible -------------
[loop_until]: kubectl --namespace=xlou exec ds-cts-0 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
--- stderr ---
------------------ Check pod ds-cts-0 restart count ------------------
[loop_until]: kubectl --namespace=xlou get pod ds-cts-0 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-cts-0 has been restarted 0 times.
-------------------- Check pod ds-cts-1 is running --------------------
[loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-cts-1 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2022-08-04T19:32:46Z
--- stderr ---
------------- Check pod ds-cts-1 filesystem is accessible -------------
[loop_until]: kubectl --namespace=xlou exec ds-cts-1 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
--- stderr ---
------------------ Check pod ds-cts-1 restart count ------------------
[loop_until]: kubectl --namespace=xlou get pod ds-cts-1 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-cts-1 has been restarted 0 times.
-------------------- Check pod ds-cts-2 is running --------------------
[loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-cts-2 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2022-08-04T19:33:18Z
--- stderr ---
------------- Check pod ds-cts-2 filesystem is accessible -------------
[loop_until]: kubectl --namespace=xlou exec ds-cts-2 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
--- stderr ---
------------------ Check pod ds-cts-2 restart count ------------------
[loop_until]: kubectl --namespace=xlou get pod ds-cts-2 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-cts-2 has been restarted 0 times.
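The `[loop_until]` entries above poll a kubectl command every `interval` seconds until the expected output appears or `max_time` elapses. A minimal Python sketch of that retry pattern, assuming only the semantics visible in the log (the function name and signature are illustrative, not the actual pyrock harness API):

```python
import time

def loop_until(check, max_time=180, interval=5):
    """Poll `check` until it returns truthy or `max_time` seconds elapse.

    `check` stands in for a shell probe such as
    `kubectl get pods ds-cts-0 -o jsonpath={.status.phase} | grep Running`.
    Returns True on success, False on timeout.
    """
    deadline = time.monotonic() + max_time
    while True:
        if check():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)
```

With a probe that only succeeds on its third call, `loop_until(probe, max_time=10, interval=0)` returns True after three attempts, matching the retry lines in the cleanup step above.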
************************** Initializing component pods for DS-IDREPO **************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-idrepo -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app=ds-idrepo -o jsonpath={.items[*].metadata.name}` | grep 3
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
ds-idrepo-0 ds-idrepo-1 ds-idrepo-2
--- stderr ---
------------------ Check pod ds-idrepo-0 is running ------------------
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2022-08-04T19:32:14Z
--- stderr ---
----------- Check pod ds-idrepo-0 filesystem is accessible -----------
[loop_until]: kubectl --namespace=xlou exec ds-idrepo-0 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
--- stderr ---
----------------- Check pod ds-idrepo-0 restart count -----------------
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-idrepo-0 has been restarted 0 times.
------------------ Check pod ds-idrepo-1 is running ------------------
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2022-08-04T19:32:55Z
--- stderr ---
----------- Check pod ds-idrepo-1 filesystem is accessible -----------
[loop_until]: kubectl --namespace=xlou exec ds-idrepo-1 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
--- stderr ---
----------------- Check pod ds-idrepo-1 restart count -----------------
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-idrepo-1 has been restarted 0 times.
------------------ Check pod ds-idrepo-2 is running ------------------
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2022-08-04T19:33:36Z
--- stderr ---
----------- Check pod ds-idrepo-2 filesystem is accessible -----------
[loop_until]: kubectl --namespace=xlou exec ds-idrepo-2 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
--- stderr ---
----------------- Check pod ds-idrepo-2 restart count -----------------
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-idrepo-2 has been restarted 0 times.
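The "Get pod list" step above counts whitespace-separated pod names with `awk -F" " "{print NF}"` and greps for the replica count reported by the StatefulSet. The same comparison in Python, using the pod names from the output above (the helper name is hypothetical, not part of the harness):

```python
def pods_match_replicas(jsonpath_output: str, expected: int) -> bool:
    """Equivalent of: awk -F" " "{print NF}" <<< "$names" | grep <expected>

    `jsonpath_output` is the result of
    kubectl get pods -l app=ds-idrepo -o jsonpath={.items[*].metadata.name},
    a single space-separated line of pod names.
    """
    return len(jsonpath_output.split()) == expected

# Names taken verbatim from the log output above.
names = "ds-idrepo-0 ds-idrepo-1 ds-idrepo-2"
```

Note one subtlety the awk pipeline shares with this sketch: `grep 3` would also match a count of 13 or 30, whereas the integer comparison here is exact.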
************************* Initializing component pods for END-USER-UI *************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].metadata.name}` | grep 1
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
end-user-ui-6fd8b6648d-6ns7t
--- stderr ---
---------- Check pod end-user-ui-6fd8b6648d-6ns7t is running ----------
[loop_until]: kubectl --namespace=xlou get pods end-user-ui-6fd8b6648d-6ns7t -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods end-user-ui-6fd8b6648d-6ns7t -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod end-user-ui-6fd8b6648d-6ns7t -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2022-08-04T19:32:09Z
--- stderr ---
--- Check pod end-user-ui-6fd8b6648d-6ns7t filesystem is accessible ---
[loop_until]: kubectl --namespace=xlou exec end-user-ui-6fd8b6648d-6ns7t -c end-user-ui -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin boot dev docker-entrypoint.d docker-entrypoint.sh entrypoint.sh etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var variable_replacement.sh
--- stderr ---
-------- Check pod end-user-ui-6fd8b6648d-6ns7t restart count --------
[loop_until]: kubectl --namespace=xlou get pod end-user-ui-6fd8b6648d-6ns7t -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod end-user-ui-6fd8b6648d-6ns7t has been restarted 0 times.
*************************** Initializing component pods for LOGIN-UI ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].metadata.name}` | grep 1
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
login-ui-658fcbc7d8-5pl8f
--- stderr ---
----------- Check pod login-ui-658fcbc7d8-5pl8f is running -----------
[loop_until]: kubectl --namespace=xlou get pods login-ui-658fcbc7d8-5pl8f -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods login-ui-658fcbc7d8-5pl8f -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod login-ui-658fcbc7d8-5pl8f -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2022-08-04T19:32:10Z
--- stderr ---
---- Check pod login-ui-658fcbc7d8-5pl8f filesystem is accessible ----
[loop_until]: kubectl --namespace=xlou exec login-ui-658fcbc7d8-5pl8f -c login-ui -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin boot dev docker-entrypoint.d docker-entrypoint.sh entrypoint.sh etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var variable_replacement.sh
--- stderr ---
---------- Check pod login-ui-658fcbc7d8-5pl8f restart count ----------
[loop_until]: kubectl --namespace=xlou get pod login-ui-658fcbc7d8-5pl8f -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod login-ui-658fcbc7d8-5pl8f has been restarted 0 times.
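Each per-pod check above combines three jsonpath probes: `.status.phase`, `.status.containerStatuses[*].ready`, and `.status.containerStatuses[*].restartCount`. A compact predicate over those raw jsonpath strings, with sample values taken from the login-ui output (illustrative only; note the task merely reports the restart count, while this sketch additionally treats a nonzero count as unhealthy):

```python
def pod_healthy(phase: str, ready: str, restart_counts: str) -> bool:
    """phase:          {.status.phase}                          -> "Running"
    ready:          {.status.containerStatuses[*].ready}        -> "true true ..."
    restart_counts: {.status.containerStatuses[*].restartCount} -> "0 0 ..."
    Multi-container pods yield space-separated values, hence split().
    """
    return (
        phase == "Running"
        and all(r == "true" for r in ready.split())
        and all(c == "0" for c in restart_counts.split())
    )

# Values observed for login-ui-658fcbc7d8-5pl8f in the log above.
assert pod_healthy("Running", "true", "0")
```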
*************************** Initializing component pods for ADMIN-UI ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].metadata.name}` | grep 1
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
admin-ui-c9b4b8d7d-sbrbk
--- stderr ---
------------ Check pod admin-ui-c9b4b8d7d-sbrbk is running ------------
[loop_until]: kubectl --namespace=xlou get pods admin-ui-c9b4b8d7d-sbrbk -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods admin-ui-c9b4b8d7d-sbrbk -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod admin-ui-c9b4b8d7d-sbrbk -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2022-08-04T19:32:09Z
--- stderr ---
----- Check pod admin-ui-c9b4b8d7d-sbrbk filesystem is accessible -----
[loop_until]: kubectl --namespace=xlou exec admin-ui-c9b4b8d7d-sbrbk -c admin-ui -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin boot dev docker-entrypoint.d docker-entrypoint.sh entrypoint.sh etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var variable_replacement.sh
--- stderr ---
---------- Check pod admin-ui-c9b4b8d7d-sbrbk restart count ----------
[loop_until]: kubectl --namespace=xlou get pod admin-ui-c9b4b8d7d-sbrbk -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod admin-ui-c9b4b8d7d-sbrbk has been restarted 0 times.
******************************* Checking AM component is running *******************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app=am -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
-------------- Waiting for 3 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get deployments am -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:3 replicas:3"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:3 replicas:3
--- stderr ---
***************************** Checking AMSTER component is running *****************************
------------------ Waiting for Amster job to finish ------------------
--------------------- Get expected number of pods ---------------------
-------------- Waiting for 1 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get jobs amster -o jsonpath="{.status.succeeded}" | grep "1"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
****************************** Checking IDM component is running ******************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployment -l app=idm -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2
--- stderr ---
-------------- Waiting for 2 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get deployment idm -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:2 replicas:2"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:2 replicas:2
--- stderr ---
***************************** Checking DS-CTS component is running *****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-cts -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
-------------- Waiting for 3 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get statefulsets ds-cts -o jsonpath="current:{.status.currentReplicas} ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "current:3 ready:3 replicas:3"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
current:3 ready:3 replicas:3
--- stderr ---
*************************** Checking DS-IDREPO component is running ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-idrepo -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
-------------- Waiting for 3 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get statefulsets ds-idrepo -o jsonpath="current:{.status.currentReplicas} ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "current:3 ready:3 replicas:3"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
current:3 ready:3 replicas:3
--- stderr ---
************************** Checking END-USER-UI component is running **************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
-------------- Waiting for 1 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get deployments end-user-ui -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:1 replicas:1"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:1 replicas:1
--- stderr ---
**************************** Checking LOGIN-UI component is running ****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
-------------- Waiting for 1 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get deployments login-ui -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:1 replicas:1"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:1 replicas:1
--- stderr ---
**************************** Checking ADMIN-UI component is running ****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
-------------- Waiting for 1 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get deployments admin-ui -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:1 replicas:1"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:1 replicas:1
--- stderr ---
****************************** Livecheck stage: After deployment ******************************
------------------------ Running AM livecheck ------------------------
Livecheck to https://xlou.iam.xlou-cdm.engineeringpit.com/am/json/health/ready
[http_cmd]: curl -L -X GET "https://xlou.iam.xlou-cdm.engineeringpit.com/am/json/health/ready"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
[loop_until]: kubectl --namespace=xlou get secret am-env-secrets -o jsonpath="{.data.AM_PASSWORDS_AMADMIN_CLEAR}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
TEd0MWZFdnNZZGhNMlo0ZWhkYkZGNWdF
--- stderr ---
Authenticate user via REST
[http_cmd]: curl -H "X-OpenAM-Username: amadmin" -H "X-OpenAM-Password: LGt1fEvsYdhM2Z4ehdbFF5gE" -H "Content-Type: application/json" -H "Accept-API-Version: resource=2.0, protocol=1.0" -L -X POST "https://xlou.iam.xlou-cdm.engineeringpit.com/am/json/authenticate?realm=/"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
{ "tokenId": "36ZWgmfmkhTTCDE7f2V11HK4Kkg.*AAJTSQACMDIAAlNLABxQblY1cW9nelg3K0h4V3pBVHMvMG9tcmVBN2M9AAR0eXBlAANDVFMAAlMxAAIwMQ..*", "successUrl": "/am/console", "realm": "/" }
---------------------- Running AMSTER livecheck ----------------------
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou --field-selector status.phase!=Failed get pods -l app=amster -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
amster-p6hrs
--- stderr ---
Amster import completed.
AM is now configured
Amster livecheck is passed
------------------------ Running IDM livecheck ------------------------
Livecheck to https://xlou.iam.xlou-cdm.engineeringpit.com/openidm/info/ping
[loop_until]: kubectl --namespace=xlou get secret idm-env-secrets -o jsonpath="{.data.OPENIDM_ADMIN_PASSWORD}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
NGdYVW1QOW83ZmgyQnlERlRuQ1VwV0x1
--- stderr ---
Set admin password: 4gXUmP9o7fh2ByDFTnCUpWLu
[http_cmd]: curl -H "X-OpenIDM-Username: anonymous" -H "X-OpenIDM-Password: anonymous" -L -X GET "https://xlou.iam.xlou-cdm.engineeringpit.com/openidm/info/ping"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
{ "_id": "", "_rev": "", "shortDesc": "OpenIDM ready", "state": "ACTIVE_READY" }
---------------------- Running DS-CTS livecheck ----------------------
Livecheck to ds-cts-0
[loop_until]: kubectl --namespace=xlou get secret ds-passwords -o jsonpath="{.data.dirmanager\.pw}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
cEt1OWRjbWJNNGNtNTQzTlo0cjZuUW44Nk9xMW1haXk=
--- stderr ---
[run_command]: kubectl --namespace=xlou exec ds-cts-0 -c ds -- ldapsearch --noPropertiesFile -p 1389 --useStartTls --trustAll -D "uid=admin" -w "pKu9dcmbM4cm543NZ4r6nQn86Oq1maiy" -b "" -s base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
Livecheck to ds-cts-1
[run_command]: kubectl --namespace=xlou exec ds-cts-1 -c ds -- ldapsearch --noPropertiesFile -p 1389 --useStartTls --trustAll -D "uid=admin" -w "pKu9dcmbM4cm543NZ4r6nQn86Oq1maiy" -b "" -s base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
Livecheck to ds-cts-2
[run_command]: kubectl --namespace=xlou exec ds-cts-2 -c ds -- ldapsearch --noPropertiesFile -p 1389 --useStartTls --trustAll -D "uid=admin" -w "pKu9dcmbM4cm543NZ4r6nQn86Oq1maiy" -b "" -s base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
--------------------- Running DS-IDREPO livecheck ---------------------
Livecheck to ds-idrepo-0
[loop_until]: kubectl --namespace=xlou get secret ds-passwords -o jsonpath="{.data.dirmanager\.pw}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
cEt1OWRjbWJNNGNtNTQzTlo0cjZuUW44Nk9xMW1haXk=
--- stderr ---
[run_command]: kubectl --namespace=xlou exec ds-idrepo-0 -c ds -- ldapsearch --noPropertiesFile -p 1389 --useStartTls --trustAll -D "uid=admin" -w "pKu9dcmbM4cm543NZ4r6nQn86Oq1maiy" -b "" -s base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
Livecheck to ds-idrepo-1
[run_command]: kubectl --namespace=xlou exec ds-idrepo-1 -c ds -- ldapsearch --noPropertiesFile -p 1389 --useStartTls --trustAll -D "uid=admin" -w "pKu9dcmbM4cm543NZ4r6nQn86Oq1maiy" -b "" -s base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
Livecheck to ds-idrepo-2
[run_command]: kubectl --namespace=xlou exec ds-idrepo-2 -c ds -- ldapsearch --noPropertiesFile -p 1389 --useStartTls --trustAll -D "uid=admin" -w "pKu9dcmbM4cm543NZ4r6nQn86Oq1maiy" -b "" -s base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
-------------------- Running END-USER-UI livecheck --------------------
Livecheck to https://xlou.iam.xlou-cdm.engineeringpit.com/enduser
[http_cmd]: curl -L -X GET "https://xlou.iam.xlou-cdm.engineeringpit.com/enduser"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
Identity Management
[]
--------------------- Running LOGIN-UI livecheck ---------------------
Livecheck to https://xlou.iam.xlou-cdm.engineeringpit.com/am/XUI
[http_cmd]: curl -L -X GET "https://xlou.iam.xlou-cdm.engineeringpit.com/am/XUI"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
Login
[]
--------------------- Running ADMIN-UI livecheck ---------------------
Livecheck to https://xlou.iam.xlou-cdm.engineeringpit.com/platform
[http_cmd]: curl -L -X GET "https://xlou.iam.xlou-cdm.engineeringpit.com/platform"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
Platform Admin
[] LIVECHECK SUCCEEDED ****************************** Initializing component pods for AM ****************************** ----------------------- Get AM software version ----------------------- Getting product version from https://xlou.iam.xlou-cdm.engineeringpit.com/am/json/serverinfo/version - Login amadmin to get token Authenticate user via REST [http_cmd]: curl -H "X-OpenAM-Username: amadmin" -H "X-OpenAM-Password: LGt1fEvsYdhM2Z4ehdbFF5gE" -H "Content-Type: application/json" -H "Accept-API-Version: resource=2.0, protocol=1.0" -L -X POST "https://xlou.iam.xlou-cdm.engineeringpit.com/am/json/authenticate?realm=/" [http_cmd]: http status code OK --- status code --- http status code is 200 (expected 200) --- http response --- { "tokenId": "YkB_D2sgofhIj0wDWbuOsbdt5_o.*AAJTSQACMDIAAlNLABx0TnZKVHA5eDR3SWdqNzJRbXdidWxxc2Rna2c9AAR0eXBlAANDVFMAAlMxAAIwMQ..*", "successUrl": "/am/console", "realm": "/" } [http_cmd]: curl -L -X GET --cookie "amlbcookie=01" --cookie "iPlanetDirectoryPro=YkB_D2sgofhIj0wDWbuOsbdt5_o.*AAJTSQACMDIAAlNLABx0TnZKVHA5eDR3SWdqNzJRbXdidWxxc2Rna2c9AAR0eXBlAANDVFMAAlMxAAIwMQ..*" --cookie "route=1659641713.849.8249.724643|f60edb382037eb2df1e800d563ad78a7" "https://xlou.iam.xlou-cdm.engineeringpit.com/am/json/serverinfo/version" [http_cmd]: http status code OK --- status code --- http status code is 200 (expected 200) --- http response --- { "_id": "version", "_rev": "848224852", "version": "7.3.0-SNAPSHOT", "fullVersion": "ForgeRock Access Management 7.3.0-SNAPSHOT Build ef7fed73391311a2849509d1598d404ea1347307 (2022-August-04 13:06)", "revision": "ef7fed73391311a2849509d1598d404ea1347307", "date": "2022-August-04 13:06" } **************************** Initializing component pods for AMSTER **************************** ***************************** Initializing component pods for IDM ***************************** ---------------------- Get IDM software version ---------------------- Getting product version from 
https://xlou.iam.xlou-cdm.engineeringpit.com/openidm/info/version [http_cmd]: curl -H "X-OpenIDM-Username: anonymous" -H "X-OpenIDM-Password: anonymous" -L -X GET "https://xlou.iam.xlou-cdm.engineeringpit.com/openidm/info/version" [http_cmd]: http status code OK --- status code --- http status code is 200 (expected 200) --- http response --- { "_id": "version", "productVersion": "7.3.0-SNAPSHOT", "productBuildDate": "20220803152508", "productRevision": "dcec447" } **************************** Initializing component pods for DS-CTS **************************** --------------------- Get DS-CTS software version --------------------- [loop_until]: kubectl --namespace=xlou exec ds-cts-0 -c ds -- find /opt/opendj -name opendj-core.jar [loop_until]: (max_time=30, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- /opt/opendj/lib/opendj-core.jar --- stderr --- [loop_until]: kubectl --namespace=xlou cp ds-cts-0:/opt/opendj/lib/opendj-core.jar /tmp/ds-cts_info/opendj-core.jar -c ds [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- tar: Removing leading `/' from member names --- stderr --- ************************** Initializing component pods for DS-IDREPO ************************** ------------------- Get DS-IDREPO software version ------------------- [loop_until]: kubectl --namespace=xlou exec ds-idrepo-0 -c ds -- find /opt/opendj -name opendj-core.jar [loop_until]: (max_time=30, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- /opt/opendj/lib/opendj-core.jar --- stderr --- [loop_until]: kubectl --namespace=xlou cp ds-idrepo-0:/opt/opendj/lib/opendj-core.jar /tmp/ds-idrepo_info/opendj-core.jar -c ds [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- tar: Removing leading `/' from member names --- stderr --- ************************* Initializing component pods for END-USER-UI ************************* ------------------ Get END-USER-UI 
software version ------------------ [loop_until]: kubectl --namespace=xlou exec end-user-ui-6fd8b6648d-6ns7t -c end-user-ui -- find /usr/share/nginx/html -name chunk-vendors.*.js [loop_until]: (max_time=30, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- /usr/share/nginx/html/js/chunk-vendors.810b8cc5.js --- stderr --- [loop_until]: kubectl --namespace=xlou cp end-user-ui-6fd8b6648d-6ns7t:/usr/share/nginx/html/js/chunk-vendors.810b8cc5.js /tmp/end-user-ui_info/chunk-vendors.810b8cc5.js -c end-user-ui [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- tar: Removing leading `/' from member names --- stderr --- *************************** Initializing component pods for LOGIN-UI *************************** -------------------- Get LOGIN-UI software version -------------------- [loop_until]: kubectl --namespace=xlou exec login-ui-658fcbc7d8-5pl8f -c login-ui -- find /usr/share/nginx/html -name chunk-vendors.*.js [loop_until]: (max_time=30, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- /usr/share/nginx/html/js/chunk-vendors.78e49524.js --- stderr --- [loop_until]: kubectl --namespace=xlou cp login-ui-658fcbc7d8-5pl8f:/usr/share/nginx/html/js/chunk-vendors.78e49524.js /tmp/login-ui_info/chunk-vendors.78e49524.js -c login-ui [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- tar: Removing leading `/' from member names --- stderr --- *************************** Initializing component pods for ADMIN-UI *************************** -------------------- Get ADMIN-UI software version -------------------- [loop_until]: kubectl --namespace=xlou exec admin-ui-c9b4b8d7d-sbrbk -c admin-ui -- find /usr/share/nginx/html -name chunk-vendors.*.js [loop_until]: (max_time=30, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- /usr/share/nginx/html/js/chunk-vendors.ce429e2b.js --- stderr --- [loop_until]: kubectl --namespace=xlou cp 
admin-ui-c9b4b8d7d-sbrbk:/usr/share/nginx/html/js/chunk-vendors.ce429e2b.js /tmp/admin-ui_info/chunk-vendors.ce429e2b.js -c admin-ui [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- tar: Removing leading `/' from member names --- stderr --- ==================================================================================================== ====================== Admin password for AM is: LGt1fEvsYdhM2Z4ehdbFF5gE ====================== ==================================================================================================== ==================================================================================================== ===================== Admin password for IDM is: 4gXUmP9o7fh2ByDFTnCUpWLu ===================== ==================================================================================================== ==================================================================================================== ================ Admin password for DS-CTS is: pKu9dcmbM4cm543NZ4r6nQn86Oq1maiy ================ ==================================================================================================== ==================================================================================================== ============== Admin password for DS-IDREPO is: pKu9dcmbM4cm543NZ4r6nQn86Oq1maiy ============== ==================================================================================================== *************************************** Dumping pod list *************************************** Dumping pod list to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/stack/20220804-193527-after-deployment/_pod-list.txt ****************************** Initializing component pods for AM ****************************** --------------------- Get expected number of pods --------------------- [loop_until]: kubectl --namespace=xlou get deployments -l app=am -o jsonpath={.items[*].spec.replicas} 
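The "Get expected number of pods" / "Get pod list" phases that follow compare two jsonpath outputs: the `spec.replicas` of the deployment (or statefulset) and the space-separated pod names carrying the same label. A minimal sketch of that comparison, with the kubectl calls stubbed out (the helper name `pods_match_replicas` is illustrative, not part of the tool; the example values are taken from this run):

```shell
# Compare an expected replica count against a space-separated pod list,
# as returned by e.g.:
#   kubectl get deployments -l app=am -o jsonpath='{.items[*].spec.replicas}'
#   kubectl get pods -l app=am -o jsonpath='{.items[*].metadata.name}'
pods_match_replicas() {
    expected="$1"   # e.g. "3"
    pod_list="$2"   # e.g. "am-59f674c5d4-4jndl am-59f674c5d4-l64fb ..."
    actual=$(echo "$pod_list" | wc -w)   # count words = count pods
    [ "$actual" -eq "$expected" ]
}

# Values observed in this run for the AM deployment:
if pods_match_replicas 3 "am-59f674c5d4-4jndl am-59f674c5d4-l64fb am-59f674c5d4-r5k7h"; then
    echo "pod count matches replica spec"
fi
```

In the live check this comparison would itself be wrapped in `[loop_until]`, since pods may still be scheduling when the first query runs.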
[loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 3 --- stderr --- ---------------------------- Get pod list ---------------------------- [loop_until]: kubectl --namespace=xlou get pods -l app=am -o jsonpath={.items[*].metadata.name} [loop_until]: (max_time=180, interval=10, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- am-59f674c5d4-4jndl am-59f674c5d4-l64fb am-59f674c5d4-r5k7h --- stderr --- **************************** Initializing component pods for AMSTER **************************** ---------------------------- Get pod list ---------------------------- [loop_until]: kubectl --namespace=xlou get pods -l app=amster -o jsonpath={.items[*].metadata.name} [loop_until]: (max_time=180, interval=10, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- amster-p6hrs --- stderr --- ***************************** Initializing component pods for IDM ***************************** --------------------- Get expected number of pods --------------------- [loop_until]: kubectl --namespace=xlou get deployment -l app=idm -o jsonpath={.items[*].spec.replicas} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2 --- stderr --- ---------------------------- Get pod list ---------------------------- [loop_until]: kubectl --namespace=xlou get pods -l app=idm -o jsonpath={.items[*].metadata.name} [loop_until]: (max_time=180, interval=10, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- idm-55dc5c6786-lhjcm idm-55dc5c6786-r5895 --- stderr --- **************************** Initializing component pods for DS-CTS **************************** --------------------- Get expected number of pods --------------------- [loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-cts -o jsonpath={.items[*].spec.replicas} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 3 --- stderr --- ---------------------------- Get pod list 
---------------------------- [loop_until]: kubectl --namespace=xlou get pods -l app=ds-cts -o jsonpath={.items[*].metadata.name} [loop_until]: (max_time=180, interval=10, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- ds-cts-0 ds-cts-1 ds-cts-2 --- stderr --- ************************** Initializing component pods for DS-IDREPO ************************** --------------------- Get expected number of pods --------------------- [loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-idrepo -o jsonpath={.items[*].spec.replicas} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 3 --- stderr --- ---------------------------- Get pod list ---------------------------- [loop_until]: kubectl --namespace=xlou get pods -l app=ds-idrepo -o jsonpath={.items[*].metadata.name} [loop_until]: (max_time=180, interval=10, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- ds-idrepo-0 ds-idrepo-1 ds-idrepo-2 --- stderr --- ************************* Initializing component pods for END-USER-UI ************************* --------------------- Get expected number of pods --------------------- [loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].spec.replicas} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 1 --- stderr --- ---------------------------- Get pod list ---------------------------- [loop_until]: kubectl --namespace=xlou get pods -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].metadata.name} [loop_until]: (max_time=180, interval=10, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- end-user-ui-6fd8b6648d-6ns7t --- stderr --- *************************** Initializing component pods for LOGIN-UI *************************** --------------------- Get expected number of pods --------------------- [loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=login-ui 
-o jsonpath={.items[*].spec.replicas} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 1 --- stderr --- ---------------------------- Get pod list ---------------------------- [loop_until]: kubectl --namespace=xlou get pods -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].metadata.name} [loop_until]: (max_time=180, interval=10, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- login-ui-658fcbc7d8-5pl8f --- stderr --- *************************** Initializing component pods for ADMIN-UI *************************** --------------------- Get expected number of pods --------------------- [loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].spec.replicas} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 1 --- stderr --- ---------------------------- Get pod list ---------------------------- [loop_until]: kubectl --namespace=xlou get pods -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].metadata.name} [loop_until]: (max_time=180, interval=10, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- admin-ui-c9b4b8d7d-sbrbk --- stderr --- *********************************** Dumping components logs *********************************** ------------------------- Dumping logs for AM ------------------------- Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/stack/20220804-193527-after-deployment/am-59f674c5d4-4jndl.txt Check pod logs for errors Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/stack/20220804-193527-after-deployment/am-59f674c5d4-l64fb.txt Check pod logs for errors Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/stack/20220804-193527-after-deployment/am-59f674c5d4-r5k7h.txt Check pod logs for errors ----------------------- 
Dumping logs for AMSTER ----------------------- Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/stack/20220804-193527-after-deployment/amster-p6hrs.txt Check pod logs for errors ------------------------ Dumping logs for IDM ------------------------ Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/stack/20220804-193527-after-deployment/idm-55dc5c6786-lhjcm.txt Check pod logs for errors Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/stack/20220804-193527-after-deployment/idm-55dc5c6786-r5895.txt Check pod logs for errors ----------------------- Dumping logs for DS-CTS ----------------------- Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/stack/20220804-193527-after-deployment/ds-cts-0.txt Check pod logs for errors Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/stack/20220804-193527-after-deployment/ds-cts-1.txt Check pod logs for errors Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/stack/20220804-193527-after-deployment/ds-cts-2.txt Check pod logs for errors --------------------- Dumping logs for DS-IDREPO --------------------- Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/stack/20220804-193527-after-deployment/ds-idrepo-0.txt Check pod logs for errors Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/stack/20220804-193527-after-deployment/ds-idrepo-1.txt Check pod logs for errors Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/stack/20220804-193527-after-deployment/ds-idrepo-2.txt Check pod logs for errors -------------------- Dumping logs for END-USER-UI -------------------- Dumping 
pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/stack/20220804-193527-after-deployment/end-user-ui-6fd8b6648d-6ns7t.txt Check pod logs for errors ---------------------- Dumping logs for LOGIN-UI ---------------------- Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/stack/20220804-193527-after-deployment/login-ui-658fcbc7d8-5pl8f.txt Check pod logs for errors ---------------------- Dumping logs for ADMIN-UI ---------------------- Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/stack/20220804-193527-after-deployment/admin-ui-c9b4b8d7d-sbrbk.txt Check pod logs for errors The following components will be deployed: - am (AM) - amster (Amster) - idm (IDM) - ds-cts (DS) - ds-idrepo (DS) - end-user-ui (EndUserUi) - login-ui (LoginUi) - admin-ui (AdminUi) Run create-secrets.sh to create passwords [run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/bin/create-secrets.sh xlou-sp [run_command]: OK (rc = 0 - expected to be in [0]) --- stdout --- certificate.cert-manager.io/ds-master-cert created certificate.cert-manager.io/ds-ssl-cert created issuer.cert-manager.io/selfsigned-issuer created secretagentconfiguration.secret-agent.secrets.forgerock.io/forgerock-sac created --- stderr --- [loop_until]: kubectl --namespace=secret-agent-system wait --for=condition=available deployment --all | grep "condition met" [loop_until]: (max_time=300, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- deployment.apps/secret-agent-controller-manager condition met --- stderr --- [loop_until]: kubectl --namespace=secret-agent-system wait --for=condition=ready pod --all | grep "condition met" [loop_until]: (max_time=300, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc 
= 0) --- stdout --- pod/secret-agent-controller-manager-59fcd58bbc-7lq45 condition met --- stderr --- [run_command]: skaffold build --file-output=/mnt/disks/data/xslou/lodestar-fork/ext/pre-built-images/small.json --default-repo gcr.io/engineeringpit/lodestar-images --profile small --config=/tmp/tmpfx5o42k1 --cache-artifacts=false --tag xlou-sp --namespace=xlou-sp [run_command]: env={'PATH': '/mnt/disks/data/xslou/lodestar-fork/ext/bin:/home/xslou/.local/bin:/home/xslou/bin:~/bin:~/workshop/lodestar-fork/ext/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin', 'CONFIG_PROFILE': 'cdk'} Generating tags... - am -> gcr.io/engineeringpit/lodestar-images/am:xlou-sp - amster -> gcr.io/engineeringpit/lodestar-images/amster:xlou-sp - idm -> gcr.io/engineeringpit/lodestar-images/idm:xlou-sp - ds-cts -> gcr.io/engineeringpit/lodestar-images/ds-cts:xlou-sp - ds-idrepo -> gcr.io/engineeringpit/lodestar-images/ds-idrepo:xlou-sp - ig -> gcr.io/engineeringpit/lodestar-images/ig:xlou-sp - ds -> gcr.io/engineeringpit/lodestar-images/ds:xlou-sp Starting build... Building [ds]... 
Sending build context to Docker daemon 115.2kB Step 1/11 : FROM gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f 7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f: Pulling from forgerock-io/ds/pit1 Digest: sha256:101a0a4902e6edf8e70ee5f7a3ba30e498ac523c29eaff69badc5e22f2f9136c Status: Image is up to date for gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f ---> ed865decf122 Step 2/11 : USER root ---> Using cache ---> 4bdd9adb7b38 Step 3/11 : RUN apt-get update && apt-get install -y --no-install-recommends vim ncat dnsutils ---> Using cache ---> f0868d2db47c Step 4/11 : USER forgerock ---> Using cache ---> 7c1d1df3ee67 Step 5/11 : ENV DS_DATA_DIR /opt/opendj/data ---> Using cache ---> d9edd8b8d899 Step 6/11 : ENV PEM_KEYS_DIRECTORY "/var/run/secrets/keys/ds" ---> Using cache ---> 06d762222685 Step 7/11 : ENV PEM_TRUSTSTORE_DIRECTORY "/var/run/secrets/keys/truststore" ---> Using cache ---> c5e4e5b7bc10 Step 8/11 : COPY --chown=forgerock:root default-scripts /opt/opendj/default-scripts ---> Using cache ---> 5061cd0b5ede Step 9/11 : COPY --chown=forgerock:root ldif-ext /opt/opendj/ldif-ext ---> Using cache ---> 447344f14ce2 Step 10/11 : COPY --chown=forgerock:root *.sh /opt/opendj/ ---> Using cache ---> 7ecb22a7bea1 Step 11/11 : RUN ./ds-setup.sh && rm ./ds-setup.sh && rm -fr ldif-ext ---> Using cache ---> e759a4968271 Successfully built e759a4968271 Successfully tagged gcr.io/engineeringpit/lodestar-images/ds:xlou-sp Build [ds] succeeded Building [am]... 
Sending build context to Docker daemon 4.608kB Step 1/6 : FROM gcr.io/forgerock-io/am-cdk/pit1:7.3.0-ef7fed73391311a2849509d1598d404ea1347307 7.3.0-ef7fed73391311a2849509d1598d404ea1347307: Pulling from forgerock-io/am-cdk/pit1 Digest: sha256:fc18f7964a93c81f81fda90bac5b7f92fa4c4eab374df7f243108fd7297d28a3 Status: Image is up to date for gcr.io/forgerock-io/am-cdk/pit1:7.3.0-ef7fed73391311a2849509d1598d404ea1347307 ---> 7b6390286ddb Step 2/6 : ARG CONFIG_PROFILE=cdk ---> Using cache ---> 4d9f4ef0b7cf Step 3/6 : RUN echo "\033[0;36m*** Building '${CONFIG_PROFILE}' profile ***\033[0m" ---> Using cache ---> 54e87ae10c88 Step 4/6 : COPY --chown=forgerock:root config-profiles/${CONFIG_PROFILE}/ /home/forgerock/openam/ ---> Using cache ---> 98bc293f9f16 Step 5/6 : COPY --chown=forgerock:root *.sh /home/forgerock/ ---> Using cache ---> 1f40aa47b9b9 Step 6/6 : WORKDIR /home/forgerock ---> Using cache ---> ffeec671acc0 Successfully built ffeec671acc0 Successfully tagged gcr.io/engineeringpit/lodestar-images/am:xlou-sp The push refers to repository [gcr.io/engineeringpit/lodestar-images/am] 74020c762e90: Preparing ed976c114df1: Preparing 5a0bba8fe40f: Preparing 81fd00e5bbc1: Preparing 1bc35b01a315: Preparing 1280de5d213a: Preparing fd5d04236c89: Preparing b0ffec815e1d: Preparing 6a0575208639: Preparing 05d01e4d23c5: Preparing 497efed02241: Preparing f38b809125e6: Preparing fcbe710321b3: Preparing 79347933f184: Preparing 95f70f060b9c: Preparing 16eb5d2cf613: Preparing f789fcea46bf: Preparing 2385760f0ca8: Preparing 4bc161ef0ac8: Preparing 0a9d43a0ed50: Preparing aadf446e27f3: Preparing aa9e113a4654: Preparing 4ea488ed4421: Preparing 21ad03d1c8b2: Preparing a5c817f5604b: Preparing fdc26232329f: Preparing 8878ab435c3c: Preparing 4f5f6b573582: Preparing 71b38085acd2: Preparing eb6ee5b9581f: Preparing e3abdc2e9252: Preparing eafe6e032dbd: Preparing 92a4e8a3140f: Preparing 1280de5d213a: Waiting fd5d04236c89: Waiting b0ffec815e1d: Waiting 6a0575208639: Waiting 05d01e4d23c5: Waiting 
497efed02241: Waiting f38b809125e6: Waiting fcbe710321b3: Waiting 79347933f184: Waiting 95f70f060b9c: Waiting 16eb5d2cf613: Waiting f789fcea46bf: Waiting 2385760f0ca8: Waiting 4bc161ef0ac8: Waiting 0a9d43a0ed50: Waiting aadf446e27f3: Waiting aa9e113a4654: Waiting 4ea488ed4421: Waiting 21ad03d1c8b2: Waiting a5c817f5604b: Waiting fdc26232329f: Waiting 8878ab435c3c: Waiting 4f5f6b573582: Waiting 71b38085acd2: Waiting eb6ee5b9581f: Waiting e3abdc2e9252: Waiting eafe6e032dbd: Waiting 92a4e8a3140f: Waiting ed976c114df1: Layer already exists 1bc35b01a315: Layer already exists 74020c762e90: Layer already exists 5a0bba8fe40f: Layer already exists 81fd00e5bbc1: Layer already exists 1280de5d213a: Layer already exists fd5d04236c89: Layer already exists b0ffec815e1d: Layer already exists 6a0575208639: Layer already exists 05d01e4d23c5: Layer already exists 497efed02241: Layer already exists 79347933f184: Layer already exists fcbe710321b3: Layer already exists 95f70f060b9c: Layer already exists f38b809125e6: Layer already exists 16eb5d2cf613: Layer already exists f789fcea46bf: Layer already exists 2385760f0ca8: Layer already exists 4bc161ef0ac8: Layer already exists 0a9d43a0ed50: Layer already exists aadf446e27f3: Layer already exists aa9e113a4654: Layer already exists 21ad03d1c8b2: Layer already exists 4ea488ed4421: Layer already exists fdc26232329f: Layer already exists 8878ab435c3c: Layer already exists a5c817f5604b: Layer already exists 4f5f6b573582: Layer already exists e3abdc2e9252: Layer already exists eb6ee5b9581f: Layer already exists eafe6e032dbd: Layer already exists 71b38085acd2: Layer already exists 92a4e8a3140f: Layer already exists xlou-sp: digest: sha256:d1dae5f855f87cdc545f2286abe3fe582de04d9ec0ba87640e41b5dde1631ea8 size: 7221 Build [am] succeeded Building [amster]... 
Sending build context to Docker daemon 54.27kB Step 1/14 : FROM gcr.io/forgerock-io/amster/pit1:7.3.0-ef7fed73391311a2849509d1598d404ea1347307 7.3.0-ef7fed73391311a2849509d1598d404ea1347307: Pulling from forgerock-io/amster/pit1 Digest: sha256:fe255c34b34f702d121c876885ff61a775fc148844acbb1b4753ffe354723aa2 Status: Image is up to date for gcr.io/forgerock-io/amster/pit1:7.3.0-ef7fed73391311a2849509d1598d404ea1347307 ---> a838c2dce118 Step 2/14 : USER root ---> Using cache ---> 941ce4dcc88d Step 3/14 : COPY debian-buster-sources.list /etc/apt/sources.list ---> Using cache ---> ecb7bb35e1f5 Step 4/14 : ENV DEBIAN_FRONTEND=noninteractive ---> Using cache ---> a60cf09874e0 Step 5/14 : ENV APT_OPTS="--no-install-recommends --yes" ---> Using cache ---> 6a6d0423d049 Step 6/14 : RUN apt-get update && apt-get install -y openldap-utils jq inotify-tools && apt-get clean && rm -r /var/lib/apt/lists /var/cache/apt/archives ---> Using cache ---> 223ccce3ec41 Step 7/14 : USER forgerock ---> Using cache ---> 79bb789a9b1f Step 8/14 : ENV SERVER_URI /am ---> Using cache ---> dec1a702526e Step 9/14 : ARG CONFIG_PROFILE=cdk ---> Using cache ---> e58e889f7da2 Step 10/14 : RUN echo "\033[0;36m*** Building '${CONFIG_PROFILE}' profile ***\033[0m" ---> Using cache ---> 8cdd16e1262c Step 11/14 : COPY --chown=forgerock:root config-profiles/${CONFIG_PROFILE}/ /opt/amster ---> Using cache ---> 0f4172b50769 Step 12/14 : COPY --chown=forgerock:root scripts /opt/amster ---> Using cache ---> e141f979ed25 Step 13/14 : RUN chmod 777 /opt/amster ---> Using cache ---> a164deca8efd Step 14/14 : ENTRYPOINT [ "/opt/amster/docker-entrypoint.sh" ] ---> Using cache ---> 32b29cf05210 Successfully built 32b29cf05210 Successfully tagged gcr.io/engineeringpit/lodestar-images/amster:xlou-sp The push refers to repository [gcr.io/engineeringpit/lodestar-images/amster] 7660af58df90: Preparing ac2bc589496c: Preparing f3adea28a657: Preparing 3b028f8409e3: Preparing b3e5e415f868: Preparing 22bb4cd12094: Preparing 
00bf1426d6cd: Preparing b6a1fd8410a1: Preparing bfb746400e49: Preparing 178d3db39985: Preparing 08cc940b3cb3: Preparing d3bd8301a2f6: Preparing 194cc08cbea2: Preparing 6db889e47719: Preparing 735956b91a18: Preparing 22bb4cd12094: Waiting 00bf1426d6cd: Waiting b6a1fd8410a1: Waiting bfb746400e49: Waiting 178d3db39985: Waiting 08cc940b3cb3: Waiting d3bd8301a2f6: Waiting 194cc08cbea2: Waiting 6db889e47719: Waiting 735956b91a18: Waiting f3adea28a657: Layer already exists ac2bc589496c: Layer already exists 7660af58df90: Layer already exists b3e5e415f868: Layer already exists 3b028f8409e3: Layer already exists 22bb4cd12094: Layer already exists 00bf1426d6cd: Layer already exists bfb746400e49: Layer already exists b6a1fd8410a1: Layer already exists 178d3db39985: Layer already exists 08cc940b3cb3: Layer already exists 194cc08cbea2: Layer already exists 6db889e47719: Layer already exists d3bd8301a2f6: Layer already exists 735956b91a18: Layer already exists xlou-sp: digest: sha256:3b5054f14680a4f43b57bc04f169f27bf192921c32620e66da85f5a30d5a6533 size: 3465 Build [amster] succeeded Building [idm]... 
Sending build context to Docker daemon 312.8kB Step 1/8 : FROM gcr.io/forgerock-io/idm-cdk/pit1:7.3.0-dcec447a222ffbb44b634a8b852f05d754ceb486 7.3.0-dcec447a222ffbb44b634a8b852f05d754ceb486: Pulling from forgerock-io/idm-cdk/pit1 Digest: sha256:5aa52d043b5c1d2b135e9a9506298560449856d1b7532645a910ce267f863489 Status: Image is up to date for gcr.io/forgerock-io/idm-cdk/pit1:7.3.0-dcec447a222ffbb44b634a8b852f05d754ceb486 ---> 6ac69b27d8dd Step 2/8 : COPY debian-buster-sources.list /etc/apt/sources.list ---> Using cache ---> 96400f1503eb Step 3/8 : RUN rm -f bundle/org.apache.felix.webconsole*.jar && rm -f bundle/openidm-felix-webconsole-*.jar ---> Using cache ---> 200679f871d2 Step 4/8 : ENV JAVA_OPTS -XX:MaxRAMPercentage=65 -XX:InitialRAMPercentage=65 -XX:MaxTenuringThreshold=1 -Djava.security.egd=file:/dev/urandom -XshowSettings:vm -XX:+PrintFlagsFinal ---> Using cache ---> dc00e22f62e4 Step 5/8 : ARG CONFIG_PROFILE=cdk ---> Using cache ---> dace62f6a608 Step 6/8 : RUN echo "\033[0;36m*** Building '${CONFIG_PROFILE}' profile ***\033[0m" ---> Using cache ---> 023ab9877022 Step 7/8 : COPY --chown=forgerock:root config-profiles/${CONFIG_PROFILE}/ /opt/openidm ---> Using cache ---> fa873ccb6cfd Step 8/8 : COPY --chown=forgerock:root . /opt/openidm ---> Using cache ---> cb5b3348ea34 Successfully built cb5b3348ea34 Successfully tagged gcr.io/engineeringpit/lodestar-images/idm:xlou-sp Build [idm] succeeded Building [ds-cts]... 
Sending build context to Docker daemon 78.85kB Step 1/10 : FROM gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f 7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f: Pulling from forgerock-io/ds/pit1 Digest: sha256:101a0a4902e6edf8e70ee5f7a3ba30e498ac523c29eaff69badc5e22f2f9136c Status: Image is up to date for gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f ---> ed865decf122 Step 2/10 : USER root ---> Using cache ---> 4bdd9adb7b38 Step 3/10 : COPY debian-buster-sources.list /etc/apt/sources.list ---> Using cache ---> 3fab72820015 Step 4/10 : RUN chown -R forgerock:root /opt/opendj ---> Using cache ---> 2207c68564d3 Step 5/10 : USER forgerock ---> Using cache ---> b789fa9ceb46 Step 6/10 : COPY --chown=forgerock:root common /opt/opendj/ ---> Using cache ---> ce9a4f7c6ef4 Step 7/10 : COPY --chown=forgerock:root cts /opt/opendj/ ---> Using cache ---> 00e1c82ee168 Step 8/10 : COPY --chown=forgerock:root scripts /opt/opendj/scripts ---> Using cache ---> 22a62108774d Step 9/10 : ARG profile_version ---> Using cache ---> 35c38b644b11 Step 10/10 : RUN bin/setup.sh && bin/relax-security-settings.sh && rm bin/setup.sh bin/relax-security-settings.sh ---> Using cache ---> 0767c5a3865b Successfully built 0767c5a3865b Successfully tagged gcr.io/engineeringpit/lodestar-images/ds-cts:xlou-sp Build [ds-cts] succeeded Building [ds-idrepo]... 
Sending build context to Docker daemon 117.8kB Step 1/10 : FROM gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f 7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f: Pulling from forgerock-io/ds/pit1 Digest: sha256:101a0a4902e6edf8e70ee5f7a3ba30e498ac523c29eaff69badc5e22f2f9136c Status: Image is up to date for gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f ---> ed865decf122 Step 2/10 : COPY debian-buster-sources.list /etc/apt/sources.list ---> Using cache ---> 726223d8ca2c Step 3/10 : WORKDIR /opt/opendj ---> Using cache ---> 78e0c668d78e Step 4/10 : COPY --chown=forgerock:root common /opt/opendj/ ---> Using cache ---> b8fe4095700e Step 5/10 : COPY --chown=forgerock:root idrepo /opt/opendj/ ---> Using cache ---> 7248f41593d3 Step 6/10 : COPY --chown=forgerock:root scripts /opt/opendj/scripts ---> Using cache ---> 584135303781 Step 7/10 : COPY --chown=forgerock:root uma /opt/opendj/uma ---> Using cache ---> ea4266be4a5b Step 8/10 : COPY --chown=forgerock:root idrepo/*.ldif /var/tmp/ ---> Using cache ---> ced241ac3480 Step 9/10 : RUN chmod +w template/setup-profiles/AM/config/6.5/base-entries.ldif && cat scripts/external-am-datastore.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_audit.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_pending_requests.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_resource_set_labels.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_resource_sets.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat /var/tmp/alpha_bravo.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && chmod +w template/setup-profiles/AM/identity-store/7.0/base-entries.ldif && cat /var/tmp/orgs.ldif >> template/setup-profiles/AM/identity-store/7.0/base-entries.ldif && rm /var/tmp/*ldif ---> Using cache ---> 2c2d57ba3888 Step 10/10 : RUN 
bin/setup.sh && bin/relax-security-settings.sh && rm bin/setup.sh bin/relax-security-settings.sh ---> Using cache ---> a2073c0ed261 Successfully built a2073c0ed261 Successfully tagged gcr.io/engineeringpit/lodestar-images/ds-idrepo:xlou-sp Build [ds-idrepo] succeeded Building [ig]... Sending build context to Docker daemon 29.18kB Step 1/6 : FROM gcr.io/forgerock-io/ig/pit1:7.3.0-latest-postcommit 7.3.0-latest-postcommit: Pulling from forgerock-io/ig/pit1 Digest: sha256:4818c7cd5c625cc2d0ed7c354ec4ece0a74a0871698207aea51b9146b4aa1998 Status: Image is up to date for gcr.io/forgerock-io/ig/pit1:7.3.0-latest-postcommit ---> 3c4055bd0013 Step 2/6 : COPY debian-buster-sources.list /etc/apt/sources.list ---> Using cache ---> 0396dcc74c88 Step 3/6 : ARG CONFIG_PROFILE=cdk ---> Using cache ---> 23c862ae51c9 Step 4/6 : RUN echo "\033[0;36m*** Building '${CONFIG_PROFILE}' profile ***\033[0m" ---> Using cache ---> 8f2cd79410ee Step 5/6 : COPY --chown=forgerock:root config-profiles/${CONFIG_PROFILE}/ /var/ig ---> Using cache ---> 393b19b8d305 Step 6/6 : COPY --chown=forgerock:root . /var/ig ---> Using cache ---> f3307cdfd563 Successfully built f3307cdfd563 Successfully tagged gcr.io/engineeringpit/lodestar-images/ig:xlou-sp Build [ig] succeeded There is a new version (1.39.1) of Skaffold available. Download it from: https://github.com/GoogleContainerTools/skaffold/releases/tag/v1.39.1 Help improve Skaffold with our 2-minute anonymous survey: run 'skaffold survey' To help improve the quality of this product, we collect anonymized usage data. For details on what is tracked and how we use this data, visit . 
This data is handled in accordance with our privacy policy. You may choose to opt out of this collection by running the following command: skaffold config set --global collect-metrics false [run_command]: skaffold deploy --build-artifacts=/mnt/disks/data/xslou/lodestar-fork/ext/pre-built-images/small.json --profile small --config=/tmp/tmpjmlzi8bm --label skaffold.dev/profile=small --label skaffold.dev/run-id=xlou-sp --force=false --status-check=true --namespace=xlou-sp Tags used in deployment: - am -> gcr.io/engineeringpit/lodestar-images/am:xlou-sp@sha256:d1dae5f855f87cdc545f2286abe3fe582de04d9ec0ba87640e41b5dde1631ea8 - amster -> gcr.io/engineeringpit/lodestar-images/amster:xlou-sp@sha256:3b5054f14680a4f43b57bc04f169f27bf192921c32620e66da85f5a30d5a6533 - idm -> gcr.io/engineeringpit/lodestar-images/idm:xlou-sp@sha256:d78714e1399885eb05033d8f25c14ac16d867790252775e15b626720c5321d69 - ds-cts -> gcr.io/engineeringpit/lodestar-images/ds-cts:xlou-sp@sha256:49535d0bf97efec6e4cd2f538bd223e04416c165aa333cf90572655d9202d20a - ds-idrepo -> gcr.io/engineeringpit/lodestar-images/ds-idrepo:xlou-sp@sha256:522202e08f884938c837ab58634f5b1b8ff2b77c022b258a0ffbebb943578fc8 - ig -> gcr.io/engineeringpit/lodestar-images/ig:xlou-sp@sha256:664c2d0f3c9b33bfc0567b4f0bfb1508d7af74a16f859576a5b27eeae7591257 - ds -> gcr.io/engineeringpit/lodestar-images/ds:xlou-sp@sha256:7452c72b763a7bbfb56911a59dddae303358c001ae1f464a4c9ef9be885a39ac Starting deploy... 
- configmap/idm created - configmap/idm-logging-properties created - configmap/platform-config created - secret/cloud-storage-credentials-cts created - secret/cloud-storage-credentials-idrepo created - service/admin-ui created - service/am created - service/ds-cts created - service/ds-idrepo created - service/end-user-ui created - service/idm created - service/login-ui created - deployment.apps/admin-ui created - deployment.apps/am created - deployment.apps/end-user-ui created - deployment.apps/idm created - deployment.apps/login-ui created - statefulset.apps/ds-cts created - statefulset.apps/ds-idrepo created - job.batch/amster created - job.batch/ldif-importer created - ingress.networking.k8s.io/forgerock created - ingress.networking.k8s.io/ig-web created Waiting for deployments to stabilize... - xlou-sp:deployment/admin-ui is ready. [6/7 deployment(s) still pending] - xlou-sp:deployment/login-ui is ready. [5/7 deployment(s) still pending] - xlou-sp:deployment/am: waiting for init container fbc-init to start - xlou-sp:pod/am-79fd59c494-bxmnk: waiting for init container fbc-init to start - xlou-sp:pod/am-79fd59c494-fshqw: waiting for init container fbc-init to start - xlou-sp:deployment/end-user-ui: creating container end-user-ui - xlou-sp:pod/end-user-ui-75c7c9b7d8-s48k5: creating container end-user-ui - xlou-sp:deployment/idm: waiting for init container fbc-init to start - xlou-sp:pod/idm-54cf4596bf-8whkl: waiting for init container fbc-init to start - xlou-sp:pod/idm-54cf4596bf-xnrx4: waiting for init container fbc-init to start - xlou-sp:statefulset/ds-cts: waiting for init container initialize to start - xlou-sp:pod/ds-cts-0: waiting for init container initialize to start - xlou-sp:statefulset/ds-idrepo: waiting for init container initialize to start - xlou-sp:pod/ds-idrepo-0: waiting for init container initialize to start - xlou-sp:deployment/end-user-ui is ready. 
[4/7 deployment(s) still pending]
- xlou-sp:deployment/idm: waiting for rollout to finish: 0 of 2 updated replicas are available...
- xlou-sp:statefulset/ds-cts: unable to determine current service state of pod "ds-cts-1"
- xlou-sp:pod/ds-cts-1: unable to determine current service state of pod "ds-cts-1"
- xlou-sp:statefulset/ds-idrepo:
- xlou-sp:statefulset/ds-cts: unable to determine current service state of pod "ds-cts-2"
- xlou-sp:pod/ds-cts-2: unable to determine current service state of pod "ds-cts-2"
- xlou-sp:statefulset/ds-idrepo: waiting for init container initialize to complete
- xlou-sp:pod/ds-idrepo-1: waiting for init container initialize to complete
> [ds-idrepo-1 initialize] Initializing "data/db" from Docker image
> [ds-idrepo-1 initialize] Initializing "data/changelogDb" from Docker image
> [ds-idrepo-1 initialize] Initializing "data/import-tmp" from Docker image
> [ds-idrepo-1 initialize] Initializing "data/locks" from Docker image
> [ds-idrepo-1 initialize] Initializing "data/var" from Docker image
> [ds-idrepo-1 initialize] Upgrading configuration and data...
> [ds-idrepo-1 initialize] * OpenDJ data has already been upgraded to version
> [ds-idrepo-1 initialize] 7.3.0.167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
> [ds-idrepo-1 initialize] Rebuilding degraded indexes for base DN "ou=tokens"...
> [ds-idrepo-1 initialize] Rebuilding degraded indexes for base DN "ou=identities"...
> [ds-idrepo-1 initialize] category=BACKEND severity=WARNING seq=39 msg=Due to changes in the configuration, index ou=identities_fr-idm-managed-user-meta is currently operating in a degraded state and must be rebuilt before it can be used
> [ds-idrepo-1 initialize] category=BACKEND severity=WARNING seq=40 msg=Due to changes in the configuration, index ou=identities_fr-idm-managed-user-groups is currently operating in a degraded state and must be rebuilt before it can be used
> [ds-idrepo-1 initialize] category=BACKEND severity=WARNING seq=41 msg=Due to changes in the configuration, index ou=identities_fr-idm-managed-user-authzroles-managed-role is currently operating in a degraded state and must be rebuilt before it can be used
> [ds-idrepo-1 initialize] category=BACKEND severity=WARNING seq=42 msg=Due to changes in the configuration, index ou=identities_fr-idm-managed-user-roles is currently operating in a degraded state and must be rebuilt before it can be used
> [ds-idrepo-1 initialize] category=BACKEND severity=WARNING seq=43 msg=Due to changes in the configuration, index ou=identities_fr-idm-managed-organization-admin is currently operating in a degraded state and must be rebuilt before it can be used
> [ds-idrepo-1 initialize] category=BACKEND severity=WARNING seq=44 msg=Due to changes in the configuration, index ou=identities_fr-idm-managed-user-inactive-date is currently operating in a degraded state and must be rebuilt before it can be used
> [ds-idrepo-1 initialize] category=BACKEND severity=WARNING seq=45 msg=Due to changes in the configuration, index ou=identities_fr-idm-managed-user-active-date is currently operating in a degraded state and must be rebuilt before it can be used
> [ds-idrepo-1 initialize] category=BACKEND severity=WARNING seq=46 msg=Due to changes in the configuration, index ou=identities_fr-idm-managed-organization-member is currently operating in a degraded state and must be rebuilt before it can be used
> [ds-idrepo-1 initialize] category=BACKEND severity=WARNING seq=47 msg=Due to changes in the configuration, index ou=identities_fr-idm-uuid is currently operating in a degraded state and must be rebuilt before it can be used
> [ds-idrepo-1 initialize] category=BACKEND severity=WARNING seq=48 msg=Due to changes in the configuration, index ou=identities_fr-idm-managed-user-manager is currently operating in a degraded state and must be rebuilt before it can be used
> [ds-idrepo-1 initialize] category=BACKEND severity=WARNING seq=49 msg=Due to changes in the configuration, index ou=identities_fr-idm-managed-user-notifications is currently operating in a degraded state and must be rebuilt before it can be used
> [ds-idrepo-1 initialize] category=BACKEND severity=WARNING seq=50 msg=Due to changes in the configuration, index ou=identities_fr-idm-managed-organization-owner is currently operating in a degraded state and must be rebuilt before it can be used
> [ds-idrepo-1 initialize] category=BACKEND severity=WARNING seq=51 msg=Due to changes in the configuration, index ou=identities_fr-idm-managed-user-authzroles-internal-role is currently operating in a degraded state and must be rebuilt before it can be used
> [ds-idrepo-1 initialize] Rebuilding degraded indexes for base DN "ou=am-config"...
> [ds-idrepo-1 initialize] Rebuilding degraded indexes for base DN "dc=openidm,dc=forgerock,dc=io"...
> [ds-idrepo-1 initialize] Updating the "uid=admin" password
> [ds-idrepo-1 initialize] Updating the "uid=monitor" password
> [ds-idrepo-1 initialize] Initialization completed
> [ds-idrepo-1 initialize] AUTORESTORE_FROM_DSBACKUP is missing or not set to true. Skipping restore
- xlou-sp:deployment/idm is ready.
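All of the BACKEND warnings above share the shape `msg=... index <name> is currently operating in a degraded state ...`, so the affected index names can be extracted mechanically. A sketch against two of the captured messages:

```shell
# Two of the BACKEND warnings captured from the ds-idrepo-1 initialize container.
warnings='category=BACKEND severity=WARNING seq=47 msg=Due to changes in the configuration, index ou=identities_fr-idm-uuid is currently operating in a degraded state and must be rebuilt before it can be used
category=BACKEND severity=WARNING seq=48 msg=Due to changes in the configuration, index ou=identities_fr-idm-managed-user-manager is currently operating in a degraded state and must be rebuilt before it can be used'
# Pull out just the degraded index names.
printf '%s\n' "$warnings" | sed -n 's/.* index \([^ ]*\) is currently operating.*/\1/p'
```

The init container rebuilds these indexes itself ("Rebuilding degraded indexes..."), so the warnings are informational here; the extraction is only useful when triaging a failed rebuild.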
[3/7 deployment(s) still pending]
- xlou-sp:deployment/am: Startup probe failed: Get "http://10.0.0.10:8080/am/json/health/live": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
- xlou-sp:pod/am-79fd59c494-bxmnk: Startup probe failed: Get "http://10.0.0.10:8080/am/json/health/live": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
- xlou-sp:pod/am-79fd59c494-fshqw: Startup probe failed: Get "http://10.0.11.10:8080/am/json/health/live": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
- xlou-sp:statefulset/ds-cts is ready. [2/7 deployment(s) still pending]
- xlou-sp:deployment/am is ready. [1/7 deployment(s) still pending]
- xlou-sp:statefulset/ds-idrepo is ready.
Deployments stabilized in 1 minute 59.358 seconds
There is a new version (1.39.1) of Skaffold available. Download it from:
https://github.com/GoogleContainerTools/skaffold/releases/tag/v1.39.1
Help improve Skaffold with our 2-minute anonymous survey: run 'skaffold survey'
To help improve the quality of this product, we collect anonymized usage data; for details on what is tracked and how we use this data, visit .
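Every check that follows is driven by the same `loop_until` primitive: run a command, compare its return code (and optionally its output) against expectations, and retry every `interval` seconds until `max_time` is exhausted. A minimal pure-shell sketch of that contract (the helper name and parameters come from the log; the real implementation lives in the test framework):

```shell
# Sketch of the loop_until contract: retry a command until it succeeds
# or max_time seconds have elapsed. Usage: loop_until max_time interval cmd...
loop_until() {
  max_time=$1; interval=$2; shift 2
  elapsed=0
  while ! "$@"; do
    elapsed=$((elapsed + interval))
    [ "$elapsed" -ge "$max_time" ] && return 1   # timed out
    sleep "$interval"
  done
  return 0
}
# Succeeds immediately because the condition already holds.
loop_until 10 1 true && echo "OK (rc = 0)"
```

The "Function succeeded after Ns ... retry" lines in the log correspond to iterations where the command exited 0 but its output did not yet match the expected pattern, which this sketch does not model.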
This data is handled in accordance with our privacy policy You may choose to opt out of this collection by running the following command: skaffold config set --global collect-metrics false ****************************** Initializing component pods for AM ****************************** --------------------- Get expected number of pods --------------------- [loop_until]: kubectl --namespace=xlou-sp get deployments -l app=am -o jsonpath={.items[*].spec.replicas} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2 --- stderr --- ---------------------------- Get pod list ---------------------------- [loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou-sp get pods -l app=am -o jsonpath={.items[*].metadata.name}` | grep 2 [loop_until]: (max_time=180, interval=10, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found [loop_until]: OK (rc = 0) --- stdout --- am-79fd59c494-bxmnk am-79fd59c494-fshqw --- stderr --- -------------- Check pod am-79fd59c494-bxmnk is running -------------- [loop_until]: kubectl --namespace=xlou-sp get pods am-79fd59c494-bxmnk -o=jsonpath={.status.phase} | grep "Running" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Running --- stderr --- [loop_until]: kubectl --namespace=xlou-sp get pods am-79fd59c494-bxmnk -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- true --- stderr --- [loop_until]: kubectl --namespace=xlou-sp get pod am-79fd59c494-bxmnk -o jsonpath={.status.startTime} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2022-08-04T19:37:14Z --- stderr --- ------- Check pod 
am-79fd59c494-bxmnk filesystem is accessible ------- [loop_until]: kubectl --namespace=xlou-sp exec am-79fd59c494-bxmnk -c openam -- ls / | grep "bin" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var --- stderr --- ------------- Check pod am-79fd59c494-bxmnk restart count ------------- [loop_until]: kubectl --namespace=xlou-sp get pod am-79fd59c494-bxmnk -o jsonpath={.status.containerStatuses[*].restartCount} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 0 --- stderr --- Pod am-79fd59c494-bxmnk has been restarted 0 times. -------------- Check pod am-79fd59c494-fshqw is running -------------- [loop_until]: kubectl --namespace=xlou-sp get pods am-79fd59c494-fshqw -o=jsonpath={.status.phase} | grep "Running" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Running --- stderr --- [loop_until]: kubectl --namespace=xlou-sp get pods am-79fd59c494-fshqw -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- true --- stderr --- [loop_until]: kubectl --namespace=xlou-sp get pod am-79fd59c494-fshqw -o jsonpath={.status.startTime} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2022-08-04T19:37:14Z --- stderr --- ------- Check pod am-79fd59c494-fshqw filesystem is accessible ------- [loop_until]: kubectl --namespace=xlou-sp exec am-79fd59c494-fshqw -c openam -- ls / | grep "bin" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s 
(rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var --- stderr --- ------------- Check pod am-79fd59c494-fshqw restart count ------------- [loop_until]: kubectl --namespace=xlou-sp get pod am-79fd59c494-fshqw -o jsonpath={.status.containerStatuses[*].restartCount} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 0 --- stderr --- Pod am-79fd59c494-fshqw has been restarted 0 times. **************************** Initializing component pods for AMSTER **************************** ---------------------------- Get pod list ---------------------------- [loop_until]: kubectl --namespace=xlou-sp --field-selector status.phase!=Failed get pods -l app=amster -o jsonpath={.items[*].metadata.name} [loop_until]: (max_time=180, interval=10, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- amster-l744t --- stderr --- ***************************** Initializing component pods for IDM ***************************** --------------------- Get expected number of pods --------------------- [loop_until]: kubectl --namespace=xlou-sp get deployment -l app=idm -o jsonpath={.items[*].spec.replicas} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2 --- stderr --- ---------------------------- Get pod list ---------------------------- [loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou-sp get pods -l app=idm -o jsonpath={.items[*].metadata.name}` | grep 2 [loop_until]: (max_time=180, interval=10, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found [loop_until]: OK (rc = 0) --- stdout --- idm-54cf4596bf-8whkl idm-54cf4596bf-xnrx4 --- stderr --- -------------- Check pod idm-54cf4596bf-8whkl is running -------------- [loop_until]: kubectl --namespace=xlou-sp get pods idm-54cf4596bf-8whkl -o=jsonpath={.status.phase} | grep 
"Running" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Running --- stderr --- [loop_until]: kubectl --namespace=xlou-sp get pods idm-54cf4596bf-8whkl -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- true --- stderr --- [loop_until]: kubectl --namespace=xlou-sp get pod idm-54cf4596bf-8whkl -o jsonpath={.status.startTime} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2022-08-04T19:37:15Z --- stderr --- ------- Check pod idm-54cf4596bf-8whkl filesystem is accessible ------- [loop_until]: kubectl --namespace=xlou-sp exec idm-54cf4596bf-8whkl -c openidm -- ls / | grep "bin" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Dockerfile.java-11 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var --- stderr --- ------------ Check pod idm-54cf4596bf-8whkl restart count ------------ [loop_until]: kubectl --namespace=xlou-sp get pod idm-54cf4596bf-8whkl -o jsonpath={.status.containerStatuses[*].restartCount} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 0 --- stderr --- Pod idm-54cf4596bf-8whkl has been restarted 0 times. 
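The "Get pod list" steps above count whitespace-separated pod names from a jsonpath query and grep for the expected replica count. The counting half is plain text processing and works on any string:

```shell
# Count whitespace-separated fields, as the pod-list check does with the output
# of `kubectl get pods ... -o jsonpath={.items[*].metadata.name}`.
pods="idm-54cf4596bf-8whkl idm-54cf4596bf-xnrx4"   # sample pod list from the log
printf '%s\n' "$pods" | awk '{print NF}'
```

Note that piping the count into `grep 2` matches any count containing the digit 2 (12, 20, ...); `grep -x 2` would make the comparison exact.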
-------------- Check pod idm-54cf4596bf-xnrx4 is running -------------- [loop_until]: kubectl --namespace=xlou-sp get pods idm-54cf4596bf-xnrx4 -o=jsonpath={.status.phase} | grep "Running" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Running --- stderr --- [loop_until]: kubectl --namespace=xlou-sp get pods idm-54cf4596bf-xnrx4 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- true --- stderr --- [loop_until]: kubectl --namespace=xlou-sp get pod idm-54cf4596bf-xnrx4 -o jsonpath={.status.startTime} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2022-08-04T19:37:15Z --- stderr --- ------- Check pod idm-54cf4596bf-xnrx4 filesystem is accessible ------- [loop_until]: kubectl --namespace=xlou-sp exec idm-54cf4596bf-xnrx4 -c openidm -- ls / | grep "bin" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Dockerfile.java-11 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var --- stderr --- ------------ Check pod idm-54cf4596bf-xnrx4 restart count ------------ [loop_until]: kubectl --namespace=xlou-sp get pod idm-54cf4596bf-xnrx4 -o jsonpath={.status.containerStatuses[*].restartCount} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 0 --- stderr --- Pod idm-54cf4596bf-xnrx4 has been restarted 0 times. 
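One caveat on the readiness checks above: `{.status.containerStatuses[*].ready}` emits one token per container, so for a multi-container pod `| grep "true"` succeeds as soon as any container is ready. A stricter variant (a hypothetical `all_ready` helper, not part of the framework) rejects a partially ready pod:

```shell
# containerStatuses[*].ready prints one token per container, e.g. "true false"
# for a pod whose second container is not ready yet.
all_ready() {
  case " $* " in
    "  ") return 1 ;;          # no container statuses reported
    *" false "*) return 1 ;;   # at least one container not ready
    *) return 0 ;;
  esac
}
all_ready true true  && echo "pod ready"
all_ready true false || echo "pod not ready"
```

The pods checked here run a single container, so the simpler grep is sufficient in this log.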
**************************** Initializing component pods for DS-CTS **************************** --------------------- Get expected number of pods --------------------- [loop_until]: kubectl --namespace=xlou-sp get statefulsets -l app=ds-cts -o jsonpath={.items[*].spec.replicas} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 3 --- stderr --- ---------------------------- Get pod list ---------------------------- [loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou-sp get pods -l app=ds-cts -o jsonpath={.items[*].metadata.name}` | grep 3 [loop_until]: (max_time=180, interval=10, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found [loop_until]: OK (rc = 0) --- stdout --- ds-cts-0 ds-cts-1 ds-cts-2 --- stderr --- -------------------- Check pod ds-cts-0 is running -------------------- [loop_until]: kubectl --namespace=xlou-sp get pods ds-cts-0 -o=jsonpath={.status.phase} | grep "Running" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Running --- stderr --- [loop_until]: kubectl --namespace=xlou-sp get pods ds-cts-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- true --- stderr --- [loop_until]: kubectl --namespace=xlou-sp get pod ds-cts-0 -o jsonpath={.status.startTime} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2022-08-04T19:37:19Z --- stderr --- ------------- Check pod ds-cts-0 filesystem is accessible ------------- [loop_until]: kubectl --namespace=xlou-sp exec ds-cts-0 -c ds -- ls / | grep "bin" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s 
(rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var --- stderr --- ------------------ Check pod ds-cts-0 restart count ------------------ [loop_until]: kubectl --namespace=xlou-sp get pod ds-cts-0 -o jsonpath={.status.containerStatuses[*].restartCount} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 0 --- stderr --- Pod ds-cts-0 has been restarted 0 times. -------------------- Check pod ds-cts-1 is running -------------------- [loop_until]: kubectl --namespace=xlou-sp get pods ds-cts-1 -o=jsonpath={.status.phase} | grep "Running" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Running --- stderr --- [loop_until]: kubectl --namespace=xlou-sp get pods ds-cts-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- true --- stderr --- [loop_until]: kubectl --namespace=xlou-sp get pod ds-cts-1 -o jsonpath={.status.startTime} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2022-08-04T19:37:51Z --- stderr --- ------------- Check pod ds-cts-1 filesystem is accessible ------------- [loop_until]: kubectl --namespace=xlou-sp exec ds-cts-1 -c ds -- ls / | grep "bin" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var --- stderr --- ------------------ Check pod ds-cts-1 restart count ------------------ [loop_until]: kubectl 
--namespace=xlou-sp get pod ds-cts-1 -o jsonpath={.status.containerStatuses[*].restartCount} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 0 --- stderr --- Pod ds-cts-1 has been restarted 0 times. -------------------- Check pod ds-cts-2 is running -------------------- [loop_until]: kubectl --namespace=xlou-sp get pods ds-cts-2 -o=jsonpath={.status.phase} | grep "Running" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Running --- stderr --- [loop_until]: kubectl --namespace=xlou-sp get pods ds-cts-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- true --- stderr --- [loop_until]: kubectl --namespace=xlou-sp get pod ds-cts-2 -o jsonpath={.status.startTime} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2022-08-04T19:38:22Z --- stderr --- ------------- Check pod ds-cts-2 filesystem is accessible ------------- [loop_until]: kubectl --namespace=xlou-sp exec ds-cts-2 -c ds -- ls / | grep "bin" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var --- stderr --- ------------------ Check pod ds-cts-2 restart count ------------------ [loop_until]: kubectl --namespace=xlou-sp get pod ds-cts-2 -o jsonpath={.status.containerStatuses[*].restartCount} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 0 --- stderr --- Pod ds-cts-2 has been restarted 0 times. 
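Similarly, `{.status.containerStatuses[*].restartCount}` prints one count per container. The pods here report a single `0`, but for multi-container pods the tokens can be summed so a crash-looping sidecar is not missed (hypothetical counts shown):

```shell
# Sum restart counts across containers; "0 3 0" is a hypothetical pod
# whose middle (sidecar) container has restarted three times.
counts="0 3 0"
total=0
for c in $counts; do total=$((total + c)); done
echo "Pod has been restarted $total times."
```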
************************** Initializing component pods for DS-IDREPO ************************** --------------------- Get expected number of pods --------------------- [loop_until]: kubectl --namespace=xlou-sp get statefulsets -l app=ds-idrepo -o jsonpath={.items[*].spec.replicas} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 3 --- stderr --- ---------------------------- Get pod list ---------------------------- [loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou-sp get pods -l app=ds-idrepo -o jsonpath={.items[*].metadata.name}` | grep 3 [loop_until]: (max_time=180, interval=10, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found [loop_until]: OK (rc = 0) --- stdout --- ds-idrepo-0 ds-idrepo-1 ds-idrepo-2 --- stderr --- ------------------ Check pod ds-idrepo-0 is running ------------------ [loop_until]: kubectl --namespace=xlou-sp get pods ds-idrepo-0 -o=jsonpath={.status.phase} | grep "Running" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Running --- stderr --- [loop_until]: kubectl --namespace=xlou-sp get pods ds-idrepo-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- true --- stderr --- [loop_until]: kubectl --namespace=xlou-sp get pod ds-idrepo-0 -o jsonpath={.status.startTime} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2022-08-04T19:37:20Z --- stderr --- ----------- Check pod ds-idrepo-0 filesystem is accessible ----------- [loop_until]: kubectl --namespace=xlou-sp exec ds-idrepo-0 -c ds -- ls / | grep "bin" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: 
Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var --- stderr --- ----------------- Check pod ds-idrepo-0 restart count ----------------- [loop_until]: kubectl --namespace=xlou-sp get pod ds-idrepo-0 -o jsonpath={.status.containerStatuses[*].restartCount} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 0 --- stderr --- Pod ds-idrepo-0 has been restarted 0 times. ------------------ Check pod ds-idrepo-1 is running ------------------ [loop_until]: kubectl --namespace=xlou-sp get pods ds-idrepo-1 -o=jsonpath={.status.phase} | grep "Running" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Running --- stderr --- [loop_until]: kubectl --namespace=xlou-sp get pods ds-idrepo-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- true --- stderr --- [loop_until]: kubectl --namespace=xlou-sp get pod ds-idrepo-1 -o jsonpath={.status.startTime} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2022-08-04T19:38:00Z --- stderr --- ----------- Check pod ds-idrepo-1 filesystem is accessible ----------- [loop_until]: kubectl --namespace=xlou-sp exec ds-idrepo-1 -c ds -- ls / | grep "bin" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var --- stderr --- ----------------- Check pod ds-idrepo-1 restart count 
----------------- [loop_until]: kubectl --namespace=xlou-sp get pod ds-idrepo-1 -o jsonpath={.status.containerStatuses[*].restartCount} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 0 --- stderr --- Pod ds-idrepo-1 has been restarted 0 times. ------------------ Check pod ds-idrepo-2 is running ------------------ [loop_until]: kubectl --namespace=xlou-sp get pods ds-idrepo-2 -o=jsonpath={.status.phase} | grep "Running" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Running --- stderr --- [loop_until]: kubectl --namespace=xlou-sp get pods ds-idrepo-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- true --- stderr --- [loop_until]: kubectl --namespace=xlou-sp get pod ds-idrepo-2 -o jsonpath={.status.startTime} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2022-08-04T19:38:40Z --- stderr --- ----------- Check pod ds-idrepo-2 filesystem is accessible ----------- [loop_until]: kubectl --namespace=xlou-sp exec ds-idrepo-2 -c ds -- ls / | grep "bin" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var --- stderr --- ----------------- Check pod ds-idrepo-2 restart count ----------------- [loop_until]: kubectl --namespace=xlou-sp get pod ds-idrepo-2 -o jsonpath={.status.containerStatuses[*].restartCount} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 0 --- stderr --- Pod ds-idrepo-2 has been 
restarted 0 times. ************************* Initializing component pods for END-USER-UI ************************* --------------------- Get expected number of pods --------------------- [loop_until]: kubectl --namespace=xlou-sp get deployments -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].spec.replicas} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 1 --- stderr --- ---------------------------- Get pod list ---------------------------- [loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou-sp get pods -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].metadata.name}` | grep 1 [loop_until]: (max_time=180, interval=10, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found [loop_until]: OK (rc = 0) --- stdout --- end-user-ui-75c7c9b7d8-s48k5 --- stderr --- ---------- Check pod end-user-ui-75c7c9b7d8-s48k5 is running ---------- [loop_until]: kubectl --namespace=xlou-sp get pods end-user-ui-75c7c9b7d8-s48k5 -o=jsonpath={.status.phase} | grep "Running" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Running --- stderr --- [loop_until]: kubectl --namespace=xlou-sp get pods end-user-ui-75c7c9b7d8-s48k5 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- true --- stderr --- [loop_until]: kubectl --namespace=xlou-sp get pod end-user-ui-75c7c9b7d8-s48k5 -o jsonpath={.status.startTime} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2022-08-04T19:37:15Z --- stderr --- --- Check pod end-user-ui-75c7c9b7d8-s48k5 filesystem is accessible --- [loop_until]: kubectl --namespace=xlou-sp exec 
end-user-ui-75c7c9b7d8-s48k5 -c end-user-ui -- ls / | grep "bin" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- bin boot dev docker-entrypoint.d docker-entrypoint.sh entrypoint.sh etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var variable_replacement.sh --- stderr --- -------- Check pod end-user-ui-75c7c9b7d8-s48k5 restart count -------- [loop_until]: kubectl --namespace=xlou-sp get pod end-user-ui-75c7c9b7d8-s48k5 -o jsonpath={.status.containerStatuses[*].restartCount} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 0 --- stderr --- Pod end-user-ui-75c7c9b7d8-s48k5 has been restarted 0 times. *************************** Initializing component pods for LOGIN-UI *************************** --------------------- Get expected number of pods --------------------- [loop_until]: kubectl --namespace=xlou-sp get deployments -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].spec.replicas} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 1 --- stderr --- ---------------------------- Get pod list ---------------------------- [loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou-sp get pods -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].metadata.name}` | grep 1 [loop_until]: (max_time=180, interval=10, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found [loop_until]: OK (rc = 0) --- stdout --- login-ui-95c688884-lxpvp --- stderr --- ------------ Check pod login-ui-95c688884-lxpvp is running ------------ [loop_until]: kubectl --namespace=xlou-sp get pods login-ui-95c688884-lxpvp -o=jsonpath={.status.phase} | grep "Running" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 
[loop_until]: OK (rc = 0) --- stdout --- Running --- stderr --- [loop_until]: kubectl --namespace=xlou-sp get pods login-ui-95c688884-lxpvp -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- true --- stderr --- [loop_until]: kubectl --namespace=xlou-sp get pod login-ui-95c688884-lxpvp -o jsonpath={.status.startTime} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2022-08-04T19:37:15Z --- stderr --- ----- Check pod login-ui-95c688884-lxpvp filesystem is accessible ----- [loop_until]: kubectl --namespace=xlou-sp exec login-ui-95c688884-lxpvp -c login-ui -- ls / | grep "bin" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- bin boot dev docker-entrypoint.d docker-entrypoint.sh entrypoint.sh etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var variable_replacement.sh --- stderr --- ---------- Check pod login-ui-95c688884-lxpvp restart count ---------- [loop_until]: kubectl --namespace=xlou-sp get pod login-ui-95c688884-lxpvp -o jsonpath={.status.containerStatuses[*].restartCount} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 0 --- stderr --- Pod login-ui-95c688884-lxpvp has been restarted 0 times. 
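The startTime values captured earlier for the DS pods (ds-idrepo-0/1/2 at 19:37:20Z, 19:38:00Z, 19:38:40Z; similarly for ds-cts) reflect the StatefulSet default OrderedReady pod management, which starts pods one at a time. Because RFC 3339 UTC timestamps collate chronologically as plain strings, the ordering can be asserted with `sort -c`:

```shell
# startTime values captured above for ds-idrepo-0..2; sort -c exits 0
# iff the input is already in (chronological) order.
printf '%s\n' \
  2022-08-04T19:37:20Z \
  2022-08-04T19:38:00Z \
  2022-08-04T19:38:40Z | sort -c && echo "pods started in ordinal order"
```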
*************************** Initializing component pods for ADMIN-UI ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-sp get deployments -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou-sp get pods -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].metadata.name}` | grep 1
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
admin-ui-75d858f46-m66zx
--- stderr ---
------------ Check pod admin-ui-75d858f46-m66zx is running ------------
[loop_until]: kubectl --namespace=xlou-sp get pods admin-ui-75d858f46-m66zx -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp get pods admin-ui-75d858f46-m66zx -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp get pod admin-ui-75d858f46-m66zx -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2022-08-04T19:37:14Z
--- stderr ---
----- Check pod admin-ui-75d858f46-m66zx filesystem is accessible -----
[loop_until]: kubectl --namespace=xlou-sp exec admin-ui-75d858f46-m66zx -c admin-ui -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin boot dev docker-entrypoint.d docker-entrypoint.sh entrypoint.sh etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var variable_replacement.sh
--- stderr ---
---------- Check pod admin-ui-75d858f46-m66zx restart count ----------
[loop_until]: kubectl --namespace=xlou-sp get pod admin-ui-75d858f46-m66zx -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod admin-ui-75d858f46-m66zx has been restarted 0 times.
******************************* Checking AM component is running *******************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-sp get deployments -l app=am -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2
--- stderr ---
-------------- Waiting for 2 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou-sp get deployments am -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:2 replicas:2"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:2 replicas:2
--- stderr ---
***************************** Checking AMSTER component is running *****************************
------------------ Waiting for Amster job to finish ------------------
--------------------- Get expected number of pods ---------------------
-------------- Waiting for 1 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou-sp get jobs amster -o jsonpath="{.status.succeeded}" | grep "1"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
****************************** Checking IDM component is running ******************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-sp get deployment -l app=idm -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2
--- stderr ---
-------------- Waiting for 2 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou-sp get deployment idm -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:2 replicas:2"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:2 replicas:2
--- stderr ---
***************************** Checking DS-CTS component is running *****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-sp get statefulsets -l app=ds-cts -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
-------------- Waiting for 3 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou-sp get statefulsets ds-cts -o jsonpath="current:{.status.currentReplicas} ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "current:3 ready:3 replicas:3"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
current:3 ready:3 replicas:3
--- stderr ---
*************************** Checking DS-IDREPO component is running ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-sp get statefulsets -l app=ds-idrepo -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
-------------- Waiting for 3 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou-sp get statefulsets ds-idrepo -o jsonpath="current:{.status.currentReplicas} ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "current:3 ready:3 replicas:3"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
current:3 ready:3 replicas:3
--- stderr ---
************************** Checking END-USER-UI component is running **************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-sp get deployments -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
-------------- Waiting for 1 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou-sp get deployments end-user-ui -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:1 replicas:1"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:1 replicas:1
--- stderr ---
**************************** Checking LOGIN-UI component is running ****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-sp get deployments -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
-------------- Waiting for 1 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou-sp get deployments login-ui -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:1 replicas:1"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:1 replicas:1
--- stderr ---
**************************** Checking ADMIN-UI component is running ****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-sp get deployments -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
-------------- Waiting for 1 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou-sp get deployments admin-ui -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:1 replicas:1"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:1 replicas:1
--- stderr ---
****************************** Livecheck stage: After deployment ******************************
------------------------ Running AM livecheck ------------------------
Livecheck to https://xlou-sp.xlou-cdm.perf.freng.org/am/json/health/ready
[http_cmd]: curl --insecure -L -X GET "https://xlou-sp.xlou-cdm.perf.freng.org/am/json/health/ready"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
[loop_until]: kubectl --namespace=xlou-sp get secret am-env-secrets -o jsonpath="{.data.AM_PASSWORDS_AMADMIN_CLEAR}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
V3d6UFh1N3QzNDZENTFjU2gwRVpTWVpx
--- stderr ---
Authenticate user via REST
[http_cmd]: curl -H "X-OpenAM-Username: amadmin" -H "X-OpenAM-Password: WwzPXu7t346D51cSh0EZSYZq" -H "Content-Type: application/json" -H "Accept-API-Version: resource=2.0, protocol=1.0" --insecure -L -X POST "https://xlou-sp.xlou-cdm.perf.freng.org/am/json/authenticate?realm=/"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
{ "tokenId": "L7hLjTYSHvG6Jglk6JMDUCdi8nw.*AAJTSQACMDIAAlNLABxqbVNEbzV3Vi9nRXBweWV2WnhiSjA3aHNZK009AAR0eXBlAANDVFMAAlMxAAIwMQ..*", "successUrl": "/am/console", "realm": "/" }
---------------------- Running AMSTER livecheck ----------------------
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou-sp --field-selector status.phase!=Failed get pods -l app=amster -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
amster-l744t
--- stderr ---
Amster import completed.
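Note that the secret value printed in the stdout above is base64-encoded, since `kubectl get secret -o jsonpath` returns raw Secret data; the harness decodes it before using it as the `X-OpenAM-Password` header. A cluster-free sketch of that decoding step, with the values copied from this log:

```shell
# Secret data returned by 'kubectl get secret' is base64-encoded;
# decoding it recovers the amadmin password used in the curl call above.
encoded="V3d6UFh1N3QzNDZENTFjU2gwRVpTWVpx"
password=$(printf '%s' "$encoded" | base64 -d)
echo "$password"   # WwzPXu7t346D51cSh0EZSYZq

# Against the live cluster the full pipeline would look like (sketch):
# kubectl --namespace=xlou-sp get secret am-env-secrets \
#   -o jsonpath="{.data.AM_PASSWORDS_AMADMIN_CLEAR}" | base64 -d
```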
AM is now configured
Amster livecheck passed
------------------------ Running IDM livecheck ------------------------
Livecheck to https://xlou-sp.xlou-cdm.perf.freng.org/openidm/info/ping
[loop_until]: kubectl --namespace=xlou-sp get secret idm-env-secrets -o jsonpath="{.data.OPENIDM_ADMIN_PASSWORD}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ZWxxMGc0NW9SZkF2bUUxUUVWV3pqaVdV
--- stderr ---
Set admin password: elq0g45oRfAvmE1QEVWzjiWU
[http_cmd]: curl -H "X-OpenIDM-Username: anonymous" -H "X-OpenIDM-Password: anonymous" --insecure -L -X GET "https://xlou-sp.xlou-cdm.perf.freng.org/openidm/info/ping"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
{ "_id": "", "_rev": "", "shortDesc": "OpenIDM ready", "state": "ACTIVE_READY" }
---------------------- Running DS-CTS livecheck ----------------------
Livecheck to ds-cts-0
[loop_until]: kubectl --namespace=xlou-sp get secret ds-passwords -o jsonpath="{.data.dirmanager\.pw}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ZnFIWE1lUjl2UE5VN2ZsWFFQSXFkNWd5anQwMjFDbXg=
--- stderr ---
[run_command]: kubectl --namespace=xlou-sp exec ds-cts-0 -c ds -- ldapsearch --noPropertiesFile -p 1389 --useStartTls --trustAll -D "uid=admin" -w "fqHXMeR9vPNU7flXQPIqd5gyjt021Cmx" -b "" -s base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
Livecheck to ds-cts-1
[run_command]: kubectl --namespace=xlou-sp exec ds-cts-1 -c ds -- ldapsearch --noPropertiesFile -p 1389 --useStartTls --trustAll -D "uid=admin" -w "fqHXMeR9vPNU7flXQPIqd5gyjt021Cmx" -b "" -s base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
Livecheck to ds-cts-2
[run_command]: kubectl --namespace=xlou-sp exec ds-cts-2 -c ds -- ldapsearch --noPropertiesFile -p 1389 --useStartTls --trustAll -D "uid=admin" -w "fqHXMeR9vPNU7flXQPIqd5gyjt021Cmx" -b "" -s base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
--------------------- Running DS-IDREPO livecheck ---------------------
Livecheck to ds-idrepo-0
[loop_until]: kubectl --namespace=xlou-sp get secret ds-passwords -o jsonpath="{.data.dirmanager\.pw}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ZnFIWE1lUjl2UE5VN2ZsWFFQSXFkNWd5anQwMjFDbXg=
--- stderr ---
[run_command]: kubectl --namespace=xlou-sp exec ds-idrepo-0 -c ds -- ldapsearch --noPropertiesFile -p 1389 --useStartTls --trustAll -D "uid=admin" -w "fqHXMeR9vPNU7flXQPIqd5gyjt021Cmx" -b "" -s base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
Livecheck to ds-idrepo-1
[run_command]: kubectl --namespace=xlou-sp exec ds-idrepo-1 -c ds -- ldapsearch --noPropertiesFile -p 1389 --useStartTls --trustAll -D "uid=admin" -w "fqHXMeR9vPNU7flXQPIqd5gyjt021Cmx" -b "" -s base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
Livecheck to ds-idrepo-2
[run_command]: kubectl --namespace=xlou-sp exec ds-idrepo-2 -c ds -- ldapsearch --noPropertiesFile -p 1389 --useStartTls --trustAll -D "uid=admin" -w "fqHXMeR9vPNU7flXQPIqd5gyjt021Cmx" -b "" -s base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
-------------------- Running END-USER-UI livecheck --------------------
Livecheck to https://xlou-sp.xlou-cdm.perf.freng.org/enduser
[http_cmd]: curl --insecure -L -X GET "https://xlou-sp.xlou-cdm.perf.freng.org/enduser"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
Identity Management
[]
--------------------- Running LOGIN-UI livecheck ---------------------
Livecheck to https://xlou-sp.xlou-cdm.perf.freng.org/am/XUI
[http_cmd]: curl --insecure -L -X GET "https://xlou-sp.xlou-cdm.perf.freng.org/am/XUI"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
Login
[]
--------------------- Running ADMIN-UI livecheck ---------------------
Livecheck to https://xlou-sp.xlou-cdm.perf.freng.org/platform
[http_cmd]: curl --insecure -L -X GET "https://xlou-sp.xlou-cdm.perf.freng.org/platform"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
Platform Admin
[]
LIVECHECK SUCCEEDED
****************************** Initializing component pods for AM ******************************
----------------------- Get AM software version -----------------------
Getting product version from https://xlou-sp.xlou-cdm.perf.freng.org/am/json/serverinfo/version
- Login amadmin to get token
Authenticate user via REST
[http_cmd]: curl -H "X-OpenAM-Username: amadmin" -H "X-OpenAM-Password: WwzPXu7t346D51cSh0EZSYZq" -H "Content-Type: application/json" -H "Accept-API-Version: resource=2.0, protocol=1.0" --insecure -L -X POST "https://xlou-sp.xlou-cdm.perf.freng.org/am/json/authenticate?realm=/"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
{ "tokenId": "Wpz-f5gOjtT2F_T2QEjBiePnfPQ.*AAJTSQACMDIAAlNLABxPTjduSkVvekwxaERJaVRRRWM3SmFTblYxZHc9AAR0eXBlAANDVFMAAlMxAAIwMQ..*", "successUrl": "/am/console", "realm": "/" }
[http_cmd]: curl --insecure -L -X GET --cookie "amlbcookie=01" --cookie "iPlanetDirectoryPro=Wpz-f5gOjtT2F_T2QEjBiePnfPQ.*AAJTSQACMDIAAlNLABxPTjduSkVvekwxaERJaVRRRWM3SmFTblYxZHc9AAR0eXBlAANDVFMAAlMxAAIwMQ..*" --cookie "route=1659642009.822.8725.440949|95d24137157607aab620392fd4bfbc15" "https://xlou-sp.xlou-cdm.perf.freng.org/am/json/serverinfo/version"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
{ "_id": "version", "_rev": "848224852", "version": "7.3.0-SNAPSHOT", "fullVersion": "ForgeRock Access Management 7.3.0-SNAPSHOT Build ef7fed73391311a2849509d1598d404ea1347307 (2022-August-04 13:06)", "revision": "ef7fed73391311a2849509d1598d404ea1347307", "date": "2022-August-04 13:06" }
**************************** Initializing component pods for AMSTER ****************************
***************************** Initializing component pods for IDM *****************************
---------------------- Get IDM software version ----------------------
Getting product version from
https://xlou-sp.xlou-cdm.perf.freng.org/openidm/info/version
[http_cmd]: curl -H "X-OpenIDM-Username: anonymous" -H "X-OpenIDM-Password: anonymous" --insecure -L -X GET "https://xlou-sp.xlou-cdm.perf.freng.org/openidm/info/version"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
{ "_id": "version", "productVersion": "7.3.0-SNAPSHOT", "productBuildDate": "20220803152508", "productRevision": "dcec447" }
**************************** Initializing component pods for DS-CTS ****************************
--------------------- Get DS-CTS software version ---------------------
[loop_until]: kubectl --namespace=xlou-sp exec ds-cts-0 -c ds -- find /opt/opendj -name opendj-core.jar
[loop_until]: (max_time=30, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
/opt/opendj/lib/opendj-core.jar
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp cp ds-cts-0:/opt/opendj/lib/opendj-core.jar /tmp/ds-cts_info/opendj-core.jar -c ds
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
************************** Initializing component pods for DS-IDREPO **************************
------------------- Get DS-IDREPO software version -------------------
[loop_until]: kubectl --namespace=xlou-sp exec ds-idrepo-0 -c ds -- find /opt/opendj -name opendj-core.jar
[loop_until]: (max_time=30, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
/opt/opendj/lib/opendj-core.jar
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp cp ds-idrepo-0:/opt/opendj/lib/opendj-core.jar /tmp/ds-idrepo_info/opendj-core.jar -c ds
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
************************* Initializing component pods for END-USER-UI *************************
------------------ Get END-USER-UI software version ------------------
[loop_until]: kubectl --namespace=xlou-sp exec end-user-ui-75c7c9b7d8-s48k5 -c end-user-ui -- find /usr/share/nginx/html -name chunk-vendors.*.js
[loop_until]: (max_time=30, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
/usr/share/nginx/html/js/chunk-vendors.810b8cc5.js
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp cp end-user-ui-75c7c9b7d8-s48k5:/usr/share/nginx/html/js/chunk-vendors.810b8cc5.js /tmp/end-user-ui_info/chunk-vendors.810b8cc5.js -c end-user-ui
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
*************************** Initializing component pods for LOGIN-UI ***************************
-------------------- Get LOGIN-UI software version --------------------
[loop_until]: kubectl --namespace=xlou-sp exec login-ui-95c688884-lxpvp -c login-ui -- find /usr/share/nginx/html -name chunk-vendors.*.js
[loop_until]: (max_time=30, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
/usr/share/nginx/html/js/chunk-vendors.78e49524.js
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp cp login-ui-95c688884-lxpvp:/usr/share/nginx/html/js/chunk-vendors.78e49524.js /tmp/login-ui_info/chunk-vendors.78e49524.js -c login-ui
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
*************************** Initializing component pods for ADMIN-UI ***************************
-------------------- Get ADMIN-UI software version --------------------
[loop_until]: kubectl --namespace=xlou-sp exec admin-ui-75d858f46-m66zx -c admin-ui -- find /usr/share/nginx/html -name chunk-vendors.*.js
[loop_until]: (max_time=30, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
/usr/share/nginx/html/js/chunk-vendors.ce429e2b.js
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp cp admin-ui-75d858f46-m66zx:/usr/share/nginx/html/js/chunk-vendors.ce429e2b.js /tmp/admin-ui_info/chunk-vendors.ce429e2b.js -c admin-ui
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
====================================================================================================
====================== Admin password for AM is: WwzPXu7t346D51cSh0EZSYZq ======================
====================================================================================================
====================================================================================================
===================== Admin password for IDM is: elq0g45oRfAvmE1QEVWzjiWU =====================
====================================================================================================
====================================================================================================
================ Admin password for DS-CTS is: fqHXMeR9vPNU7flXQPIqd5gyjt021Cmx ================
====================================================================================================
====================================================================================================
============== Admin password for DS-IDREPO is: fqHXMeR9vPNU7flXQPIqd5gyjt021Cmx ==============
====================================================================================================
*************************************** Dumping pod list ***************************************
Dumping pod list to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/sp/20220804-194021-after-deployment/_pod-list.txt
****************************** Initializing component pods for AM ******************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-sp get deployments -l app=am -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou-sp get pods -l app=am -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
am-79fd59c494-bxmnk am-79fd59c494-fshqw
--- stderr ---
**************************** Initializing component pods for AMSTER ****************************
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou-sp get pods -l app=amster -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
amster-l744t
--- stderr ---
***************************** Initializing component pods for IDM *****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-sp get deployment -l app=idm -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou-sp get pods -l app=idm -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
idm-54cf4596bf-8whkl idm-54cf4596bf-xnrx4
--- stderr ---
**************************** Initializing component pods for DS-CTS ****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-sp get statefulsets -l app=ds-cts -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
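The "Get pod list" steps compare the number of whitespace-separated pod names against the `.spec.replicas` value fetched just before them (the `awk -F" " "{print NF}"` trick seen earlier in the log). A cluster-free sketch of that comparison, using sample output copied from this log:

```shell
# Count whitespace-separated pod names and compare with .spec.replicas.
replicas=3
pods="ds-cts-0 ds-cts-1 ds-cts-2"                  # jsonpath output from the log
count=$(printf '%s\n' "$pods" | awk '{print NF}')  # NF = number of fields
echo "$count"   # 3
[ "$count" -eq "$replicas" ] && echo "expected number of elements found"
```

printf piped into awk is used here instead of the log's bash-only `<<<` herestring so the sketch also runs under plain sh.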
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou-sp get pods -l app=ds-cts -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ds-cts-0 ds-cts-1 ds-cts-2
--- stderr ---
************************** Initializing component pods for DS-IDREPO **************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-sp get statefulsets -l app=ds-idrepo -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou-sp get pods -l app=ds-idrepo -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ds-idrepo-0 ds-idrepo-1 ds-idrepo-2
--- stderr ---
************************* Initializing component pods for END-USER-UI *************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-sp get deployments -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou-sp get pods -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
end-user-ui-75c7c9b7d8-s48k5
--- stderr ---
*************************** Initializing component pods for LOGIN-UI ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-sp get deployments -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou-sp get pods -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
login-ui-95c688884-lxpvp
--- stderr ---
*************************** Initializing component pods for ADMIN-UI ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-sp get deployments -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou-sp get pods -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
admin-ui-75d858f46-m66zx
--- stderr ---
*********************************** Dumping components logs ***********************************
------------------------- Dumping logs for AM -------------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/sp/20220804-194021-after-deployment/am-79fd59c494-bxmnk.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/sp/20220804-194021-after-deployment/am-79fd59c494-fshqw.txt
Check pod logs for errors
----------------------- Dumping logs for AMSTER -----------------------
Dumping pod description and logs to
/mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/sp/20220804-194021-after-deployment/amster-l744t.txt
Check pod logs for errors
------------------------ Dumping logs for IDM ------------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/sp/20220804-194021-after-deployment/idm-54cf4596bf-8whkl.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/sp/20220804-194021-after-deployment/idm-54cf4596bf-xnrx4.txt
Check pod logs for errors
----------------------- Dumping logs for DS-CTS -----------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/sp/20220804-194021-after-deployment/ds-cts-0.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/sp/20220804-194021-after-deployment/ds-cts-1.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/sp/20220804-194021-after-deployment/ds-cts-2.txt
Check pod logs for errors
--------------------- Dumping logs for DS-IDREPO ---------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/sp/20220804-194021-after-deployment/ds-idrepo-0.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/sp/20220804-194021-after-deployment/ds-idrepo-1.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/sp/20220804-194021-after-deployment/ds-idrepo-2.txt
Check pod logs for errors
-------------------- Dumping logs for END-USER-UI --------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/sp/20220804-194021-after-deployment/end-user-ui-75c7c9b7d8-s48k5.txt
Check pod logs for errors
---------------------- Dumping logs for LOGIN-UI ----------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/sp/20220804-194021-after-deployment/login-ui-95c688884-lxpvp.txt
Check pod logs for errors
---------------------- Dumping logs for ADMIN-UI ----------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/sp/20220804-194021-after-deployment/admin-ui-75d858f46-m66zx.txt
Check pod logs for errors
[04/Aug/2022 19:40:41] - INFO: Deployment successful
________________________________________________________________________________
[04/Aug/2022 19:40:41] Deploy_all_forgerock_components post : Post method
________________________________________________________________________________
Setting result to PASS
Task has been successfully stopped
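For reference, the version strings recorded during this run (AM and IDM both report 7.3.0-SNAPSHOT) came from JSON bodies such as the `/am/json/serverinfo/version` response earlier in the log. A cluster-free sketch of pulling the `version` field out of such a body with `sed` (response abbreviated here; a real JSON parser such as `jq` would be more robust against key ordering and whitespace):

```shell
# Extract "version" from an abbreviated copy of the serverinfo response.
resp='{"_id":"version","_rev":"848224852","version":"7.3.0-SNAPSHOT"}'
version=$(printf '%s' "$resp" | sed -n 's/.*"version":"\([^"]*\)".*/\1/p')
echo "$version"   # 7.3.0-SNAPSHOT
```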