--Task--
name: Deploy_all_forgerock_components
enabled: True
class_name: DeployComponentsTask
source_name: controller
source_namespace: default
target_name: controller
target_namespace: default
start: 0
stop: None
timeout: no timeout
loop: False
interval: None
dependencies: ['Enable_prometheus_admin_api']
wait_for: []
options: {}
group_name: None
Current dir: /mnt/disks/data/xslou/lodestar-fork/pyrock
________________________________________________________________________________
[07/Aug/2022 16:09:12] Deploy_all_forgerock_components pre : Initialising task parameters
________________________________________________________________________________
task will be executed on controller (localhost)
________________________________________________________________________________
[07/Aug/2022 16:09:12] Deploy_all_forgerock_components step1 : Deploy components
________________________________________________________________________________
******************************** Cleaning up existing namespace ********************************
------------------ Deleting secret agent controller ------------------
[loop_until]: kubectl --namespace=xlou delete sac --all
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secretagentconfiguration.secret-agent.secrets.forgerock.io "forgerock-sac" deleted
--- stderr ---
----------------------- Deleting all resources -----------------------
[loop_until]: kubectl --namespace=xlou delete all --all --grace-period=0 --force
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
pod "admin-ui-f7d498545-7c9hg" force deleted
pod "am-5d5df7bfb9-jbvrt" force deleted
pod "am-5d5df7bfb9-r57vt" force deleted
pod "am-5d5df7bfb9-wbcws" force deleted
pod "amster-wq5dq" force deleted
pod "ds-cts-0" force deleted
pod "ds-cts-1" force deleted
pod "ds-cts-2" force deleted
pod "ds-idrepo-0" force deleted
pod "ds-idrepo-1" force deleted
pod "ds-idrepo-2" force deleted
pod "end-user-ui-797bd795d9-6fhcl" force deleted
pod "idm-79bf57cfc8-8zlkj" force deleted
pod "idm-79bf57cfc8-zp8zs" force deleted
pod "ldif-importer-v282w" force deleted
pod "login-ui-64f6867fc5-rkk9w" force deleted
pod "overseer-0-75f557ccc7-khfk8" force deleted
service "admin-ui" force deleted
service "am" force deleted
service "ds-cts" force deleted
service "ds-idrepo" force deleted
service "end-user-ui" force deleted
service "idm" force deleted
service "login-ui" force deleted
service "overseer-0" force deleted
deployment.apps "admin-ui" force deleted
deployment.apps "am" force deleted
deployment.apps "end-user-ui" force deleted
deployment.apps "idm" force deleted
deployment.apps "login-ui" force deleted
deployment.apps "overseer-0" force deleted
statefulset.apps "ds-cts" force deleted
statefulset.apps "ds-idrepo" force deleted
job.batch "amster" force deleted
job.batch "ldif-importer" force deleted
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
[loop_until]: kubectl -n xlou get pods | grep "No resources found"
[loop_until]: (max_time=360, interval=10, expected_rc=[0])
[loop_until]: Command returned after 0s (rc=0) - failed to find expected output: No resources found - retrying
[loop_until]: Function succeeded after 10s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
No resources found in xlou namespace.
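The `[loop_until]` wrapper used throughout this log re-runs a shell command until its return code (and, optionally, a pattern in its output) matches, giving up after `max_time` seconds. A minimal Python sketch of that behaviour, assuming a hypothetical reimplementation (the real pyrock helper may differ):

```python
import subprocess
import time

def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,),
               expected_output=None, runner=subprocess.run):
    """Re-run `cmd` until its rc is in expected_rc and, if given,
    expected_output appears in stdout/stderr; raise on timeout.
    Hypothetical sketch of the loop_until seen in this log."""
    deadline = time.monotonic() + max_time
    while True:
        proc = runner(cmd, shell=True, capture_output=True, text=True)
        ok = proc.returncode in expected_rc and (
            expected_output is None
            or expected_output in proc.stdout + proc.stderr)
        if ok:
            return proc
        if time.monotonic() >= deadline:
            raise TimeoutError(f"gave up on: {cmd}")
        time.sleep(interval)
```

The `runner` parameter is an illustrative seam for testing; the pattern check against both streams matches the log above, where "No resources found" arrives on stderr.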
------------------------- Deleting configmap -------------------------
[loop_until]: kubectl --namespace=xlou get configmap -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
idm idm-logging-properties kube-root-ca.crt overseer-config-0 platform-config
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap idm --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "idm" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap idm-logging-properties --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "idm-logging-properties" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap kube-root-ca.crt --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "kube-root-ca.crt" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap overseer-config-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "overseer-config-0" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap platform-config --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "platform-config" deleted
--- stderr ---
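Each "Deleting X" section above follows the same list-then-delete pattern: fetch all names with a jsonpath query, then delete them one at a time with `--ignore-not-found` so reruns stay idempotent. A sketch of how those per-resource commands could be built from the jsonpath output (`delete_commands` is an illustrative helper, not part of pyrock):

```python
def delete_commands(namespace, kind, names_stdout):
    """Given the space-separated name list printed by
    `kubectl get <kind> -o jsonpath={.items[*].metadata.name}`,
    build one idempotent delete command per resource."""
    return [
        f"kubectl --namespace={namespace} delete {kind} {name} --ignore-not-found"
        for name in names_stdout.split()
    ]
```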
--------------------------- Deleting secret ---------------------------
[loop_until]: kubectl --namespace=xlou get secret -o jsonpath='{.items[?(@.type=="Opaque")].metadata.name}'
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
cloud-storage-credentials-cts cloud-storage-credentials-idrepo
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete secret cloud-storage-credentials-cts --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secret "cloud-storage-credentials-cts" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete secret cloud-storage-credentials-idrepo --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secret "cloud-storage-credentials-idrepo" deleted
--- stderr ---
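The secret query above uses a jsonpath filter, `{.items[?(@.type=="Opaque")].metadata.name}`, so only user-created Opaque secrets are deleted while service-account tokens and TLS secrets are left alone. The same selection expressed in Python over the parsed `kubectl get secret -o json` items (illustrative helper):

```python
def opaque_secret_names(items):
    """Keep only secrets of type Opaque, mirroring the jsonpath
    filter {.items[?(@.type=="Opaque")].metadata.name}."""
    return [it["metadata"]["name"] for it in items if it.get("type") == "Opaque"]
```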
-------------------------- Deleting ingress --------------------------
[loop_until]: kubectl --namespace=xlou get ingress -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
forgerock ig-web overseer-0
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete ingress forgerock --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ingress.networking.k8s.io "forgerock" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete ingress ig-web --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ingress.networking.k8s.io "ig-web" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete ingress overseer-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ingress.networking.k8s.io "overseer-0" deleted
--- stderr ---
---------------------------- Deleting pvc ----------------------------
[loop_until]: kubectl --namespace=xlou get pvc -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
data-ds-cts-0 data-ds-cts-1 data-ds-cts-2 data-ds-idrepo-0 data-ds-idrepo-1 data-ds-idrepo-2 overseer-0
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-cts-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-cts-0" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-cts-1 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-cts-1" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-cts-2 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-cts-2" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-idrepo-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-idrepo-0" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-idrepo-1 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-idrepo-1" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-idrepo-2 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-idrepo-2" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc overseer-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "overseer-0" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pv ds-backup-xlou --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
warning: deleting cluster-scoped resources, not scoped to the provided namespace
------------------------- Deleting namespace -------------------------
[loop_until]: kubectl delete namespaces xlou --ignore-not-found --grace-period=0 --force
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
namespace "xlou" force deleted
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
[loop_until]: awk -F" " "{print NF}" <<< `kubectl get namespace xlou --ignore-not-found` | grep 0
[loop_until]: (max_time=600, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
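The final namespace check is a shell idiom: `awk -F" " '{print NF}'` prints the number of whitespace-separated fields in whatever `kubectl get namespace xlou --ignore-not-found` returned, and because `--ignore-not-found` prints nothing once the namespace is gone, a field count of 0 (matched by `grep 0`) confirms deletion. The same logic in Python, with a hypothetical helper name:

```python
def namespace_gone(kubectl_output):
    """True when `kubectl get namespace <ns> --ignore-not-found`
    printed nothing, i.e. the namespace no longer exists
    (the Python analogue of awk '{print NF}' | grep 0)."""
    return len(kubectl_output.split()) == 0
```

Note the shell version is slightly loose: `grep 0` would also match a field count like 10, so a direct comparison as above is the safer form.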
------------------ Deleting secret agent controller ------------------
[loop_until]: kubectl --namespace=xlou-sp delete sac --all
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secretagentconfiguration.secret-agent.secrets.forgerock.io "forgerock-sac" deleted
--- stderr ---
----------------------- Deleting all resources -----------------------
[loop_until]: kubectl --namespace=xlou-sp delete all --all --grace-period=0 --force
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
pod "admin-ui-75d4dcddd9-hh7x8" force deleted
pod "am-7cb58cf89f-6d9c5" force deleted
pod "am-7cb58cf89f-w7p64" force deleted
pod "amster-9pgbm" force deleted
pod "ds-cts-0" force deleted
pod "ds-cts-1" force deleted
pod "ds-cts-2" force deleted
pod "ds-idrepo-0" force deleted
pod "ds-idrepo-1" force deleted
pod "ds-idrepo-2" force deleted
pod "end-user-ui-7c69fbf965-d2cqs" force deleted
pod "idm-7c4bb66fdf-q6n4d" force deleted
pod "idm-7c4bb66fdf-tq7xz" force deleted
pod "ldif-importer-4kmsc" force deleted
pod "login-ui-75d4dbc487-2szsv" force deleted
service "admin-ui" force deleted
service "am" force deleted
service "ds-cts" force deleted
service "ds-idrepo" force deleted
service "end-user-ui" force deleted
service "idm" force deleted
service "login-ui" force deleted
deployment.apps "admin-ui" force deleted
deployment.apps "am" force deleted
deployment.apps "end-user-ui" force deleted
deployment.apps "idm" force deleted
deployment.apps "login-ui" force deleted
statefulset.apps "ds-cts" force deleted
statefulset.apps "ds-idrepo" force deleted
job.batch "amster" force deleted
job.batch "ldif-importer" force deleted
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
[loop_until]: kubectl -n xlou-sp get pods | grep "No resources found"
[loop_until]: (max_time=360, interval=10, expected_rc=[0])
[loop_until]: Command returned after 0s (rc=0) - failed to find expected output: No resources found - retrying
[loop_until]: Function succeeded after 10s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
No resources found in xlou-sp namespace.
------------------------- Deleting configmap -------------------------
[loop_until]: kubectl --namespace=xlou-sp get configmap -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
idm idm-logging-properties kube-root-ca.crt platform-config
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp delete configmap idm --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "idm" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp delete configmap idm-logging-properties --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "idm-logging-properties" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp delete configmap kube-root-ca.crt --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "kube-root-ca.crt" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp delete configmap platform-config --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "platform-config" deleted
--- stderr ---
--------------------------- Deleting secret ---------------------------
[loop_until]: kubectl --namespace=xlou-sp get secret -o jsonpath='{.items[?(@.type=="Opaque")].metadata.name}'
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
cloud-storage-credentials-cts cloud-storage-credentials-idrepo
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp delete secret cloud-storage-credentials-cts --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secret "cloud-storage-credentials-cts" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp delete secret cloud-storage-credentials-idrepo --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secret "cloud-storage-credentials-idrepo" deleted
--- stderr ---
-------------------------- Deleting ingress --------------------------
[loop_until]: kubectl --namespace=xlou-sp get ingress -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
forgerock ig-web
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp delete ingress forgerock --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ingress.networking.k8s.io "forgerock" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp delete ingress ig-web --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ingress.networking.k8s.io "ig-web" deleted
--- stderr ---
---------------------------- Deleting pvc ----------------------------
[loop_until]: kubectl --namespace=xlou-sp get pvc -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
data-ds-cts-0 data-ds-cts-1 data-ds-cts-2 data-ds-idrepo-0 data-ds-idrepo-1 data-ds-idrepo-2
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp delete pvc data-ds-cts-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-cts-0" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp delete pvc data-ds-cts-1 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-cts-1" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp delete pvc data-ds-cts-2 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-cts-2" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp delete pvc data-ds-idrepo-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-idrepo-0" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp delete pvc data-ds-idrepo-1 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-idrepo-1" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp delete pvc data-ds-idrepo-2 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-idrepo-2" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp delete pv ds-backup-xlou-sp --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
warning: deleting cluster-scoped resources, not scoped to the provided namespace
------------------------- Deleting namespace -------------------------
[loop_until]: kubectl delete namespaces xlou-sp --ignore-not-found --grace-period=0 --force
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
namespace "xlou-sp" force deleted
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
[loop_until]: awk -F" " "{print NF}" <<< `kubectl get namespace xlou-sp --ignore-not-found` | grep 0
[loop_until]: (max_time=600, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
************************************* Creating deployment *************************************
Creating normal (forgeops) type deployment for deployment: stack
------- Custom component configuration present. Loading values -------
------------------ Deleting secret agent controller ------------------
[loop_until]: kubectl --namespace=xlou delete sac --all
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
No resources found
--- stderr ---
----------------------- Deleting all resources -----------------------
[loop_until]: kubectl --namespace=xlou delete all --all --grace-period=0 --force
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
No resources found
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
[loop_until]: kubectl -n xlou get pods | grep "No resources found"
[loop_until]: (max_time=360, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
No resources found in xlou namespace.
------------------------- Deleting configmap -------------------------
[loop_until]: kubectl --namespace=xlou get configmap -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
--------------------------- Deleting secret ---------------------------
[loop_until]: kubectl --namespace=xlou get secret -o jsonpath='{.items[?(@.type=="Opaque")].metadata.name}'
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
-------------------------- Deleting ingress --------------------------
[loop_until]: kubectl --namespace=xlou get ingress -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
---------------------------- Deleting pvc ----------------------------
[loop_until]: kubectl --namespace=xlou get pvc -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pv ds-backup-xlou --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
warning: deleting cluster-scoped resources, not scoped to the provided namespace
------------------------- Deleting namespace -------------------------
[loop_until]: kubectl delete namespaces xlou --ignore-not-found --grace-period=0 --force
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
[loop_until]: awk -F" " "{print NF}" <<< `kubectl get namespace xlou --ignore-not-found` | grep 0
[loop_until]: (max_time=600, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
[loop_until]: kubectl create namespace xlou
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
namespace/xlou created
--- stderr ---
[loop_until]: kubectl label namespace xlou self-service=false timeout=48
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
namespace/xlou labeled
--- stderr ---
************************************ Configuring components ************************************
Applying custom configuration, dockerfiles to deployment and custom lodestar component configuration
Copying /mnt/disks/data/xslou/lodestar-fork/shared/config/custom/saml2/kustomize/overlay to /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/overlay
No custom features provided. Nothing to do.
---- Updating components image tag/repo from platform-images repo ----
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --clean
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Cleaning up.
[WARNING] Found nothing to clean.
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref master-ready-for-dev-pipelines --products am
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images.
[INFO] Repo is at a80cd976f9bdd117952f99523db5e6447c4f1f3f on branch HEAD
[INFO] Updating products am
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am/Dockerfile to use gcr.io/forgerock-io/am-cdk/pit1:7.3.0-7c2ce51c1b0baf635933e3d41b38c25ee535de24
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref master-ready-for-dev-pipelines --products amster
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images.
[INFO] Found existing files, attempting to not clone
[INFO] Repo is at a80cd976f9bdd117952f99523db5e6447c4f1f3f on branch HEAD
[INFO] Updating products amster
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster/Dockerfile to use gcr.io/forgerock-io/amster/pit1:7.3.0-7c2ce51c1b0baf635933e3d41b38c25ee535de24
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref master-ready-for-dev-pipelines --products idm
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images.
[INFO] Found existing files, attempting to not clone
[INFO] Repo is at a80cd976f9bdd117952f99523db5e6447c4f1f3f on branch HEAD
[INFO] Updating products idm
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm/Dockerfile to use gcr.io/forgerock-io/idm-cdk/pit1:7.3.0-7edcaf211d615f737f7c180f109be15496e5cc1c
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref master-ready-for-dev-pipelines --products ds
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images.
[INFO] Found existing files, attempting to not clone
[INFO] Repo is at a80cd976f9bdd117952f99523db5e6447c4f1f3f on branch HEAD
[INFO] Updating products ds
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/dsutil/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/ds-new/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/proxy/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref master-ready-for-dev-pipelines --products ui
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images.
[INFO] Found existing files, attempting to not clone
[INFO] Repo is at a80cd976f9bdd117952f99523db5e6447c4f1f3f on branch HEAD
[INFO] Updating products ui
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui to use gcr.io/forgerock-io/platform-admin-ui/docker-build:7.2.0-6f1979a1d015a243666b3ba95d697c59b85b536d
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui to use gcr.io/forgerock-io/platform-enduser-ui/docker-build:7.2.0-6f1979a1d015a243666b3ba95d697c59b85b536d
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui to use gcr.io/forgerock-io/platform-login-ui/docker-build:7.2.0-6f1979a1d015a243666b3ba95d697c59b85b536d
--- stderr ---
- Checking if component Dockerfile/kustomize needs additional update -
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker am
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am
--- stderr ---
Checking if am dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am/Dockerfile: gcr.io/forgerock-io/am-cdk/pit1:7.3.0-7c2ce51c1b0baf635933e3d41b38c25ee535de24
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker amster
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster
--- stderr ---
Checking if amster dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster/Dockerfile: gcr.io/forgerock-io/amster/pit1:7.3.0-7c2ce51c1b0baf635933e3d41b38c25ee535de24
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker idm
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm
--- stderr ---
Checking if idm dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm/Dockerfile: gcr.io/forgerock-io/idm-cdk/pit1:7.3.0-7edcaf211d615f737f7c180f109be15496e5cc1c
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker ds cts
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts
--- stderr ---
Checking if ds-cts dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts/Dockerfile: gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker ds idrepo
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo
--- stderr ---
Checking if ds-idrepo dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo/Dockerfile: gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path kustomize base end-user-ui
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui
--- stderr ---
Checking if end-user-ui kustomize file /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui/kustomization.yaml needs additional update (custom tag/repo from config.yaml or image name resolution)
Read image line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui/kustomization.yaml: gcr.io/forgerock-io/platform-enduser-ui/docker-build:7.2.0-6f1979a1d015a243666b3ba95d697c59b85b536d
No need to update the image line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui/kustomization.yaml
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path kustomize base login-ui
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui
--- stderr ---
Checking if login-ui kustomize file /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui/kustomization.yaml needs additional update (custom tag/repo from config.yaml or image name resolution)
Read image line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui/kustomization.yaml: gcr.io/forgerock-io/platform-login-ui/docker-build:7.2.0-6f1979a1d015a243666b3ba95d697c59b85b536d
No need to update the image line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui/kustomization.yaml
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path kustomize base admin-ui
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui
--- stderr ---
Checking if admin-ui kustomize file /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui/kustomization.yaml needs additional update (custom tag/repo from config.yaml or image name resolution)
Read image line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui/kustomization.yaml: gcr.io/forgerock-io/platform-admin-ui/docker-build:7.2.0-6f1979a1d015a243666b3ba95d697c59b85b536d
No need to update the image line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui/kustomization.yaml
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path kustomize overlay medium
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/overlay/medium
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete -f /tmp/tmpli491j54
[loop_until]: (max_time=180, interval=5, expected_rc=[0, 1])
[loop_until]: OK (rc = 1)
--- stdout ---
--- stderr ---
Error from server (NotFound): error when deleting "/tmp/tmpli491j54": secrets "sslcert" not found
[loop_until]: kubectl --namespace=xlou apply -f /tmp/tmpli491j54
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secret/sslcert created
--- stderr ---
************************************* Creating deployment *************************************
Creating normal (forgeops) type deployment for deployment: sp
------- Custom component configuration present. Loading values -------
------------------ Deleting secret agent controller ------------------
[loop_until]: kubectl --namespace=xlou-sp delete sac --all
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
No resources found
--- stderr ---
----------------------- Deleting all resources -----------------------
[loop_until]: kubectl --namespace=xlou-sp delete all --all --grace-period=0 --force
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
No resources found
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
[loop_until]: kubectl -n xlou-sp get pods | grep "No resources found"
[loop_until]: (max_time=360, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
No resources found in xlou-sp namespace.
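The `[loop_until]` wrapper above retries a shell command until an expected return code or output pattern appears, or `max_time` elapses. A minimal sketch of the idea (hypothetical helper, not the actual pyrock implementation, which also checks `expected_rc` and captures stdout/stderr):

```python
import time

def loop_until(fn, max_time=180, interval=5):
    """Retry fn() until it returns True or max_time seconds elapse.

    Sketch of the [loop_until] retry loop seen in this log; fn stands in
    for running a command and checking its rc/output.
    """
    deadline = time.monotonic() + max_time
    while True:
        if fn():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)

# Mirrors "Function succeeded after 0s": the check passes on the first try.
print(loop_until(lambda: True, max_time=10, interval=1))  # True
```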
------------------------- Deleting configmap -------------------------
[loop_until]: kubectl --namespace=xlou-sp get configmap -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
--------------------------- Deleting secret ---------------------------
[loop_until]: kubectl --namespace=xlou-sp get secret -o jsonpath='{.items[?(@.type=="Opaque")].metadata.name}'
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
-------------------------- Deleting ingress --------------------------
[loop_until]: kubectl --namespace=xlou-sp get ingress -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
---------------------------- Deleting pvc ----------------------------
[loop_until]: kubectl --namespace=xlou-sp get pvc -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp delete pv ds-backup-xlou-sp --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
warning: deleting cluster-scoped resources, not scoped to the provided namespace
------------------------- Deleting namespace -------------------------
[loop_until]: kubectl delete namespaces xlou-sp --ignore-not-found --grace-period=0 --force
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
[loop_until]: awk -F" " "{print NF}" <<< `kubectl get namespace xlou-sp --ignore-not-found` | grep 0
[loop_until]: (max_time=600, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
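The awk pipeline above (`awk -F" " "{print NF}" <<< ... | grep 0`) counts whitespace-separated fields in the `kubectl get namespace` output; zero fields means the namespace no longer exists. The same check, written out as a small sketch:

```python
def namespace_gone(kubectl_output: str) -> bool:
    """True once `kubectl get namespace <ns> --ignore-not-found` prints
    nothing, i.e. the output has zero whitespace-separated fields --
    the condition the awk/grep pipeline in the log is testing for."""
    return len(kubectl_output.split()) == 0

print(namespace_gone(""))                            # True: deleted
print(namespace_gone("xlou-sp   Terminating   5m"))  # False: still going
```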
[loop_until]: kubectl create namespace xlou-sp
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
namespace/xlou-sp created
--- stderr ---
[loop_until]: kubectl label namespace xlou-sp self-service=false timeout=48
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
namespace/xlou-sp labeled
--- stderr ---
************************************ Configuring components ************************************
Applying custom configuration, dockerfiles to deployment and custom lodestar component configuration
Copying /mnt/disks/data/xslou/lodestar-fork/shared/config/custom/saml2/kustomize/overlay to /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/kustomize/overlay
No custom features provided. Nothing to do.
---- Updating components image tag/repo from platform-images repo ----
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/bin/set-images --clean
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Cleaning up.
[WARNING] Found nothing to clean.
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/bin/set-images --ref master-ready-for-dev-pipelines --products am
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/build/platform-images.
[INFO] Repo is at a80cd976f9bdd117952f99523db5e6447c4f1f3f on branch HEAD
[INFO] Updating products am
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/am/Dockerfile to use gcr.io/forgerock-io/am-cdk/pit1:7.3.0-7c2ce51c1b0baf635933e3d41b38c25ee535de24
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/bin/set-images --ref master-ready-for-dev-pipelines --products amster
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/build/platform-images.
[INFO] Found existing files, attempting to not clone
[INFO] Repo is at a80cd976f9bdd117952f99523db5e6447c4f1f3f on branch HEAD
[INFO] Updating products amster
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/amster/Dockerfile to use gcr.io/forgerock-io/amster/pit1:7.3.0-7c2ce51c1b0baf635933e3d41b38c25ee535de24
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/bin/set-images --ref master-ready-for-dev-pipelines --products idm
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/build/platform-images.
[INFO] Found existing files, attempting to not clone
[INFO] Repo is at a80cd976f9bdd117952f99523db5e6447c4f1f3f on branch HEAD
[INFO] Updating products idm
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/idm/Dockerfile to use gcr.io/forgerock-io/idm-cdk/pit1:7.3.0-7edcaf211d615f737f7c180f109be15496e5cc1c
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/bin/set-images --ref master-ready-for-dev-pipelines --products ds
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/build/platform-images.
[INFO] Found existing files, attempting to not clone
[INFO] Repo is at a80cd976f9bdd117952f99523db5e6447c4f1f3f on branch HEAD
[INFO] Updating products ds
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/ds/dsutil/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/ds/ds-new/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/ds/proxy/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/ds/cts/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/ds/idrepo/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/bin/set-images --ref master-ready-for-dev-pipelines --products ui
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/build/platform-images.
[INFO] Found existing files, attempting to not clone
[INFO] Repo is at a80cd976f9bdd117952f99523db5e6447c4f1f3f on branch HEAD
[INFO] Updating products ui
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/kustomize/base/admin-ui to use gcr.io/forgerock-io/platform-admin-ui/docker-build:7.2.0-6f1979a1d015a243666b3ba95d697c59b85b536d
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/kustomize/base/end-user-ui to use gcr.io/forgerock-io/platform-enduser-ui/docker-build:7.2.0-6f1979a1d015a243666b3ba95d697c59b85b536d
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/kustomize/base/login-ui to use gcr.io/forgerock-io/platform-login-ui/docker-build:7.2.0-6f1979a1d015a243666b3ba95d697c59b85b536d
--- stderr ---
- Checking if component Dockerfile/kustomize needs additional update -
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/bin/config path docker am
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/am
--- stderr ---
Checking if am dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/am/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/am/Dockerfile: gcr.io/forgerock-io/am-cdk/pit1:7.3.0-7c2ce51c1b0baf635933e3d41b38c25ee535de24
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/am/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/bin/config path docker amster
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/amster
--- stderr ---
Checking if amster dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/amster/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/amster/Dockerfile: gcr.io/forgerock-io/amster/pit1:7.3.0-7c2ce51c1b0baf635933e3d41b38c25ee535de24
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/amster/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/bin/config path docker idm
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/idm
--- stderr ---
Checking if idm dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/idm/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/idm/Dockerfile: gcr.io/forgerock-io/idm-cdk/pit1:7.3.0-7edcaf211d615f737f7c180f109be15496e5cc1c
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/idm/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/bin/config path docker ds cts
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/ds/cts
--- stderr ---
Checking if ds-cts dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/ds/cts/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/ds/cts/Dockerfile: gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/ds/cts/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/bin/config path docker ds idrepo
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/ds/idrepo
--- stderr ---
Checking if ds-idrepo dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/ds/idrepo/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/ds/idrepo/Dockerfile: gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/docker/ds/idrepo/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/bin/config path kustomize base end-user-ui
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/kustomize/base/end-user-ui
--- stderr ---
Checking if end-user-ui kustomize file /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/kustomize/base/end-user-ui/kustomization.yaml needs additional update (custom tag/repo from config.yaml or image name resolution)
Read image line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/kustomize/base/end-user-ui/kustomization.yaml: gcr.io/forgerock-io/platform-enduser-ui/docker-build:7.2.0-6f1979a1d015a243666b3ba95d697c59b85b536d
No need to update the image line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/kustomize/base/end-user-ui/kustomization.yaml
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/bin/config path kustomize base login-ui
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/kustomize/base/login-ui
--- stderr ---
Checking if login-ui kustomize file /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/kustomize/base/login-ui/kustomization.yaml needs additional update (custom tag/repo from config.yaml or image name resolution)
Read image line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/kustomize/base/login-ui/kustomization.yaml: gcr.io/forgerock-io/platform-login-ui/docker-build:7.2.0-6f1979a1d015a243666b3ba95d697c59b85b536d
No need to update the image line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/kustomize/base/login-ui/kustomization.yaml
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/bin/config path kustomize base admin-ui
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/kustomize/base/admin-ui
--- stderr ---
Checking if admin-ui kustomize file /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/kustomize/base/admin-ui/kustomization.yaml needs additional update (custom tag/repo from config.yaml or image name resolution)
Read image line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/kustomize/base/admin-ui/kustomization.yaml: gcr.io/forgerock-io/platform-admin-ui/docker-build:7.2.0-6f1979a1d015a243666b3ba95d697c59b85b536d
No need to update the image line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/kustomize/base/admin-ui/kustomization.yaml
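The checks above compare each component's `FROM` (or kustomize image) line against the desired tag and rewrite it only when they differ. A sketch of that rewrite for a Dockerfile (hypothetical helper; the real `set-images`/config scripts also resolve custom tags and repos from config.yaml):

```python
def update_from_line(dockerfile_text: str, new_image: str) -> str:
    """Replace the first FROM line of a Dockerfile with new_image,
    leaving every other line untouched."""
    lines = dockerfile_text.splitlines()
    for i, line in enumerate(lines):
        if line.strip().upper().startswith("FROM "):
            lines[i] = f"FROM {new_image}"
            break
    return "\n".join(lines)

dockerfile = "FROM gcr.io/forgerock-io/ds/pit1:old-tag\nUSER root"
updated = update_from_line(
    dockerfile,
    "gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f")
print(updated.splitlines()[0])
```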
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/bin/config path kustomize overlay small
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/kustomize/overlay/small
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp delete -f /tmp/tmptkwfna4q
[loop_until]: (max_time=180, interval=5, expected_rc=[0, 1])
[loop_until]: OK (rc = 1)
--- stdout ---
--- stderr ---
Error from server (NotFound): error when deleting "/tmp/tmptkwfna4q": secrets "sslcert" not found
[loop_until]: kubectl --namespace=xlou-sp apply -f /tmp/tmptkwfna4q
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secret/sslcert created
--- stderr ---
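The sslcert handling above is a delete-then-apply pattern: the delete tolerates rc=1 (NotFound, per `expected_rc=[0, 1]`), then the apply must succeed. A sketch of that pattern, with the command runner injectable so the demo does not need a cluster:

```python
import subprocess

def delete_then_apply(namespace, manifest, run=subprocess.call):
    """Recreate a resource from a manifest: delete (NotFound, rc=1, is
    tolerated), then apply (must return 0). Hypothetical sketch of the
    sslcert handling in this log; `run` defaults to invoking kubectl."""
    rc = run(["kubectl", f"--namespace={namespace}", "delete", "-f", manifest])
    if rc not in (0, 1):
        raise RuntimeError(f"delete failed: rc={rc}")
    rc = run(["kubectl", f"--namespace={namespace}", "apply", "-f", manifest])
    if rc != 0:
        raise RuntimeError(f"apply failed: rc={rc}")

# Demo with a stub runner: delete reports NotFound (rc=1), apply succeeds.
calls = []
def stub(cmd):
    calls.append(cmd[2])
    return 1 if cmd[2] == "delete" else 0

delete_then_apply("xlou-sp", "/tmp/secret.yaml", run=stub)
print(calls)  # ['delete', 'apply']
```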
The following components will be deployed:
- am (AM)
- amster (Amster)
- idm (IDM)
- ds-cts (DS)
- ds-idrepo (DS)
- end-user-ui (EndUserUi)
- login-ui (LoginUi)
- admin-ui (AdminUi)
Run create-secrets.sh to create passwords
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/create-secrets.sh xlou
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
certificate.cert-manager.io/ds-master-cert created
certificate.cert-manager.io/ds-ssl-cert created
issuer.cert-manager.io/selfsigned-issuer created
secretagentconfiguration.secret-agent.secrets.forgerock.io/forgerock-sac created
--- stderr ---
[loop_until]: kubectl --namespace=secret-agent-system wait --for=condition=available deployment --all | grep "condition met"
[loop_until]: (max_time=300, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
deployment.apps/secret-agent-controller-manager condition met
--- stderr ---
[loop_until]: kubectl --namespace=secret-agent-system wait --for=condition=ready pod --all | grep "condition met"
[loop_until]: (max_time=300, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
pod/secret-agent-controller-manager-59fcd58bbc-7lq45 condition met
--- stderr ---
[run_command]: skaffold build --file-output=/mnt/disks/data/xslou/lodestar-fork/ext/pre-built-images/medium.json --default-repo gcr.io/engineeringpit/lodestar-images --profile medium --config=/tmp/tmpw0xtn7ux --cache-artifacts=false --tag xlou --namespace=xlou
[run_command]: env={'PATH': '/mnt/disks/data/xslou/lodestar-fork/ext/bin:/home/xslou/.local/bin:/home/xslou/bin:~/bin:~/workshop/lodestar-fork/ext/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin', 'CONFIG_PROFILE': 'cdk'}
Generating tags...
- am -> gcr.io/engineeringpit/lodestar-images/am:xlou
- amster -> gcr.io/engineeringpit/lodestar-images/amster:xlou
- idm -> gcr.io/engineeringpit/lodestar-images/idm:xlou
- ds-cts -> gcr.io/engineeringpit/lodestar-images/ds-cts:xlou
- ds-idrepo -> gcr.io/engineeringpit/lodestar-images/ds-idrepo:xlou
- ig -> gcr.io/engineeringpit/lodestar-images/ig:xlou
- ds -> gcr.io/engineeringpit/lodestar-images/ds:xlou
Starting build...
Building [ds]...
Sending build context to Docker daemon 115.2kB
Step 1/11 : FROM gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f: Pulling from forgerock-io/ds/pit1
Digest: sha256:101a0a4902e6edf8e70ee5f7a3ba30e498ac523c29eaff69badc5e22f2f9136c
Status: Image is up to date for gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
---> ed865decf122
Step 2/11 : USER root
---> Using cache
---> 2a3754203d7f
Step 3/11 : RUN apt-get update && apt-get install -y --no-install-recommends vim ncat dnsutils
---> Using cache
---> 52d83f3765cf
Step 4/11 : USER forgerock
---> Using cache
---> 913a6038f771
Step 5/11 : ENV DS_DATA_DIR /opt/opendj/data
---> Using cache
---> 0ec850162ae3
Step 6/11 : ENV PEM_KEYS_DIRECTORY "/var/run/secrets/keys/ds"
---> Using cache
---> 695a8213c565
Step 7/11 : ENV PEM_TRUSTSTORE_DIRECTORY "/var/run/secrets/keys/truststore"
---> Using cache
---> 16c1d1c1787e
Step 8/11 : COPY --chown=forgerock:root default-scripts /opt/opendj/default-scripts
---> Using cache
---> 8990db84ea27
Step 9/11 : COPY --chown=forgerock:root ldif-ext /opt/opendj/ldif-ext
---> Using cache
---> caaea98f7378
Step 10/11 : COPY --chown=forgerock:root *.sh /opt/opendj/
---> Using cache
---> 9ee4422fc4e1
Step 11/11 : RUN ./ds-setup.sh && rm ./ds-setup.sh && rm -fr ldif-ext
---> Using cache
---> fb6203f070a7
Successfully built fb6203f070a7
Successfully tagged gcr.io/engineeringpit/lodestar-images/ds:xlou
Build [ds] succeeded
Building [ds-cts]...
Sending build context to Docker daemon 78.85kB
Step 1/10 : FROM gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f: Pulling from forgerock-io/ds/pit1
Digest: sha256:101a0a4902e6edf8e70ee5f7a3ba30e498ac523c29eaff69badc5e22f2f9136c
Status: Image is up to date for gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
---> ed865decf122
Step 2/10 : USER root
---> Using cache
---> 2a3754203d7f
Step 3/10 : COPY debian-buster-sources.list /etc/apt/sources.list
---> Using cache
---> 404a9596eca1
Step 4/10 : RUN chown -R forgerock:root /opt/opendj
---> Using cache
---> 3edd4638601f
Step 5/10 : USER forgerock
---> Using cache
---> 7f210a4ac6f0
Step 6/10 : COPY --chown=forgerock:root common /opt/opendj/
---> Using cache
---> ddfe992df770
Step 7/10 : COPY --chown=forgerock:root cts /opt/opendj/
---> Using cache
---> cfc093c50789
Step 8/10 : COPY --chown=forgerock:root scripts /opt/opendj/scripts
---> Using cache
---> 4dd45390f834
Step 9/10 : ARG profile_version
---> Using cache
---> 880455457730
Step 10/10 : RUN bin/setup.sh && bin/relax-security-settings.sh && rm bin/setup.sh bin/relax-security-settings.sh
---> Using cache
---> ae3536d6388d
Successfully built ae3536d6388d
Successfully tagged gcr.io/engineeringpit/lodestar-images/ds-cts:xlou
Build [ds-cts] succeeded
Building [am]...
Sending build context to Docker daemon 4.608kB
Step 1/6 : FROM gcr.io/forgerock-io/am-cdk/pit1:7.3.0-7c2ce51c1b0baf635933e3d41b38c25ee535de24
7.3.0-7c2ce51c1b0baf635933e3d41b38c25ee535de24: Pulling from forgerock-io/am-cdk/pit1
Digest: sha256:416c8163cd0e0dda600e6be4f12a701a08d97cf19140cc22c5b10fc19d5c227e
Status: Image is up to date for gcr.io/forgerock-io/am-cdk/pit1:7.3.0-7c2ce51c1b0baf635933e3d41b38c25ee535de24
---> 5629046b073a
Step 2/6 : ARG CONFIG_PROFILE=cdk
---> Using cache
---> ac8359fba1ce
Step 3/6 : RUN echo "\033[0;36m*** Building '${CONFIG_PROFILE}' profile ***\033[0m"
---> Using cache
---> 1ea1f221af5b
Step 4/6 : COPY --chown=forgerock:root config-profiles/${CONFIG_PROFILE}/ /home/forgerock/openam/
---> Using cache
---> 5e95e27f7c77
Step 5/6 : COPY --chown=forgerock:root *.sh /home/forgerock/
---> Using cache
---> 8d62bf4a2596
Step 6/6 : WORKDIR /home/forgerock
---> Using cache
---> 899875000c7e
Successfully built 899875000c7e
Successfully tagged gcr.io/engineeringpit/lodestar-images/am:xlou
Build [am] succeeded
Building [amster]...
Sending build context to Docker daemon 54.27kB
Step 1/14 : FROM gcr.io/forgerock-io/amster/pit1:7.3.0-7c2ce51c1b0baf635933e3d41b38c25ee535de24
7.3.0-7c2ce51c1b0baf635933e3d41b38c25ee535de24: Pulling from forgerock-io/amster/pit1
Digest: sha256:ec8cdced4cdb6d8d33fee81cd19970a584f091376765ddcd87aeff45749f4291
Status: Image is up to date for gcr.io/forgerock-io/amster/pit1:7.3.0-7c2ce51c1b0baf635933e3d41b38c25ee535de24
---> 739cbe15f205
Step 2/14 : USER root
---> Using cache
---> 0c8198db8d71
Step 3/14 : COPY debian-buster-sources.list /etc/apt/sources.list
---> Using cache
---> 2f11e2f43442
Step 4/14 : ENV DEBIAN_FRONTEND=noninteractive
---> Using cache
---> 17498942c58e
Step 5/14 : ENV APT_OPTS="--no-install-recommends --yes"
---> Using cache
---> 2204fa9a1967
Step 6/14 : RUN apt-get update && apt-get install -y openldap-utils jq inotify-tools && apt-get clean && rm -r /var/lib/apt/lists /var/cache/apt/archives
---> Using cache
---> d6d3e64e806f
Step 7/14 : USER forgerock
---> Using cache
---> c6b8bd9bec8a
Step 8/14 : ENV SERVER_URI /am
---> Using cache
---> 42d175f74461
Step 9/14 : ARG CONFIG_PROFILE=cdk
---> Using cache
---> e5bb0fdd0248
Step 10/14 : RUN echo "\033[0;36m*** Building '${CONFIG_PROFILE}' profile ***\033[0m"
---> Using cache
---> 0b0d968b94ea
Step 11/14 : COPY --chown=forgerock:root config-profiles/${CONFIG_PROFILE}/ /opt/amster
---> Using cache
---> f9e03c42917b
Step 12/14 : COPY --chown=forgerock:root scripts /opt/amster
---> Using cache
---> 2c291b1e6262
Step 13/14 : RUN chmod 777 /opt/amster
---> Using cache
---> b05254cb2bc7
Step 14/14 : ENTRYPOINT [ "/opt/amster/docker-entrypoint.sh" ]
---> Using cache
---> d3deb0d46c37
Successfully built d3deb0d46c37
Successfully tagged gcr.io/engineeringpit/lodestar-images/amster:xlou
Build [amster] succeeded
Building [idm]...
Sending build context to Docker daemon 312.8kB
Step 1/8 : FROM gcr.io/forgerock-io/idm-cdk/pit1:7.3.0-7edcaf211d615f737f7c180f109be15496e5cc1c
7.3.0-7edcaf211d615f737f7c180f109be15496e5cc1c: Pulling from forgerock-io/idm-cdk/pit1
751ef25978b2: Already exists
58af2fe1bb83: Already exists
23f7e860b347: Already exists
ce27966cf2a5: Already exists
1b43dec98489: Pulling fs layer
44a6c98673f8: Pulling fs layer
fa5904e0b446: Pulling fs layer
fa5904e0b446: Verifying Checksum
fa5904e0b446: Download complete
1b43dec98489: Verifying Checksum
1b43dec98489: Download complete
1b43dec98489: Pull complete
44a6c98673f8: Download complete
44a6c98673f8: Pull complete
fa5904e0b446: Pull complete
Digest: sha256:5bac033634f6737347ebae5262d387fe4c666c1980220e6b612ff46ff229940a
Status: Downloaded newer image for gcr.io/forgerock-io/idm-cdk/pit1:7.3.0-7edcaf211d615f737f7c180f109be15496e5cc1c
---> 59689a525e03
Step 2/8 : COPY debian-buster-sources.list /etc/apt/sources.list
---> 5731b4130587
Step 3/8 : RUN rm -f bundle/org.apache.felix.webconsole*.jar && rm -f bundle/openidm-felix-webconsole-*.jar
---> Running in 1aeb73371100
Removing intermediate container 1aeb73371100
---> 15f9a2aa2a7c
Step 4/8 : ENV JAVA_OPTS -XX:MaxRAMPercentage=65 -XX:InitialRAMPercentage=65 -XX:MaxTenuringThreshold=1 -Djava.security.egd=file:/dev/urandom -XshowSettings:vm -XX:+PrintFlagsFinal
---> Running in 8ef34239bb4f
Removing intermediate container 8ef34239bb4f
---> 7352c0f42043
Step 5/8 : ARG CONFIG_PROFILE=cdk
---> Running in 18f527930730
Removing intermediate container 18f527930730
---> 73671c9c9ae8
Step 6/8 : RUN echo "\033[0;36m*** Building '${CONFIG_PROFILE}' profile ***\033[0m"
---> Running in c8ccbaadf417
*** Building 'cdk' profile ***
Removing intermediate container c8ccbaadf417
---> 0355c556a687
Step 7/8 : COPY --chown=forgerock:root config-profiles/${CONFIG_PROFILE}/ /opt/openidm
---> 016e85112097
Step 8/8 : COPY --chown=forgerock:root . /opt/openidm
---> 205f077ba2e8
Successfully built 205f077ba2e8
Successfully tagged gcr.io/engineeringpit/lodestar-images/idm:xlou
The push refers to repository [gcr.io/engineeringpit/lodestar-images/idm]
f2f48ad2e4c3: Preparing
880253e7f8a6: Preparing
31b53e898fb0: Preparing
b17c1ed5efd0: Preparing
de8313018499: Preparing
31d45c5aac4f: Preparing
d3bd8301a2f6: Preparing
194cc08cbea2: Preparing
6db889e47719: Preparing
735956b91a18: Preparing
194cc08cbea2: Waiting
6db889e47719: Waiting
735956b91a18: Waiting
31d45c5aac4f: Waiting
d3bd8301a2f6: Waiting
b17c1ed5efd0: Layer already exists
de8313018499: Layer already exists
d3bd8301a2f6: Layer already exists
31d45c5aac4f: Layer already exists
194cc08cbea2: Layer already exists
6db889e47719: Layer already exists
735956b91a18: Layer already exists
f2f48ad2e4c3: Pushed
880253e7f8a6: Pushed
31b53e898fb0: Pushed
xlou: digest: sha256:860c3ebab6d21c2509940d029b66aeb6f45bb1e4c2390e542f66a1ed384eae9a size: 2415
Build [idm] succeeded
Building [ig]...
Sending build context to Docker daemon 29.18kB
Step 1/6 : FROM gcr.io/forgerock-io/ig/pit1:7.3.0-latest-postcommit
7.3.0-latest-postcommit: Pulling from forgerock-io/ig/pit1
Digest: sha256:4818c7cd5c625cc2d0ed7c354ec4ece0a74a0871698207aea51b9146b4aa1998
Status: Image is up to date for gcr.io/forgerock-io/ig/pit1:7.3.0-latest-postcommit
---> 3c4055bd0013
Step 2/6 : COPY debian-buster-sources.list /etc/apt/sources.list
---> Using cache
---> 373058e6e5f7
Step 3/6 : ARG CONFIG_PROFILE=cdk
---> Using cache
---> 1bb4d17a0fd5
Step 4/6 : RUN echo "\033[0;36m*** Building '${CONFIG_PROFILE}' profile ***\033[0m"
---> Using cache
---> abf2c0f541e4
Step 5/6 : COPY --chown=forgerock:root config-profiles/${CONFIG_PROFILE}/ /var/ig
---> Using cache
---> a9cb2e5c39fd
Step 6/6 : COPY --chown=forgerock:root . /var/ig
---> Using cache
---> a049f4cd9f75
Successfully built a049f4cd9f75
Successfully tagged gcr.io/engineeringpit/lodestar-images/ig:xlou
Build [ig] succeeded
Building [ds-idrepo]...
Sending build context to Docker daemon 117.8kB
Step 1/10 : FROM gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f: Pulling from forgerock-io/ds/pit1
Digest: sha256:101a0a4902e6edf8e70ee5f7a3ba30e498ac523c29eaff69badc5e22f2f9136c
Status: Image is up to date for gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
---> ed865decf122
Step 2/10 : COPY debian-buster-sources.list /etc/apt/sources.list
---> Using cache
---> e397057857f0
Step 3/10 : WORKDIR /opt/opendj
---> Using cache
---> 54762a557c5d
Step 4/10 : COPY --chown=forgerock:root common /opt/opendj/
---> Using cache
---> 30c7f7a533e5
Step 5/10 : COPY --chown=forgerock:root idrepo /opt/opendj/
---> Using cache
---> 5c9976aa2aa5
Step 6/10 : COPY --chown=forgerock:root scripts /opt/opendj/scripts
---> Using cache
---> 1e831f9bc32f
Step 7/10 : COPY --chown=forgerock:root uma /opt/opendj/uma
---> Using cache
---> e73baf8ba9d6
Step 8/10 : COPY --chown=forgerock:root idrepo/*.ldif /var/tmp/
---> Using cache
---> 0524fe407963
Step 9/10 : RUN chmod +w template/setup-profiles/AM/config/6.5/base-entries.ldif && cat scripts/external-am-datastore.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_audit.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_pending_requests.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_resource_set_labels.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_resource_sets.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat /var/tmp/alpha_bravo.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && chmod +w template/setup-profiles/AM/identity-store/7.0/base-entries.ldif && cat /var/tmp/orgs.ldif >> template/setup-profiles/AM/identity-store/7.0/base-entries.ldif && rm /var/tmp/*ldif
---> Using cache
---> e76f815abeb1
Step 10/10 : RUN bin/setup.sh && bin/relax-security-settings.sh && rm bin/setup.sh bin/relax-security-settings.sh
---> Using cache
---> 7edf7793dfc7
Successfully built 7edf7793dfc7
Successfully tagged gcr.io/engineeringpit/lodestar-images/ds-idrepo:xlou
Build [ds-idrepo] succeeded
There is a new version (1.39.1) of Skaffold available. Download it from:
https://github.com/GoogleContainerTools/skaffold/releases/tag/v1.39.1
Help improve Skaffold with our 2-minute anonymous survey: run 'skaffold survey'
To help improve the quality of this product, we collect anonymized usage data. For details on what is tracked and how we use this data, visit . This data is handled in accordance with our privacy policy.
You may choose to opt out of this collection by running the following command:
skaffold config set --global collect-metrics false
[run_command]: skaffold deploy --build-artifacts=/mnt/disks/data/xslou/lodestar-fork/ext/pre-built-images/medium.json --profile medium --config=/tmp/tmp4qlz4wx0 --label skaffold.dev/profile=medium --label skaffold.dev/run-id=xlou --force=false --status-check=true --namespace=xlou
Tags used in deployment:
- am -> gcr.io/engineeringpit/lodestar-images/am:xlou@sha256:e2be3b24f724e416b86384b94cb9d1636008c6b3248a5b6f65e5a88ac6c555b4
- amster -> gcr.io/engineeringpit/lodestar-images/amster:xlou@sha256:ecfecdbeeac80327355fb082516ebdc5dd25cf4bdae4c0bca89059c736413326
- idm -> gcr.io/engineeringpit/lodestar-images/idm:xlou@sha256:860c3ebab6d21c2509940d029b66aeb6f45bb1e4c2390e542f66a1ed384eae9a
- ds-cts -> gcr.io/engineeringpit/lodestar-images/ds-cts:xlou@sha256:820132c915ddd86e84eb96ce9b15b1b0c20a7861268e9b20b64bc46126d2db9b
- ds-idrepo -> gcr.io/engineeringpit/lodestar-images/ds-idrepo:xlou@sha256:a47d27b4cefab33a83a8720187488738f171e329c3fc04cebc863a18289f8153
- ig -> gcr.io/engineeringpit/lodestar-images/ig:xlou@sha256:9f1fc770d95497c02f32fff1d79f113a2509f8a4432185fe2fd6722abd45325e
- ds -> gcr.io/engineeringpit/lodestar-images/ds:xlou@sha256:ceae5d9f82d35ce49d4a0d74bcff311746649779fa0e214cb35ac09c5f90e7f6
Starting deploy...
- configmap/idm created
- configmap/idm-logging-properties created
- configmap/platform-config created
- secret/cloud-storage-credentials-cts created
- secret/cloud-storage-credentials-idrepo created
- service/admin-ui created
- service/am created
- service/ds-cts created
- service/ds-idrepo created
- service/end-user-ui created
- service/idm created
- service/login-ui created
- deployment.apps/admin-ui created
- deployment.apps/am created
- deployment.apps/end-user-ui created
- deployment.apps/idm created
- deployment.apps/login-ui created
- statefulset.apps/ds-cts created
- statefulset.apps/ds-idrepo created
- poddisruptionbudget.policy/am created
- poddisruptionbudget.policy/ds-idrepo created
- poddisruptionbudget.policy/idm created
- poddisruptionbudget.policy/ig created
- Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
- poddisruptionbudget.policy/ds-cts created
- job.batch/amster created
- job.batch/ldif-importer created
- ingress.networking.k8s.io/forgerock created
- ingress.networking.k8s.io/ig-web created
Waiting for deployments to stabilize...
- xlou:deployment/admin-ui is ready. [6/7 deployment(s) still pending]
- xlou:deployment/end-user-ui is ready. [5/7 deployment(s) still pending]
- xlou:deployment/am: waiting for rollout to finish: 0 of 3 updated replicas are available...
- xlou:deployment/idm: waiting for init container fbc-init to start
- xlou:pod/idm-69ddb85d9b-9cjjw: waiting for init container fbc-init to start
- xlou:pod/idm-69ddb85d9b-bmnb9: waiting for init container fbc-init to start
- xlou:deployment/login-ui: waiting for rollout to finish: 0 of 1 updated replicas are available...
- xlou:statefulset/ds-cts: waiting for init container initialize to start
- xlou:pod/ds-cts-0: waiting for init container initialize to start
- xlou:statefulset/ds-idrepo: waiting for init container initialize to start
- xlou:pod/ds-idrepo-0: waiting for init container initialize to start
- xlou:deployment/login-ui is ready. [4/7 deployment(s) still pending]
- xlou:statefulset/ds-cts: unable to determine current service state of pod "ds-cts-1"
- xlou:pod/ds-cts-1: unable to determine current service state of pod "ds-cts-1"
- xlou:statefulset/ds-idrepo:
- xlou:deployment/idm: Readiness probe failed: HTTP probe failed with statuscode: 404
- xlou:pod/idm-69ddb85d9b-bmnb9: Readiness probe failed: HTTP probe failed with statuscode: 404
- xlou:pod/idm-69ddb85d9b-9cjjw: Readiness probe failed: HTTP probe failed with statuscode: 404
- xlou:statefulset/ds-cts: unable to determine current service state of pod "ds-cts-2"
- xlou:pod/ds-cts-2: unable to determine current service state of pod "ds-cts-2"
- xlou:statefulset/ds-idrepo: unable to determine current service state of pod "ds-idrepo-2"
- xlou:pod/ds-idrepo-2: unable to determine current service state of pod "ds-idrepo-2"
- xlou:deployment/idm is ready. [3/7 deployment(s) still pending]
- xlou:statefulset/ds-cts is ready. [2/7 deployment(s) still pending]
- xlou:deployment/am is ready. [1/7 deployment(s) still pending]
- xlou:statefulset/ds-idrepo is ready.
Deployments stabilized in 2 minutes 3.168 seconds
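The `skaffold deploy` invocation above skips building entirely by consuming a pre-built artifacts file (`--build-artifacts=.../medium.json`), which is how the image-to-digest mappings in the "Tags used in deployment" list get resolved. Such a file, as produced by `skaffold build --file-output`, maps artifact image names to fully qualified tags. A minimal sketch (the digests below are placeholders; the real medium.json is not shown in this log):

```json
{
  "builds": [
    {
      "imageName": "am",
      "tag": "gcr.io/engineeringpit/lodestar-images/am:xlou@sha256:<digest>"
    },
    {
      "imageName": "ds-idrepo",
      "tag": "gcr.io/engineeringpit/lodestar-images/ds-idrepo:xlou@sha256:<digest>"
    }
  ]
}
```

Because the tags pin images by digest, the deploy is reproducible regardless of what has since been pushed to the `xlou` tag.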
****************************** Initializing component pods for AM ******************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app=am -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
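Every `[loop_until]` block in this log follows the same contract: run a command, retry every `interval` seconds, and give up after `max_time` seconds unless the return code is in the expected set. The harness's real implementation lives in the Python framework and is not shown here; the `loop_until` function below is a hypothetical shell stand-in that illustrates the pattern:

```shell
# Hypothetical shell stand-in for the harness's loop_until: retry "$@"
# every $interval seconds until it exits with $expected_rc (the log's
# expected_rc=[0] list is simplified to a single code here) or until
# $max_time seconds have elapsed.
loop_until() {
  local max_time=$1 interval=$2 expected_rc=$3
  shift 3
  local elapsed=0
  while true; do
    "$@"
    local rc=$?
    if [ "$rc" -eq "$expected_rc" ]; then
      echo "OK (rc = $rc)"
      return 0
    fi
    if [ "$elapsed" -ge "$max_time" ]; then
      echo "FAILED (rc = $rc)"
      return 1
    fi
    sleep "$interval"
    elapsed=$((elapsed + interval))
  done
}

# Example, mirroring the replica-count query above:
# loop_until 180 5 0 kubectl --namespace=xlou get deployments -l app=am \
#   -o 'jsonpath={.items[*].spec.replicas}'
```

The retry-with-deadline shape is what lets the harness tolerate pods that are still scheduling: a transient non-zero return code is simply polled again until the deadline.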
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app=am -o jsonpath={.items[*].metadata.name}` | grep 3
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
am-5d5df7bfb9-7nthj am-5d5df7bfb9-jfg4q am-5d5df7bfb9-wxgqb
--- stderr ---
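The pod-list check above counts names rather than inspecting them: `awk '{print NF}'` prints the number of whitespace-separated fields in kubectl's jsonpath output, and the trailing `grep` matches the expected count. A standalone sketch with a fixed name list (a real run substitutes the live `kubectl get pods` output):

```shell
# Count whitespace-separated pod names the way the harness does:
# NF is awk's built-in field count for the current input line.
names="am-5d5df7bfb9-7nthj am-5d5df7bfb9-jfg4q am-5d5df7bfb9-wxgqb"
count=$(awk '{print NF}' <<< "$names")
echo "$count"
```

Note that a bare `grep 3`, as used in the log, also matches counts such as 13 or 30; `grep -x 3` (whole-line match) would be a stricter check.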
-------------- Check pod am-5d5df7bfb9-7nthj is running --------------
[loop_until]: kubectl --namespace=xlou get pods am-5d5df7bfb9-7nthj -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods am-5d5df7bfb9-7nthj -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod am-5d5df7bfb9-7nthj -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2022-08-07T16:13:28Z
--- stderr ---
------- Check pod am-5d5df7bfb9-7nthj filesystem is accessible -------
[loop_until]: kubectl --namespace=xlou exec am-5d5df7bfb9-7nthj -c openam -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------- Check pod am-5d5df7bfb9-7nthj restart count -------------
[loop_until]: kubectl --namespace=xlou get pod am-5d5df7bfb9-7nthj -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod am-5d5df7bfb9-7nthj has been restarted 0 times.
-------------- Check pod am-5d5df7bfb9-jfg4q is running --------------
[loop_until]: kubectl --namespace=xlou get pods am-5d5df7bfb9-jfg4q -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods am-5d5df7bfb9-jfg4q -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod am-5d5df7bfb9-jfg4q -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2022-08-07T16:13:28Z
--- stderr ---
------- Check pod am-5d5df7bfb9-jfg4q filesystem is accessible -------
[loop_until]: kubectl --namespace=xlou exec am-5d5df7bfb9-jfg4q -c openam -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------- Check pod am-5d5df7bfb9-jfg4q restart count -------------
[loop_until]: kubectl --namespace=xlou get pod am-5d5df7bfb9-jfg4q -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod am-5d5df7bfb9-jfg4q has been restarted 0 times.
-------------- Check pod am-5d5df7bfb9-wxgqb is running --------------
[loop_until]: kubectl --namespace=xlou get pods am-5d5df7bfb9-wxgqb -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods am-5d5df7bfb9-wxgqb -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod am-5d5df7bfb9-wxgqb -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2022-08-07T16:13:28Z
--- stderr ---
------- Check pod am-5d5df7bfb9-wxgqb filesystem is accessible -------
[loop_until]: kubectl --namespace=xlou exec am-5d5df7bfb9-wxgqb -c openam -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------- Check pod am-5d5df7bfb9-wxgqb restart count -------------
[loop_until]: kubectl --namespace=xlou get pod am-5d5df7bfb9-wxgqb -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod am-5d5df7bfb9-wxgqb has been restarted 0 times.
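The per-pod verification above repeats the same jsonpath queries for every pod: phase, container readiness, and restart count. A consolidated sketch of that sequence; `check_pod` and the `KUBECTL` indirection are hypothetical (introduced so the cluster call can be stubbed out for a dry run), and the filesystem `kubectl exec ... ls /` check is omitted for brevity:

```shell
# Hypothetical consolidation of the harness's per-pod checks. KUBECTL
# defaults to the real client but can be pointed at a stub command.
KUBECTL="${KUBECTL:-kubectl --namespace=xlou}"

check_pod() {
  local pod=$1
  local phase ready restarts
  phase=$($KUBECTL get pod "$pod" -o 'jsonpath={.status.phase}')
  ready=$($KUBECTL get pod "$pod" -o 'jsonpath={.status.containerStatuses[*].ready}')
  restarts=$($KUBECTL get pod "$pod" -o 'jsonpath={.status.containerStatuses[*].restartCount}')
  [ "$phase" = "Running" ] || return 1
  case " $ready " in *" false "*) return 1 ;; esac  # every container must be ready
  echo "Pod $pod has been restarted $restarts times."
}

# Usage, mirroring the AM section above:
# for pod in am-5d5df7bfb9-7nthj am-5d5df7bfb9-jfg4q am-5d5df7bfb9-wxgqb; do
#   check_pod "$pod"
# done
```

Folding the checks into one function keeps the per-pod logic identical across the AM, IDM, DS and UI sections that follow, which all run the same sequence.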
**************************** Initializing component pods for AMSTER ****************************
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou --field-selector status.phase!=Failed get pods -l app=amster -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
amster-k2cj5
--- stderr ---
***************************** Initializing component pods for IDM *****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployment -l app=idm -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app=idm -o jsonpath={.items[*].metadata.name}` | grep 2
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
idm-69ddb85d9b-9cjjw idm-69ddb85d9b-bmnb9
--- stderr ---
-------------- Check pod idm-69ddb85d9b-9cjjw is running --------------
[loop_until]: kubectl --namespace=xlou get pods idm-69ddb85d9b-9cjjw -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods idm-69ddb85d9b-9cjjw -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod idm-69ddb85d9b-9cjjw -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2022-08-07T16:13:29Z
--- stderr ---
------- Check pod idm-69ddb85d9b-9cjjw filesystem is accessible -------
[loop_until]: kubectl --namespace=xlou exec idm-69ddb85d9b-9cjjw -c openidm -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-11
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------ Check pod idm-69ddb85d9b-9cjjw restart count ------------
[loop_until]: kubectl --namespace=xlou get pod idm-69ddb85d9b-9cjjw -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod idm-69ddb85d9b-9cjjw has been restarted 0 times.
-------------- Check pod idm-69ddb85d9b-bmnb9 is running --------------
[loop_until]: kubectl --namespace=xlou get pods idm-69ddb85d9b-bmnb9 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods idm-69ddb85d9b-bmnb9 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod idm-69ddb85d9b-bmnb9 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2022-08-07T16:13:29Z
--- stderr ---
------- Check pod idm-69ddb85d9b-bmnb9 filesystem is accessible -------
[loop_until]: kubectl --namespace=xlou exec idm-69ddb85d9b-bmnb9 -c openidm -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-11
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------ Check pod idm-69ddb85d9b-bmnb9 restart count ------------
[loop_until]: kubectl --namespace=xlou get pod idm-69ddb85d9b-bmnb9 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod idm-69ddb85d9b-bmnb9 has been restarted 0 times.
**************************** Initializing component pods for DS-CTS ****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-cts -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app=ds-cts -o jsonpath={.items[*].metadata.name}` | grep 3
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
ds-cts-0 ds-cts-1 ds-cts-2
--- stderr ---
-------------------- Check pod ds-cts-0 is running --------------------
[loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-cts-0 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2022-08-07T16:13:33Z
--- stderr ---
------------- Check pod ds-cts-0 filesystem is accessible -------------
[loop_until]: kubectl --namespace=xlou exec ds-cts-0 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------------ Check pod ds-cts-0 restart count ------------------
[loop_until]: kubectl --namespace=xlou get pod ds-cts-0 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-cts-0 has been restarted 0 times.
-------------------- Check pod ds-cts-1 is running --------------------
[loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-cts-1 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2022-08-07T16:14:06Z
--- stderr ---
------------- Check pod ds-cts-1 filesystem is accessible -------------
[loop_until]: kubectl --namespace=xlou exec ds-cts-1 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------------ Check pod ds-cts-1 restart count ------------------
[loop_until]: kubectl --namespace=xlou get pod ds-cts-1 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-cts-1 has been restarted 0 times.
-------------------- Check pod ds-cts-2 is running --------------------
[loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-cts-2 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2022-08-07T16:14:38Z
--- stderr ---
------------- Check pod ds-cts-2 filesystem is accessible -------------
[loop_until]: kubectl --namespace=xlou exec ds-cts-2 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------------ Check pod ds-cts-2 restart count ------------------
[loop_until]: kubectl --namespace=xlou get pod ds-cts-2 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-cts-2 has been restarted 0 times.
************************** Initializing component pods for DS-IDREPO **************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-idrepo -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app=ds-idrepo -o jsonpath={.items[*].metadata.name}` | grep 3
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
ds-idrepo-0 ds-idrepo-1 ds-idrepo-2
--- stderr ---
------------------ Check pod ds-idrepo-0 is running ------------------
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2022-08-07T16:13:33Z
--- stderr ---
----------- Check pod ds-idrepo-0 filesystem is accessible -----------
[loop_until]: kubectl --namespace=xlou exec ds-idrepo-0 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
----------------- Check pod ds-idrepo-0 restart count -----------------
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-idrepo-0 has been restarted 0 times.
------------------ Check pod ds-idrepo-1 is running ------------------
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2022-08-07T16:14:15Z
--- stderr ---
----------- Check pod ds-idrepo-1 filesystem is accessible -----------
[loop_until]: kubectl --namespace=xlou exec ds-idrepo-1 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
----------------- Check pod ds-idrepo-1 restart count -----------------
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-idrepo-1 has been restarted 0 times.
------------------ Check pod ds-idrepo-2 is running ------------------
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2022-08-07T16:14:56Z
--- stderr ---
----------- Check pod ds-idrepo-2 filesystem is accessible -----------
[loop_until]: kubectl --namespace=xlou exec ds-idrepo-2 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
----------------- Check pod ds-idrepo-2 restart count -----------------
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-idrepo-2 has been restarted 0 times.
************************* Initializing component pods for END-USER-UI *************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].metadata.name}` | grep 1
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
end-user-ui-797bd795d9-2zkxx
--- stderr ---
---------- Check pod end-user-ui-797bd795d9-2zkxx is running ----------
[loop_until]: kubectl --namespace=xlou get pods end-user-ui-797bd795d9-2zkxx -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods end-user-ui-797bd795d9-2zkxx -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod end-user-ui-797bd795d9-2zkxx -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2022-08-07T16:13:28Z
--- stderr ---
--- Check pod end-user-ui-797bd795d9-2zkxx filesystem is accessible ---
[loop_until]: kubectl --namespace=xlou exec end-user-ui-797bd795d9-2zkxx -c end-user-ui -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin
boot
dev
docker-entrypoint.d
docker-entrypoint.sh
entrypoint.sh
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
variable_replacement.sh
--- stderr ---
-------- Check pod end-user-ui-797bd795d9-2zkxx restart count --------
[loop_until]: kubectl --namespace=xlou get pod end-user-ui-797bd795d9-2zkxx -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod end-user-ui-797bd795d9-2zkxx has been restarted 0 times.
*************************** Initializing component pods for LOGIN-UI ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].metadata.name}` | grep 1
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
login-ui-64f6867fc5-l6h7v
--- stderr ---
----------- Check pod login-ui-64f6867fc5-l6h7v is running -----------
[loop_until]: kubectl --namespace=xlou get pods login-ui-64f6867fc5-l6h7v -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods login-ui-64f6867fc5-l6h7v -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod login-ui-64f6867fc5-l6h7v -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2022-08-07T16:13:29Z
--- stderr ---
---- Check pod login-ui-64f6867fc5-l6h7v filesystem is accessible ----
[loop_until]: kubectl --namespace=xlou exec login-ui-64f6867fc5-l6h7v -c login-ui -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin
boot
dev
docker-entrypoint.d
docker-entrypoint.sh
entrypoint.sh
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
variable_replacement.sh
--- stderr ---
---------- Check pod login-ui-64f6867fc5-l6h7v restart count ----------
[loop_until]: kubectl --namespace=xlou get pod login-ui-64f6867fc5-l6h7v -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod login-ui-64f6867fc5-l6h7v has been restarted 0 times.
*************************** Initializing component pods for ADMIN-UI ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].metadata.name}` | grep 1
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
admin-ui-f7d498545-rpn6t
--- stderr ---
------------ Check pod admin-ui-f7d498545-rpn6t is running ------------
[loop_until]: kubectl --namespace=xlou get pods admin-ui-f7d498545-rpn6t -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods admin-ui-f7d498545-rpn6t -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod admin-ui-f7d498545-rpn6t -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2022-08-07T16:13:28Z
--- stderr ---
----- Check pod admin-ui-f7d498545-rpn6t filesystem is accessible -----
[loop_until]: kubectl --namespace=xlou exec admin-ui-f7d498545-rpn6t -c admin-ui -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin
boot
dev
docker-entrypoint.d
docker-entrypoint.sh
entrypoint.sh
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
variable_replacement.sh
--- stderr ---
---------- Check pod admin-ui-f7d498545-rpn6t restart count ----------
[loop_until]: kubectl --namespace=xlou get pod admin-ui-f7d498545-rpn6t -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod admin-ui-f7d498545-rpn6t has been restarted 0 times.
******************************* Checking AM component is running *******************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app=am -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
-------------- Waiting for 3 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get deployments am -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:3 replicas:3"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:3 replicas:3
--- stderr ---
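Each `[loop_until]` entry above retries a command until it succeeds or `max_time` seconds elapse, sleeping `interval` seconds between attempts. A simplified sketch of that polling pattern (illustrative only; the actual pyrock implementation may differ):

```python
import time

def loop_until(fn, max_time=900, interval=30):
    """Retry fn() until it returns True or max_time seconds elapse."""
    deadline = time.monotonic() + max_time
    while True:
        if fn():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)

# Example: a check that succeeds on the first attempt,
# like the "ready:3 replicas:3" match above.
print(loop_until(lambda: True, max_time=5, interval=1))
```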
***************************** Checking AMSTER component is running *****************************
------------------ Waiting for Amster job to finish ------------------
--------------------- Get expected number of pods ---------------------
-------------- Waiting for 1 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get jobs amster -o jsonpath="{.status.succeeded}" | grep "1"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
****************************** Checking IDM component is running ******************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployment -l app=idm -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2
--- stderr ---
-------------- Waiting for 2 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get deployment idm -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:2 replicas:2"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:2 replicas:2
--- stderr ---
***************************** Checking DS-CTS component is running *****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-cts -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
-------------- Waiting for 3 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get statefulsets ds-cts -o jsonpath="current:{.status.currentReplicas} ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "current:3 ready:3 replicas:3"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
current:3 ready:3 replicas:3
--- stderr ---
*************************** Checking DS-IDREPO component is running ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-idrepo -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
-------------- Waiting for 3 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get statefulsets ds-idrepo -o jsonpath="current:{.status.currentReplicas} ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "current:3 ready:3 replicas:3"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
current:3 ready:3 replicas:3
--- stderr ---
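The StatefulSet checks above grep for an exact `current:N ready:N replicas:N` string built by the jsonpath template. The same comparison can be sketched by parsing that output into integers (hypothetical helper; assumes all fields are present and numeric):

```python
def parse_status(line: str) -> dict:
    """Turn 'current:3 ready:3 replicas:3' into {'current': 3, 'ready': 3, 'replicas': 3}."""
    return {k: int(v) for k, v in (field.split(":") for field in line.split())}

status = parse_status("current:3 ready:3 replicas:3")
print(all(v == 3 for v in status.values()))  # True when all replicas are ready
```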
************************** Checking END-USER-UI component is running **************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
-------------- Waiting for 1 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get deployments end-user-ui -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:1 replicas:1"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:1 replicas:1
--- stderr ---
**************************** Checking LOGIN-UI component is running ****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
-------------- Waiting for 1 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get deployments login-ui -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:1 replicas:1"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:1 replicas:1
--- stderr ---
**************************** Checking ADMIN-UI component is running ****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
-------------- Waiting for 1 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get deployments admin-ui -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:1 replicas:1"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:1 replicas:1
--- stderr ---
****************************** Livecheck stage: After deployment ******************************
------------------------ Running AM livecheck ------------------------
Livecheck to https://xlou.iam.xlou-cdm.engineeringpit.com/am/json/health/ready
[http_cmd]: curl -L -X GET "https://xlou.iam.xlou-cdm.engineeringpit.com/am/json/health/ready"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
[loop_until]: kubectl --namespace=xlou get secret am-env-secrets -o jsonpath="{.data.AM_PASSWORDS_AMADMIN_CLEAR}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
YUdYZFNtNDBrOUhhdjVIR0g4UFRpbFZi
--- stderr ---
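The `am-env-secrets` value above is base64-encoded, as Kubernetes secret data always is; the cleartext password used in the authentication request that follows is its decoded form. A quick sketch of the decoding step:

```python
import base64

encoded = "YUdYZFNtNDBrOUhhdjVIR0g4UFRpbFZi"  # value from the log above
password = base64.b64decode(encoded).decode("ascii")
print(password)  # aGXdSm40k9Hav5HGH8PTilVb
```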
Authenticate user via REST
[http_cmd]: curl -H "X-OpenAM-Username: amadmin" -H "X-OpenAM-Password: aGXdSm40k9Hav5HGH8PTilVb" -H "Content-Type: application/json" -H "Accept-API-Version: resource=2.0, protocol=1.0" -L -X POST "https://xlou.iam.xlou-cdm.engineeringpit.com/am/json/authenticate?realm=/"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
{
"tokenId": "_hE6A2zFOwc2xGcPVt8kKbPYLMg.*AAJTSQACMDIAAlNLABxhSnJRMjNseU5JQ2U5bGNHbXVzcnBIVE1TOUU9AAR0eXBlAANDVFMAAlMxAAIwMQ..*",
"successUrl": "/am/console",
"realm": "/"
}
---------------------- Running AMSTER livecheck ----------------------
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou --field-selector status.phase!=Failed get pods -l app=amster -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
amster-k2cj5
--- stderr ---
Amster import completed. AM is now configured.
Amster livecheck passed
------------------------ Running IDM livecheck ------------------------
Livecheck to https://xlou.iam.xlou-cdm.engineeringpit.com/openidm/info/ping
[loop_until]: kubectl --namespace=xlou get secret idm-env-secrets -o jsonpath="{.data.OPENIDM_ADMIN_PASSWORD}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
Tk5IUXZvVkJOQ3FxbzI2WkZjQkRrblJP
--- stderr ---
Set admin password: NNHQvoVBNCqqo26ZFcBDknRO
[http_cmd]: curl -H "X-OpenIDM-Username: anonymous" -H "X-OpenIDM-Password: anonymous" -L -X GET "https://xlou.iam.xlou-cdm.engineeringpit.com/openidm/info/ping"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
{
"_id": "",
"_rev": "",
"shortDesc": "OpenIDM ready",
"state": "ACTIVE_READY"
}
---------------------- Running DS-CTS livecheck ----------------------
Livecheck to ds-cts-0
[loop_until]: kubectl --namespace=xlou get secret ds-passwords -o jsonpath="{.data.dirmanager\.pw}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
WXlndDNBRU1IQnpZbllyd3pZWmxEZjR3NG00R3FCWlY=
--- stderr ---
[run_command]: kubectl --namespace=xlou exec ds-cts-0 -c ds -- ldapsearch --noPropertiesFile -p 1389 --useStartTls --trustAll -D "uid=admin" -w "Yygt3AEMHBzYnYrwzYZlDf4w4m4GqBZV" -b "" -s base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
Livecheck to ds-cts-1
[run_command]: kubectl --namespace=xlou exec ds-cts-1 -c ds -- ldapsearch --noPropertiesFile -p 1389 --useStartTls --trustAll -D "uid=admin" -w "Yygt3AEMHBzYnYrwzYZlDf4w4m4GqBZV" -b "" -s base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
Livecheck to ds-cts-2
[run_command]: kubectl --namespace=xlou exec ds-cts-2 -c ds -- ldapsearch --noPropertiesFile -p 1389 --useStartTls --trustAll -D "uid=admin" -w "Yygt3AEMHBzYnYrwzYZlDf4w4m4GqBZV" -b "" -s base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
--------------------- Running DS-IDREPO livecheck ---------------------
Livecheck to ds-idrepo-0
[loop_until]: kubectl --namespace=xlou get secret ds-passwords -o jsonpath="{.data.dirmanager\.pw}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
WXlndDNBRU1IQnpZbllyd3pZWmxEZjR3NG00R3FCWlY=
--- stderr ---
[run_command]: kubectl --namespace=xlou exec ds-idrepo-0 -c ds -- ldapsearch --noPropertiesFile -p 1389 --useStartTls --trustAll -D "uid=admin" -w "Yygt3AEMHBzYnYrwzYZlDf4w4m4GqBZV" -b "" -s base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
Livecheck to ds-idrepo-1
[run_command]: kubectl --namespace=xlou exec ds-idrepo-1 -c ds -- ldapsearch --noPropertiesFile -p 1389 --useStartTls --trustAll -D "uid=admin" -w "Yygt3AEMHBzYnYrwzYZlDf4w4m4GqBZV" -b "" -s base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
Livecheck to ds-idrepo-2
[run_command]: kubectl --namespace=xlou exec ds-idrepo-2 -c ds -- ldapsearch --noPropertiesFile -p 1389 --useStartTls --trustAll -D "uid=admin" -w "Yygt3AEMHBzYnYrwzYZlDf4w4m4GqBZV" -b "" -s base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
-------------------- Running END-USER-UI livecheck --------------------
Livecheck to https://xlou.iam.xlou-cdm.engineeringpit.com/enduser
[http_cmd]: curl -L -X GET "https://xlou.iam.xlou-cdm.engineeringpit.com/enduser"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
Identity Management
[]
--------------------- Running LOGIN-UI livecheck ---------------------
Livecheck to https://xlou.iam.xlou-cdm.engineeringpit.com/am/XUI
[http_cmd]: curl -L -X GET "https://xlou.iam.xlou-cdm.engineeringpit.com/am/XUI"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
Login
[]
--------------------- Running ADMIN-UI livecheck ---------------------
Livecheck to https://xlou.iam.xlou-cdm.engineeringpit.com/platform
[http_cmd]: curl -L -X GET "https://xlou.iam.xlou-cdm.engineeringpit.com/platform"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
Platform Admin
[]
LIVECHECK SUCCEEDED
****************************** Initializing component pods for AM ******************************
----------------------- Get AM software version -----------------------
Getting product version from https://xlou.iam.xlou-cdm.engineeringpit.com/am/json/serverinfo/version
- Login amadmin to get token
Authenticate user via REST
[http_cmd]: curl -H "X-OpenAM-Username: amadmin" -H "X-OpenAM-Password: aGXdSm40k9Hav5HGH8PTilVb" -H "Content-Type: application/json" -H "Accept-API-Version: resource=2.0, protocol=1.0" -L -X POST "https://xlou.iam.xlou-cdm.engineeringpit.com/am/json/authenticate?realm=/"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
{
"tokenId": "ouhIKvZjqR3IEgSMgwGnWqIXirg.*AAJTSQACMDIAAlNLABxFL2VQKysybk9rdFRsSXdoR3A0WHVSbHliWW89AAR0eXBlAANDVFMAAlMxAAIwMQ..*",
"successUrl": "/am/console",
"realm": "/"
}
[http_cmd]: curl -L -X GET --cookie "amlbcookie=01" --cookie "iPlanetDirectoryPro=ouhIKvZjqR3IEgSMgwGnWqIXirg.*AAJTSQACMDIAAlNLABxFL2VQKysybk9rdFRsSXdoR3A0WHVSbHliWW89AAR0eXBlAANDVFMAAlMxAAIwMQ..*" --cookie "route=1659888990.696.70217.752670|f60edb382037eb2df1e800d563ad78a7" "https://xlou.iam.xlou-cdm.engineeringpit.com/am/json/serverinfo/version"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
{
"_id": "version",
"_rev": "-340041102",
"version": "7.3.0-SNAPSHOT",
"fullVersion": "ForgeRock Access Management 7.3.0-SNAPSHOT Build 7c2ce51c1b0baf635933e3d41b38c25ee535de24 (2022-August-05 08:39)",
"revision": "7c2ce51c1b0baf635933e3d41b38c25ee535de24",
"date": "2022-August-05 08:39"
}
**************************** Initializing component pods for AMSTER ****************************
***************************** Initializing component pods for IDM *****************************
---------------------- Get IDM software version ----------------------
Getting product version from https://xlou.iam.xlou-cdm.engineeringpit.com/openidm/info/version
[http_cmd]: curl -H "X-OpenIDM-Username: anonymous" -H "X-OpenIDM-Password: anonymous" -L -X GET "https://xlou.iam.xlou-cdm.engineeringpit.com/openidm/info/version"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
{
"_id": "version",
"productVersion": "7.3.0-SNAPSHOT",
"productBuildDate": "20220806104653",
"productRevision": "7edcaf2"
}
**************************** Initializing component pods for DS-CTS ****************************
--------------------- Get DS-CTS software version ---------------------
[loop_until]: kubectl --namespace=xlou exec ds-cts-0 -c ds -- find /opt/opendj -name opendj-core.jar
[loop_until]: (max_time=30, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
/opt/opendj/lib/opendj-core.jar
--- stderr ---
[loop_until]: kubectl --namespace=xlou cp ds-cts-0:/opt/opendj/lib/opendj-core.jar /tmp/ds-cts_info/opendj-core.jar -c ds
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
************************** Initializing component pods for DS-IDREPO **************************
------------------- Get DS-IDREPO software version -------------------
[loop_until]: kubectl --namespace=xlou exec ds-idrepo-0 -c ds -- find /opt/opendj -name opendj-core.jar
[loop_until]: (max_time=30, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
/opt/opendj/lib/opendj-core.jar
--- stderr ---
[loop_until]: kubectl --namespace=xlou cp ds-idrepo-0:/opt/opendj/lib/opendj-core.jar /tmp/ds-idrepo_info/opendj-core.jar -c ds
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
************************* Initializing component pods for END-USER-UI *************************
------------------ Get END-USER-UI software version ------------------
[loop_until]: kubectl --namespace=xlou exec end-user-ui-797bd795d9-2zkxx -c end-user-ui -- find /usr/share/nginx/html -name chunk-vendors.*.js
[loop_until]: (max_time=30, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
/usr/share/nginx/html/js/chunk-vendors.62e17808.js
--- stderr ---
[loop_until]: kubectl --namespace=xlou cp end-user-ui-797bd795d9-2zkxx:/usr/share/nginx/html/js/chunk-vendors.62e17808.js /tmp/end-user-ui_info/chunk-vendors.62e17808.js -c end-user-ui
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
*************************** Initializing component pods for LOGIN-UI ***************************
-------------------- Get LOGIN-UI software version --------------------
[loop_until]: kubectl --namespace=xlou exec login-ui-64f6867fc5-l6h7v -c login-ui -- find /usr/share/nginx/html -name chunk-vendors.*.js
[loop_until]: (max_time=30, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
/usr/share/nginx/html/js/chunk-vendors.ab096f69.js
--- stderr ---
[loop_until]: kubectl --namespace=xlou cp login-ui-64f6867fc5-l6h7v:/usr/share/nginx/html/js/chunk-vendors.ab096f69.js /tmp/login-ui_info/chunk-vendors.ab096f69.js -c login-ui
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
*************************** Initializing component pods for ADMIN-UI ***************************
-------------------- Get ADMIN-UI software version --------------------
[loop_until]: kubectl --namespace=xlou exec admin-ui-f7d498545-rpn6t -c admin-ui -- find /usr/share/nginx/html -name chunk-vendors.*.js
[loop_until]: (max_time=30, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
/usr/share/nginx/html/js/chunk-vendors.ee0fc829.js
--- stderr ---
[loop_until]: kubectl --namespace=xlou cp admin-ui-f7d498545-rpn6t:/usr/share/nginx/html/js/chunk-vendors.ee0fc829.js /tmp/admin-ui_info/chunk-vendors.ee0fc829.js -c admin-ui
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
====================================================================================================
====================== Admin password for AM is: aGXdSm40k9Hav5HGH8PTilVb ======================
====================================================================================================
====================================================================================================
===================== Admin password for IDM is: NNHQvoVBNCqqo26ZFcBDknRO =====================
====================================================================================================
====================================================================================================
================ Admin password for DS-CTS is: Yygt3AEMHBzYnYrwzYZlDf4w4m4GqBZV ================
====================================================================================================
====================================================================================================
============== Admin password for DS-IDREPO is: Yygt3AEMHBzYnYrwzYZlDf4w4m4GqBZV ==============
====================================================================================================
*************************************** Dumping pod list ***************************************
Dumping pod list to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/stack/20220807-161641-after-deployment/_pod-list.txt
****************************** Initializing component pods for AM ******************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app=am -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app=am -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
am-5d5df7bfb9-7nthj am-5d5df7bfb9-jfg4q am-5d5df7bfb9-wxgqb
--- stderr ---
**************************** Initializing component pods for AMSTER ****************************
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app=amster -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
amster-k2cj5
--- stderr ---
***************************** Initializing component pods for IDM *****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployment -l app=idm -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app=idm -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
idm-69ddb85d9b-9cjjw idm-69ddb85d9b-bmnb9
--- stderr ---
**************************** Initializing component pods for DS-CTS ****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-cts -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app=ds-cts -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ds-cts-0 ds-cts-1 ds-cts-2
--- stderr ---
************************** Initializing component pods for DS-IDREPO **************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-idrepo -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app=ds-idrepo -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ds-idrepo-0 ds-idrepo-1 ds-idrepo-2
--- stderr ---
************************* Initializing component pods for END-USER-UI *************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
end-user-ui-797bd795d9-2zkxx
--- stderr ---
*************************** Initializing component pods for LOGIN-UI ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
login-ui-64f6867fc5-l6h7v
--- stderr ---
*************************** Initializing component pods for ADMIN-UI ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
admin-ui-f7d498545-rpn6t
--- stderr ---
*********************************** Dumping components logs ***********************************
------------------------- Dumping logs for AM -------------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/stack/20220807-161641-after-deployment/am-5d5df7bfb9-7nthj.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/stack/20220807-161641-after-deployment/am-5d5df7bfb9-jfg4q.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/stack/20220807-161641-after-deployment/am-5d5df7bfb9-wxgqb.txt
Check pod logs for errors
----------------------- Dumping logs for AMSTER -----------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/stack/20220807-161641-after-deployment/amster-k2cj5.txt
Check pod logs for errors
------------------------ Dumping logs for IDM ------------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/stack/20220807-161641-after-deployment/idm-69ddb85d9b-9cjjw.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/stack/20220807-161641-after-deployment/idm-69ddb85d9b-bmnb9.txt
Check pod logs for errors
----------------------- Dumping logs for DS-CTS -----------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/stack/20220807-161641-after-deployment/ds-cts-0.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/stack/20220807-161641-after-deployment/ds-cts-1.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/stack/20220807-161641-after-deployment/ds-cts-2.txt
Check pod logs for errors
--------------------- Dumping logs for DS-IDREPO ---------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/stack/20220807-161641-after-deployment/ds-idrepo-0.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/stack/20220807-161641-after-deployment/ds-idrepo-1.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/stack/20220807-161641-after-deployment/ds-idrepo-2.txt
Check pod logs for errors
-------------------- Dumping logs for END-USER-UI --------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/stack/20220807-161641-after-deployment/end-user-ui-797bd795d9-2zkxx.txt
Check pod logs for errors
---------------------- Dumping logs for LOGIN-UI ----------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/stack/20220807-161641-after-deployment/login-ui-64f6867fc5-l6h7v.txt
Check pod logs for errors
---------------------- Dumping logs for ADMIN-UI ----------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/stack/20220807-161641-after-deployment/admin-ui-f7d498545-rpn6t.txt
Check pod logs for errors
The following components will be deployed:
- am (AM)
- amster (Amster)
- idm (IDM)
- ds-cts (DS)
- ds-idrepo (DS)
- end-user-ui (EndUserUi)
- login-ui (LoginUi)
- admin-ui (AdminUi)
Run create-secrets.sh to create passwords
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_sp/bin/create-secrets.sh xlou-sp
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
certificate.cert-manager.io/ds-master-cert created
certificate.cert-manager.io/ds-ssl-cert created
issuer.cert-manager.io/selfsigned-issuer created
secretagentconfiguration.secret-agent.secrets.forgerock.io/forgerock-sac created
--- stderr ---
[loop_until]: kubectl --namespace=secret-agent-system wait --for=condition=available deployment --all | grep "condition met"
[loop_until]: (max_time=300, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
deployment.apps/secret-agent-controller-manager condition met
--- stderr ---
[loop_until]: kubectl --namespace=secret-agent-system wait --for=condition=ready pod --all | grep "condition met"
[loop_until]: (max_time=300, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
pod/secret-agent-controller-manager-59fcd58bbc-7lq45 condition met
--- stderr ---
[run_command]: skaffold build --file-output=/mnt/disks/data/xslou/lodestar-fork/ext/pre-built-images/small.json --default-repo gcr.io/engineeringpit/lodestar-images --profile small --config=/tmp/tmpg6rlwcky --cache-artifacts=false --tag xlou-sp --namespace=xlou-sp
[run_command]: env={'PATH': '/mnt/disks/data/xslou/lodestar-fork/ext/bin:/home/xslou/.local/bin:/home/xslou/bin:~/bin:~/workshop/lodestar-fork/ext/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin', 'CONFIG_PROFILE': 'cdk'}
Generating tags...
- am -> gcr.io/engineeringpit/lodestar-images/am:xlou-sp
- amster -> gcr.io/engineeringpit/lodestar-images/amster:xlou-sp
- idm -> gcr.io/engineeringpit/lodestar-images/idm:xlou-sp
- ds-cts -> gcr.io/engineeringpit/lodestar-images/ds-cts:xlou-sp
- ds-idrepo -> gcr.io/engineeringpit/lodestar-images/ds-idrepo:xlou-sp
- ig -> gcr.io/engineeringpit/lodestar-images/ig:xlou-sp
- ds -> gcr.io/engineeringpit/lodestar-images/ds:xlou-sp
Starting build...
Building [ds]...
Sending build context to Docker daemon 115.2kB
Step 1/11 : FROM gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f: Pulling from forgerock-io/ds/pit1
Digest: sha256:101a0a4902e6edf8e70ee5f7a3ba30e498ac523c29eaff69badc5e22f2f9136c
Status: Image is up to date for gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
---> ed865decf122
Step 2/11 : USER root
---> Using cache
---> 2a3754203d7f
Step 3/11 : RUN apt-get update && apt-get install -y --no-install-recommends vim ncat dnsutils
---> Using cache
---> 52d83f3765cf
Step 4/11 : USER forgerock
---> Using cache
---> 913a6038f771
Step 5/11 : ENV DS_DATA_DIR /opt/opendj/data
---> Using cache
---> 0ec850162ae3
Step 6/11 : ENV PEM_KEYS_DIRECTORY "/var/run/secrets/keys/ds"
---> Using cache
---> 695a8213c565
Step 7/11 : ENV PEM_TRUSTSTORE_DIRECTORY "/var/run/secrets/keys/truststore"
---> Using cache
---> 16c1d1c1787e
Step 8/11 : COPY --chown=forgerock:root default-scripts /opt/opendj/default-scripts
---> Using cache
---> 8990db84ea27
Step 9/11 : COPY --chown=forgerock:root ldif-ext /opt/opendj/ldif-ext
---> Using cache
---> caaea98f7378
Step 10/11 : COPY --chown=forgerock:root *.sh /opt/opendj/
---> Using cache
---> 9ee4422fc4e1
Step 11/11 : RUN ./ds-setup.sh && rm ./ds-setup.sh && rm -fr ldif-ext
---> Using cache
---> fb6203f070a7
Successfully built fb6203f070a7
Successfully tagged gcr.io/engineeringpit/lodestar-images/ds:xlou-sp
Build [ds] succeeded
Building [am]...
Sending build context to Docker daemon 4.608kB
Step 1/6 : FROM gcr.io/forgerock-io/am-cdk/pit1:7.3.0-7c2ce51c1b0baf635933e3d41b38c25ee535de24
7.3.0-7c2ce51c1b0baf635933e3d41b38c25ee535de24: Pulling from forgerock-io/am-cdk/pit1
Digest: sha256:416c8163cd0e0dda600e6be4f12a701a08d97cf19140cc22c5b10fc19d5c227e
Status: Image is up to date for gcr.io/forgerock-io/am-cdk/pit1:7.3.0-7c2ce51c1b0baf635933e3d41b38c25ee535de24
---> 5629046b073a
Step 2/6 : ARG CONFIG_PROFILE=cdk
---> Using cache
---> ac8359fba1ce
Step 3/6 : RUN echo "\033[0;36m*** Building '${CONFIG_PROFILE}' profile ***\033[0m"
---> Using cache
---> 1ea1f221af5b
Step 4/6 : COPY --chown=forgerock:root config-profiles/${CONFIG_PROFILE}/ /home/forgerock/openam/
---> Using cache
---> 5e95e27f7c77
Step 5/6 : COPY --chown=forgerock:root *.sh /home/forgerock/
---> Using cache
---> 8d62bf4a2596
Step 6/6 : WORKDIR /home/forgerock
---> Using cache
---> 899875000c7e
Successfully built 899875000c7e
Successfully tagged gcr.io/engineeringpit/lodestar-images/am:xlou-sp
Build [am] succeeded
Building [amster]...
Sending build context to Docker daemon 54.27kB
Step 1/14 : FROM gcr.io/forgerock-io/amster/pit1:7.3.0-7c2ce51c1b0baf635933e3d41b38c25ee535de24
7.3.0-7c2ce51c1b0baf635933e3d41b38c25ee535de24: Pulling from forgerock-io/amster/pit1
Digest: sha256:ec8cdced4cdb6d8d33fee81cd19970a584f091376765ddcd87aeff45749f4291
Status: Image is up to date for gcr.io/forgerock-io/amster/pit1:7.3.0-7c2ce51c1b0baf635933e3d41b38c25ee535de24
---> 739cbe15f205
Step 2/14 : USER root
---> Using cache
---> 0c8198db8d71
Step 3/14 : COPY debian-buster-sources.list /etc/apt/sources.list
---> Using cache
---> 2f11e2f43442
Step 4/14 : ENV DEBIAN_FRONTEND=noninteractive
---> Using cache
---> 17498942c58e
Step 5/14 : ENV APT_OPTS="--no-install-recommends --yes"
---> Using cache
---> 2204fa9a1967
Step 6/14 : RUN apt-get update && apt-get install -y openldap-utils jq inotify-tools && apt-get clean && rm -r /var/lib/apt/lists /var/cache/apt/archives
---> Using cache
---> d6d3e64e806f
Step 7/14 : USER forgerock
---> Using cache
---> c6b8bd9bec8a
Step 8/14 : ENV SERVER_URI /am
---> Using cache
---> 42d175f74461
Step 9/14 : ARG CONFIG_PROFILE=cdk
---> Using cache
---> e5bb0fdd0248
Step 10/14 : RUN echo "\033[0;36m*** Building '${CONFIG_PROFILE}' profile ***\033[0m"
---> Using cache
---> 0b0d968b94ea
Step 11/14 : COPY --chown=forgerock:root config-profiles/${CONFIG_PROFILE}/ /opt/amster
---> Using cache
---> f9e03c42917b
Step 12/14 : COPY --chown=forgerock:root scripts /opt/amster
---> Using cache
---> 2c291b1e6262
Step 13/14 : RUN chmod 777 /opt/amster
---> Using cache
---> b05254cb2bc7
Step 14/14 : ENTRYPOINT [ "/opt/amster/docker-entrypoint.sh" ]
---> Using cache
---> d3deb0d46c37
Successfully built d3deb0d46c37
Successfully tagged gcr.io/engineeringpit/lodestar-images/amster:xlou-sp
Build [amster] succeeded
Building [idm]...
Sending build context to Docker daemon 312.8kB
Step 1/8 : FROM gcr.io/forgerock-io/idm-cdk/pit1:7.3.0-7edcaf211d615f737f7c180f109be15496e5cc1c
7.3.0-7edcaf211d615f737f7c180f109be15496e5cc1c: Pulling from forgerock-io/idm-cdk/pit1
Digest: sha256:5bac033634f6737347ebae5262d387fe4c666c1980220e6b612ff46ff229940a
Status: Image is up to date for gcr.io/forgerock-io/idm-cdk/pit1:7.3.0-7edcaf211d615f737f7c180f109be15496e5cc1c
---> 59689a525e03
Step 2/8 : COPY debian-buster-sources.list /etc/apt/sources.list
---> Using cache
---> 5731b4130587
Step 3/8 : RUN rm -f bundle/org.apache.felix.webconsole*.jar && rm -f bundle/openidm-felix-webconsole-*.jar
---> Using cache
---> 15f9a2aa2a7c
Step 4/8 : ENV JAVA_OPTS -XX:MaxRAMPercentage=65 -XX:InitialRAMPercentage=65 -XX:MaxTenuringThreshold=1 -Djava.security.egd=file:/dev/urandom -XshowSettings:vm -XX:+PrintFlagsFinal
---> Using cache
---> 7352c0f42043
Step 5/8 : ARG CONFIG_PROFILE=cdk
---> Using cache
---> 73671c9c9ae8
Step 6/8 : RUN echo "\033[0;36m*** Building '${CONFIG_PROFILE}' profile ***\033[0m"
---> Using cache
---> 0355c556a687
Step 7/8 : COPY --chown=forgerock:root config-profiles/${CONFIG_PROFILE}/ /opt/openidm
---> Using cache
---> 016e85112097
Step 8/8 : COPY --chown=forgerock:root . /opt/openidm
---> Using cache
---> 205f077ba2e8
Successfully built 205f077ba2e8
Successfully tagged gcr.io/engineeringpit/lodestar-images/idm:xlou-sp
The push refers to repository [gcr.io/engineeringpit/lodestar-images/idm]
f2f48ad2e4c3: Preparing
880253e7f8a6: Preparing
31b53e898fb0: Preparing
b17c1ed5efd0: Preparing
de8313018499: Preparing
31d45c5aac4f: Preparing
d3bd8301a2f6: Preparing
194cc08cbea2: Preparing
6db889e47719: Preparing
735956b91a18: Preparing
31d45c5aac4f: Waiting
d3bd8301a2f6: Waiting
194cc08cbea2: Waiting
6db889e47719: Waiting
735956b91a18: Waiting
880253e7f8a6: Layer already exists
31b53e898fb0: Layer already exists
de8313018499: Layer already exists
f2f48ad2e4c3: Layer already exists
b17c1ed5efd0: Layer already exists
31d45c5aac4f: Layer already exists
d3bd8301a2f6: Layer already exists
6db889e47719: Layer already exists
194cc08cbea2: Layer already exists
735956b91a18: Layer already exists
xlou-sp: digest: sha256:860c3ebab6d21c2509940d029b66aeb6f45bb1e4c2390e542f66a1ed384eae9a size: 2415
Build [idm] succeeded
Building [ds-cts]...
Sending build context to Docker daemon 78.85kB
Step 1/10 : FROM gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f: Pulling from forgerock-io/ds/pit1
Digest: sha256:101a0a4902e6edf8e70ee5f7a3ba30e498ac523c29eaff69badc5e22f2f9136c
Status: Image is up to date for gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
---> ed865decf122
Step 2/10 : USER root
---> Using cache
---> 2a3754203d7f
Step 3/10 : COPY debian-buster-sources.list /etc/apt/sources.list
---> Using cache
---> 404a9596eca1
Step 4/10 : RUN chown -R forgerock:root /opt/opendj
---> Using cache
---> 3edd4638601f
Step 5/10 : USER forgerock
---> Using cache
---> 7f210a4ac6f0
Step 6/10 : COPY --chown=forgerock:root common /opt/opendj/
---> Using cache
---> ddfe992df770
Step 7/10 : COPY --chown=forgerock:root cts /opt/opendj/
---> Using cache
---> cfc093c50789
Step 8/10 : COPY --chown=forgerock:root scripts /opt/opendj/scripts
---> Using cache
---> 4dd45390f834
Step 9/10 : ARG profile_version
---> Using cache
---> 880455457730
Step 10/10 : RUN bin/setup.sh && bin/relax-security-settings.sh && rm bin/setup.sh bin/relax-security-settings.sh
---> Using cache
---> ae3536d6388d
Successfully built ae3536d6388d
Successfully tagged gcr.io/engineeringpit/lodestar-images/ds-cts:xlou-sp
Build [ds-cts] succeeded
Building [ds-idrepo]...
Sending build context to Docker daemon 117.8kB
Step 1/10 : FROM gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f: Pulling from forgerock-io/ds/pit1
Digest: sha256:101a0a4902e6edf8e70ee5f7a3ba30e498ac523c29eaff69badc5e22f2f9136c
Status: Image is up to date for gcr.io/forgerock-io/ds/pit1:7.3.0-167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
---> ed865decf122
Step 2/10 : COPY debian-buster-sources.list /etc/apt/sources.list
---> Using cache
---> e397057857f0
Step 3/10 : WORKDIR /opt/opendj
---> Using cache
---> 54762a557c5d
Step 4/10 : COPY --chown=forgerock:root common /opt/opendj/
---> Using cache
---> 30c7f7a533e5
Step 5/10 : COPY --chown=forgerock:root idrepo /opt/opendj/
---> Using cache
---> 5c9976aa2aa5
Step 6/10 : COPY --chown=forgerock:root scripts /opt/opendj/scripts
---> Using cache
---> 1e831f9bc32f
Step 7/10 : COPY --chown=forgerock:root uma /opt/opendj/uma
---> Using cache
---> e73baf8ba9d6
Step 8/10 : COPY --chown=forgerock:root idrepo/*.ldif /var/tmp/
---> Using cache
---> 0524fe407963
Step 9/10 : RUN chmod +w template/setup-profiles/AM/config/6.5/base-entries.ldif && cat scripts/external-am-datastore.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_audit.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_pending_requests.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_resource_set_labels.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_resource_sets.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat /var/tmp/alpha_bravo.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && chmod +w template/setup-profiles/AM/identity-store/7.0/base-entries.ldif && cat /var/tmp/orgs.ldif >> template/setup-profiles/AM/identity-store/7.0/base-entries.ldif && rm /var/tmp/*ldif
---> Using cache
---> e76f815abeb1
Step 10/10 : RUN bin/setup.sh && bin/relax-security-settings.sh && rm bin/setup.sh bin/relax-security-settings.sh
---> Using cache
---> 7edf7793dfc7
Successfully built 7edf7793dfc7
Successfully tagged gcr.io/engineeringpit/lodestar-images/ds-idrepo:xlou-sp
Build [ds-idrepo] succeeded
Building [ig]...
Sending build context to Docker daemon 29.18kB
Step 1/6 : FROM gcr.io/forgerock-io/ig/pit1:7.3.0-latest-postcommit
7.3.0-latest-postcommit: Pulling from forgerock-io/ig/pit1
Digest: sha256:4818c7cd5c625cc2d0ed7c354ec4ece0a74a0871698207aea51b9146b4aa1998
Status: Image is up to date for gcr.io/forgerock-io/ig/pit1:7.3.0-latest-postcommit
---> 3c4055bd0013
Step 2/6 : COPY debian-buster-sources.list /etc/apt/sources.list
---> Using cache
---> 373058e6e5f7
Step 3/6 : ARG CONFIG_PROFILE=cdk
---> Using cache
---> 1bb4d17a0fd5
Step 4/6 : RUN echo "\033[0;36m*** Building '${CONFIG_PROFILE}' profile ***\033[0m"
---> Using cache
---> abf2c0f541e4
Step 5/6 : COPY --chown=forgerock:root config-profiles/${CONFIG_PROFILE}/ /var/ig
---> Using cache
---> a9cb2e5c39fd
Step 6/6 : COPY --chown=forgerock:root . /var/ig
---> Using cache
---> a049f4cd9f75
Successfully built a049f4cd9f75
Successfully tagged gcr.io/engineeringpit/lodestar-images/ig:xlou-sp
Build [ig] succeeded
There is a new version (1.39.1) of Skaffold available. Download it from:
https://github.com/GoogleContainerTools/skaffold/releases/tag/v1.39.1
Help improve Skaffold with our 2-minute anonymous survey: run 'skaffold survey'
To help improve the quality of this product, we collect anonymized usage data. For details on what is tracked and how we use this data, visit . This data is handled in accordance with our privacy policy.
You may choose to opt out of this collection by running the following command:
skaffold config set --global collect-metrics false
[run_command]: skaffold deploy --build-artifacts=/mnt/disks/data/xslou/lodestar-fork/ext/pre-built-images/small.json --profile small --config=/tmp/tmpbmwm7nti --label skaffold.dev/profile=small --label skaffold.dev/run-id=xlou-sp --force=false --status-check=true --namespace=xlou-sp
Tags used in deployment:
- am -> gcr.io/engineeringpit/lodestar-images/am:xlou-sp@sha256:e2be3b24f724e416b86384b94cb9d1636008c6b3248a5b6f65e5a88ac6c555b4
- amster -> gcr.io/engineeringpit/lodestar-images/amster:xlou-sp@sha256:ecfecdbeeac80327355fb082516ebdc5dd25cf4bdae4c0bca89059c736413326
- idm -> gcr.io/engineeringpit/lodestar-images/idm:xlou-sp@sha256:860c3ebab6d21c2509940d029b66aeb6f45bb1e4c2390e542f66a1ed384eae9a
- ds-cts -> gcr.io/engineeringpit/lodestar-images/ds-cts:xlou-sp@sha256:820132c915ddd86e84eb96ce9b15b1b0c20a7861268e9b20b64bc46126d2db9b
- ds-idrepo -> gcr.io/engineeringpit/lodestar-images/ds-idrepo:xlou-sp@sha256:a47d27b4cefab33a83a8720187488738f171e329c3fc04cebc863a18289f8153
- ig -> gcr.io/engineeringpit/lodestar-images/ig:xlou-sp@sha256:9f1fc770d95497c02f32fff1d79f113a2509f8a4432185fe2fd6722abd45325e
- ds -> gcr.io/engineeringpit/lodestar-images/ds:xlou-sp@sha256:ceae5d9f82d35ce49d4a0d74bcff311746649779fa0e214cb35ac09c5f90e7f6
Starting deploy...
- configmap/idm created
- configmap/idm-logging-properties created
- configmap/platform-config created
- secret/cloud-storage-credentials-cts created
- secret/cloud-storage-credentials-idrepo created
- service/admin-ui created
- service/am created
- service/ds-cts created
- service/ds-idrepo created
- service/end-user-ui created
- service/idm created
- service/login-ui created
- deployment.apps/admin-ui created
- deployment.apps/am created
- deployment.apps/end-user-ui created
- deployment.apps/idm created
- deployment.apps/login-ui created
- statefulset.apps/ds-cts created
- statefulset.apps/ds-idrepo created
- job.batch/amster created
- job.batch/ldif-importer created
- ingress.networking.k8s.io/forgerock created
- ingress.networking.k8s.io/ig-web created
Waiting for deployments to stabilize...
- xlou-sp:deployment/login-ui is ready. [6/7 deployment(s) still pending]
- xlou-sp:deployment/admin-ui is ready. [5/7 deployment(s) still pending]
- xlou-sp:deployment/end-user-ui is ready. [4/7 deployment(s) still pending]
- xlou-sp:deployment/am: waiting for init container fbc-init to start
- xlou-sp:pod/am-7cb58cf89f-rhtt7: waiting for init container fbc-init to start
- xlou-sp:deployment/idm: FailedMount: MountVolume.SetUp failed for volume "truststore" : failed to sync secret cache: timed out waiting for the condition
- xlou-sp:pod/idm-6f64c5998c-s8d65: FailedMount: MountVolume.SetUp failed for volume "truststore" : failed to sync secret cache: timed out waiting for the condition
- xlou-sp:pod/idm-6f64c5998c-wklqg: waiting for init container fbc-init to start
- xlou-sp:statefulset/ds-cts: waiting for init container initialize to start
- xlou-sp:pod/ds-cts-0: waiting for init container initialize to start
- xlou-sp:statefulset/ds-idrepo: waiting for init container initialize to start
- xlou-sp:pod/ds-idrepo-0: waiting for init container initialize to start
- xlou-sp:statefulset/ds-cts: unable to determine current service state of pod "ds-cts-1"
- xlou-sp:pod/ds-cts-1: unable to determine current service state of pod "ds-cts-1"
- xlou-sp:statefulset/ds-idrepo: waiting for init container initialize to complete
- xlou-sp:pod/ds-idrepo-0: waiting for init container initialize to complete
> [ds-idrepo-0 initialize] Initializing "data/db" from Docker image
> [ds-idrepo-0 initialize] Initializing "data/changelogDb" from Docker image
> [ds-idrepo-0 initialize] Initializing "data/import-tmp" from Docker image
> [ds-idrepo-0 initialize] Initializing "data/locks" from Docker image
> [ds-idrepo-0 initialize] Initializing "data/var" from Docker image
> [ds-idrepo-0 initialize] Upgrading configuration and data...
> [ds-idrepo-0 initialize] * OpenDJ data has already been upgraded to version
> [ds-idrepo-0 initialize] 7.3.0.167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
> [ds-idrepo-0 initialize] Rebuilding degraded indexes for base DN "ou=tokens"...
> [ds-idrepo-0 initialize] Rebuilding degraded indexes for base DN "ou=identities"...
> [ds-idrepo-0 initialize] category=BACKEND severity=WARNING seq=39 msg=Due to changes in the configuration, index ou=identities_fr-idm-managed-user-meta is currently operating in a degraded state and must be rebuilt before it can be used
> [ds-idrepo-0 initialize] category=BACKEND severity=WARNING seq=40 msg=Due to changes in the configuration, index ou=identities_fr-idm-managed-user-groups is currently operating in a degraded state and must be rebuilt before it can be used
> [ds-idrepo-0 initialize] category=BACKEND severity=WARNING seq=41 msg=Due to changes in the configuration, index ou=identities_fr-idm-managed-user-authzroles-managed-role is currently operating in a degraded state and must be rebuilt before it can be used
> [ds-idrepo-0 initialize] category=BACKEND severity=WARNING seq=42 msg=Due to changes in the configuration, index ou=identities_fr-idm-managed-user-roles is currently operating in a degraded state and must be rebuilt before it can be used
> [ds-idrepo-0 initialize] category=BACKEND severity=WARNING seq=43 msg=Due to changes in the configuration, index ou=identities_fr-idm-managed-organization-admin is currently operating in a degraded state and must be rebuilt before it can be used
> [ds-idrepo-0 initialize] category=BACKEND severity=WARNING seq=44 msg=Due to changes in the configuration, index ou=identities_fr-idm-managed-user-inactive-date is currently operating in a degraded state and must be rebuilt before it can be used
> [ds-idrepo-0 initialize] category=BACKEND severity=WARNING seq=45 msg=Due to changes in the configuration, index ou=identities_fr-idm-managed-user-active-date is currently operating in a degraded state and must be rebuilt before it can be used
> [ds-idrepo-0 initialize] category=BACKEND severity=WARNING seq=46 msg=Due to changes in the configuration, index ou=identities_fr-idm-managed-organization-member is currently operating in a degraded state and must be rebuilt before it can be used
> [ds-idrepo-0 initialize] category=BACKEND severity=WARNING seq=47 msg=Due to changes in the configuration, index ou=identities_fr-idm-uuid is currently operating in a degraded state and must be rebuilt before it can be used
> [ds-idrepo-0 initialize] category=BACKEND severity=WARNING seq=48 msg=Due to changes in the configuration, index ou=identities_fr-idm-managed-user-manager is currently operating in a degraded state and must be rebuilt before it can be used
> [ds-idrepo-0 initialize] category=BACKEND severity=WARNING seq=49 msg=Due to changes in the configuration, index ou=identities_fr-idm-managed-user-notifications is currently operating in a degraded state and must be rebuilt before it can be used
> [ds-idrepo-0 initialize] category=BACKEND severity=WARNING seq=50 msg=Due to changes in the configuration, index ou=identities_fr-idm-managed-organization-owner is currently operating in a degraded state and must be rebuilt before it can be used
> [ds-idrepo-0 initialize] category=BACKEND severity=WARNING seq=51 msg=Due to changes in the configuration, index ou=identities_fr-idm-managed-user-authzroles-internal-role is currently operating in a degraded state and must be rebuilt before it can be used
> [ds-idrepo-0 initialize] Rebuilding degraded indexes for base DN "ou=am-config"...
> [ds-idrepo-0 initialize] Rebuilding degraded indexes for base DN "dc=openidm,dc=forgerock,dc=io"...
> [ds-idrepo-0 initialize] Updating the "uid=admin" password
> [ds-idrepo-0 initialize] Updating the "uid=monitor" password
> [ds-idrepo-0 initialize] Initialization completed
> [ds-idrepo-0 initialize] AUTORESTORE_FROM_DSBACKUP is missing or not set to true. Skipping restore
- xlou-sp:deployment/idm: Readiness probe failed: HTTP probe failed with statuscode: 404
- xlou-sp:pod/idm-6f64c5998c-s8d65: Readiness probe failed: HTTP probe failed with statuscode: 404
- xlou-sp:pod/idm-6f64c5998c-wklqg: Readiness probe failed: HTTP probe failed with statuscode: 404
- xlou-sp:statefulset/ds-cts: waiting for init container initialize to complete
- xlou-sp:pod/ds-cts-1: waiting for init container initialize to complete
> [ds-cts-1 initialize] Initializing "data/db" from Docker image
> [ds-cts-1 initialize] Initializing "data/changelogDb" from Docker image
> [ds-cts-1 initialize] Initializing "data/import-tmp" from Docker image
> [ds-cts-1 initialize] Initializing "data/locks" from Docker image
> [ds-cts-1 initialize] Initializing "data/var" from Docker image
> [ds-cts-1 initialize] Upgrading configuration and data...
> [ds-cts-1 initialize] * OpenDJ data has already been upgraded to version
> [ds-cts-1 initialize] 7.3.0.167d7aaf6f08d4399db54a86d3d62ae8e9552a3f
> [ds-cts-1 initialize] Rebuilding degraded indexes for base DN "ou=tokens"...
> [ds-cts-1 initialize] Updating the "uid=admin" password
> [ds-cts-1 initialize] Updating the "uid=monitor" password
> [ds-cts-1 initialize] Initialization completed
> [ds-cts-1 initialize] AUTORESTORE_FROM_DSBACKUP is missing or not set to true. Skipping restore
- xlou-sp:statefulset/ds-idrepo: unable to determine current service state of pod "ds-idrepo-2"
- xlou-sp:pod/ds-idrepo-2: unable to determine current service state of pod "ds-idrepo-2"
- xlou-sp:deployment/idm is ready. [3/7 deployment(s) still pending]
- xlou-sp:statefulset/ds-cts is ready. [2/7 deployment(s) still pending]
- xlou-sp:deployment/am is ready. [1/7 deployment(s) still pending]
- xlou-sp:statefulset/ds-idrepo is ready.
Deployments stabilized in 2 minutes 0.965 seconds
****************************** Initializing component pods for AM ******************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-sp get deployments -l app=am -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou-sp get pods -l app=am -o jsonpath={.items[*].metadata.name}` | grep 2
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
am-7cb58cf89f-rhtt7 am-7cb58cf89f-ws2sj
--- stderr ---
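The "Get pod list" step above counts pods by feeding the space-separated jsonpath output through awk and grepping for the expected replica count. The trick in isolation, with literal pod names standing in for the live kubectl query:

```shell
# Count whitespace-separated fields with awk; NF is awk's built-in field count.
# The literal pod names below stand in for the kubectl jsonpath output.
pods="am-7cb58cf89f-rhtt7 am-7cb58cf89f-ws2sj"
count=$(awk -F" " "{print NF}" <<< "$pods")
echo "$count"   # prints 2 for the two pod names above
```

Grepping the printed count for the expected replica number (here `| grep 2`) is what turns this into a pass/fail check for `loop_until`.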
-------------- Check pod am-7cb58cf89f-rhtt7 is running --------------
[loop_until]: kubectl --namespace=xlou-sp get pods am-7cb58cf89f-rhtt7 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp get pods am-7cb58cf89f-rhtt7 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp get pod am-7cb58cf89f-rhtt7 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2022-08-07T16:18:32Z
--- stderr ---
------- Check pod am-7cb58cf89f-rhtt7 filesystem is accessible -------
[loop_until]: kubectl --namespace=xlou-sp exec am-7cb58cf89f-rhtt7 -c openam -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------- Check pod am-7cb58cf89f-rhtt7 restart count -------------
[loop_until]: kubectl --namespace=xlou-sp get pod am-7cb58cf89f-rhtt7 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod am-7cb58cf89f-rhtt7 has been restarted 0 times.
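The restart-count check reads `.status.containerStatuses[*].restartCount`, which returns one space-separated number per container. A small sketch of summing those values into the "restarted N times" report (the literal `"0 0"` stands in for the live kubectl jsonpath output):

```shell
# Sum per-container restart counts; jsonpath returns them space-separated.
# "0 0" below is a stand-in for the live kubectl output, not a real query.
restarts="0 0"
total=0
for n in $restarts; do
  total=$(( total + n ))
done
echo "Pod has been restarted $total times."
```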
-------------- Check pod am-7cb58cf89f-ws2sj is running --------------
[loop_until]: kubectl --namespace=xlou-sp get pods am-7cb58cf89f-ws2sj -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp get pods am-7cb58cf89f-ws2sj -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp get pod am-7cb58cf89f-ws2sj -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2022-08-07T16:18:32Z
--- stderr ---
------- Check pod am-7cb58cf89f-ws2sj filesystem is accessible -------
[loop_until]: kubectl --namespace=xlou-sp exec am-7cb58cf89f-ws2sj -c openam -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------- Check pod am-7cb58cf89f-ws2sj restart count -------------
[loop_until]: kubectl --namespace=xlou-sp get pod am-7cb58cf89f-ws2sj -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod am-7cb58cf89f-ws2sj has been restarted 0 times.
**************************** Initializing component pods for AMSTER ****************************
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou-sp --field-selector status.phase!=Failed get pods -l app=amster -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
amster-49df8
--- stderr ---
***************************** Initializing component pods for IDM *****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-sp get deployment -l app=idm -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou-sp get pods -l app=idm -o jsonpath={.items[*].metadata.name}` | grep 2
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
idm-6f64c5998c-s8d65 idm-6f64c5998c-wklqg
--- stderr ---
-------------- Check pod idm-6f64c5998c-s8d65 is running --------------
[loop_until]: kubectl --namespace=xlou-sp get pods idm-6f64c5998c-s8d65 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp get pods idm-6f64c5998c-s8d65 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp get pod idm-6f64c5998c-s8d65 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2022-08-07T16:18:32Z
--- stderr ---
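The startTime queried above is an RFC 3339 UTC timestamp; turning it into a pod age is plain epoch arithmetic. A sketch, assuming GNU date for the `-d` flag:

```shell
# Pod age = now minus the .status.startTime reported by kubectl.
# GNU date is assumed; BSD date would need "-j -f" instead of "-d".
start="2022-08-07T16:18:32Z"
start_s=$(date -u -d "$start" +%s)   # parse RFC 3339 into epoch seconds
now_s=$(date -u +%s)
echo "pod started $(( now_s - start_s ))s ago"
```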
------- Check pod idm-6f64c5998c-s8d65 filesystem is accessible -------
[loop_until]: kubectl --namespace=xlou-sp exec idm-6f64c5998c-s8d65 -c openidm -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-11
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------ Check pod idm-6f64c5998c-s8d65 restart count ------------
[loop_until]: kubectl --namespace=xlou-sp get pod idm-6f64c5998c-s8d65 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod idm-6f64c5998c-s8d65 has been restarted 0 times.
-------------- Check pod idm-6f64c5998c-wklqg is running --------------
[loop_until]: kubectl --namespace=xlou-sp get pods idm-6f64c5998c-wklqg -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp get pods idm-6f64c5998c-wklqg -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp get pod idm-6f64c5998c-wklqg -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2022-08-07T16:18:32Z
--- stderr ---
------- Check pod idm-6f64c5998c-wklqg filesystem is accessible -------
[loop_until]: kubectl --namespace=xlou-sp exec idm-6f64c5998c-wklqg -c openidm -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-11
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------ Check pod idm-6f64c5998c-wklqg restart count ------------
[loop_until]: kubectl --namespace=xlou-sp get pod idm-6f64c5998c-wklqg -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod idm-6f64c5998c-wklqg has been restarted 0 times.
**************************** Initializing component pods for DS-CTS ****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-sp get statefulsets -l app=ds-cts -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou-sp get pods -l app=ds-cts -o jsonpath={.items[*].metadata.name}` | grep 3
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
ds-cts-0 ds-cts-1 ds-cts-2
--- stderr ---
-------------------- Check pod ds-cts-0 is running --------------------
[loop_until]: kubectl --namespace=xlou-sp get pods ds-cts-0 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp get pods ds-cts-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp get pod ds-cts-0 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2022-08-07T16:18:36Z
--- stderr ---
------------- Check pod ds-cts-0 filesystem is accessible -------------
[loop_until]: kubectl --namespace=xlou-sp exec ds-cts-0 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------------ Check pod ds-cts-0 restart count ------------------
[loop_until]: kubectl --namespace=xlou-sp get pod ds-cts-0 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-cts-0 has been restarted 0 times.
-------------------- Check pod ds-cts-1 is running --------------------
[loop_until]: kubectl --namespace=xlou-sp get pods ds-cts-1 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp get pods ds-cts-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp get pod ds-cts-1 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2022-08-07T16:19:08Z
--- stderr ---
------------- Check pod ds-cts-1 filesystem is accessible -------------
[loop_until]: kubectl --namespace=xlou-sp exec ds-cts-1 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------------ Check pod ds-cts-1 restart count ------------------
[loop_until]: kubectl --namespace=xlou-sp get pod ds-cts-1 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-cts-1 has been restarted 0 times.
-------------------- Check pod ds-cts-2 is running --------------------
[loop_until]: kubectl --namespace=xlou-sp get pods ds-cts-2 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp get pods ds-cts-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp get pod ds-cts-2 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2022-08-07T16:19:41Z
--- stderr ---
------------- Check pod ds-cts-2 filesystem is accessible -------------
[loop_until]: kubectl --namespace=xlou-sp exec ds-cts-2 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------------ Check pod ds-cts-2 restart count ------------------
[loop_until]: kubectl --namespace=xlou-sp get pod ds-cts-2 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-cts-2 has been restarted 0 times.
************************** Initializing component pods for DS-IDREPO **************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-sp get statefulsets -l app=ds-idrepo -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou-sp get pods -l app=ds-idrepo -o jsonpath={.items[*].metadata.name}` | grep 3
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
ds-idrepo-0 ds-idrepo-1 ds-idrepo-2
--- stderr ---
------------------ Check pod ds-idrepo-0 is running ------------------
[loop_until]: kubectl --namespace=xlou-sp get pods ds-idrepo-0 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp get pods ds-idrepo-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp get pod ds-idrepo-0 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2022-08-07T16:18:37Z
--- stderr ---
----------- Check pod ds-idrepo-0 filesystem is accessible -----------
[loop_until]: kubectl --namespace=xlou-sp exec ds-idrepo-0 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
----------------- Check pod ds-idrepo-0 restart count -----------------
[loop_until]: kubectl --namespace=xlou-sp get pod ds-idrepo-0 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-idrepo-0 has been restarted 0 times.
------------------ Check pod ds-idrepo-1 is running ------------------
[loop_until]: kubectl --namespace=xlou-sp get pods ds-idrepo-1 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp get pods ds-idrepo-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp get pod ds-idrepo-1 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2022-08-07T16:19:17Z
--- stderr ---
----------- Check pod ds-idrepo-1 filesystem is accessible -----------
[loop_until]: kubectl --namespace=xlou-sp exec ds-idrepo-1 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
----------------- Check pod ds-idrepo-1 restart count -----------------
[loop_until]: kubectl --namespace=xlou-sp get pod ds-idrepo-1 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-idrepo-1 has been restarted 0 times.
------------------ Check pod ds-idrepo-2 is running ------------------
[loop_until]: kubectl --namespace=xlou-sp get pods ds-idrepo-2 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp get pods ds-idrepo-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp get pod ds-idrepo-2 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2022-08-07T16:19:58Z
--- stderr ---
----------- Check pod ds-idrepo-2 filesystem is accessible -----------
[loop_until]: kubectl --namespace=xlou-sp exec ds-idrepo-2 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
----------------- Check pod ds-idrepo-2 restart count -----------------
[loop_until]: kubectl --namespace=xlou-sp get pod ds-idrepo-2 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-idrepo-2 has been restarted 0 times.
************************* Initializing component pods for END-USER-UI *************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-sp get deployments -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou-sp get pods -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].metadata.name}` | grep 1
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
end-user-ui-7c69fbf965-ddpwd
--- stderr ---
---------- Check pod end-user-ui-7c69fbf965-ddpwd is running ----------
[loop_until]: kubectl --namespace=xlou-sp get pods end-user-ui-7c69fbf965-ddpwd -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp get pods end-user-ui-7c69fbf965-ddpwd -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp get pod end-user-ui-7c69fbf965-ddpwd -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2022-08-07T16:18:32Z
--- stderr ---
--- Check pod end-user-ui-7c69fbf965-ddpwd filesystem is accessible ---
[loop_until]: kubectl --namespace=xlou-sp exec end-user-ui-7c69fbf965-ddpwd -c end-user-ui -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin
boot
dev
docker-entrypoint.d
docker-entrypoint.sh
entrypoint.sh
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
variable_replacement.sh
--- stderr ---
-------- Check pod end-user-ui-7c69fbf965-ddpwd restart count --------
[loop_until]: kubectl --namespace=xlou-sp get pod end-user-ui-7c69fbf965-ddpwd -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod end-user-ui-7c69fbf965-ddpwd has been restarted 0 times.
*************************** Initializing component pods for LOGIN-UI ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-sp get deployments -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou-sp get pods -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].metadata.name}` | grep 1
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
login-ui-75d4dbc487-dz885
--- stderr ---
----------- Check pod login-ui-75d4dbc487-dz885 is running -----------
[loop_until]: kubectl --namespace=xlou-sp get pods login-ui-75d4dbc487-dz885 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp get pods login-ui-75d4dbc487-dz885 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp get pod login-ui-75d4dbc487-dz885 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2022-08-07T16:18:32Z
--- stderr ---
---- Check pod login-ui-75d4dbc487-dz885 filesystem is accessible ----
[loop_until]: kubectl --namespace=xlou-sp exec login-ui-75d4dbc487-dz885 -c login-ui -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin
boot
dev
docker-entrypoint.d
docker-entrypoint.sh
entrypoint.sh
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
variable_replacement.sh
--- stderr ---
---------- Check pod login-ui-75d4dbc487-dz885 restart count ----------
[loop_until]: kubectl --namespace=xlou-sp get pod login-ui-75d4dbc487-dz885 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod login-ui-75d4dbc487-dz885 has been restarted 0 times.
*************************** Initializing component pods for ADMIN-UI ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-sp get deployments -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou-sp get pods -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].metadata.name}` | grep 1
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
admin-ui-75d4dcddd9-gn6hx
--- stderr ---
----------- Check pod admin-ui-75d4dcddd9-gn6hx is running -----------
[loop_until]: kubectl --namespace=xlou-sp get pods admin-ui-75d4dcddd9-gn6hx -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp get pods admin-ui-75d4dcddd9-gn6hx -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp get pod admin-ui-75d4dcddd9-gn6hx -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2022-08-07T16:18:31Z
--- stderr ---
---- Check pod admin-ui-75d4dcddd9-gn6hx filesystem is accessible ----
[loop_until]: kubectl --namespace=xlou-sp exec admin-ui-75d4dcddd9-gn6hx -c admin-ui -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin
boot
dev
docker-entrypoint.d
docker-entrypoint.sh
entrypoint.sh
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
variable_replacement.sh
--- stderr ---
---------- Check pod admin-ui-75d4dcddd9-gn6hx restart count ----------
[loop_until]: kubectl --namespace=xlou-sp get pod admin-ui-75d4dcddd9-gn6hx -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod admin-ui-75d4dcddd9-gn6hx has been restarted 0 times.
******************************* Checking AM component is running *******************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-sp get deployments -l app=am -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2
--- stderr ---
-------------- Waiting for 2 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou-sp get deployments am -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:2 replicas:2"
[loop_until]: (max_time=900, interval=30, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:2 replicas:2
--- stderr ---
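The readiness gate above works by rendering `readyReplicas` and `replicas` into a single status string via jsonpath and grepping it for matching counts. Factored into a helper for clarity (status strings mirror the jsonpath output shown; the function name is illustrative):

```shell
# deployment_ready STATUS N - succeeds when the jsonpath status line
# reports N pods ready out of N total, i.e. the rollout is complete.
deployment_ready() {
  echo "$1" | grep -q "ready:$2 replicas:$2"
}

deployment_ready "ready:2 replicas:2" 2 && echo "am ready"
```

The loop_until wrapper simply reruns the kubectl+grep pipeline every 30s until this test passes or max_time (900s) is exhausted.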
***************************** Checking AMSTER component is running *****************************
------------------ Waiting for Amster job to finish ------------------
--------------------- Get expected number of pods ---------------------
-------------- Waiting for 1 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou-sp get jobs amster -o jsonpath="{.status.succeeded}" | grep "1"
[loop_until]: (max_time=900, interval=30, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
****************************** Checking IDM component is running ******************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-sp get deployment -l app=idm -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2
--- stderr ---
-------------- Waiting for 2 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou-sp get deployment idm -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:2 replicas:2"
[loop_until]: (max_time=900, interval=30, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:2 replicas:2
--- stderr ---
***************************** Checking DS-CTS component is running *****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-sp get statefulsets -l app=ds-cts -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
-------------- Waiting for 3 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou-sp get statefulsets ds-cts -o jsonpath="current:{.status.currentReplicas} ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "current:3 ready:3 replicas:3"
[loop_until]: (max_time=900, interval=30, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
current:3 ready:3 replicas:3
--- stderr ---
*************************** Checking DS-IDREPO component is running ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-sp get statefulsets -l app=ds-idrepo -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
-------------- Waiting for 3 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou-sp get statefulsets ds-idrepo -o jsonpath="current:{.status.currentReplicas} ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "current:3 ready:3 replicas:3"
[loop_until]: (max_time=900, interval=30, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
current:3 ready:3 replicas:3
--- stderr ---
************************** Checking END-USER-UI component is running **************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-sp get deployments -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
-------------- Waiting for 1 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou-sp get deployments end-user-ui -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:1 replicas:1"
[loop_until]: (max_time=900, interval=30, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:1 replicas:1
--- stderr ---
**************************** Checking LOGIN-UI component is running ****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-sp get deployments -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
-------------- Waiting for 1 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou-sp get deployments login-ui -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:1 replicas:1"
[loop_until]: (max_time=900, interval=30, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:1 replicas:1
--- stderr ---
**************************** Checking ADMIN-UI component is running ****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-sp get deployments -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
-------------- Waiting for 1 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou-sp get deployments admin-ui -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:1 replicas:1"
[loop_until]: (max_time=900, interval=30, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:1 replicas:1
--- stderr ---
****************************** Livecheck stage: After deployment ******************************
------------------------ Running AM livecheck ------------------------
Livecheck to https://xlou-sp.xlou-cdm.perf.freng.org/am/json/health/ready
[http_cmd]: curl --insecure -L -X GET "https://xlou-sp.xlou-cdm.perf.freng.org/am/json/health/ready"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
[loop_until]: kubectl --namespace=xlou-sp get secret am-env-secrets -o jsonpath="{.data.AM_PASSWORDS_AMADMIN_CLEAR}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
bzFBcDd0c1FORW1XRTRndkkyZWFORUZU
--- stderr ---
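The secret value printed above is base64-encoded, as all Kubernetes Secret data is; decoding it yields the cleartext amadmin password that the authenticate call below passes in the X-OpenAM-Password header:

```shell
# Kubernetes stores secret data base64-encoded; decode to recover
# the cleartext password (value copied from the log above).
printf '%s' "bzFBcDd0c1FORW1XRTRndkkyZWFORUZU" | base64 --decode
# -> o1Ap7tsQNEmWE4gvI2eaNEFT
```

The same decode step applies to the idm-env-secrets and ds-passwords lookups later in this stage.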
Authenticate user via REST
[http_cmd]: curl -H "X-OpenAM-Username: amadmin" -H "X-OpenAM-Password: o1Ap7tsQNEmWE4gvI2eaNEFT" -H "Content-Type: application/json" -H "Accept-API-Version: resource=2.0, protocol=1.0" --insecure -L -X POST "https://xlou-sp.xlou-cdm.perf.freng.org/am/json/authenticate?realm=/"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
{
"tokenId": "OEwB3HW50AhuDtiPeQLcpKBs3rk.*AAJTSQACMDIAAlNLABxUNjJyV3NLUkduY2tsT3FpaUdEMHcvb0NSZU09AAR0eXBlAANDVFMAAlMxAAIwMQ..*",
"successUrl": "/am/console",
"realm": "/"
}
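Follow-up REST calls would need the SSO token from this authenticate response. A dependency-free way to pull `tokenId` out of the JSON with sed (response abridged and hypothetical here; a JSON-aware tool like jq is preferable when available):

```shell
# Extract "tokenId" from an AM authenticate response without jq.
# The sed capture grabs everything between the quotes after the key.
response='{"tokenId":"OEwB3HW50AhuDtiPeQLcpKBs3rk.*AAJT...","successUrl":"/am/console","realm":"/"}'
token=$(printf '%s' "$response" | sed -n 's/.*"tokenId": *"\([^"]*\)".*/\1/p')
echo "$token"
```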
---------------------- Running AMSTER livecheck ----------------------
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou-sp --field-selector status.phase!=Failed get pods -l app=amster -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
amster-49df8
--- stderr ---
Amster import completed. AM is now configured.
Amster livecheck passed
------------------------ Running IDM livecheck ------------------------
Livecheck to https://xlou-sp.xlou-cdm.perf.freng.org/openidm/info/ping
[loop_until]: kubectl --namespace=xlou-sp get secret idm-env-secrets -o jsonpath="{.data.OPENIDM_ADMIN_PASSWORD}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
T2ZmUzBPUGtWU3F0OFhxcHg2ZGl2enEw
--- stderr ---
Set admin password: OffS0OPkVSqt8Xqpx6divzq0
[http_cmd]: curl -H "X-OpenIDM-Username: anonymous" -H "X-OpenIDM-Password: anonymous" --insecure -L -X GET "https://xlou-sp.xlou-cdm.perf.freng.org/openidm/info/ping"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
{
"_id": "",
"_rev": "",
"shortDesc": "OpenIDM ready",
"state": "ACTIVE_READY"
}
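Beyond the 200 status code, the IDM livecheck is only meaningful if the ping body reports a fully started server. A hypothetical check (function name mine) against the response above:

```python
import json

def idm_ready(response_body: str) -> bool:
    """Accept the /openidm/info/ping response only when IDM reports itself
    fully started, i.e. state is ACTIVE_READY."""
    info = json.loads(response_body)
    return info.get("state") == "ACTIVE_READY"

body = '{"_id": "", "_rev": "", "shortDesc": "OpenIDM ready", "state": "ACTIVE_READY"}'
print(idm_ready(body))  # → True
```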
---------------------- Running DS-CTS livecheck ----------------------
Livecheck to ds-cts-0
[loop_until]: kubectl --namespace=xlou-sp get secret ds-passwords -o jsonpath="{.data.dirmanager\.pw}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
TXM4VnpSckF3UlVtWEhPMEVqdEkzMHRRNU5jSUFWVlE=
--- stderr ---
[run_command]: kubectl --namespace=xlou-sp exec ds-cts-0 -c ds -- ldapsearch --noPropertiesFile -p 1389 --useStartTls --trustAll -D "uid=admin" -w "Ms8VzRrAwRUmXHO0EjtI30tQ5NcIAVVQ" -b "" -s base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
Livecheck to ds-cts-1
[run_command]: kubectl --namespace=xlou-sp exec ds-cts-1 -c ds -- ldapsearch --noPropertiesFile -p 1389 --useStartTls --trustAll -D "uid=admin" -w "Ms8VzRrAwRUmXHO0EjtI30tQ5NcIAVVQ" -b "" -s base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
Livecheck to ds-cts-2
[run_command]: kubectl --namespace=xlou-sp exec ds-cts-2 -c ds -- ldapsearch --noPropertiesFile -p 1389 --useStartTls --trustAll -D "uid=admin" -w "Ms8VzRrAwRUmXHO0EjtI30tQ5NcIAVVQ" -b "" -s base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
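Each DS livecheck runs a root DSE search for the `alive` attribute and passes when the pod answers `alive: true`. A minimal sketch of how that stdout could be evaluated (helper name mine):

```python
def ds_alive(ldapsearch_output: str) -> bool:
    """Treat a DS livecheck as passed when the root DSE search above
    returned an 'alive: true' attribute line."""
    for line in ldapsearch_output.splitlines():
        if line.strip().lower() == "alive: true":
            return True
    return False

print(ds_alive("dn:\nalive: true"))  # → True
```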
--------------------- Running DS-IDREPO livecheck ---------------------
Livecheck to ds-idrepo-0
[loop_until]: kubectl --namespace=xlou-sp get secret ds-passwords -o jsonpath="{.data.dirmanager\.pw}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
TXM4VnpSckF3UlVtWEhPMEVqdEkzMHRRNU5jSUFWVlE=
--- stderr ---
[run_command]: kubectl --namespace=xlou-sp exec ds-idrepo-0 -c ds -- ldapsearch --noPropertiesFile -p 1389 --useStartTls --trustAll -D "uid=admin" -w "Ms8VzRrAwRUmXHO0EjtI30tQ5NcIAVVQ" -b "" -s base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
Livecheck to ds-idrepo-1
[run_command]: kubectl --namespace=xlou-sp exec ds-idrepo-1 -c ds -- ldapsearch --noPropertiesFile -p 1389 --useStartTls --trustAll -D "uid=admin" -w "Ms8VzRrAwRUmXHO0EjtI30tQ5NcIAVVQ" -b "" -s base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
Livecheck to ds-idrepo-2
[run_command]: kubectl --namespace=xlou-sp exec ds-idrepo-2 -c ds -- ldapsearch --noPropertiesFile -p 1389 --useStartTls --trustAll -D "uid=admin" -w "Ms8VzRrAwRUmXHO0EjtI30tQ5NcIAVVQ" -b "" -s base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
-------------------- Running END-USER-UI livecheck --------------------
Livecheck to https://xlou-sp.xlou-cdm.perf.freng.org/enduser
[http_cmd]: curl --insecure -L -X GET "https://xlou-sp.xlou-cdm.perf.freng.org/enduser"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
Identity Management
[]
--------------------- Running LOGIN-UI livecheck ---------------------
Livecheck to https://xlou-sp.xlou-cdm.perf.freng.org/am/XUI
[http_cmd]: curl --insecure -L -X GET "https://xlou-sp.xlou-cdm.perf.freng.org/am/XUI"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
Login
[]
--------------------- Running ADMIN-UI livecheck ---------------------
Livecheck to https://xlou-sp.xlou-cdm.perf.freng.org/platform
[http_cmd]: curl --insecure -L -X GET "https://xlou-sp.xlou-cdm.perf.freng.org/platform"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
Platform Admin
[]
LIVECHECK SUCCEEDED
****************************** Initializing component pods for AM ******************************
----------------------- Get AM software version -----------------------
Getting product version from https://xlou-sp.xlou-cdm.perf.freng.org/am/json/serverinfo/version
- Login amadmin to get token
Authenticate user via REST
[http_cmd]: curl -H "X-OpenAM-Username: amadmin" -H "X-OpenAM-Password: o1Ap7tsQNEmWE4gvI2eaNEFT" -H "Content-Type: application/json" -H "Accept-API-Version: resource=2.0, protocol=1.0" --insecure -L -X POST "https://xlou-sp.xlou-cdm.perf.freng.org/am/json/authenticate?realm=/"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
{
"tokenId": "mGKPFlOZ6b1cH-6OX5W49SbZShI.*AAJTSQACMDIAAlNLABxaelp1Z2V0bG92Tk14YTBqeW5HUWcwUUU1VTQ9AAR0eXBlAANDVFMAAlMxAAIwMQ..*",
"successUrl": "/am/console",
"realm": "/"
}
[http_cmd]: curl --insecure -L -X GET --cookie "amlbcookie=01" --cookie "iPlanetDirectoryPro=mGKPFlOZ6b1cH-6OX5W49SbZShI.*AAJTSQACMDIAAlNLABxaelp1Z2V0bG92Tk14YTBqeW5HUWcwUUU1VTQ9AAR0eXBlAANDVFMAAlMxAAIwMQ..*" --cookie "route=1659889288.889.70981.966614|95d24137157607aab620392fd4bfbc15" "https://xlou-sp.xlou-cdm.perf.freng.org/am/json/serverinfo/version"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
{
"_id": "version",
"_rev": "-340041102",
"version": "7.3.0-SNAPSHOT",
"fullVersion": "ForgeRock Access Management 7.3.0-SNAPSHOT Build 7c2ce51c1b0baf635933e3d41b38c25ee535de24 (2022-August-05 08:39)",
"revision": "7c2ce51c1b0baf635933e3d41b38c25ee535de24",
"date": "2022-August-05 08:39"
}
**************************** Initializing component pods for AMSTER ****************************
***************************** Initializing component pods for IDM *****************************
---------------------- Get IDM software version ----------------------
Getting product version from https://xlou-sp.xlou-cdm.perf.freng.org/openidm/info/version
[http_cmd]: curl -H "X-OpenIDM-Username: anonymous" -H "X-OpenIDM-Password: anonymous" --insecure -L -X GET "https://xlou-sp.xlou-cdm.perf.freng.org/openidm/info/version"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
{
"_id": "version",
"productVersion": "7.3.0-SNAPSHOT",
"productBuildDate": "20220806104653",
"productRevision": "7edcaf2"
}
**************************** Initializing component pods for DS-CTS ****************************
--------------------- Get DS-CTS software version ---------------------
[loop_until]: kubectl --namespace=xlou-sp exec ds-cts-0 -c ds -- find /opt/opendj -name opendj-core.jar
[loop_until]: (max_time=30, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
/opt/opendj/lib/opendj-core.jar
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp cp ds-cts-0:/opt/opendj/lib/opendj-core.jar /tmp/ds-cts_info/opendj-core.jar -c ds
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
************************** Initializing component pods for DS-IDREPO **************************
------------------- Get DS-IDREPO software version -------------------
[loop_until]: kubectl --namespace=xlou-sp exec ds-idrepo-0 -c ds -- find /opt/opendj -name opendj-core.jar
[loop_until]: (max_time=30, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
/opt/opendj/lib/opendj-core.jar
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp cp ds-idrepo-0:/opt/opendj/lib/opendj-core.jar /tmp/ds-idrepo_info/opendj-core.jar -c ds
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
************************* Initializing component pods for END-USER-UI *************************
------------------ Get END-USER-UI software version ------------------
[loop_until]: kubectl --namespace=xlou-sp exec end-user-ui-7c69fbf965-ddpwd -c end-user-ui -- find /usr/share/nginx/html -name chunk-vendors.*.js
[loop_until]: (max_time=30, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
/usr/share/nginx/html/js/chunk-vendors.62e17808.js
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp cp end-user-ui-7c69fbf965-ddpwd:/usr/share/nginx/html/js/chunk-vendors.62e17808.js /tmp/end-user-ui_info/chunk-vendors.62e17808.js -c end-user-ui
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
*************************** Initializing component pods for LOGIN-UI ***************************
-------------------- Get LOGIN-UI software version --------------------
[loop_until]: kubectl --namespace=xlou-sp exec login-ui-75d4dbc487-dz885 -c login-ui -- find /usr/share/nginx/html -name chunk-vendors.*.js
[loop_until]: (max_time=30, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
/usr/share/nginx/html/js/chunk-vendors.ab096f69.js
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp cp login-ui-75d4dbc487-dz885:/usr/share/nginx/html/js/chunk-vendors.ab096f69.js /tmp/login-ui_info/chunk-vendors.ab096f69.js -c login-ui
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
*************************** Initializing component pods for ADMIN-UI ***************************
-------------------- Get ADMIN-UI software version --------------------
[loop_until]: kubectl --namespace=xlou-sp exec admin-ui-75d4dcddd9-gn6hx -c admin-ui -- find /usr/share/nginx/html -name chunk-vendors.*.js
[loop_until]: (max_time=30, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
/usr/share/nginx/html/js/chunk-vendors.ee0fc829.js
--- stderr ---
[loop_until]: kubectl --namespace=xlou-sp cp admin-ui-75d4dcddd9-gn6hx:/usr/share/nginx/html/js/chunk-vendors.ee0fc829.js /tmp/admin-ui_info/chunk-vendors.ee0fc829.js -c admin-ui
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
====================================================================================================
====================== Admin password for AM is: o1Ap7tsQNEmWE4gvI2eaNEFT ======================
====================================================================================================
====================================================================================================
===================== Admin password for IDM is: OffS0OPkVSqt8Xqpx6divzq0 =====================
====================================================================================================
====================================================================================================
================ Admin password for DS-CTS is: Ms8VzRrAwRUmXHO0EjtI30tQ5NcIAVVQ ================
====================================================================================================
====================================================================================================
============== Admin password for DS-IDREPO is: Ms8VzRrAwRUmXHO0EjtI30tQ5NcIAVVQ ==============
====================================================================================================
*************************************** Dumping pod list ***************************************
Dumping pod list to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/sp/20220807-162138-after-deployment/_pod-list.txt
****************************** Initializing component pods for AM ******************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-sp get deployments -l app=am -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou-sp get pods -l app=am -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
am-7cb58cf89f-rhtt7 am-7cb58cf89f-ws2sj
--- stderr ---
**************************** Initializing component pods for AMSTER ****************************
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou-sp get pods -l app=amster -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
amster-49df8
--- stderr ---
***************************** Initializing component pods for IDM *****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-sp get deployment -l app=idm -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou-sp get pods -l app=idm -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
idm-6f64c5998c-s8d65 idm-6f64c5998c-wklqg
--- stderr ---
**************************** Initializing component pods for DS-CTS ****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-sp get statefulsets -l app=ds-cts -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou-sp get pods -l app=ds-cts -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ds-cts-0 ds-cts-1 ds-cts-2
--- stderr ---
************************** Initializing component pods for DS-IDREPO **************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-sp get statefulsets -l app=ds-idrepo -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou-sp get pods -l app=ds-idrepo -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ds-idrepo-0 ds-idrepo-1 ds-idrepo-2
--- stderr ---
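For each component, the pod-initialisation pass pairs an expected replica count from the deployment or statefulset spec with the actual pod names from the label query. That consistency check can be sketched as follows (function name mine):

```python
def pods_match_spec(expected_replicas: int, pod_names_stdout: str) -> bool:
    """Compare the replica count from a deployment/statefulset spec with
    the space-separated pod list returned by the jsonpath query."""
    pods = pod_names_stdout.split()
    return len(pods) == expected_replicas

# With the DS-IDREPO output above: spec says 3 replicas, and 3 pods exist.
print(pods_match_spec(3, "ds-idrepo-0 ds-idrepo-1 ds-idrepo-2"))  # → True
```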
************************* Initializing component pods for END-USER-UI *************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-sp get deployments -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou-sp get pods -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
end-user-ui-7c69fbf965-ddpwd
--- stderr ---
*************************** Initializing component pods for LOGIN-UI ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-sp get deployments -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou-sp get pods -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
login-ui-75d4dbc487-dz885
--- stderr ---
*************************** Initializing component pods for ADMIN-UI ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-sp get deployments -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou-sp get pods -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
admin-ui-75d4dcddd9-gn6hx
--- stderr ---
*********************************** Dumping components logs ***********************************
------------------------- Dumping logs for AM -------------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/sp/20220807-162138-after-deployment/am-7cb58cf89f-rhtt7.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/sp/20220807-162138-after-deployment/am-7cb58cf89f-ws2sj.txt
Check pod logs for errors
----------------------- Dumping logs for AMSTER -----------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/sp/20220807-162138-after-deployment/amster-49df8.txt
Check pod logs for errors
------------------------ Dumping logs for IDM ------------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/sp/20220807-162138-after-deployment/idm-6f64c5998c-s8d65.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/sp/20220807-162138-after-deployment/idm-6f64c5998c-wklqg.txt
Check pod logs for errors
----------------------- Dumping logs for DS-CTS -----------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/sp/20220807-162138-after-deployment/ds-cts-0.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/sp/20220807-162138-after-deployment/ds-cts-1.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/sp/20220807-162138-after-deployment/ds-cts-2.txt
Check pod logs for errors
--------------------- Dumping logs for DS-IDREPO ---------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/sp/20220807-162138-after-deployment/ds-idrepo-0.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/sp/20220807-162138-after-deployment/ds-idrepo-1.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/sp/20220807-162138-after-deployment/ds-idrepo-2.txt
Check pod logs for errors
-------------------- Dumping logs for END-USER-UI --------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/sp/20220807-162138-after-deployment/end-user-ui-7c69fbf965-ddpwd.txt
Check pod logs for errors
---------------------- Dumping logs for LOGIN-UI ----------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/sp/20220807-162138-after-deployment/login-ui-75d4dbc487-dz885.txt
Check pod logs for errors
---------------------- Dumping logs for ADMIN-UI ----------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/saml2/pod-logs/sp/20220807-162138-after-deployment/admin-ui-75d4dcddd9-gn6hx.txt
Check pod logs for errors
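Each "Check pod logs for errors" step presumably scans the dumped log for error markers; the actual pattern list is not shown, so the markers below are illustrative assumptions:

```python
import re

# Markers that commonly flag problems in JVM and nginx pod logs; the real
# task's pattern list is not visible in this output.
ERROR_PATTERNS = re.compile(r"\b(ERROR|SEVERE|FATAL|Exception)\b")

def find_log_errors(log_text: str) -> list:
    """Return the log lines matching any of the error markers above."""
    return [line for line in log_text.splitlines()
            if ERROR_PATTERNS.search(line)]
```

A clean dump, like the ones in this run, returns an empty list and the task proceeds to report success.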
[07/Aug/2022 16:21:58] - INFO: Deployment successful
________________________________________________________________________________
[07/Aug/2022 16:21:58] Deploy_all_forgerock_components post : Post method
________________________________________________________________________________
Setting result to PASS
Task has been successfully stopped