--Task--
name: Deploy_all_forgerock_components
enabled: True
class_name: DeployComponentsTask
source_name: controller
source_namespace: default
target_name: controller
target_namespace: default
start: 0
stop: None
timeout: no timeout
loop: False
interval: None
dependencies: ['Enable_prometheus_admin_api']
wait_for: []
options: {}
group_name: None
Current dir: /mnt/disks/data/xslou/lodestar-fork/pyrock
________________________________________________________________________________
[20/Sep/2022 01:52:00] Deploy_all_forgerock_components pre : Initialising task parameters
________________________________________________________________________________
task will be executed on controller (localhost)
________________________________________________________________________________
[20/Sep/2022 01:52:00] Deploy_all_forgerock_components step1 : Deploy components
________________________________________________________________________________
******************************** Cleaning up existing namespace ********************************
------------------ Deleting secret agent controller ------------------
[loop_until]: kubectl --namespace=xlou delete sac --all
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
secretagentconfiguration.secret-agent.secrets.forgerock.io "forgerock-sac" deleted
--- stderr ---
----------------------- Deleting all resources -----------------------
[loop_until]: kubectl --namespace=xlou delete all --all --grace-period=0 --force
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
pod "admin-ui-6445f46fb4-cj9ds" force deleted
pod "am-66bfc88d64-kps2n" force deleted
pod "am-66bfc88d64-sh5pr" force deleted
pod "am-66bfc88d64-vlpqw" force deleted
pod "amster-kkh59" force deleted
pod "ds-cts-0" force deleted
pod "ds-cts-1" force deleted
pod "ds-cts-2" force deleted
pod "ds-idrepo-0" force deleted
pod "ds-idrepo-1" force deleted
pod "ds-idrepo-2" force deleted
pod "end-user-ui-78f69bd8b6-prrmd" force deleted
pod "idm-56b6859478-kvhr6" force deleted
pod "idm-56b6859478-x556r" force deleted
pod "ldif-importer-5g9sb" force deleted
pod "login-ui-b8497c798-tccbm" force deleted
pod "overseer-0-5d7bcdc78c-hkx78" force deleted
service "admin-ui" force deleted
service "am" force deleted
service "ds-cts" force deleted
service "ds-idrepo" force deleted
service "end-user-ui" force deleted
service "idm" force deleted
service "login-ui" force deleted
service "overseer-0" force deleted
deployment.apps "admin-ui" force deleted
deployment.apps "am" force deleted
deployment.apps "end-user-ui" force deleted
deployment.apps "idm" force deleted
deployment.apps "login-ui" force deleted
deployment.apps "overseer-0" force deleted
statefulset.apps "ds-cts" force deleted
statefulset.apps "ds-idrepo" force deleted
job.batch "amster" force deleted
job.batch "ldif-importer" force deleted
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
[loop_until]: kubectl -n xlou get pods | grep "No resources found"
[loop_until]: (max_time=360, interval=10, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 10s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 20s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 31s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 41s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
No resources found in xlou namespace.
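The [loop_until] entries above come from the framework's retry wrapper: it re-runs a shell command every interval seconds until the return code is in expected_rc and, optionally, an expected pattern appears in the output, giving up after max_time. The real pyrock implementation is not shown in this log; a minimal sketch of the behaviour it implies:

    import subprocess
    import time

    def loop_until(cmd, expected_rc=(0,), pattern=None, max_time=180, interval=5):
        """Re-run `cmd` until its return code is in `expected_rc` and, when given,
        `pattern` appears in the combined stdout/stderr; raise after `max_time`."""
        deadline = time.monotonic() + max_time
        while True:
            proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
            output = proc.stdout + proc.stderr
            if proc.returncode in expected_rc and (pattern is None or pattern in output):
                return proc
            if time.monotonic() >= deadline:
                raise TimeoutError(f"loop_until timed out after {max_time}s: {cmd}")
            time.sleep(interval)

    # Example corresponding to the wait above: poll until the namespace reports no pods.
    loop_until("kubectl -n xlou get pods", pattern="No resources found",
               max_time=360, interval=10)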
------------------------- Deleting configmap -------------------------
[loop_until]: kubectl --namespace=xlou get configmap -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
idm idm-logging-properties kube-root-ca.crt overseer-config-0 platform-config
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap idm --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "idm" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap idm-logging-properties --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "idm-logging-properties" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap kube-root-ca.crt --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "kube-root-ca.crt" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap overseer-config-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "overseer-config-0" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap platform-config --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "platform-config" deleted
--- stderr ---
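Each "Deleting <kind>" block follows the same shape: list the resource names with a jsonpath query, then delete them one by one with --ignore-not-found. A sketch of that loop, assuming a hypothetical kubectl() helper (the kubectl commands themselves are the ones shown above):

    import subprocess

    NS = "xlou"

    def kubectl(*args):
        # Hypothetical helper: run kubectl against the test namespace, fail loudly.
        return subprocess.run(["kubectl", f"--namespace={NS}", *args],
                              capture_output=True, text=True, check=True)

    # List configmap names, then delete each one individually.
    names = kubectl("get", "configmap", "-o",
                    "jsonpath={.items[*].metadata.name}").stdout.split()
    for name in names:
        kubectl("delete", "configmap", name, "--ignore-not-found")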
--------------------------- Deleting secret ---------------------------
[loop_until]: kubectl --namespace=xlou get secret -o jsonpath='{.items[?(@.type=="Opaque")].metadata.name}'
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
cloud-storage-credentials-cts cloud-storage-credentials-idrepo
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete secret cloud-storage-credentials-cts --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
secret "cloud-storage-credentials-cts" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete secret cloud-storage-credentials-idrepo --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
secret "cloud-storage-credentials-idrepo" deleted
--- stderr ---
-------------------------- Deleting ingress --------------------------
[loop_until]: kubectl --namespace=xlou get ingress -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
forgerock ig-web overseer-0
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete ingress forgerock --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
ingress.networking.k8s.io "forgerock" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete ingress ig-web --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
ingress.networking.k8s.io "ig-web" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete ingress overseer-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
ingress.networking.k8s.io "overseer-0" deleted
--- stderr ---
---------------------------- Deleting pvc ----------------------------
[loop_until]: kubectl --namespace=xlou get pvc -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
data-ds-cts-0 data-ds-cts-1 data-ds-cts-2 data-ds-idrepo-0 data-ds-idrepo-1 data-ds-idrepo-2 overseer-0
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-cts-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-cts-0" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-cts-1 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-cts-1" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-cts-2 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-cts-2" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-idrepo-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-idrepo-0" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-idrepo-1 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-idrepo-1" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-idrepo-2 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-idrepo-2" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc overseer-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "overseer-0" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pv ds-backup-xlou --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
warning: deleting cluster-scoped resources, not scoped to the provided namespace
----------------- Deleting admin clusterrolebindings -----------------
[loop_until]: kubectl get clusterrolebinding -o jsonpath="{range .items[?(@.subjects[0].namespace=='xlou')]}{.metadata.name} {end}"
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
k8s-svc-acct-crb-xlou-0
--- stderr ---
Deleting clusterrolebinding k8s-svc-acct-crb-xlou-0 associated with xlou namespace
[loop_until]: kubectl delete clusterrolebinding k8s-svc-acct-crb-xlou-0
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
clusterrolebinding.rbac.authorization.k8s.io "k8s-svc-acct-crb-xlou-0" deleted
--- stderr ---
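The secret and clusterrolebinding listings rely on jsonpath filter expressions rather than labels: [?(@.type=="Opaque")] keeps only Opaque secrets, and [?(@.subjects[0].namespace=='xlou')] keeps only clusterrolebindings whose first subject lives in the xlou namespace. Run standalone (queries copied from the commands above), the same selections look like:

    import subprocess

    # Only Opaque secrets (skips service-account tokens, TLS secrets, etc.).
    opaque_secrets = subprocess.run(
        ["kubectl", "--namespace=xlou", "get", "secret", "-o",
         'jsonpath={.items[?(@.type=="Opaque")].metadata.name}'],
        capture_output=True, text=True, check=True).stdout.split()

    # Only clusterrolebindings whose first subject is in the xlou namespace.
    crbs = subprocess.run(
        ["kubectl", "get", "clusterrolebinding", "-o",
         "jsonpath={range .items[?(@.subjects[0].namespace=='xlou')]}{.metadata.name} {end}"],
        capture_output=True, text=True, check=True).stdout.split()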
------------------------- Deleting namespace -------------------------
[loop_until]: kubectl delete namespaces xlou --ignore-not-found --grace-period=0 --force
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
namespace "xlou" force deleted
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
[loop_until]: awk -F" " "{print NF}" <<< `kubectl get namespace xlou --ignore-not-found` | grep 0
[loop_until]: (max_time=600, interval=10, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
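The awk check above confirms the namespace is really gone: it counts the whitespace-separated fields in the output of kubectl get namespace xlou --ignore-not-found and expects 0, i.e. no output at all. An equivalent check without the shell plumbing might look like:

    import subprocess

    out = subprocess.run(
        ["kubectl", "get", "namespace", "xlou", "--ignore-not-found"],
        capture_output=True, text=True, check=True).stdout
    namespace_gone = len(out.split()) == 0   # same test as awk '{print NF}' ... | grep 0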
************************************* Creating deployment *************************************
Creating normal (forgeops) type deployment for deployment: stack
------- Custom component configuration present. Loading values -------
------------------ Deleting secret agent controller ------------------
[loop_until]: kubectl --namespace=xlou delete sac --all
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
No resources found
--- stderr ---
----------------------- Deleting all resources -----------------------
[loop_until]: kubectl --namespace=xlou delete all --all --grace-period=0 --force
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
No resources found
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
[loop_until]: kubectl -n xlou get pods | grep "No resources found"
[loop_until]: (max_time=360, interval=10, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
No resources found in xlou namespace.
------------------------- Deleting configmap -------------------------
[loop_until]: kubectl --namespace=xlou get configmap -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
--------------------------- Deleting secret ---------------------------
[loop_until]: kubectl --namespace=xlou get secret -o jsonpath='{.items[?(@.type=="Opaque")].metadata.name}'
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
-------------------------- Deleting ingress --------------------------
[loop_until]: kubectl --namespace=xlou get ingress -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
---------------------------- Deleting pvc ----------------------------
[loop_until]: kubectl --namespace=xlou get pvc -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pv ds-backup-xlou --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
warning: deleting cluster-scoped resources, not scoped to the provided namespace
----------------- Deleting admin clusterrolebindings -----------------
[loop_until]: kubectl get clusterrolebinding -o jsonpath="{range .items[?(@.subjects[0].namespace=='xlou')]}{.metadata.name} {end}"
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
------------------------- Deleting namespace -------------------------
[loop_until]: kubectl delete namespaces xlou --ignore-not-found --grace-period=0 --force
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
[loop_until]: awk -F" " "{print NF}" <<< `kubectl get namespace xlou --ignore-not-found` | grep 0
[loop_until]: (max_time=600, interval=10, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
[loop_until]: kubectl create namespace xlou
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
namespace/xlou created
--- stderr ---
[loop_until]: kubectl label namespace xlou self-service=false timeout=48
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
namespace/xlou labeled
--- stderr ---
************************************ Configuring components ************************************
Applying custom configuration, dockerfiles to deployment and custom lodestar component configuration
Copying /mnt/disks/data/xslou/lodestar-fork/shared/config/custom/session_timeout_3minutes/docker/am/config-profiles/cdk to /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am/config-profiles/cdk
No custom features provided. Nothing to do.
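The "Copying ..." line overlays the custom AM config profile (the 3-minute session timeout) onto the forgeops checkout before the images are built. The actual copy mechanism is not shown in the log; as a plain file-tree copy it could be sketched as:

    import shutil

    # Merge the custom profile into the existing cdk config-profile directory.
    shutil.copytree(
        "/mnt/disks/data/xslou/lodestar-fork/shared/config/custom/"
        "session_timeout_3minutes/docker/am/config-profiles/cdk",
        "/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/"
        "docker/am/config-profiles/cdk",
        dirs_exist_ok=True,  # requires Python 3.8+
    )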
---- Updating components image tag/repo from platform-images repo ----
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --clean
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Cleaning up.
[WARNING] Found nothing to clean.
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref master-ready-for-dev-pipelines --products am
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images.
[INFO] Repo is at 2d91d091c3e30a5f0dfffe45e60fabc1a3e37882 on branch HEAD
[INFO] Updating products am
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am/Dockerfile to use gcr.io/forgerock-io/am-cdk/pit1:7.3.0-26bd5a1a61039a7def619d251469fd588a5c4dfc
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref master-ready-for-dev-pipelines --products amster
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images.
[INFO] Found existing files, attempting to not clone
[INFO] Repo is at 2d91d091c3e30a5f0dfffe45e60fabc1a3e37882 on branch HEAD
[INFO] Updating products amster
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster/Dockerfile to use gcr.io/forgerock-io/amster/pit1:7.3.0-26bd5a1a61039a7def619d251469fd588a5c4dfc
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref master-ready-for-dev-pipelines --products idm
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images.
[INFO] Found existing files, attempting to not clone
[INFO] Repo is at 2d91d091c3e30a5f0dfffe45e60fabc1a3e37882 on branch HEAD
[INFO] Updating products idm
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm/Dockerfile to use gcr.io/forgerock-io/idm-cdk/pit1:7.3.0-0382f3752dde05542858b37bb329fb76ce8fc972
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref master-ready-for-dev-pipelines --products ds
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images.
[INFO] Found existing files, attempting to not clone
[INFO] Repo is at 2d91d091c3e30a5f0dfffe45e60fabc1a3e37882 on branch HEAD
[INFO] Updating products ds
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/dsutil/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-954176524467342398ae1a6e1012191e730e09e5
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/ds-new/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-954176524467342398ae1a6e1012191e730e09e5
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/proxy/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-954176524467342398ae1a6e1012191e730e09e5
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-954176524467342398ae1a6e1012191e730e09e5
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-954176524467342398ae1a6e1012191e730e09e5
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref master-ready-for-dev-pipelines --products ui
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images.
[INFO] Found existing files, attempting to not clone
[INFO] Repo is at 2d91d091c3e30a5f0dfffe45e60fabc1a3e37882 on branch HEAD
[INFO] Updating products ui
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui to use gcr.io/forgerock-io/platform-admin-ui/docker-build:7.2.0-ec695d4e3aa8f56047ca515bcbf9d79a6bbf921c
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui to use gcr.io/forgerock-io/platform-enduser-ui/docker-build:7.2.0-ec695d4e3aa8f56047ca515bcbf9d79a6bbf921c
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui to use gcr.io/forgerock-io/platform-login-ui/docker-build:7.2.0-ec695d4e3aa8f56047ca515bcbf9d79a6bbf921c
--- stderr ---
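Each set-images invocation pins one product's Dockerfile (or kustomization) to a specific image resolved from the platform-images repo at the given ref. The script itself lives in forgeops; the core edit it performs, rewriting the FROM line, could be sketched with a hypothetical helper (not the real bin/set-images):

    from pathlib import Path

    def pin_from_line(dockerfile: str, image: str) -> None:
        """Replace the first FROM line of a Dockerfile with a pinned image reference."""
        path = Path(dockerfile)
        lines = path.read_text().splitlines()
        for i, line in enumerate(lines):
            if line.startswith("FROM "):
                lines[i] = f"FROM {image}"
                break
        path.write_text("\n".join(lines) + "\n")

    pin_from_line(
        "/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am/Dockerfile",
        "gcr.io/forgerock-io/am-cdk/pit1:7.3.0-26bd5a1a61039a7def619d251469fd588a5c4dfc",
    )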
- Checking if component Dockerfile/kustomize needs additional update -
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker am
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am
--- stderr ---
Checking if am dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am/Dockerfile: gcr.io/forgerock-io/am-cdk/pit1:7.3.0-26bd5a1a61039a7def619d251469fd588a5c4dfc
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker amster
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster
--- stderr ---
Checking if amster dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster/Dockerfile: gcr.io/forgerock-io/amster/pit1:7.3.0-26bd5a1a61039a7def619d251469fd588a5c4dfc
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker idm
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm
--- stderr ---
Checking if idm dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm/Dockerfile: gcr.io/forgerock-io/idm-cdk/pit1:7.3.0-0382f3752dde05542858b37bb329fb76ce8fc972
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker ds cts
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts
--- stderr ---
Checking if ds-cts dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts/Dockerfile: gcr.io/forgerock-io/ds/pit1:7.3.0-954176524467342398ae1a6e1012191e730e09e5
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker ds idrepo
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo
--- stderr ---
Checking if ds-idrepo dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo/Dockerfile: gcr.io/forgerock-io/ds/pit1:7.3.0-954176524467342398ae1a6e1012191e730e09e5
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path kustomize base end-user-ui
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui
--- stderr ---
Checking if end-user-ui kustomize file /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui/kustomization.yaml needs additional update (custom tag/repo from config.yaml or image name resolution)
Read image line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui/kustomization.yaml: gcr.io/forgerock-io/platform-enduser-ui/docker-build:7.2.0-ec695d4e3aa8f56047ca515bcbf9d79a6bbf921c
No need to update the image line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui/kustomization.yaml
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path kustomize base login-ui
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui
--- stderr ---
Checking if login-ui kustomize file /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui/kustomization.yaml needs additional update (custom tag/repo from config.yaml or image name resolution)
Read image line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui/kustomization.yaml: gcr.io/forgerock-io/platform-login-ui/docker-build:7.2.0-ec695d4e3aa8f56047ca515bcbf9d79a6bbf921c
No need to update the image line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui/kustomization.yaml
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path kustomize base admin-ui
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui
--- stderr ---
Checking if admin-ui kustomize file /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui/kustomization.yaml needs additional update (custom tag/repo from config.yaml or image name resolution)
Read image line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui/kustomization.yaml: gcr.io/forgerock-io/platform-admin-ui/docker-build:7.2.0-ec695d4e3aa8f56047ca515bcbf9d79a6bbf921c
No need to update the image line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui/kustomization.yaml
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path kustomize overlay medium
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/overlay/medium
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete -f /tmp/tmpqscvbvtb
[loop_until]: (max_time=180, interval=5, expected_rc=[0, 1]
[loop_until]: OK (rc = 1)
--- stdout ---
--- stderr ---
Error from server (NotFound): error when deleting "/tmp/tmpqscvbvtb": secrets "sslcert" not found
[loop_until]: kubectl --namespace=xlou apply -f /tmp/tmpqscvbvtb
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
secret/sslcert created
--- stderr ---
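The sslcert step is a delete-then-apply: the delete is allowed to exit 1 (hence expected_rc=[0, 1]) because the secret may not exist yet, and the manifest is then applied fresh. Without the framework wrapper, the same pattern is simply (using a placeholder path instead of the generated temp manifest):

    import subprocess

    # "sslcert.yaml" stands in for the generated temp file (/tmp/tmp...).
    # Deletion may legitimately fail with NotFound; only the apply must succeed.
    subprocess.run(["kubectl", "--namespace=xlou", "delete", "-f", "sslcert.yaml"])
    subprocess.run(["kubectl", "--namespace=xlou", "apply", "-f", "sslcert.yaml"],
                   check=True)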
The following components will be deployed:
- am (AM)
- amster (Amster)
- idm (IDM)
- ds-cts (DS)
- ds-idrepo (DS)
- end-user-ui (EndUserUi)
- login-ui (LoginUi)
- admin-ui (AdminUi)
Run create-secrets.sh to create passwords
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/create-secrets.sh xlou
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
certificate.cert-manager.io/ds-master-cert created
certificate.cert-manager.io/ds-ssl-cert created
issuer.cert-manager.io/selfsigned-issuer created
secretagentconfiguration.secret-agent.secrets.forgerock.io/forgerock-sac created
--- stderr ---
[loop_until]: kubectl --namespace=secret-agent-system wait --for=condition=available deployment --all | grep "condition met"
[loop_until]: (max_time=300, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
deployment.apps/secret-agent-controller-manager condition met
--- stderr ---
[loop_until]: kubectl --namespace=secret-agent-system wait --for=condition=ready pod --all | grep "condition met"
[loop_until]: (max_time=300, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
pod/secret-agent-controller-manager-59fcd58bbc-zc5tz condition met
--- stderr ---
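After create-secrets.sh, the task waits for the secret-agent operator itself. kubectl wait already blocks until the condition is met (or its own timeout expires), so the grep for "condition met" only asserts that the success line was printed. Run standalone, with an explicit timeout added here as an assumption, the wait is:

    import subprocess

    for args in (["--for=condition=available", "deployment", "--all"],
                 ["--for=condition=ready", "pod", "--all"]):
        subprocess.run(["kubectl", "--namespace=secret-agent-system", "wait",
                        *args, "--timeout=300s"], check=True)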
[run_command]: skaffold build --file-output=/mnt/disks/data/xslou/lodestar-fork/ext/pre-built-images/medium.json --default-repo gcr.io/engineeringpit/lodestar-images --profile medium --config=/tmp/tmpocnj0s0i --cache-artifacts=false --tag xlou --namespace=xlou
[run_command]: env={'PATH': '/mnt/disks/data/xslou/lodestar-fork/ext/bin:/home/xslou/.local/bin:/home/xslou/bin:~/bin:~/workshop/lodestar-fork/ext/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin', 'CONFIG_PROFILE': 'cdk'}
Generating tags...
- am -> gcr.io/engineeringpit/lodestar-images/am:xlou
- amster -> gcr.io/engineeringpit/lodestar-images/amster:xlou
- idm -> gcr.io/engineeringpit/lodestar-images/idm:xlou
- ds-cts -> gcr.io/engineeringpit/lodestar-images/ds-cts:xlou
- ds-idrepo -> gcr.io/engineeringpit/lodestar-images/ds-idrepo:xlou
- ig -> gcr.io/engineeringpit/lodestar-images/ig:xlou
- ds -> gcr.io/engineeringpit/lodestar-images/ds:xlou
Starting build...
Building [ds]...
Sending build context to Docker daemon 115.2kB
Step 1/11 : FROM gcr.io/forgerock-io/ds/pit1:7.3.0-954176524467342398ae1a6e1012191e730e09e5
7.3.0-954176524467342398ae1a6e1012191e730e09e5: Pulling from forgerock-io/ds/pit1
Digest: sha256:bf5e04a8d63e63c57a74344cc34917fce869206ade490f33f9fec5c347039065
Status: Image is up to date for gcr.io/forgerock-io/ds/pit1:7.3.0-954176524467342398ae1a6e1012191e730e09e5
---> d21188fc9a8a
Step 2/11 : USER root
---> Using cache
---> 4939a79ba489
Step 3/11 : RUN apt-get update && apt-get install -y --no-install-recommends vim ncat dnsutils
---> Using cache
---> 94b1ffd3b8d9
Step 4/11 : USER forgerock
---> Using cache
---> 0802bb6b8a9f
Step 5/11 : ENV DS_DATA_DIR /opt/opendj/data
---> Using cache
---> d4a04d077f03
Step 6/11 : ENV PEM_KEYS_DIRECTORY "/var/run/secrets/keys/ds"
---> Using cache
---> bc28cb149063
Step 7/11 : ENV PEM_TRUSTSTORE_DIRECTORY "/var/run/secrets/keys/truststore"
---> Using cache
---> 4d4ebc9cf7eb
Step 8/11 : COPY --chown=forgerock:root default-scripts /opt/opendj/default-scripts
---> Using cache
---> dbb4b0aeac3d
Step 9/11 : COPY --chown=forgerock:root ldif-ext /opt/opendj/ldif-ext
---> Using cache
---> 00ae613b2cee
Step 10/11 : COPY --chown=forgerock:root *.sh /opt/opendj/
---> Using cache
---> fd39f0316661
Step 11/11 : RUN ./ds-setup.sh && rm ./ds-setup.sh && rm -fr ldif-ext
---> Using cache
---> 054e9ba32584
Successfully built 054e9ba32584
Successfully tagged gcr.io/engineeringpit/lodestar-images/ds:xlou
Build [ds] succeeded
Building [am]...
Sending build context to Docker daemon 6.144kB
Step 1/6 : FROM gcr.io/forgerock-io/am-cdk/pit1:7.3.0-26bd5a1a61039a7def619d251469fd588a5c4dfc
7.3.0-26bd5a1a61039a7def619d251469fd588a5c4dfc: Pulling from forgerock-io/am-cdk/pit1
Digest: sha256:bf76d1e2b12bbfe61b7f7d961757761a505ba0b005324e2e41786dd60f8a47de
Status: Image is up to date for gcr.io/forgerock-io/am-cdk/pit1:7.3.0-26bd5a1a61039a7def619d251469fd588a5c4dfc
---> b0c110d20ec7
Step 2/6 : ARG CONFIG_PROFILE=cdk
---> Using cache
---> ac1c0014b80e
Step 3/6 : RUN echo "\033[0;36m*** Building '${CONFIG_PROFILE}' profile ***\033[0m"
---> Using cache
---> afbc79ef9d46
Step 4/6 : COPY --chown=forgerock:root config-profiles/${CONFIG_PROFILE}/ /home/forgerock/openam/
---> Using cache
---> 9b1adaa88b51
Step 5/6 : COPY --chown=forgerock:root *.sh /home/forgerock/
---> Using cache
---> cd05f644809e
Step 6/6 : WORKDIR /home/forgerock
---> Using cache
---> d0907efec076
Successfully built d0907efec076
Successfully tagged gcr.io/engineeringpit/lodestar-images/am:xlou
Build [am] succeeded
Building [amster]...
Sending build context to Docker daemon 54.27kB
Step 1/14 : FROM gcr.io/forgerock-io/amster/pit1:7.3.0-26bd5a1a61039a7def619d251469fd588a5c4dfc
7.3.0-26bd5a1a61039a7def619d251469fd588a5c4dfc: Pulling from forgerock-io/amster/pit1
Digest: sha256:0420f0359c5b4ca897eea55b1004944b78594a9905f744a93b54c35e94786e39
Status: Image is up to date for gcr.io/forgerock-io/amster/pit1:7.3.0-26bd5a1a61039a7def619d251469fd588a5c4dfc
---> ac8b2e72bc6a
Step 2/14 : USER root
---> Using cache
---> 5d944b80b80d
Step 3/14 : COPY debian-buster-sources.list /etc/apt/sources.list
---> Using cache
---> 8be0811f22b6
Step 4/14 : ENV DEBIAN_FRONTEND=noninteractive
---> Using cache
---> 0b1ce7ce0c7e
Step 5/14 : ENV APT_OPTS="--no-install-recommends --yes"
---> Using cache
---> db0ce17fba9f
Step 6/14 : RUN apt-get update && apt-get install -y openldap-utils jq inotify-tools && apt-get clean && rm -r /var/lib/apt/lists /var/cache/apt/archives
---> Using cache
---> 11cc24cbd1ec
Step 7/14 : USER forgerock
---> Using cache
---> 4320c7d119b7
Step 8/14 : ENV SERVER_URI /am
---> Using cache
---> 62ca7deb7677
Step 9/14 : ARG CONFIG_PROFILE=cdk
---> Using cache
---> 9e9dd74f7ac8
Step 10/14 : RUN echo "\033[0;36m*** Building '${CONFIG_PROFILE}' profile ***\033[0m"
---> Using cache
---> 2df7ea6ed825
Step 11/14 : COPY --chown=forgerock:root config-profiles/${CONFIG_PROFILE}/ /opt/amster
---> Using cache
---> 13c2b3c53617
Step 12/14 : COPY --chown=forgerock:root scripts /opt/amster
---> Using cache
---> a5e472c638af
Step 13/14 : RUN chmod 777 /opt/amster
---> Using cache
---> b3ac55c657e0
Step 14/14 : ENTRYPOINT [ "/opt/amster/docker-entrypoint.sh" ]
---> Using cache
---> 661f6c76fc9d
Successfully built 661f6c76fc9d
Successfully tagged gcr.io/engineeringpit/lodestar-images/amster:xlou
Build [amster] succeeded
Building [idm]...
Sending build context to Docker daemon 312.8kB
Step 1/8 : FROM gcr.io/forgerock-io/idm-cdk/pit1:7.3.0-0382f3752dde05542858b37bb329fb76ce8fc972
7.3.0-0382f3752dde05542858b37bb329fb76ce8fc972: Pulling from forgerock-io/idm-cdk/pit1
Digest: sha256:8cae31faa10272657c5903849aa4e2f45f5cf599d0634f82e4d7621b1971e211
Status: Image is up to date for gcr.io/forgerock-io/idm-cdk/pit1:7.3.0-0382f3752dde05542858b37bb329fb76ce8fc972
---> 7085673f39c5
Step 2/8 : COPY debian-buster-sources.list /etc/apt/sources.list
---> Using cache
---> 6883c7d2c7c1
Step 3/8 : RUN rm -f bundle/org.apache.felix.webconsole*.jar && rm -f bundle/openidm-felix-webconsole-*.jar
---> Using cache
---> fcd331960e04
Step 4/8 : ENV JAVA_OPTS -XX:MaxRAMPercentage=65 -XX:InitialRAMPercentage=65 -XX:MaxTenuringThreshold=1 -Djava.security.egd=file:/dev/urandom -XshowSettings:vm -XX:+PrintFlagsFinal
---> Using cache
---> 24269e4efac7
Step 5/8 : ARG CONFIG_PROFILE=cdk
---> Using cache
---> e40f1629d749
Step 6/8 : RUN echo "\033[0;36m*** Building '${CONFIG_PROFILE}' profile ***\033[0m"
---> Using cache
---> d5ce20005d29
Step 7/8 : COPY --chown=forgerock:root config-profiles/${CONFIG_PROFILE}/ /opt/openidm
---> Using cache
---> 5fa7bf701673
Step 8/8 : COPY --chown=forgerock:root . /opt/openidm
---> Using cache
---> 66648be71bd1
Successfully built 66648be71bd1
Successfully tagged gcr.io/engineeringpit/lodestar-images/idm:xlou
Build [idm] succeeded
Building [ds-cts]...
Sending build context to Docker daemon 78.85kB
Step 1/10 : FROM gcr.io/forgerock-io/ds/pit1:7.3.0-954176524467342398ae1a6e1012191e730e09e5
7.3.0-954176524467342398ae1a6e1012191e730e09e5: Pulling from forgerock-io/ds/pit1
Digest: sha256:bf5e04a8d63e63c57a74344cc34917fce869206ade490f33f9fec5c347039065
Status: Image is up to date for gcr.io/forgerock-io/ds/pit1:7.3.0-954176524467342398ae1a6e1012191e730e09e5
---> d21188fc9a8a
Step 2/10 : USER root
---> Using cache
---> 4939a79ba489
Step 3/10 : COPY debian-buster-sources.list /etc/apt/sources.list
---> Using cache
---> 560841a2b411
Step 4/10 : RUN chown -R forgerock:root /opt/opendj
---> Using cache
---> 67f4f7e20101
Step 5/10 : USER forgerock
---> Using cache
---> b4947fbb998d
Step 6/10 : COPY --chown=forgerock:root common /opt/opendj/
---> Using cache
---> b00586c00dc0
Step 7/10 : COPY --chown=forgerock:root cts /opt/opendj/
---> Using cache
---> 3382a07cda3d
Step 8/10 : COPY --chown=forgerock:root scripts /opt/opendj/scripts
---> Using cache
---> b98c30cffac3
Step 9/10 : ARG profile_version
---> Using cache
---> 1b24dd1bca85
Step 10/10 : RUN bin/setup.sh && bin/relax-security-settings.sh && rm bin/setup.sh bin/relax-security-settings.sh
---> Using cache
---> 28036c7e0be7
Successfully built 28036c7e0be7
Successfully tagged gcr.io/engineeringpit/lodestar-images/ds-cts:xlou
Build [ds-cts] succeeded
Building [ds-idrepo]...
Sending build context to Docker daemon 117.8kB
Step 1/10 : FROM gcr.io/forgerock-io/ds/pit1:7.3.0-954176524467342398ae1a6e1012191e730e09e5
7.3.0-954176524467342398ae1a6e1012191e730e09e5: Pulling from forgerock-io/ds/pit1
Digest: sha256:bf5e04a8d63e63c57a74344cc34917fce869206ade490f33f9fec5c347039065
Status: Image is up to date for gcr.io/forgerock-io/ds/pit1:7.3.0-954176524467342398ae1a6e1012191e730e09e5
---> d21188fc9a8a
Step 2/10 : COPY debian-buster-sources.list /etc/apt/sources.list
---> Using cache
---> 6ca1ce5e5350
Step 3/10 : WORKDIR /opt/opendj
---> Using cache
---> 6177fd5eb549
Step 4/10 : COPY --chown=forgerock:root common /opt/opendj/
---> Using cache
---> bda9e03ab4da
Step 5/10 : COPY --chown=forgerock:root idrepo /opt/opendj/
---> Using cache
---> fa5df5933d02
Step 6/10 : COPY --chown=forgerock:root scripts /opt/opendj/scripts
---> Using cache
---> 6dc859d50e89
Step 7/10 : COPY --chown=forgerock:root uma /opt/opendj/uma
---> Using cache
---> f4c2b69c3f4f
Step 8/10 : COPY --chown=forgerock:root idrepo/*.ldif /var/tmp/
---> Using cache
---> b91abf7cb2f1
Step 9/10 : RUN chmod +w template/setup-profiles/AM/config/6.5/base-entries.ldif && cat scripts/external-am-datastore.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_audit.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_pending_requests.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_resource_set_labels.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_resource_sets.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat /var/tmp/alpha_bravo.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && chmod +w template/setup-profiles/AM/identity-store/7.0/base-entries.ldif && cat /var/tmp/orgs.ldif >> template/setup-profiles/AM/identity-store/7.0/base-entries.ldif && rm /var/tmp/*ldif
---> Using cache
---> 49c8336da7d8
Step 10/10 : RUN bin/setup.sh && bin/relax-security-settings.sh && rm bin/setup.sh bin/relax-security-settings.sh
---> Using cache
---> 24564a22e097
Successfully built 24564a22e097
Successfully tagged gcr.io/engineeringpit/lodestar-images/ds-idrepo:xlou
Build [ds-idrepo] succeeded
Building [ig]...
Sending build context to Docker daemon 29.18kB
Step 1/6 : FROM gcr.io/forgerock-io/ig/pit1:7.3.0-latest-postcommit
7.3.0-latest-postcommit: Pulling from forgerock-io/ig/pit1
Digest: sha256:802f7b9a306b49351d0e9b31ba8161833b56615837a17e19f2bb3a94ba65f61f
Status: Image is up to date for gcr.io/forgerock-io/ig/pit1:7.3.0-latest-postcommit
---> 91ea29e45e06
Step 2/6 : COPY debian-buster-sources.list /etc/apt/sources.list
---> Using cache
---> 966ad3fa78b1
Step 3/6 : ARG CONFIG_PROFILE=cdk
---> Using cache
---> 1bcd83aaefbd
Step 4/6 : RUN echo "\033[0;36m*** Building '${CONFIG_PROFILE}' profile ***\033[0m"
---> Using cache
---> 067d117cd18b
Step 5/6 : COPY --chown=forgerock:root config-profiles/${CONFIG_PROFILE}/ /var/ig
---> Using cache
---> 5423f638f13c
Step 6/6 : COPY --chown=forgerock:root . /var/ig
---> Using cache
---> 51ea6de2f7b3
Successfully built 51ea6de2f7b3
Successfully tagged gcr.io/engineeringpit/lodestar-images/ig:xlou
Build [ig] succeeded
Help improve Skaffold with our 2-minute anonymous survey: run 'skaffold survey'
To help improve the quality of this product, we collect anonymized usage data. For details on what is tracked and how we use this data, see the Skaffold telemetry documentation. This data is handled in accordance with our privacy policy.
You may choose to opt out of this collection by running the following command:
skaffold config set --global collect-metrics false
[run_command]: skaffold deploy --build-artifacts=/mnt/disks/data/xslou/lodestar-fork/ext/pre-built-images/medium.json --profile medium --config=/tmp/tmpegxuamyz --label skaffold.dev/profile=medium --label skaffold.dev/run-id=xlou --force=false --status-check=true --namespace=xlou
Tags used in deployment:
- am -> gcr.io/engineeringpit/lodestar-images/am:xlou@sha256:5d7ce8a70fd2e12167c404ce4da3bf4d49721d2e9c3847436343ddbfb7d4b4e7
- amster -> gcr.io/engineeringpit/lodestar-images/amster:xlou@sha256:e16afdcd16914b6bf0b94d9d472079c094d8a1801f9b1a9127600aa6ba0f797c
- idm -> gcr.io/engineeringpit/lodestar-images/idm:xlou@sha256:d3a47be8a66d395e2a0a039fc9a0dffe6efa7ae2cc93581d174fd7cb75a5b39c
- ds-cts -> gcr.io/engineeringpit/lodestar-images/ds-cts:xlou@sha256:e06fbcc879ec2ad0185f85766beabedec1477089649d97f0fc8190d980a8433b
- ds-idrepo -> gcr.io/engineeringpit/lodestar-images/ds-idrepo:xlou@sha256:4917982bd5e2cb55758c633ddbd817732b9ee769574ab960b6b097f459e61842
- ig -> gcr.io/engineeringpit/lodestar-images/ig:xlou@sha256:af99fa440a524eedcdd95935861e1c1b88c1e2eb9f0069a62b01a671ebfa8009
- ds -> gcr.io/engineeringpit/lodestar-images/ds:xlou@sha256:7547fb93e0a685e6f9cdb857b98cb1d601fc47d3c8f5bcfca936533ed0098039
Starting deploy...
- configmap/idm created
- configmap/idm-logging-properties created
- configmap/platform-config created
- secret/cloud-storage-credentials-cts created
- secret/cloud-storage-credentials-idrepo created
- service/admin-ui created
- service/am created
- service/ds-cts created
- service/ds-idrepo created
- service/end-user-ui created
- service/idm created
- service/login-ui created
- deployment.apps/admin-ui created
- deployment.apps/am created
- deployment.apps/end-user-ui created
- deployment.apps/idm created
- deployment.apps/login-ui created
- statefulset.apps/ds-cts created
- statefulset.apps/ds-idrepo created
- poddisruptionbudget.policy/am created
- poddisruptionbudget.policy/ds-idrepo created
- poddisruptionbudget.policy/idm created
- poddisruptionbudget.policy/ig created
- Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
- poddisruptionbudget.policy/ds-cts created
- job.batch/amster created
- job.batch/ldif-importer created
- ingress.networking.k8s.io/forgerock created
- ingress.networking.k8s.io/ig-web created
Waiting for deployments to stabilize...
- xlou:deployment/end-user-ui is ready. [6/7 deployment(s) still pending]
- xlou:deployment/admin-ui is ready. [5/7 deployment(s) still pending]
- xlou:deployment/am:
- xlou:deployment/idm: waiting for rollout to finish: 0 of 2 updated replicas are available...
- xlou:deployment/login-ui: waiting for rollout to finish: 0 of 1 updated replicas are available...
- xlou:statefulset/ds-cts: waiting for init container initialize to start
- xlou:pod/ds-cts-0: waiting for init container initialize to start
- xlou:statefulset/ds-idrepo: waiting for init container initialize to start
- xlou:pod/ds-idrepo-0: waiting for init container initialize to start
- xlou:deployment/login-ui is ready. [4/7 deployment(s) still pending]
- xlou:statefulset/ds-cts: unable to determine current service state of pod "ds-cts-1"
- xlou:pod/ds-cts-1: unable to determine current service state of pod "ds-cts-1"
- xlou:statefulset/ds-idrepo:
- xlou:statefulset/ds-cts:
- xlou:statefulset/ds-idrepo: unable to determine current service state of pod "ds-idrepo-2"
- xlou:pod/ds-idrepo-2: unable to determine current service state of pod "ds-idrepo-2"
- xlou:deployment/idm is ready. [3/7 deployment(s) still pending]
- xlou:statefulset/ds-cts is ready. [2/7 deployment(s) still pending]
- xlou:deployment/am is ready. [1/7 deployment(s) still pending]
- xlou:statefulset/ds-idrepo is ready.
Deployments stabilized in 2 minutes 2.04 seconds
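The build and deploy above are decoupled through the artifact file: skaffold build --file-output writes a JSON list of the images it produced (name plus tag/digest), and skaffold deploy --build-artifacts reads that list back instead of rebuilding. Driving the same two-step flow from a script could look like the sketch below (flags taken from the run above; the artifact path is a placeholder and the temporary --config file is omitted):

    import subprocess

    ARTIFACTS = "medium.json"  # placeholder for the pre-built-images artifact file

    subprocess.run(
        ["skaffold", "build", f"--file-output={ARTIFACTS}",
         "--default-repo", "gcr.io/engineeringpit/lodestar-images",
         "--profile", "medium", "--cache-artifacts=false",
         "--tag", "xlou", "--namespace=xlou"],
        check=True)

    subprocess.run(
        ["skaffold", "deploy", f"--build-artifacts={ARTIFACTS}",
         "--profile", "medium", "--namespace=xlou",
         "--force=false", "--status-check=true"],
        check=True)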
****************************** Initializing component pods for AM ******************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app=am -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app=am -o jsonpath={.items[*].metadata.name}` | grep 3
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
am-66bfc88d64-9ksxm am-66bfc88d64-g2n9b am-66bfc88d64-q2t85
--- stderr ---
-------------- Check pod am-66bfc88d64-9ksxm is running --------------
[loop_until]: kubectl --namespace=xlou get pods am-66bfc88d64-9ksxm -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods am-66bfc88d64-9ksxm -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod am-66bfc88d64-9ksxm -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2022-09-20T01:54:50Z
--- stderr ---
------- Check pod am-66bfc88d64-9ksxm filesystem is accessible -------
[loop_until]: kubectl --namespace=xlou exec am-66bfc88d64-9ksxm -c openam -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------- Check pod am-66bfc88d64-9ksxm restart count -------------
[loop_until]: kubectl --namespace=xlou get pod am-66bfc88d64-9ksxm -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod am-66bfc88d64-9ksxm has been restarted 0 times.
-------------- Check pod am-66bfc88d64-g2n9b is running --------------
[loop_until]: kubectl --namespace=xlou get pods am-66bfc88d64-g2n9b -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods am-66bfc88d64-g2n9b -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod am-66bfc88d64-g2n9b -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2022-09-20T01:54:50Z
--- stderr ---
------- Check pod am-66bfc88d64-g2n9b filesystem is accessible -------
[loop_until]: kubectl --namespace=xlou exec am-66bfc88d64-g2n9b -c openam -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------- Check pod am-66bfc88d64-g2n9b restart count -------------
[loop_until]: kubectl --namespace=xlou get pod am-66bfc88d64-g2n9b -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod am-66bfc88d64-g2n9b has been restarted 0 times.
-------------- Check pod am-66bfc88d64-q2t85 is running --------------
[loop_until]: kubectl --namespace=xlou get pods am-66bfc88d64-q2t85 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods am-66bfc88d64-q2t85 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod am-66bfc88d64-q2t85 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2022-09-20T01:54:50Z
--- stderr ---
------- Check pod am-66bfc88d64-q2t85 filesystem is accessible -------
[loop_until]: kubectl --namespace=xlou exec am-66bfc88d64-q2t85 -c openam -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------- Check pod am-66bfc88d64-q2t85 restart count -------------
[loop_until]: kubectl --namespace=xlou get pod am-66bfc88d64-q2t85 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod am-66bfc88d64-q2t85 has been restarted 0 times.
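The AM block above (and the AMSTER, IDM, and DS blocks that follow) apply the same per-pod checks: the phase is Running, every container reports ready, the start time is readable, the filesystem answers an exec ls /, and the restart count is reported. Condensed into one routine, reusing the hypothetical loop_until helper sketched after the namespace cleanup:

    import subprocess

    def check_pod(ns, pod, container):
        # Assumes the loop_until() sketch shown earlier in this log.
        loop_until(f"kubectl --namespace={ns} get pod {pod} "
                   "-o jsonpath={.status.phase}", pattern="Running", max_time=360)
        loop_until(f"kubectl --namespace={ns} get pod {pod} "
                   "-o jsonpath={.status.containerStatuses[*].ready}",
                   pattern="true", max_time=360)
        loop_until(f"kubectl --namespace={ns} exec {pod} -c {container} -- ls /",
                   pattern="bin", max_time=360)
        restarts = subprocess.run(
            ["kubectl", f"--namespace={ns}", "get", "pod", pod,
             "-o", "jsonpath={.status.containerStatuses[*].restartCount}"],
            capture_output=True, text=True).stdout.strip()
        print(f"Pod {pod} has been restarted {restarts or 0} times.")

    for pod in ("am-66bfc88d64-9ksxm", "am-66bfc88d64-g2n9b", "am-66bfc88d64-q2t85"):
        check_pod("xlou", pod, "openam")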
**************************** Initializing component pods for AMSTER ****************************
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou --field-selector status.phase!=Failed get pods -l app=amster -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
amster-868bw
--- stderr ---
***************************** Initializing component pods for IDM *****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployment -l app=idm -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app=idm -o jsonpath={.items[*].metadata.name}` | grep 2
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
idm-56b6859478-sgm4t idm-56b6859478-tgkhg
--- stderr ---
-------------- Check pod idm-56b6859478-sgm4t is running --------------
[loop_until]: kubectl --namespace=xlou get pods idm-56b6859478-sgm4t -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods idm-56b6859478-sgm4t -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod idm-56b6859478-sgm4t -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2022-09-20T01:54:50Z
--- stderr ---
------- Check pod idm-56b6859478-sgm4t filesystem is accessible -------
[loop_until]: kubectl --namespace=xlou exec idm-56b6859478-sgm4t -c openidm -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-11
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------ Check pod idm-56b6859478-sgm4t restart count ------------
[loop_until]: kubectl --namespace=xlou get pod idm-56b6859478-sgm4t -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod idm-56b6859478-sgm4t has been restarted 0 times.
-------------- Check pod idm-56b6859478-tgkhg is running --------------
[loop_until]: kubectl --namespace=xlou get pods idm-56b6859478-tgkhg -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods idm-56b6859478-tgkhg -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod idm-56b6859478-tgkhg -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2022-09-20T01:54:50Z
--- stderr ---
------- Check pod idm-56b6859478-tgkhg filesystem is accessible -------
[loop_until]: kubectl --namespace=xlou exec idm-56b6859478-tgkhg -c openidm -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-11
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------ Check pod idm-56b6859478-tgkhg restart count ------------
[loop_until]: kubectl --namespace=xlou get pod idm-56b6859478-tgkhg -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod idm-56b6859478-tgkhg has been restarted 0 times.
**************************** Initializing component pods for DS-CTS ****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-cts -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app=ds-cts -o jsonpath={.items[*].metadata.name}` | grep 3
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
ds-cts-0 ds-cts-1 ds-cts-2
--- stderr ---
-------------------- Check pod ds-cts-0 is running --------------------
[loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-cts-0 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2022-09-20T01:54:54Z
--- stderr ---
------------- Check pod ds-cts-0 filesystem is accessible -------------
[loop_until]: kubectl --namespace=xlou exec ds-cts-0 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------------ Check pod ds-cts-0 restart count ------------------
[loop_until]: kubectl --namespace=xlou get pod ds-cts-0 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-cts-0 has been restarted 0 times.
-------------------- Check pod ds-cts-1 is running --------------------
[loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-cts-1 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2022-09-20T01:55:27Z
--- stderr ---
------------- Check pod ds-cts-1 filesystem is accessible -------------
[loop_until]: kubectl --namespace=xlou exec ds-cts-1 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------------ Check pod ds-cts-1 restart count ------------------
[loop_until]: kubectl --namespace=xlou get pod ds-cts-1 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-cts-1 has been restarted 0 times.
-------------------- Check pod ds-cts-2 is running --------------------
[loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-cts-2 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2022-09-20T01:55:58Z
--- stderr ---
------------- Check pod ds-cts-2 filesystem is accessible -------------
[loop_until]: kubectl --namespace=xlou exec ds-cts-2 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------------ Check pod ds-cts-2 restart count ------------------
[loop_until]: kubectl --namespace=xlou get pod ds-cts-2 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-cts-2 has been restarted 0 times.
************************** Initializing component pods for DS-IDREPO **************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-idrepo -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app=ds-idrepo -o jsonpath={.items[*].metadata.name}` | grep 3
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
ds-idrepo-0 ds-idrepo-1 ds-idrepo-2
--- stderr ---
------------------ Check pod ds-idrepo-0 is running ------------------
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2022-09-20T01:54:55Z
--- stderr ---
----------- Check pod ds-idrepo-0 filesystem is accessible -----------
[loop_until]: kubectl --namespace=xlou exec ds-idrepo-0 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
----------------- Check pod ds-idrepo-0 restart count -----------------
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-idrepo-0 has been restarted 0 times.
------------------ Check pod ds-idrepo-1 is running ------------------
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2022-09-20T01:55:36Z
--- stderr ---
----------- Check pod ds-idrepo-1 filesystem is accessible -----------
[loop_until]: kubectl --namespace=xlou exec ds-idrepo-1 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
----------------- Check pod ds-idrepo-1 restart count -----------------
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-idrepo-1 has been restarted 0 times.
------------------ Check pod ds-idrepo-2 is running ------------------
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2022-09-20T01:56:17Z
--- stderr ---
----------- Check pod ds-idrepo-2 filesystem is accessible -----------
[loop_until]: kubectl --namespace=xlou exec ds-idrepo-2 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
----------------- Check pod ds-idrepo-2 restart count -----------------
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-idrepo-2 has been restarted 0 times.
************************* Initializing component pods for END-USER-UI *************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].metadata.name}` | grep 1
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
end-user-ui-78f69bd8b6-57dgq
--- stderr ---
---------- Check pod end-user-ui-78f69bd8b6-57dgq is running ----------
[loop_until]: kubectl --namespace=xlou get pods end-user-ui-78f69bd8b6-57dgq -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods end-user-ui-78f69bd8b6-57dgq -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod end-user-ui-78f69bd8b6-57dgq -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2022-09-20T01:54:50Z
--- stderr ---
--- Check pod end-user-ui-78f69bd8b6-57dgq filesystem is accessible ---
[loop_until]: kubectl --namespace=xlou exec end-user-ui-78f69bd8b6-57dgq -c end-user-ui -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin
boot
dev
docker-entrypoint.d
docker-entrypoint.sh
entrypoint.sh
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
variable_replacement.sh
--- stderr ---
-------- Check pod end-user-ui-78f69bd8b6-57dgq restart count --------
[loop_until]: kubectl --namespace=xlou get pod end-user-ui-78f69bd8b6-57dgq -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod end-user-ui-78f69bd8b6-57dgq has been restarted 0 times.
*************************** Initializing component pods for LOGIN-UI ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].metadata.name}` | grep 1
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
login-ui-b8497c798-ktf4r
--- stderr ---
------------ Check pod login-ui-b8497c798-ktf4r is running ------------
[loop_until]: kubectl --namespace=xlou get pods login-ui-b8497c798-ktf4r -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods login-ui-b8497c798-ktf4r -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod login-ui-b8497c798-ktf4r -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2022-09-20T01:54:50Z
--- stderr ---
----- Check pod login-ui-b8497c798-ktf4r filesystem is accessible -----
[loop_until]: kubectl --namespace=xlou exec login-ui-b8497c798-ktf4r -c login-ui -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin
boot
dev
docker-entrypoint.d
docker-entrypoint.sh
entrypoint.sh
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
variable_replacement.sh
--- stderr ---
---------- Check pod login-ui-b8497c798-ktf4r restart count ----------
[loop_until]: kubectl --namespace=xlou get pod login-ui-b8497c798-ktf4r -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod login-ui-b8497c798-ktf4r has been restarted 0 times.
*************************** Initializing component pods for ADMIN-UI ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].metadata.name}` | grep 1
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
admin-ui-6445f46fb4-lgkkp
--- stderr ---
----------- Check pod admin-ui-6445f46fb4-lgkkp is running -----------
[loop_until]: kubectl --namespace=xlou get pods admin-ui-6445f46fb4-lgkkp -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods admin-ui-6445f46fb4-lgkkp -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod admin-ui-6445f46fb4-lgkkp -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2022-09-20T01:54:49Z
--- stderr ---
---- Check pod admin-ui-6445f46fb4-lgkkp filesystem is accessible ----
[loop_until]: kubectl --namespace=xlou exec admin-ui-6445f46fb4-lgkkp -c admin-ui -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin
boot
dev
docker-entrypoint.d
docker-entrypoint.sh
entrypoint.sh
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
variable_replacement.sh
--- stderr ---
---------- Check pod admin-ui-6445f46fb4-lgkkp restart count ----------
[loop_until]: kubectl --namespace=xlou get pod admin-ui-6445f46fb4-lgkkp -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod admin-ui-6445f46fb4-lgkkp has been restarted 0 times.
******************************* Checking AM component is running *******************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app=am -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
-------------- Waiting for 3 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get deployments am -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:3 replicas:3"
[loop_until]: (max_time=900, interval=30, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:3 replicas:3
--- stderr ---
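Editor's note: the readiness gate above compares readyReplicas to replicas on the am Deployment. The same status fields can be read with the official kubernetes Python client; a hedged sketch, assuming a kubeconfig the client can load:

    # Sketch using the kubernetes Python client rather than kubectl (assumption:
    # a kubeconfig is available to config.load_kube_config()).
    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()
    dep = apps.read_namespaced_deployment("am", "xlou")
    ready = dep.status.ready_replicas or 0
    assert ready == dep.spec.replicas, f"am: {ready}/{dep.spec.replicas} ready"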
***************************** Checking AMSTER component is running *****************************
------------------ Waiting for Amster job to finish ------------------
--------------------- Get expected number of pods ---------------------
-------------- Waiting for 1 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get jobs amster -o jsonpath="{.status.succeeded}" | grep "1"
[loop_until]: (max_time=900, interval=30, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
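Editor's note: for Amster the gate is a Job rather than a Deployment; the step waits for .status.succeeded to reach 1. A comparable sketch with the kubernetes Python client (same kubeconfig assumption as above; the 30s/900s values mirror the log):

    # Wait for the amster Job to report one successful completion (illustrative).
    import time
    from kubernetes import client, config

    config.load_kube_config()
    batch = client.BatchV1Api()
    deadline = time.time() + 900
    while time.time() < deadline:
        job = batch.read_namespaced_job("amster", "xlou")
        if (job.status.succeeded or 0) >= 1:
            break
        time.sleep(30)
    else:
        raise TimeoutError("amster job did not complete within 900s")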
****************************** Checking IDM component is running ******************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployment -l app=idm -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2
--- stderr ---
-------------- Waiting for 2 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get deployment idm -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:2 replicas:2"
[loop_until]: (max_time=900, interval=30, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:2 replicas:2
--- stderr ---
***************************** Checking DS-CTS component is running *****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-cts -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
-------------- Waiting for 3 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get statefulsets ds-cts -o jsonpath="current:{.status.currentReplicas} ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "current:3 ready:3 replicas:3"
[loop_until]: (max_time=900, interval=30, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
current:3 ready:3 replicas:3
--- stderr ---
*************************** Checking DS-IDREPO component is running ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-idrepo -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
-------------- Waiting for 3 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get statefulsets ds-idrepo -o jsonpath="current:{.status.currentReplicas} ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "current:3 ready:3 replicas:3"
[loop_until]: (max_time=900, interval=30, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
current:3 ready:3 replicas:3
--- stderr ---
************************** Checking END-USER-UI component is running **************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
-------------- Waiting for 1 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get deployments end-user-ui -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:1 replicas:1"
[loop_until]: (max_time=900, interval=30, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:1 replicas:1
--- stderr ---
**************************** Checking LOGIN-UI component is running ****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
-------------- Waiting for 1 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get deployments login-ui -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:1 replicas:1"
[loop_until]: (max_time=900, interval=30, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:1 replicas:1
--- stderr ---
**************************** Checking ADMIN-UI component is running ****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
-------------- Waiting for 1 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get deployments admin-ui -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:1 replicas:1"
[loop_until]: (max_time=900, interval=30, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:1 replicas:1
--- stderr ---
****************************** Livecheck stage: After deployment ******************************
------------------------ Running AM livecheck ------------------------
Livecheck to https://xlou.iam.xlou-cdm.engineeringpit.com/am/json/health/ready
[http_cmd]: curl -L -X GET "https://xlou.iam.xlou-cdm.engineeringpit.com/am/json/health/ready"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
[loop_until]: kubectl --namespace=xlou get secret am-env-secrets -o jsonpath="{.data.AM_PASSWORDS_AMADMIN_CLEAR}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
R1NwOXNFczVjbG1KMFB2eGVPQlV4aFdV
--- stderr ---
Authenticate user via REST
[http_cmd]: curl -H "X-OpenAM-Username: amadmin" -H "X-OpenAM-Password: GSp9sEs5clmJ0PvxeOBUxhWU" -H "Content-Type: application/json" -H "Accept-API-Version: resource=2.0, protocol=1.0" -L -X POST "https://xlou.iam.xlou-cdm.engineeringpit.com/am/json/authenticate?realm=/"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
{
"tokenId": "Zr8POI0D6cQa2tDIC2Uoc8RXPQo.*AAJTSQACMDIAAlNLABwzQW1oOGVuRjFoa1ZpUnZCR3AwcU1PRHZvdTg9AAR0eXBlAANDVFMAAlMxAAIwMQ..*",
"successUrl": "/am/console",
"realm": "/"
}
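Editor's note: the AM livecheck above hits /am/json/health/ready, base64-decodes the amadmin password from the am-env-secrets Secret, and then authenticates over REST. A condensed sketch of that flow (requests is an assumed dependency; the hostname and headers are taken from the log):

    # Condensed sketch of the AM livecheck: decode the amadmin password from the
    # Secret, then authenticate via the /json/authenticate endpoint.
    import base64
    import subprocess
    import requests

    BASE = "https://xlou.iam.xlou-cdm.engineeringpit.com/am"

    encoded = subprocess.run(
        ["kubectl", "--namespace=xlou", "get", "secret", "am-env-secrets",
         "-o", "jsonpath={.data.AM_PASSWORDS_AMADMIN_CLEAR}"],
        capture_output=True, text=True, check=True).stdout
    password = base64.b64decode(encoded).decode()

    assert requests.get(f"{BASE}/json/health/ready").status_code == 200

    resp = requests.post(
        f"{BASE}/json/authenticate?realm=/",
        headers={
            "X-OpenAM-Username": "amadmin",
            "X-OpenAM-Password": password,
            "Content-Type": "application/json",
            "Accept-API-Version": "resource=2.0, protocol=1.0",
        })
    token = resp.json()["tokenId"]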
---------------------- Running AMSTER livecheck ----------------------
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou --field-selector status.phase!=Failed get pods -l app=amster -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
amster-868bw
--- stderr ---
Amster import completed. AM is now configured
Amster livecheck passed

------------------------ Running IDM livecheck ------------------------
Livecheck to https://xlou.iam.xlou-cdm.engineeringpit.com/openidm/info/ping
[loop_until]: kubectl --namespace=xlou get secret idm-env-secrets -o jsonpath="{.data.OPENIDM_ADMIN_PASSWORD}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
a1k1S0l0R1d5M0gyWURRM3lWb01NT1hP
--- stderr ---
Set admin password: kY5KItGWy3H2YDQ3yVoMMOXO
[http_cmd]: curl -H "X-OpenIDM-Username: anonymous" -H "X-OpenIDM-Password: anonymous" -L -X GET "https://xlou.iam.xlou-cdm.engineeringpit.com/openidm/info/ping"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
{
"_id": "",
"_rev": "",
"shortDesc": "OpenIDM ready",
"state": "ACTIVE_READY"
}
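Editor's note: the IDM livecheck is simpler, an anonymous GET against /openidm/info/ping that should report state ACTIVE_READY. A minimal sketch:

    # Minimal sketch of the IDM ping livecheck shown above.
    import requests

    resp = requests.get(
        "https://xlou.iam.xlou-cdm.engineeringpit.com/openidm/info/ping",
        headers={"X-OpenIDM-Username": "anonymous", "X-OpenIDM-Password": "anonymous"})
    assert resp.status_code == 200
    assert resp.json()["state"] == "ACTIVE_READY"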
---------------------- Running DS-CTS livecheck ----------------------
Livecheck to ds-cts-0
[loop_until]: kubectl --namespace=xlou get secret ds-passwords -o jsonpath="{.data.dirmanager\.pw}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
eDdQODhnZVprUHlNZHVTVFVEMzJudFg4WVlzVzd5MGU=
--- stderr ---
[run_command]: kubectl --namespace=xlou exec ds-cts-0 -c ds -- ldapsearch --noPropertiesFile -p 1389 --useStartTls --trustAll -D "uid=admin" -w "x7P88geZkPyMduSTUD32ntX8YYsW7y0e" -b "" -s base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
Livecheck to ds-cts-1
[run_command]: kubectl --namespace=xlou exec ds-cts-1 -c ds -- ldapsearch --noPropertiesFile -p 1389 --useStartTls --trustAll -D "uid=admin" -w "x7P88geZkPyMduSTUD32ntX8YYsW7y0e" -b "" -s base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
Livecheck to ds-cts-2
[run_command]: kubectl --namespace=xlou exec ds-cts-2 -c ds -- ldapsearch --noPropertiesFile -p 1389 --useStartTls --trustAll -D "uid=admin" -w "x7P88geZkPyMduSTUD32ntX8YYsW7y0e" -b "" -s base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
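Editor's note: each DS livecheck runs ldapsearch inside the pod's ds container against the root DSE and expects "alive: true". Wrapping that kubectl exec in Python (flags copied from the log; the password is the decoded value from the ds-passwords Secret shown above):

    # Sketch of one DS livecheck: exec ldapsearch in the pod and look for "alive: true".
    import subprocess

    def ds_alive(pod, password, namespace="xlou"):
        out = subprocess.run(
            ["kubectl", f"--namespace={namespace}", "exec", pod, "-c", "ds", "--",
             "ldapsearch", "--noPropertiesFile", "-p", "1389", "--useStartTls",
             "--trustAll", "-D", "uid=admin", "-w", password,
             "-b", "", "-s", "base", "(&)", "alive"],
            capture_output=True, text=True, check=True).stdout
        return "alive: true" in out

    ds_alive("ds-cts-0", "x7P88geZkPyMduSTUD32ntX8YYsW7y0e")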
--------------------- Running DS-IDREPO livecheck ---------------------
Livecheck to ds-idrepo-0
[loop_until]: kubectl --namespace=xlou get secret ds-passwords -o jsonpath="{.data.dirmanager\.pw}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
eDdQODhnZVprUHlNZHVTVFVEMzJudFg4WVlzVzd5MGU=
--- stderr ---
[run_command]: kubectl --namespace=xlou exec ds-idrepo-0 -c ds -- ldapsearch --noPropertiesFile -p 1389 --useStartTls --trustAll -D "uid=admin" -w "x7P88geZkPyMduSTUD32ntX8YYsW7y0e" -b "" -s base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
Livecheck to ds-idrepo-1
[run_command]: kubectl --namespace=xlou exec ds-idrepo-1 -c ds -- ldapsearch --noPropertiesFile -p 1389 --useStartTls --trustAll -D "uid=admin" -w "x7P88geZkPyMduSTUD32ntX8YYsW7y0e" -b "" -s base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
Livecheck to ds-idrepo-2
[run_command]: kubectl --namespace=xlou exec ds-idrepo-2 -c ds -- ldapsearch --noPropertiesFile -p 1389 --useStartTls --trustAll -D "uid=admin" -w "x7P88geZkPyMduSTUD32ntX8YYsW7y0e" -b "" -s base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
-------------------- Running END-USER-UI livecheck --------------------
Livecheck to https://xlou.iam.xlou-cdm.engineeringpit.com/enduser
[http_cmd]: curl -L -X GET "https://xlou.iam.xlou-cdm.engineeringpit.com/enduser"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
Identity Management
[]
--------------------- Running LOGIN-UI livecheck ---------------------
Livecheck to https://xlou.iam.xlou-cdm.engineeringpit.com/am/XUI
[http_cmd]: curl -L -X GET "https://xlou.iam.xlou-cdm.engineeringpit.com/am/XUI"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
Login
[]
--------------------- Running ADMIN-UI livecheck ---------------------
Livecheck to https://xlou.iam.xlou-cdm.engineeringpit.com/platform
[http_cmd]: curl -L -X GET "https://xlou.iam.xlou-cdm.engineeringpit.com/platform"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
Platform Admin
[]
LIVECHECK SUCCEEDED
****************************** Initializing component pods for AM ******************************
----------------------- Get AM software version -----------------------
Getting product version from https://xlou.iam.xlou-cdm.engineeringpit.com/am/json/serverinfo/version
- Log in as amadmin to get token
Authenticate user via REST
[http_cmd]: curl -H "X-OpenAM-Username: amadmin" -H "X-OpenAM-Password: GSp9sEs5clmJ0PvxeOBUxhWU" -H "Content-Type: application/json" -H "Accept-API-Version: resource=2.0, protocol=1.0" -L -X POST "https://xlou.iam.xlou-cdm.engineeringpit.com/am/json/authenticate?realm=/"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
{
"tokenId": "SZepJC4DpLCGXIgGkHb08cmCRS0.*AAJTSQACMDIAAlNLABxUZWgyWVE0bnVldk40TEw0Y3pwMU1MR3h6RlE9AAR0eXBlAANDVFMAAlMxAAIwMQ..*",
"successUrl": "/am/console",
"realm": "/"
}
[http_cmd]: curl -L -X GET --cookie "amlbcookie=01" --cookie "iPlanetDirectoryPro=SZepJC4DpLCGXIgGkHb08cmCRS0.*AAJTSQACMDIAAlNLABxUZWgyWVE0bnVldk40TEw0Y3pwMU1MR3h6RlE9AAR0eXBlAANDVFMAAlMxAAIwMQ..*" --cookie "route=1663639070.949.9641.969897|f60edb382037eb2df1e800d563ad78a7" "https://xlou.iam.xlou-cdm.engineeringpit.com/am/json/serverinfo/version"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
{
"_id": "version",
"_rev": "-59589352",
"version": "7.3.0-SNAPSHOT",
"fullVersion": "ForgeRock Access Management 7.3.0-SNAPSHOT Build 26bd5a1a61039a7def619d251469fd588a5c4dfc (2022-September-14 16:17)",
"revision": "26bd5a1a61039a7def619d251469fd588a5c4dfc",
"date": "2022-September-14 16:17"
}
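Editor's note: the AM version lookup reuses a fresh amadmin session; the tokenId returned by /json/authenticate is passed back as the iPlanetDirectoryPro cookie when fetching /json/serverinfo/version. A sketch (the route cookie in the log is a load-balancer specific and is omitted here; whether it is required depends on the ingress):

    # Fetch the AM server version with the session token as the iPlanetDirectoryPro cookie.
    import requests

    token = "<tokenId from /json/authenticate>"   # e.g. the value returned above
    info = requests.get(
        "https://xlou.iam.xlou-cdm.engineeringpit.com/am/json/serverinfo/version",
        cookies={"iPlanetDirectoryPro": token, "amlbcookie": "01"})
    print(info.json()["version"], info.json()["fullVersion"])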
**************************** Initializing component pods for AMSTER ****************************
***************************** Initializing component pods for IDM *****************************
---------------------- Get IDM software version ----------------------
Getting product version from https://xlou.iam.xlou-cdm.engineeringpit.com/openidm/info/version
[http_cmd]: curl -H "X-OpenIDM-Username: anonymous" -H "X-OpenIDM-Password: anonymous" -L -X GET "https://xlou.iam.xlou-cdm.engineeringpit.com/openidm/info/version"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
{
"_id": "version",
"productVersion": "7.3.0-SNAPSHOT",
"productBuildDate": "20220912204640",
"productRevision": "0382f37"
}
**************************** Initializing component pods for DS-CTS ****************************
--------------------- Get DS-CTS software version ---------------------
[loop_until]: kubectl --namespace=xlou exec ds-cts-0 -c ds -- find /opt/opendj -name opendj-core.jar
[loop_until]: (max_time=30, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
/opt/opendj/lib/opendj-core.jar
--- stderr ---
[loop_until]: kubectl --namespace=xlou cp ds-cts-0:/opt/opendj/lib/opendj-core.jar /tmp/ds-cts_info/opendj-core.jar -c ds
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
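Editor's note: DS exposes no HTTP version endpoint in this flow, so the jar is located inside the pod and copied out with kubectl cp; the version is then presumably read from the jar itself. A hedged sketch of reading the copied jar's manifest (the exact field pyrock inspects is an assumption):

    # Assumption: the version is derived from the copied jar. One way is to read
    # META-INF/MANIFEST.MF inside /tmp/ds-cts_info/opendj-core.jar.
    import zipfile

    with zipfile.ZipFile("/tmp/ds-cts_info/opendj-core.jar") as jar:
        manifest = jar.read("META-INF/MANIFEST.MF").decode("utf-8", "replace")
    for line in manifest.splitlines():
        if "Version" in line:
            print(line)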
************************** Initializing component pods for DS-IDREPO **************************
------------------- Get DS-IDREPO software version -------------------
[loop_until]: kubectl --namespace=xlou exec ds-idrepo-0 -c ds -- find /opt/opendj -name opendj-core.jar
[loop_until]: (max_time=30, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
/opt/opendj/lib/opendj-core.jar
--- stderr ---
[loop_until]: kubectl --namespace=xlou cp ds-idrepo-0:/opt/opendj/lib/opendj-core.jar /tmp/ds-idrepo_info/opendj-core.jar -c ds
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
************************* Initializing component pods for END-USER-UI *************************
------------------ Get END-USER-UI software version ------------------
[loop_until]: kubectl --namespace=xlou exec end-user-ui-78f69bd8b6-57dgq -c end-user-ui -- find /usr/share/nginx/html -name chunk-vendors.*.js
[loop_until]: (max_time=30, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
/usr/share/nginx/html/js/chunk-vendors.9837f05d.js
--- stderr ---
[loop_until]: kubectl --namespace=xlou cp end-user-ui-78f69bd8b6-57dgq:/usr/share/nginx/html/js/chunk-vendors.9837f05d.js /tmp/end-user-ui_info/chunk-vendors.9837f05d.js -c end-user-ui
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
*************************** Initializing component pods for LOGIN-UI ***************************
-------------------- Get LOGIN-UI software version --------------------
[loop_until]: kubectl --namespace=xlou exec login-ui-b8497c798-ktf4r -c login-ui -- find /usr/share/nginx/html -name chunk-vendors.*.js
[loop_until]: (max_time=30, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
/usr/share/nginx/html/js/chunk-vendors.3129d5ef.js
--- stderr ---
[loop_until]: kubectl --namespace=xlou cp login-ui-b8497c798-ktf4r:/usr/share/nginx/html/js/chunk-vendors.3129d5ef.js /tmp/login-ui_info/chunk-vendors.3129d5ef.js -c login-ui
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
*************************** Initializing component pods for ADMIN-UI ***************************
-------------------- Get ADMIN-UI software version --------------------
[loop_until]: kubectl --namespace=xlou exec admin-ui-6445f46fb4-lgkkp -c admin-ui -- find /usr/share/nginx/html -name chunk-vendors.*.js
[loop_until]: (max_time=30, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
/usr/share/nginx/html/js/chunk-vendors.8aa4d844.js
--- stderr ---
[loop_until]: kubectl --namespace=xlou cp admin-ui-6445f46fb4-lgkkp:/usr/share/nginx/html/js/chunk-vendors.8aa4d844.js /tmp/admin-ui_info/chunk-vendors.8aa4d844.js -c admin-ui
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
====================================================================================================
====================== Admin password for AM is: GSp9sEs5clmJ0PvxeOBUxhWU ======================
====================================================================================================
====================================================================================================
===================== Admin password for IDM is: kY5KItGWy3H2YDQ3yVoMMOXO =====================
====================================================================================================
====================================================================================================
================ Admin password for DS-CTS is: x7P88geZkPyMduSTUD32ntX8YYsW7y0e ================
====================================================================================================
====================================================================================================
============== Admin password for DS-IDREPO is: x7P88geZkPyMduSTUD32ntX8YYsW7y0e ==============
====================================================================================================
*************************************** Dumping pod list ***************************************
Dumping pod list to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/authn_rest/pod-logs/stack/20220920-015801-after-deployment/_pod-list.txt
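Editor's note: the pod-list dump is a snapshot of kubectl get pods written to the results directory. A minimal sketch (the log only shows the destination path, so "-o wide" is an assumed output format):

    # Write the current pod list for the namespace to the results file (illustrative).
    import subprocess, pathlib

    dest = pathlib.Path("/mnt/disks/data/xslou/lodestar-fork/results/pyrock/authn_rest/"
                        "pod-logs/stack/20220920-015801-after-deployment/_pod-list.txt")
    dest.parent.mkdir(parents=True, exist_ok=True)
    out = subprocess.run(["kubectl", "--namespace=xlou", "get", "pods", "-o", "wide"],
                         capture_output=True, text=True, check=True).stdout
    dest.write_text(out)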
****************************** Initializing component pods for AM ******************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app=am -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app=am -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
am-66bfc88d64-9ksxm am-66bfc88d64-g2n9b am-66bfc88d64-q2t85
--- stderr ---
**************************** Initializing component pods for AMSTER ****************************
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app=amster -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
amster-868bw
--- stderr ---
***************************** Initializing component pods for IDM *****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployment -l app=idm -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app=idm -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
idm-56b6859478-sgm4t idm-56b6859478-tgkhg
--- stderr ---
**************************** Initializing component pods for DS-CTS ****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-cts -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app=ds-cts -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
ds-cts-0 ds-cts-1 ds-cts-2
--- stderr ---
************************** Initializing component pods for DS-IDREPO **************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-idrepo -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app=ds-idrepo -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
ds-idrepo-0 ds-idrepo-1 ds-idrepo-2
--- stderr ---
************************* Initializing component pods for END-USER-UI *************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
end-user-ui-78f69bd8b6-57dgq
--- stderr ---
*************************** Initializing component pods for LOGIN-UI ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
login-ui-b8497c798-ktf4r
--- stderr ---
*************************** Initializing component pods for ADMIN-UI ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
admin-ui-6445f46fb4-lgkkp
--- stderr ---
*********************************** Dumping components logs ***********************************
------------------------- Dumping logs for AM -------------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/authn_rest/pod-logs/stack/20220920-015801-after-deployment/am-66bfc88d64-9ksxm.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/authn_rest/pod-logs/stack/20220920-015801-after-deployment/am-66bfc88d64-g2n9b.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/authn_rest/pod-logs/stack/20220920-015801-after-deployment/am-66bfc88d64-q2t85.txt
Check pod logs for errors
----------------------- Dumping logs for AMSTER -----------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/authn_rest/pod-logs/stack/20220920-015801-after-deployment/amster-868bw.txt
Check pod logs for errors
------------------------ Dumping logs for IDM ------------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/authn_rest/pod-logs/stack/20220920-015801-after-deployment/idm-56b6859478-sgm4t.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/authn_rest/pod-logs/stack/20220920-015801-after-deployment/idm-56b6859478-tgkhg.txt
Check pod logs for errors
----------------------- Dumping logs for DS-CTS -----------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/authn_rest/pod-logs/stack/20220920-015801-after-deployment/ds-cts-0.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/authn_rest/pod-logs/stack/20220920-015801-after-deployment/ds-cts-1.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/authn_rest/pod-logs/stack/20220920-015801-after-deployment/ds-cts-2.txt
Check pod logs for errors
--------------------- Dumping logs for DS-IDREPO ---------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/authn_rest/pod-logs/stack/20220920-015801-after-deployment/ds-idrepo-0.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/authn_rest/pod-logs/stack/20220920-015801-after-deployment/ds-idrepo-1.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/authn_rest/pod-logs/stack/20220920-015801-after-deployment/ds-idrepo-2.txt
Check pod logs for errors
-------------------- Dumping logs for END-USER-UI --------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/authn_rest/pod-logs/stack/20220920-015801-after-deployment/end-user-ui-78f69bd8b6-57dgq.txt
Check pod logs for errors
---------------------- Dumping logs for LOGIN-UI ----------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/authn_rest/pod-logs/stack/20220920-015801-after-deployment/login-ui-b8497c798-ktf4r.txt
Check pod logs for errors
---------------------- Dumping logs for ADMIN-UI ----------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/authn_rest/pod-logs/stack/20220920-015801-after-deployment/admin-ui-6445f46fb4-lgkkp.txt
Check pod logs for errors
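Editor's note: each per-pod dump above writes one file per pod into the same results directory. A sketch of that pattern (combining "kubectl describe pod" with "kubectl logs" is an assumption about what "pod description and logs" contains; file naming follows the log):

    # Dump pod description plus container logs into a single text file per pod.
    import subprocess, pathlib

    def dump_pod(pod, out_dir, namespace="xlou"):
        describe = subprocess.run(
            ["kubectl", f"--namespace={namespace}", "describe", "pod", pod],
            capture_output=True, text=True).stdout
        logs = subprocess.run(
            ["kubectl", f"--namespace={namespace}", "logs", pod, "--all-containers=true"],
            capture_output=True, text=True).stdout
        path = pathlib.Path(out_dir) / f"{pod}.txt"
        path.write_text(describe + "\n" + logs)
        return path

    dump_pod("admin-ui-6445f46fb4-lgkkp",
             "/mnt/disks/data/xslou/lodestar-fork/results/pyrock/authn_rest/"
             "pod-logs/stack/20220920-015801-after-deployment")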
[20/Sep/2022 01:58:22] - INFO: Deployment successful
________________________________________________________________________________
[20/Sep/2022 01:58:22] Deploy_all_forgerock_components post : Post method
________________________________________________________________________________
Setting result to PASS
Task has been successfully stopped