--Task--
name: Deploy_all_forgerock_components
enabled: True
class_name: DeployComponentsTask
source_name: controller
source_namespace: >default<
target_name: controller
target_namespace: >default<
start: 0
stop: None
timeout: no timeout
loop: False
interval: None
dependencies: ['Enable_prometheus_admin_api']
wait_for: []
options: {}
group_name: None
Current dir: /mnt/disks/data/xslou/lodestar-fork/pyrock
________________________________________________________________________________
[20/Sep/2022 21:13:19] Deploy_all_forgerock_components pre : Initialising task parameters
________________________________________________________________________________
task will be executed on controller (localhost)
________________________________________________________________________________
[20/Sep/2022 21:13:19] Deploy_all_forgerock_components step1 : Deploy components
________________________________________________________________________________
******************************** Cleaning up existing namespace ********************************
------------------ Deleting secret agent controller ------------------
[loop_until]: kubectl --namespace=xlou delete sac --all
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secretagentconfiguration.secret-agent.secrets.forgerock.io "forgerock-sac" deleted
--- stderr ---
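The `[loop_until]` entries above come from a retry wrapper that re-runs a shell command until its return code is in `expected_rc` (and, where shown later in this log, until an expected string appears in the output), giving up after `max_time` seconds. A minimal sketch of such a helper, assuming behaviour only from what the log prints (this is not the actual pyrock implementation):

```python
import subprocess
import time

def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,), expected_output=None):
    """Re-run `cmd` until its return code is in `expected_rc` and, if given,
    `expected_output` appears in stdout/stderr; retry every `interval` seconds
    until `max_time` seconds have elapsed, then raise."""
    deadline = time.monotonic() + max_time
    while True:
        proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        rc_ok = proc.returncode in expected_rc
        out_ok = expected_output is None or expected_output in (proc.stdout + proc.stderr)
        if rc_ok and out_ok:
            return proc
        if time.monotonic() >= deadline:
            raise TimeoutError(f"loop_until: {cmd!r} did not succeed within {max_time}s")
        time.sleep(interval)
```

For example, the first cleanup step would be `loop_until("kubectl --namespace=xlou delete sac --all")`, and the later pod check would pass `expected_output="No resources found"`.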
----------------------- Deleting all resources -----------------------
[loop_until]: kubectl --namespace=xlou delete all --all --grace-period=0 --force
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
pod "admin-ui-9d465f5b9-rwwj7" force deleted
pod "am-66bfc88d64-fqkgv" force deleted
pod "am-66bfc88d64-gc6kp" force deleted
pod "am-66bfc88d64-j69t9" force deleted
pod "amster-q9dvk" force deleted
pod "ds-cts-0" force deleted
pod "ds-cts-1" force deleted
pod "ds-cts-2" force deleted
pod "ds-idrepo-0" force deleted
pod "ds-idrepo-1" force deleted
pod "ds-idrepo-2" force deleted
pod "end-user-ui-58f7b744b5-f2gxw" force deleted
pod "idm-56b6859478-gsjqz" force deleted
pod "idm-56b6859478-hshfn" force deleted
pod "ldif-importer-srsbb" force deleted
pod "login-ui-7678dc66c-j2glg" force deleted
pod "overseer-0-7f58648764-rsp2p" force deleted
service "admin-ui" force deleted
service "am" force deleted
service "ds-cts" force deleted
service "ds-idrepo" force deleted
service "end-user-ui" force deleted
service "idm" force deleted
service "login-ui" force deleted
service "overseer-0" force deleted
deployment.apps "admin-ui" force deleted
deployment.apps "am" force deleted
deployment.apps "end-user-ui" force deleted
deployment.apps "idm" force deleted
deployment.apps "login-ui" force deleted
deployment.apps "overseer-0" force deleted
statefulset.apps "ds-cts" force deleted
statefulset.apps "ds-idrepo" force deleted
job.batch "amster" force deleted
job.batch "ldif-importer" force deleted
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
[loop_until]: kubectl -n xlou get pods | grep "No resources found"
[loop_until]: (max_time=360, interval=10, expected_rc=[0])
[loop_until]: Function completed after 0s (rc=0) - expected output not found: No resources found - retrying
[loop_until]: Function completed after 10s (rc=0) - expected output not found: No resources found - retrying
[loop_until]: Function completed after 21s (rc=0) - expected output not found: No resources found - retrying
[loop_until]: Function succeeded after 31s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
No resources found in xlou namespace.
------------------------- Deleting configmap -------------------------
[loop_until]: kubectl --namespace=xlou get configmap -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
idm idm-logging-properties kube-root-ca.crt overseer-config-0 platform-config
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap idm --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "idm" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap idm-logging-properties --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "idm-logging-properties" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap kube-root-ca.crt --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "kube-root-ca.crt" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap overseer-config-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "overseer-config-0" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap platform-config --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "platform-config" deleted
--- stderr ---
--------------------------- Deleting secret ---------------------------
[loop_until]: kubectl --namespace=xlou get secret -o jsonpath='{.items[?(@.type=="Opaque")].metadata.name}'
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
cloud-storage-credentials-cts cloud-storage-credentials-idrepo
--- stderr ---
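The jsonpath filter `{.items[?(@.type=="Opaque")].metadata.name}` used above selects only user-created Opaque secrets, leaving alone the service-account-token and TLS secrets Kubernetes manages itself. The same selection can be reproduced in Python over the JSON that `kubectl get secret -o json` returns; the sample data below is illustrative, not taken from the cluster:

```python
def opaque_secret_names(secret_list):
    """Mimic the jsonpath filter .items[?(@.type=="Opaque")].metadata.name
    over a parsed `kubectl get secret -o json` document."""
    return [item["metadata"]["name"]
            for item in secret_list.get("items", [])
            if item.get("type") == "Opaque"]

# Illustrative structure matching the shape of `kubectl get secret -o json`
sample = {"items": [
    {"metadata": {"name": "cloud-storage-credentials-cts"}, "type": "Opaque"},
    {"metadata": {"name": "cloud-storage-credentials-idrepo"}, "type": "Opaque"},
    {"metadata": {"name": "default-token-abc12"},
     "type": "kubernetes.io/service-account-token"},
]}
```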
[loop_until]: kubectl --namespace=xlou delete secret cloud-storage-credentials-cts --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secret "cloud-storage-credentials-cts" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete secret cloud-storage-credentials-idrepo --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secret "cloud-storage-credentials-idrepo" deleted
--- stderr ---
-------------------------- Deleting ingress --------------------------
[loop_until]: kubectl --namespace=xlou get ingress -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
forgerock ig-web overseer-0
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete ingress forgerock --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ingress.networking.k8s.io "forgerock" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete ingress ig-web --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ingress.networking.k8s.io "ig-web" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete ingress overseer-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ingress.networking.k8s.io "overseer-0" deleted
--- stderr ---
---------------------------- Deleting pvc ----------------------------
[loop_until]: kubectl --namespace=xlou get pvc -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
data-ds-cts-0 data-ds-cts-1 data-ds-cts-2 data-ds-idrepo-0 data-ds-idrepo-1 data-ds-idrepo-2 overseer-0
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-cts-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-cts-0" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-cts-1 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-cts-1" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-cts-2 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-cts-2" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-idrepo-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-idrepo-0" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-idrepo-1 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-idrepo-1" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-idrepo-2 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-idrepo-2" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc overseer-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "overseer-0" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pv ds-backup-xlou --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
warning: deleting cluster-scoped resources, not scoped to the provided namespace
----------------- Deleting admin clusterrolebindings -----------------
[loop_until]: kubectl get clusterrolebinding -o jsonpath="{range .items[?(@.subjects[0].namespace=='xlou')]}{.metadata.name} {end}"
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
k8s-svc-acct-crb-xlou-0
--- stderr ---
Deleting clusterrolebinding k8s-svc-acct-crb-xlou-0 associated with xlou namespace
[loop_until]: kubectl delete clusterrolebinding k8s-svc-acct-crb-xlou-0
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
clusterrolebinding.rbac.authorization.k8s.io "k8s-svc-acct-crb-xlou-0" deleted
--- stderr ---
------------------------- Deleting namespace -------------------------
[loop_until]: kubectl delete namespaces xlou --ignore-not-found --grace-period=0 --force
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
namespace "xlou" force deleted
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
[loop_until]: awk -F" " "{print NF}" <<< `kubectl get namespace xlou --ignore-not-found` | grep 0
[loop_until]: (max_time=600, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
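The awk one-liner above detects namespace deletion by counting whitespace-separated fields in the `kubectl get namespace xlou --ignore-not-found` output. Because the `<<<` here-string always feeds awk at least one (possibly empty) line, a deleted namespace yields a field count (`NF`) of 0, which `grep 0` matches; an existing namespace yields header and data rows with nonzero field counts. The equivalent check in Python (a sketch of the logic; the task itself shells out as shown):

```python
def field_count(line: str) -> int:
    """awk's NF with the default field separator: number of whitespace-separated fields."""
    return len(line.split())

def namespace_gone(kubectl_output: str) -> bool:
    """True when every line of `kubectl get namespace <ns> --ignore-not-found`
    output is empty, i.e. the namespace no longer exists."""
    return all(field_count(line) == 0 for line in kubectl_output.splitlines())
```

One caveat with the shell version: `grep 0` also matches a field count of 10 or 20; anchoring the match (`grep -x 0`) would be stricter.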
************************************* Creating deployment *************************************
Creating a normal (forgeops) type deployment for: stack
------- Custom component configuration present. Loading values -------
------------------ Deleting secret agent controller ------------------
[loop_until]: kubectl --namespace=xlou delete sac --all
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
No resources found
--- stderr ---
----------------------- Deleting all resources -----------------------
[loop_until]: kubectl --namespace=xlou delete all --all --grace-period=0 --force
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
No resources found
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
[loop_until]: kubectl -n xlou get pods | grep "No resources found"
[loop_until]: (max_time=360, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
No resources found in xlou namespace.
------------------------- Deleting configmap -------------------------
[loop_until]: kubectl --namespace=xlou get configmap -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
--------------------------- Deleting secret ---------------------------
[loop_until]: kubectl --namespace=xlou get secret -o jsonpath='{.items[?(@.type=="Opaque")].metadata.name}'
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
-------------------------- Deleting ingress --------------------------
[loop_until]: kubectl --namespace=xlou get ingress -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
---------------------------- Deleting pvc ----------------------------
[loop_until]: kubectl --namespace=xlou get pvc -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pv ds-backup-xlou --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
warning: deleting cluster-scoped resources, not scoped to the provided namespace
----------------- Deleting admin clusterrolebindings -----------------
[loop_until]: kubectl get clusterrolebinding -o jsonpath="{range .items[?(@.subjects[0].namespace=='xlou')]}{.metadata.name} {end}"
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
------------------------- Deleting namespace -------------------------
[loop_until]: kubectl delete namespaces xlou --ignore-not-found --grace-period=0 --force
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
[loop_until]: awk -F" " "{print NF}" <<< `kubectl get namespace xlou --ignore-not-found` | grep 0
[loop_until]: (max_time=600, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
[loop_until]: kubectl create namespace xlou
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
namespace/xlou created
--- stderr ---
[loop_until]: kubectl label namespace xlou self-service=false timeout=48
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
namespace/xlou labeled
--- stderr ---
************************************ Configuring components ************************************
Applying custom configuration and Dockerfiles to the deployment, along with custom lodestar component configuration
Copying /mnt/disks/data/xslou/lodestar-fork/shared/config/custom/session_timeout_3minutes/docker/am/config-profiles/cdk to /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am/config-profiles/cdk
No custom features provided. Nothing to do.
---- Updating components image tag/repo from platform-images repo ----
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --clean
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Cleaning up.
[WARNING] Found nothing to clean.
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref master-ready-for-dev-pipelines --products am
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images.
[INFO] Repo is at c29284ae8ee1c69312cf738cfc7186e2264dafdd on branch HEAD
[INFO] Updating products am
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am/Dockerfile to use gcr.io/forgerock-io/am-cdk/pit1:7.3.0-88b44ce08784f4cc4fb13a9daa02952942b9b12b
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref master-ready-for-dev-pipelines --products amster
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images.
[INFO] Found existing files, attempting to not clone
[INFO] Repo is at c29284ae8ee1c69312cf738cfc7186e2264dafdd on branch HEAD
[INFO] Updating products amster
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster/Dockerfile to use gcr.io/forgerock-io/amster/pit1:7.3.0-88b44ce08784f4cc4fb13a9daa02952942b9b12b
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref master-ready-for-dev-pipelines --products idm
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images.
[INFO] Found existing files, attempting to not clone
[INFO] Repo is at c29284ae8ee1c69312cf738cfc7186e2264dafdd on branch HEAD
[INFO] Updating products idm
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm/Dockerfile to use gcr.io/forgerock-io/idm-cdk/pit1:7.3.0-0382f3752dde05542858b37bb329fb76ce8fc972
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref master-ready-for-dev-pipelines --products ds
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images.
[INFO] Found existing files, attempting to not clone
[INFO] Repo is at c29284ae8ee1c69312cf738cfc7186e2264dafdd on branch HEAD
[INFO] Updating products ds
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/dsutil/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-954176524467342398ae1a6e1012191e730e09e5
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/ds-new/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-954176524467342398ae1a6e1012191e730e09e5
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/proxy/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-954176524467342398ae1a6e1012191e730e09e5
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-954176524467342398ae1a6e1012191e730e09e5
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-954176524467342398ae1a6e1012191e730e09e5
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref master-ready-for-dev-pipelines --products ui
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images.
[INFO] Found existing files, attempting to not clone
[INFO] Repo is at c29284ae8ee1c69312cf738cfc7186e2264dafdd on branch HEAD
[INFO] Updating products ui
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui to use gcr.io/forgerock-io/platform-admin-ui/docker-build:7.2.0-d196e3355c0bba2d6b2ea37f9bfcee23a61da319
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui to use gcr.io/forgerock-io/platform-enduser-ui/docker-build:7.2.0-d196e3355c0bba2d6b2ea37f9bfcee23a61da319
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui to use gcr.io/forgerock-io/platform-login-ui/docker-build:7.2.0-d196e3355c0bba2d6b2ea37f9bfcee23a61da319
--- stderr ---
- Checking if component Dockerfile/kustomize needs additional update -
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker am
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am
--- stderr ---
Checking if am dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am/Dockerfile: gcr.io/forgerock-io/am-cdk/pit1:7.3.0-88b44ce08784f4cc4fb13a9daa02952942b9b12b
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker amster
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster
--- stderr ---
Checking if amster dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster/Dockerfile: gcr.io/forgerock-io/amster/pit1:7.3.0-88b44ce08784f4cc4fb13a9daa02952942b9b12b
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker idm
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm
--- stderr ---
Checking if idm dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm/Dockerfile: gcr.io/forgerock-io/idm-cdk/pit1:7.3.0-0382f3752dde05542858b37bb329fb76ce8fc972
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker ds cts
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts
--- stderr ---
Checking if ds-cts dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts/Dockerfile: gcr.io/forgerock-io/ds/pit1:7.3.0-954176524467342398ae1a6e1012191e730e09e5
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker ds idrepo
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo
--- stderr ---
Checking if ds-idrepo dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo/Dockerfile: gcr.io/forgerock-io/ds/pit1:7.3.0-954176524467342398ae1a6e1012191e730e09e5
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path kustomize base end-user-ui
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui
--- stderr ---
Checking if end-user-ui kustomize file /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui/kustomization.yaml needs additional update (custom tag/repo from config.yaml or image name resolution)
Read image line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui/kustomization.yaml: gcr.io/forgerock-io/platform-enduser-ui/docker-build:7.2.0-d196e3355c0bba2d6b2ea37f9bfcee23a61da319
No need to update the image line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui/kustomization.yaml
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path kustomize base login-ui
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui
--- stderr ---
Checking if login-ui kustomize file /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui/kustomization.yaml needs additional update (custom tag/repo from config.yaml or image name resolution)
Read image line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui/kustomization.yaml: gcr.io/forgerock-io/platform-login-ui/docker-build:7.2.0-d196e3355c0bba2d6b2ea37f9bfcee23a61da319
No need to update the image line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui/kustomization.yaml
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path kustomize base admin-ui
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui
--- stderr ---
Checking if admin-ui kustomize file /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui/kustomization.yaml needs additional update (custom tag/repo from config.yaml or image name resolution)
Read image line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui/kustomization.yaml: gcr.io/forgerock-io/platform-admin-ui/docker-build:7.2.0-d196e3355c0bba2d6b2ea37f9bfcee23a61da319
No need to update the image line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui/kustomization.yaml
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path kustomize overlay medium
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/overlay/medium
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete -f /tmp/tmpp5xwjuiv
[loop_until]: (max_time=180, interval=5, expected_rc=[0, 1])
[loop_until]: OK (rc = 1)
--- stdout ---
--- stderr ---
Error from server (NotFound): error when deleting "/tmp/tmpp5xwjuiv": secrets "sslcert" not found
[loop_until]: kubectl --namespace=xlou apply -f /tmp/tmpp5xwjuiv
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secret/sslcert created
--- stderr ---
The following components will be deployed:
- am (AM)
- amster (Amster)
- idm (IDM)
- ds-cts (DS)
- ds-idrepo (DS)
- end-user-ui (EndUserUi)
- login-ui (LoginUi)
- admin-ui (AdminUi)
Running create-secrets.sh to create passwords
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/create-secrets.sh xlou
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
certificate.cert-manager.io/ds-master-cert created
certificate.cert-manager.io/ds-ssl-cert created
issuer.cert-manager.io/selfsigned-issuer created
secretagentconfiguration.secret-agent.secrets.forgerock.io/forgerock-sac created
--- stderr ---
[loop_until]: kubectl --namespace=secret-agent-system wait --for=condition=available deployment --all | grep "condition met"
[loop_until]: (max_time=300, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
deployment.apps/secret-agent-controller-manager condition met
--- stderr ---
[loop_until]: kubectl --namespace=secret-agent-system wait --for=condition=ready pod --all | grep "condition met"
[loop_until]: (max_time=300, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
pod/secret-agent-controller-manager-59fcd58bbc-zc5tz condition met
--- stderr ---
[run_command]: skaffold build --file-output=/mnt/disks/data/xslou/lodestar-fork/ext/pre-built-images/medium.json --default-repo gcr.io/engineeringpit/lodestar-images --profile medium --config=/tmp/tmpsm_ht6ay --cache-artifacts=false --tag xlou --namespace=xlou
[run_command]: env={'PATH': '/mnt/disks/data/xslou/lodestar-fork/ext/bin:/home/xslou/.local/bin:/home/xslou/bin:~/bin:~/workshop/lodestar-fork/ext/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin', 'CONFIG_PROFILE': 'cdk'}
Generating tags...
- am -> gcr.io/engineeringpit/lodestar-images/am:xlou
- amster -> gcr.io/engineeringpit/lodestar-images/amster:xlou
- idm -> gcr.io/engineeringpit/lodestar-images/idm:xlou
- ds-cts -> gcr.io/engineeringpit/lodestar-images/ds-cts:xlou
- ds-idrepo -> gcr.io/engineeringpit/lodestar-images/ds-idrepo:xlou
- ig -> gcr.io/engineeringpit/lodestar-images/ig:xlou
- ds -> gcr.io/engineeringpit/lodestar-images/ds:xlou
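The tag generation step above combines the `--default-repo` value, each skaffold artifact name, and the `--tag` value into a full image reference. A sketch of that mapping, assumed from the log output rather than taken from skaffold's source:

```python
def generate_tags(artifacts, default_repo, tag):
    """Map each artifact name to <default-repo>/<name>:<tag>, as skaffold does
    when an explicit --tag overrides its tag policy."""
    return {name: f"{default_repo}/{name}:{tag}" for name in artifacts}

refs = generate_tags(
    ["am", "amster", "idm", "ds-cts", "ds-idrepo", "ig", "ds"],
    "gcr.io/engineeringpit/lodestar-images",
    "xlou",
)
```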
Starting build...
Building [ds]...
Sending build context to Docker daemon 115.2kB
Step 1/11 : FROM gcr.io/forgerock-io/ds/pit1:7.3.0-954176524467342398ae1a6e1012191e730e09e5
7.3.0-954176524467342398ae1a6e1012191e730e09e5: Pulling from forgerock-io/ds/pit1
Digest: sha256:bf5e04a8d63e63c57a74344cc34917fce869206ade490f33f9fec5c347039065
Status: Image is up to date for gcr.io/forgerock-io/ds/pit1:7.3.0-954176524467342398ae1a6e1012191e730e09e5
---> d21188fc9a8a
Step 2/11 : USER root
---> Using cache
---> 4939a79ba489
Step 3/11 : RUN apt-get update && apt-get install -y --no-install-recommends vim ncat dnsutils
---> Using cache
---> 94b1ffd3b8d9
Step 4/11 : USER forgerock
---> Using cache
---> 0802bb6b8a9f
Step 5/11 : ENV DS_DATA_DIR /opt/opendj/data
---> Using cache
---> d4a04d077f03
Step 6/11 : ENV PEM_KEYS_DIRECTORY "/var/run/secrets/keys/ds"
---> Using cache
---> bc28cb149063
Step 7/11 : ENV PEM_TRUSTSTORE_DIRECTORY "/var/run/secrets/keys/truststore"
---> Using cache
---> 4d4ebc9cf7eb
Step 8/11 : COPY --chown=forgerock:root default-scripts /opt/opendj/default-scripts
---> Using cache
---> dbb4b0aeac3d
Step 9/11 : COPY --chown=forgerock:root ldif-ext /opt/opendj/ldif-ext
---> Using cache
---> 00ae613b2cee
Step 10/11 : COPY --chown=forgerock:root *.sh /opt/opendj/
---> Using cache
---> fd39f0316661
Step 11/11 : RUN ./ds-setup.sh && rm ./ds-setup.sh && rm -fr ldif-ext
---> Using cache
---> 054e9ba32584
Successfully built 054e9ba32584
Successfully tagged gcr.io/engineeringpit/lodestar-images/ds:xlou
Build [ds] succeeded
Building [am]...
Sending build context to Docker daemon 6.144kB
Step 1/6 : FROM gcr.io/forgerock-io/am-cdk/pit1:7.3.0-88b44ce08784f4cc4fb13a9daa02952942b9b12b
7.3.0-88b44ce08784f4cc4fb13a9daa02952942b9b12b: Pulling from forgerock-io/am-cdk/pit1
Digest: sha256:f0dcb1e84d244b01ccbbbd18537496455e30e0e07a4df823d3c92c33753ac945
Status: Downloaded newer image for gcr.io/forgerock-io/am-cdk/pit1:7.3.0-88b44ce08784f4cc4fb13a9daa02952942b9b12b
---> aacd6678394a
Step 2/6 : ARG CONFIG_PROFILE=cdk
---> Running in a9b0b6e913b0
Removing intermediate container a9b0b6e913b0
---> eca578f3178f
Step 3/6 : RUN echo "\033[0;36m*** Building '${CONFIG_PROFILE}' profile ***\033[0m"
---> Running in 23b532925b9a
*** Building 'cdk' profile ***
Removing intermediate container 23b532925b9a
---> 817b0ffb2e88
Step 4/6 : COPY --chown=forgerock:root config-profiles/${CONFIG_PROFILE}/ /home/forgerock/openam/
---> 6dde99ad89d0
Step 5/6 : COPY --chown=forgerock:root *.sh /home/forgerock/
---> a86a629877a8
Step 6/6 : WORKDIR /home/forgerock
---> Running in 9fe8cfb5c9cb
Removing intermediate container 9fe8cfb5c9cb
---> a51b8db45926
Successfully built a51b8db45926
Successfully tagged gcr.io/engineeringpit/lodestar-images/am:xlou
The push refers to repository [gcr.io/engineeringpit/lodestar-images/am]
xlou: digest: sha256:3887eb83f22aeca4948dd94b2a06825bdf3cdb7b4a1b315a444de33c52604747 size: 7221
Build [am] succeeded
Building [amster]...
Sending build context to Docker daemon 54.27kB
Step 1/14 : FROM gcr.io/forgerock-io/amster/pit1:7.3.0-88b44ce08784f4cc4fb13a9daa02952942b9b12b
7.3.0-88b44ce08784f4cc4fb13a9daa02952942b9b12b: Pulling from forgerock-io/amster/pit1
Digest: sha256:9d053cc83285c4816e1efe31520c6ba1ffd7e411d9920457519eafd666303ced
Status: Downloaded newer image for gcr.io/forgerock-io/amster/pit1:7.3.0-88b44ce08784f4cc4fb13a9daa02952942b9b12b
---> 312f2c064df1
Step 2/14 : USER root
---> Running in 5d9ef8b22145
Removing intermediate container 5d9ef8b22145
---> 8f38e6f59794
Step 3/14 : COPY debian-buster-sources.list /etc/apt/sources.list
---> c16281509b95
Step 4/14 : ENV DEBIAN_FRONTEND=noninteractive
---> Running in ee72b9007c2c
Removing intermediate container ee72b9007c2c
---> cd2c5add3312
Step 5/14 : ENV APT_OPTS="--no-install-recommends --yes"
---> Running in c10d1524697e
Removing intermediate container c10d1524697e
---> baffa231e8c1
Step 6/14 : RUN apt-get update && apt-get install -y openldap-utils jq inotify-tools && apt-get clean && rm -r /var/lib/apt/lists /var/cache/apt/archives
---> Running in fecd0e5cf290
Get:1 http://security.debian.org/debian-security buster/updates InRelease [34.8 kB]
Hit:2 http://deb.debian.org/debian buster InRelease
Get:3 http://deb.debian.org/debian buster-updates InRelease [56.6 kB]
Get:4 http://security.debian.org/debian-security buster/updates/main amd64 Packages [357 kB]
Fetched 449 kB in 0s (1063 kB/s)
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
The following additional packages will be installed:
libinotifytools0 libjq1 libonig5
Suggested packages:
libsasl2-modules-gssapi-mit | libsasl2-modules-gssapi-heimdal
The following NEW packages will be installed:
inotify-tools jq ldap-utils libinotifytools0 libjq1 libonig5
0 upgraded, 6 newly installed, 0 to remove and 0 not upgraded.
Need to get 598 kB of archives.
After this operation, 1945 kB of additional disk space will be used.
Get:1 http://deb.debian.org/debian buster/main amd64 libinotifytools0 amd64 3.14-7 [18.7 kB]
Get:2 http://deb.debian.org/debian buster/main amd64 inotify-tools amd64 3.14-7 [25.5 kB]
Get:3 http://deb.debian.org/debian buster/main amd64 libonig5 amd64 6.9.1-1 [171 kB]
Get:4 http://deb.debian.org/debian buster/main amd64 libjq1 amd64 1.5+dfsg-2+b1 [124 kB]
Get:5 http://deb.debian.org/debian buster/main amd64 jq amd64 1.5+dfsg-2+b1 [59.4 kB]
Get:6 http://deb.debian.org/debian buster/main amd64 ldap-utils amd64 2.4.47+dfsg-3+deb10u7 [199 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 598 kB in 0s (1246 kB/s)
Selecting previously unselected package libinotifytools0:amd64.
(Reading database ... 9503 files and directories currently installed.)
Preparing to unpack .../0-libinotifytools0_3.14-7_amd64.deb ...
Unpacking libinotifytools0:amd64 (3.14-7) ...
Selecting previously unselected package inotify-tools.
Preparing to unpack .../1-inotify-tools_3.14-7_amd64.deb ...
Unpacking inotify-tools (3.14-7) ...
Selecting previously unselected package libonig5:amd64.
Preparing to unpack .../2-libonig5_6.9.1-1_amd64.deb ...
Unpacking libonig5:amd64 (6.9.1-1) ...
Selecting previously unselected package libjq1:amd64.
Preparing to unpack .../3-libjq1_1.5+dfsg-2+b1_amd64.deb ...
Unpacking libjq1:amd64 (1.5+dfsg-2+b1) ...
Selecting previously unselected package jq.
Preparing to unpack .../4-jq_1.5+dfsg-2+b1_amd64.deb ...
Unpacking jq (1.5+dfsg-2+b1) ...
Selecting previously unselected package ldap-utils.
Preparing to unpack .../5-ldap-utils_2.4.47+dfsg-3+deb10u7_amd64.deb ...
Unpacking ldap-utils (2.4.47+dfsg-3+deb10u7) ...
Setting up libinotifytools0:amd64 (3.14-7) ...
Setting up ldap-utils (2.4.47+dfsg-3+deb10u7) ...
Setting up inotify-tools (3.14-7) ...
Setting up libonig5:amd64 (6.9.1-1) ...
Setting up libjq1:amd64 (1.5+dfsg-2+b1) ...
Setting up jq (1.5+dfsg-2+b1) ...
Processing triggers for libc-bin (2.28-10+deb10u1) ...
Removing intermediate container fecd0e5cf290
---> 2a80d4a361b7
Step 7/14 : USER forgerock
---> Running in d4b01b061cba
Removing intermediate container d4b01b061cba
---> be99ad056582
Step 8/14 : ENV SERVER_URI /am
---> Running in abf2e9dfbbc4
Removing intermediate container abf2e9dfbbc4
---> 2b17a2b949fd
Step 9/14 : ARG CONFIG_PROFILE=cdk
---> Running in 5607f5bcdc37
Removing intermediate container 5607f5bcdc37
---> b01a428393cb
Step 10/14 : RUN echo "\033[0;36m*** Building '${CONFIG_PROFILE}' profile ***\033[0m"
---> Running in 4d771ecb11cd
*** Building 'cdk' profile ***
Removing intermediate container 4d771ecb11cd
---> 2d2b8616604b
Step 11/14 : COPY --chown=forgerock:root config-profiles/${CONFIG_PROFILE}/ /opt/amster
---> cbab08cca8e8
Step 12/14 : COPY --chown=forgerock:root scripts /opt/amster
---> 07050089bc01
Step 13/14 : RUN chmod 777 /opt/amster
---> Running in d4bbce056214
Removing intermediate container d4bbce056214
---> 030593266afc
Step 14/14 : ENTRYPOINT [ "/opt/amster/docker-entrypoint.sh" ]
---> Running in eca2d73e1b10
Removing intermediate container eca2d73e1b10
---> bc5790844e69
Successfully built bc5790844e69
Successfully tagged gcr.io/engineeringpit/lodestar-images/amster:xlou
The push refers to repository [gcr.io/engineeringpit/lodestar-images/amster]
xlou: digest: sha256:1c1bd4de39ddf6d204e22bccdc332864962702834bcba090084a6408ecb9fb15 size: 3465
Build [amster] succeeded
Building [idm]...
Sending build context to Docker daemon 312.8kB
Step 1/8 : FROM gcr.io/forgerock-io/idm-cdk/pit1:7.3.0-0382f3752dde05542858b37bb329fb76ce8fc972
7.3.0-0382f3752dde05542858b37bb329fb76ce8fc972: Pulling from forgerock-io/idm-cdk/pit1
Digest: sha256:8cae31faa10272657c5903849aa4e2f45f5cf599d0634f82e4d7621b1971e211
Status: Image is up to date for gcr.io/forgerock-io/idm-cdk/pit1:7.3.0-0382f3752dde05542858b37bb329fb76ce8fc972
---> 7085673f39c5
Step 2/8 : COPY debian-buster-sources.list /etc/apt/sources.list
---> Using cache
---> 6883c7d2c7c1
Step 3/8 : RUN rm -f bundle/org.apache.felix.webconsole*.jar && rm -f bundle/openidm-felix-webconsole-*.jar
---> Using cache
---> fcd331960e04
Step 4/8 : ENV JAVA_OPTS -XX:MaxRAMPercentage=65 -XX:InitialRAMPercentage=65 -XX:MaxTenuringThreshold=1 -Djava.security.egd=file:/dev/urandom -XshowSettings:vm -XX:+PrintFlagsFinal
---> Using cache
---> 24269e4efac7
Step 5/8 : ARG CONFIG_PROFILE=cdk
---> Using cache
---> e40f1629d749
Step 6/8 : RUN echo "\033[0;36m*** Building '${CONFIG_PROFILE}' profile ***\033[0m"
---> Using cache
---> d5ce20005d29
Step 7/8 : COPY --chown=forgerock:root config-profiles/${CONFIG_PROFILE}/ /opt/openidm
---> Using cache
---> 5fa7bf701673
Step 8/8 : COPY --chown=forgerock:root . /opt/openidm
---> Using cache
---> 66648be71bd1
Successfully built 66648be71bd1
Successfully tagged gcr.io/engineeringpit/lodestar-images/idm:xlou
Build [idm] succeeded
Building [ds-cts]...
Sending build context to Docker daemon 78.85kB
Step 1/10 : FROM gcr.io/forgerock-io/ds/pit1:7.3.0-954176524467342398ae1a6e1012191e730e09e5
7.3.0-954176524467342398ae1a6e1012191e730e09e5: Pulling from forgerock-io/ds/pit1
Digest: sha256:bf5e04a8d63e63c57a74344cc34917fce869206ade490f33f9fec5c347039065
Status: Image is up to date for gcr.io/forgerock-io/ds/pit1:7.3.0-954176524467342398ae1a6e1012191e730e09e5
---> d21188fc9a8a
Step 2/10 : USER root
---> Using cache
---> 4939a79ba489
Step 3/10 : COPY debian-buster-sources.list /etc/apt/sources.list
---> Using cache
---> 560841a2b411
Step 4/10 : RUN chown -R forgerock:root /opt/opendj
---> Using cache
---> 67f4f7e20101
Step 5/10 : USER forgerock
---> Using cache
---> b4947fbb998d
Step 6/10 : COPY --chown=forgerock:root common /opt/opendj/
---> Using cache
---> b00586c00dc0
Step 7/10 : COPY --chown=forgerock:root cts /opt/opendj/
---> Using cache
---> 3382a07cda3d
Step 8/10 : COPY --chown=forgerock:root scripts /opt/opendj/scripts
---> Using cache
---> b98c30cffac3
Step 9/10 : ARG profile_version
---> Using cache
---> 1b24dd1bca85
Step 10/10 : RUN bin/setup.sh && bin/relax-security-settings.sh && rm bin/setup.sh bin/relax-security-settings.sh
---> Using cache
---> 28036c7e0be7
Successfully built 28036c7e0be7
Successfully tagged gcr.io/engineeringpit/lodestar-images/ds-cts:xlou
Build [ds-cts] succeeded
Building [ds-idrepo]...
Sending build context to Docker daemon 117.8kB
Step 1/10 : FROM gcr.io/forgerock-io/ds/pit1:7.3.0-954176524467342398ae1a6e1012191e730e09e5
7.3.0-954176524467342398ae1a6e1012191e730e09e5: Pulling from forgerock-io/ds/pit1
Digest: sha256:bf5e04a8d63e63c57a74344cc34917fce869206ade490f33f9fec5c347039065
Status: Image is up to date for gcr.io/forgerock-io/ds/pit1:7.3.0-954176524467342398ae1a6e1012191e730e09e5
---> d21188fc9a8a
Step 2/10 : COPY debian-buster-sources.list /etc/apt/sources.list
---> Using cache
---> 6ca1ce5e5350
Step 3/10 : WORKDIR /opt/opendj
---> Using cache
---> 6177fd5eb549
Step 4/10 : COPY --chown=forgerock:root common /opt/opendj/
---> Using cache
---> bda9e03ab4da
Step 5/10 : COPY --chown=forgerock:root idrepo /opt/opendj/
---> Using cache
---> fa5df5933d02
Step 6/10 : COPY --chown=forgerock:root scripts /opt/opendj/scripts
---> Using cache
---> 6dc859d50e89
Step 7/10 : COPY --chown=forgerock:root uma /opt/opendj/uma
---> Using cache
---> f4c2b69c3f4f
Step 8/10 : COPY --chown=forgerock:root idrepo/*.ldif /var/tmp/
---> Using cache
---> b91abf7cb2f1
Step 9/10 : RUN chmod +w template/setup-profiles/AM/config/6.5/base-entries.ldif && cat scripts/external-am-datastore.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_audit.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_pending_requests.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_resource_set_labels.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_resource_sets.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat /var/tmp/alpha_bravo.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && chmod +w template/setup-profiles/AM/identity-store/7.0/base-entries.ldif && cat /var/tmp/orgs.ldif >> template/setup-profiles/AM/identity-store/7.0/base-entries.ldif && rm /var/tmp/*ldif
---> Using cache
---> 49c8336da7d8
Step 10/10 : RUN bin/setup.sh && bin/relax-security-settings.sh && rm bin/setup.sh bin/relax-security-settings.sh
---> Using cache
---> 24564a22e097
Successfully built 24564a22e097
Successfully tagged gcr.io/engineeringpit/lodestar-images/ds-idrepo:xlou
Build [ds-idrepo] succeeded
Building [ig]...
Sending build context to Docker daemon 29.18kB
Step 1/6 : FROM gcr.io/forgerock-io/ig/pit1:7.3.0-latest-postcommit
7.3.0-latest-postcommit: Pulling from forgerock-io/ig/pit1
Digest: sha256:802f7b9a306b49351d0e9b31ba8161833b56615837a17e19f2bb3a94ba65f61f
Status: Image is up to date for gcr.io/forgerock-io/ig/pit1:7.3.0-latest-postcommit
---> 91ea29e45e06
Step 2/6 : COPY debian-buster-sources.list /etc/apt/sources.list
---> Using cache
---> 966ad3fa78b1
Step 3/6 : ARG CONFIG_PROFILE=cdk
---> Using cache
---> 1bcd83aaefbd
Step 4/6 : RUN echo "\033[0;36m*** Building '${CONFIG_PROFILE}' profile ***\033[0m"
---> Using cache
---> 067d117cd18b
Step 5/6 : COPY --chown=forgerock:root config-profiles/${CONFIG_PROFILE}/ /var/ig
---> Using cache
---> 5423f638f13c
Step 6/6 : COPY --chown=forgerock:root . /var/ig
---> Using cache
---> 51ea6de2f7b3
Successfully built 51ea6de2f7b3
Successfully tagged gcr.io/engineeringpit/lodestar-images/ig:xlou
Build [ig] succeeded
Help improve Skaffold with our 2-minute anonymous survey: run 'skaffold survey'
To help improve the quality of this product, we collect anonymized usage data. For details on what is tracked and how we use this data, visit . This data is handled in accordance with our privacy policy
You may choose to opt out of this collection by running the following command:
skaffold config set --global collect-metrics false
[run_command]: skaffold deploy --build-artifacts=/mnt/disks/data/xslou/lodestar-fork/ext/pre-built-images/medium.json --profile medium --config=/tmp/tmphqi1ph0d --label skaffold.dev/profile=medium --label skaffold.dev/run-id=xlou --force=false --status-check=true --namespace=xlou
Tags used in deployment:
- am -> gcr.io/engineeringpit/lodestar-images/am:xlou@sha256:3887eb83f22aeca4948dd94b2a06825bdf3cdb7b4a1b315a444de33c52604747
- amster -> gcr.io/engineeringpit/lodestar-images/amster:xlou@sha256:1c1bd4de39ddf6d204e22bccdc332864962702834bcba090084a6408ecb9fb15
- idm -> gcr.io/engineeringpit/lodestar-images/idm:xlou@sha256:d3a47be8a66d395e2a0a039fc9a0dffe6efa7ae2cc93581d174fd7cb75a5b39c
- ds-cts -> gcr.io/engineeringpit/lodestar-images/ds-cts:xlou@sha256:e06fbcc879ec2ad0185f85766beabedec1477089649d97f0fc8190d980a8433b
- ds-idrepo -> gcr.io/engineeringpit/lodestar-images/ds-idrepo:xlou@sha256:4917982bd5e2cb55758c633ddbd817732b9ee769574ab960b6b097f459e61842
- ig -> gcr.io/engineeringpit/lodestar-images/ig:xlou@sha256:af99fa440a524eedcdd95935861e1c1b88c1e2eb9f0069a62b01a671ebfa8009
- ds -> gcr.io/engineeringpit/lodestar-images/ds:xlou@sha256:7547fb93e0a685e6f9cdb857b98cb1d601fc47d3c8f5bcfca936533ed0098039
Starting deploy...
- configmap/idm created
- configmap/idm-logging-properties created
- configmap/platform-config created
- secret/cloud-storage-credentials-cts created
- secret/cloud-storage-credentials-idrepo created
- service/admin-ui created
- service/am created
- service/ds-cts created
- service/ds-idrepo created
- service/end-user-ui created
- service/idm created
- service/login-ui created
- deployment.apps/admin-ui created
- deployment.apps/am created
- deployment.apps/end-user-ui created
- deployment.apps/idm created
- deployment.apps/login-ui created
- statefulset.apps/ds-cts created
- statefulset.apps/ds-idrepo created
- poddisruptionbudget.policy/am created
- poddisruptionbudget.policy/ds-idrepo created
- poddisruptionbudget.policy/idm created
- poddisruptionbudget.policy/ig created
- Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
- poddisruptionbudget.policy/ds-cts created
- job.batch/amster created
- job.batch/ldif-importer created
- ingress.networking.k8s.io/forgerock created
- ingress.networking.k8s.io/ig-web created
Waiting for deployments to stabilize...
- xlou:deployment/login-ui is ready. [6/7 deployment(s) still pending]
- xlou:deployment/end-user-ui is ready. [5/7 deployment(s) still pending]
- xlou:deployment/admin-ui is ready. [4/7 deployment(s) still pending]
- xlou:deployment/am: waiting for init container fbc-init to start
- xlou:pod/am-bb5f7795c-26mmt: waiting for init container fbc-init to start
- xlou:pod/am-bb5f7795c-bzfpt: waiting for init container fbc-init to start
- xlou:pod/am-bb5f7795c-wfmph: waiting for init container fbc-init to start
- xlou:deployment/idm:
- xlou:statefulset/ds-cts: waiting for init container initialize to start
- xlou:pod/ds-cts-0: waiting for init container initialize to start
- xlou:statefulset/ds-idrepo: waiting for init container initialize to start
- xlou:pod/ds-idrepo-0: waiting for init container initialize to start
- xlou:statefulset/ds-cts: unable to determine current service state of pod "ds-cts-1"
- xlou:pod/ds-cts-1: unable to determine current service state of pod "ds-cts-1"
- xlou:statefulset/ds-cts: waiting for init container initialize to start
- xlou:pod/ds-cts-1: waiting for init container initialize to start
- xlou:statefulset/ds-idrepo:
- xlou:statefulset/ds-idrepo: waiting for init container initialize to start
- xlou:pod/ds-idrepo-1: waiting for init container initialize to start
- xlou:statefulset/ds-cts: unable to determine current service state of pod "ds-cts-2"
- xlou:pod/ds-cts-2: unable to determine current service state of pod "ds-cts-2"
- xlou:statefulset/ds-idrepo: unable to determine current service state of pod "ds-idrepo-2"
- xlou:pod/ds-idrepo-2: unable to determine current service state of pod "ds-idrepo-2"
- xlou:deployment/idm is ready. [3/7 deployment(s) still pending]
- xlou:statefulset/ds-cts: Waiting for 1 pods to be ready...
- xlou:statefulset/ds-cts is ready. [2/7 deployment(s) still pending]
- xlou:deployment/am is ready. [1/7 deployment(s) still pending]
- xlou:statefulset/ds-idrepo is ready.
Deployments stabilized in 2 minutes 1.127 seconds
****************************** Initializing component pods for AM ******************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app=am -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
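The `[loop_until]` entries above all follow the same retry pattern: run a command, and re-run it at a fixed interval until its return code is one of the expected values or the deadline expires. The helper itself is internal to the harness; the following is a minimal illustrative reimplementation of that pattern, not its actual source (the name and defaults are taken from the log's `max_time`/`interval`/`expected_rc` parameters):

```python
import subprocess
import time

def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,)):
    """Run `cmd` in a shell repeatedly until its return code is in
    `expected_rc` or `max_time` seconds have elapsed. Returns the
    final CompletedProcess either way."""
    deadline = time.monotonic() + max_time
    while True:
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        if result.returncode in expected_rc or time.monotonic() >= deadline:
            return result
        time.sleep(interval)

# A command that succeeds immediately needs no retries:
result = loop_until("true", max_time=10, interval=1)
print(result.returncode)  # 0
```

A real `kubectl` invocation would be passed in place of `true`; on timeout the caller inspects the last return code and output, which is what the `OK (rc = 0)` lines record.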
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app=am -o jsonpath={.items[*].metadata.name}` | grep 3
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
am-bb5f7795c-26mmt am-bb5f7795c-bzfpt am-bb5f7795c-wfmph
--- stderr ---
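The "Get pod list" check pipes the jsonpath output through `awk -F" " "{print NF}"` and greps for the expected replica count, i.e. it counts whitespace-separated pod names. The same comparison, sketched in Python against the output string captured in the log (the function name is illustrative):

```python
def pod_count_matches(jsonpath_output: str, expected: int) -> bool:
    """Equivalent of `awk -F" " "{print NF}" ... | grep N`: count the
    whitespace-separated pod names and compare with the replica total."""
    return len(jsonpath_output.split()) == expected

# Pod names captured from the log above:
pods = "am-bb5f7795c-26mmt am-bb5f7795c-bzfpt am-bb5f7795c-wfmph"
print(pod_count_matches(pods, 3))  # True
```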
--------------- Check pod am-bb5f7795c-26mmt is running ---------------
[loop_until]: kubectl --namespace=xlou get pods am-bb5f7795c-26mmt -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods am-bb5f7795c-26mmt -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod am-bb5f7795c-26mmt -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2022-09-20T21:16:43Z
--- stderr ---
-------- Check pod am-bb5f7795c-26mmt filesystem is accessible --------
[loop_until]: kubectl --namespace=xlou exec am-bb5f7795c-26mmt -c openam -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------- Check pod am-bb5f7795c-26mmt restart count -------------
[loop_until]: kubectl --namespace=xlou get pod am-bb5f7795c-26mmt -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod am-bb5f7795c-26mmt has been restarted 0 times.
--------------- Check pod am-bb5f7795c-bzfpt is running ---------------
[loop_until]: kubectl --namespace=xlou get pods am-bb5f7795c-bzfpt -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods am-bb5f7795c-bzfpt -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod am-bb5f7795c-bzfpt -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2022-09-20T21:16:43Z
--- stderr ---
-------- Check pod am-bb5f7795c-bzfpt filesystem is accessible --------
[loop_until]: kubectl --namespace=xlou exec am-bb5f7795c-bzfpt -c openam -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------- Check pod am-bb5f7795c-bzfpt restart count -------------
[loop_until]: kubectl --namespace=xlou get pod am-bb5f7795c-bzfpt -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod am-bb5f7795c-bzfpt has been restarted 0 times.
--------------- Check pod am-bb5f7795c-wfmph is running ---------------
[loop_until]: kubectl --namespace=xlou get pods am-bb5f7795c-wfmph -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods am-bb5f7795c-wfmph -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod am-bb5f7795c-wfmph -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2022-09-20T21:16:43Z
--- stderr ---
-------- Check pod am-bb5f7795c-wfmph filesystem is accessible --------
[loop_until]: kubectl --namespace=xlou exec am-bb5f7795c-wfmph -c openam -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------- Check pod am-bb5f7795c-wfmph restart count -------------
[loop_until]: kubectl --namespace=xlou get pod am-bb5f7795c-wfmph -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod am-bb5f7795c-wfmph has been restarted 0 times.
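Every check in this log follows the same `[loop_until]` pattern: run a command, test its return code (and optionally grep its output for a pattern), and retry every `interval` seconds until `max_time` elapses. A minimal sketch of that retry loop in Python — the function name and parameters mirror the log's labels, not pyrock's actual API:

```python
import time

def loop_until(run_check, max_time=180, interval=5):
    """Retry run_check() until it returns True or max_time seconds pass."""
    deadline = time.monotonic() + max_time
    while True:
        if run_check():
            return True   # rc matched / expected pattern found
        if time.monotonic() >= deadline:
            return False  # timed out
        time.sleep(interval)

# Example: a check that only succeeds on the third attempt.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    return attempts["n"] >= 3

ok = loop_until(flaky, max_time=10, interval=0)
```

In the real runner, `run_check` would wrap a `kubectl … | grep …` subprocess call and compare its return code against `expected_rc`.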
**************************** Initializing component pods for AMSTER ****************************
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou --field-selector status.phase!=Failed get pods -l app=amster -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
amster-fw654
--- stderr ---
***************************** Initializing component pods for IDM *****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployment -l app=idm -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app=idm -o jsonpath={.items[*].metadata.name}` | grep 2
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
idm-56b6859478-24l6h idm-56b6859478-xzhz5
--- stderr ---
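The `awk -F" " "{print NF}"` pipeline above counts whitespace-separated pod names from the jsonpath output and greps for the expected replica count. The same counting logic in Python (a sketch of the comparison, not the pyrock implementation):

```python
def pod_count(jsonpath_output: str) -> int:
    """Count whitespace-separated pod names, mirroring awk '{print NF}'."""
    return len(jsonpath_output.split())

# Pod list taken from the IDM output above; the deployment expects 2 replicas.
names = "idm-56b6859478-24l6h idm-56b6859478-xzhz5"
count = pod_count(names)
```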
-------------- Check pod idm-56b6859478-24l6h is running --------------
[loop_until]: kubectl --namespace=xlou get pods idm-56b6859478-24l6h -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods idm-56b6859478-24l6h -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod idm-56b6859478-24l6h -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2022-09-20T21:16:43Z
--- stderr ---
------- Check pod idm-56b6859478-24l6h filesystem is accessible -------
[loop_until]: kubectl --namespace=xlou exec idm-56b6859478-24l6h -c openidm -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-11
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------ Check pod idm-56b6859478-24l6h restart count ------------
[loop_until]: kubectl --namespace=xlou get pod idm-56b6859478-24l6h -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod idm-56b6859478-24l6h has been restarted 0 times.
-------------- Check pod idm-56b6859478-xzhz5 is running --------------
[loop_until]: kubectl --namespace=xlou get pods idm-56b6859478-xzhz5 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods idm-56b6859478-xzhz5 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod idm-56b6859478-xzhz5 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2022-09-20T21:16:43Z
--- stderr ---
------- Check pod idm-56b6859478-xzhz5 filesystem is accessible -------
[loop_until]: kubectl --namespace=xlou exec idm-56b6859478-xzhz5 -c openidm -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-11
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------ Check pod idm-56b6859478-xzhz5 restart count ------------
[loop_until]: kubectl --namespace=xlou get pod idm-56b6859478-xzhz5 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod idm-56b6859478-xzhz5 has been restarted 0 times.
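The restart-count checks above read `{.status.containerStatuses[*].restartCount}`, which kubectl prints as space-separated values, one per container. The pods here are single-container, so the output is just `0`; a sketch that also handles the multi-container case (an assumption — the log only shows one value per pod) by summing:

```python
def total_restarts(jsonpath_output: str) -> int:
    """Sum per-container restartCount values, e.g. '0' or '0 2'."""
    return sum(int(n) for n in jsonpath_output.split())

# Single-container pod, as in the log above.
restarts = total_restarts("0")
```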
**************************** Initializing component pods for DS-CTS ****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-cts -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app=ds-cts -o jsonpath={.items[*].metadata.name}` | grep 3
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
ds-cts-0 ds-cts-1 ds-cts-2
--- stderr ---
-------------------- Check pod ds-cts-0 is running --------------------
[loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-cts-0 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2022-09-20T21:16:48Z
--- stderr ---
------------- Check pod ds-cts-0 filesystem is accessible -------------
[loop_until]: kubectl --namespace=xlou exec ds-cts-0 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
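The filesystem check execs `ls /` inside the container and greps for `bin`; a full root listing proves the container filesystem is mounted and responsive. An exact-match Python variant of that predicate (stricter than `grep "bin"`, which would also match `sbin` — a deliberate simplification for the sketch):

```python
def filesystem_accessible(ls_output: str, marker: str = "bin") -> bool:
    """True if the root listing contains the marker entry exactly."""
    return marker in ls_output.split()

# Abbreviated listing taken from the ds-cts-0 output above.
listing = "Dockerfile.java-17 bin boot dev etc home lib var"
accessible = filesystem_accessible(listing)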
------------------ Check pod ds-cts-0 restart count ------------------
[loop_until]: kubectl --namespace=xlou get pod ds-cts-0 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-cts-0 has been restarted 0 times.
-------------------- Check pod ds-cts-1 is running --------------------
[loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-cts-1 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2022-09-20T21:17:20Z
--- stderr ---
------------- Check pod ds-cts-1 filesystem is accessible -------------
[loop_until]: kubectl --namespace=xlou exec ds-cts-1 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------------ Check pod ds-cts-1 restart count ------------------
[loop_until]: kubectl --namespace=xlou get pod ds-cts-1 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-cts-1 has been restarted 0 times.
-------------------- Check pod ds-cts-2 is running --------------------
[loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-cts-2 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2022-09-20T21:17:52Z
--- stderr ---
------------- Check pod ds-cts-2 filesystem is accessible -------------
[loop_until]: kubectl --namespace=xlou exec ds-cts-2 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------------ Check pod ds-cts-2 restart count ------------------
[loop_until]: kubectl --namespace=xlou get pod ds-cts-2 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-cts-2 has been restarted 0 times.
************************** Initializing component pods for DS-IDREPO **************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-idrepo -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app=ds-idrepo -o jsonpath={.items[*].metadata.name}` | grep 3
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
ds-idrepo-0 ds-idrepo-1 ds-idrepo-2
--- stderr ---
------------------ Check pod ds-idrepo-0 is running ------------------
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2022-09-20T21:16:48Z
--- stderr ---
----------- Check pod ds-idrepo-0 filesystem is accessible -----------
[loop_until]: kubectl --namespace=xlou exec ds-idrepo-0 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
----------------- Check pod ds-idrepo-0 restart count -----------------
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-idrepo-0 has been restarted 0 times.
------------------ Check pod ds-idrepo-1 is running ------------------
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2022-09-20T21:17:29Z
--- stderr ---
----------- Check pod ds-idrepo-1 filesystem is accessible -----------
[loop_until]: kubectl --namespace=xlou exec ds-idrepo-1 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
----------------- Check pod ds-idrepo-1 restart count -----------------
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-idrepo-1 has been restarted 0 times.
------------------ Check pod ds-idrepo-2 is running ------------------
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2022-09-20T21:18:10Z
--- stderr ---
----------- Check pod ds-idrepo-2 filesystem is accessible -----------
[loop_until]: kubectl --namespace=xlou exec ds-idrepo-2 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
----------------- Check pod ds-idrepo-2 restart count -----------------
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-idrepo-2 has been restarted 0 times.
************************* Initializing component pods for END-USER-UI *************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].metadata.name}` | grep 1
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
end-user-ui-58f7b744b5-lw6lc
--- stderr ---
---------- Check pod end-user-ui-58f7b744b5-lw6lc is running ----------
[loop_until]: kubectl --namespace=xlou get pods end-user-ui-58f7b744b5-lw6lc -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods end-user-ui-58f7b744b5-lw6lc -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod end-user-ui-58f7b744b5-lw6lc -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2022-09-20T21:16:43Z
--- stderr ---
--- Check pod end-user-ui-58f7b744b5-lw6lc filesystem is accessible ---
[loop_until]: kubectl --namespace=xlou exec end-user-ui-58f7b744b5-lw6lc -c end-user-ui -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin
boot
dev
docker-entrypoint.d
docker-entrypoint.sh
entrypoint.sh
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
variable_replacement.sh
--- stderr ---
-------- Check pod end-user-ui-58f7b744b5-lw6lc restart count --------
[loop_until]: kubectl --namespace=xlou get pod end-user-ui-58f7b744b5-lw6lc -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod end-user-ui-58f7b744b5-lw6lc has been restarted 0 times.
*************************** Initializing component pods for LOGIN-UI ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].metadata.name}` | grep 1
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
login-ui-7678dc66c-ld6xb
--- stderr ---
------------ Check pod login-ui-7678dc66c-ld6xb is running ------------
[loop_until]: kubectl --namespace=xlou get pods login-ui-7678dc66c-ld6xb -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods login-ui-7678dc66c-ld6xb -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod login-ui-7678dc66c-ld6xb -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2022-09-20T21:16:44Z
--- stderr ---
----- Check pod login-ui-7678dc66c-ld6xb filesystem is accessible -----
[loop_until]: kubectl --namespace=xlou exec login-ui-7678dc66c-ld6xb -c login-ui -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin
boot
dev
docker-entrypoint.d
docker-entrypoint.sh
entrypoint.sh
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
variable_replacement.sh
--- stderr ---
---------- Check pod login-ui-7678dc66c-ld6xb restart count ----------
[loop_until]: kubectl --namespace=xlou get pod login-ui-7678dc66c-ld6xb -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod login-ui-7678dc66c-ld6xb has been restarted 0 times.
*************************** Initializing component pods for ADMIN-UI ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].metadata.name}` | grep 1
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
admin-ui-9d465f5b9-wkdt8
--- stderr ---
------------ Check pod admin-ui-9d465f5b9-wkdt8 is running ------------
[loop_until]: kubectl --namespace=xlou get pods admin-ui-9d465f5b9-wkdt8 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods admin-ui-9d465f5b9-wkdt8 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod admin-ui-9d465f5b9-wkdt8 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2022-09-20T21:16:43Z
--- stderr ---
----- Check pod admin-ui-9d465f5b9-wkdt8 filesystem is accessible -----
[loop_until]: kubectl --namespace=xlou exec admin-ui-9d465f5b9-wkdt8 -c admin-ui -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin
boot
dev
docker-entrypoint.d
docker-entrypoint.sh
entrypoint.sh
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
variable_replacement.sh
--- stderr ---
---------- Check pod admin-ui-9d465f5b9-wkdt8 restart count ----------
[loop_until]: kubectl --namespace=xlou get pod admin-ui-9d465f5b9-wkdt8 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod admin-ui-9d465f5b9-wkdt8 has been restarted 0 times.
******************************* Checking AM component is running *******************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app=am -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
-------------- Waiting for 3 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get deployments am -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:3 replicas:3"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:3 replicas:3
--- stderr ---
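The component-level readiness waits above render a jsonpath template into a `ready:N replicas:N` string and grep it against the expected counts. The comparison reduces to this (a sketch of the logic, not pyrock's code):

```python
def deployment_ready(ready_replicas: int, replicas: int, expected: int) -> bool:
    """Mirror the 'ready:N replicas:N' grep used for deployment checks."""
    status = f"ready:{ready_replicas} replicas:{replicas}"
    return status == f"ready:{expected} replicas:{expected}"

# AM above reports ready:3 replicas:3 against an expected count of 3.
am_ok = deployment_ready(3, 3, 3)
```

The StatefulSet checks later in the log use the same idea with an extra `current:{.status.currentReplicas}` field.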
***************************** Checking AMSTER component is running *****************************
------------------ Waiting for Amster job to finish ------------------
--------------------- Get expected number of pods ---------------------
-------------- Waiting for 1 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get jobs amster -o jsonpath="{.status.succeeded}" | grep "1"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
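Unlike the Deployments, amster runs as a Kubernetes Job, so completion is judged from `.status.succeeded` rather than ready replicas. A small sketch of that decision (illustrative, not the runner's implementation):

```python
def job_succeeded(jsonpath_output: str, expected: int = 1) -> bool:
    """A Job is done once .status.succeeded reaches the expected count."""
    value = jsonpath_output.strip()
    return value.isdigit() and int(value) >= expected

# The amster job above reports succeeded: 1.
amster_done = job_succeeded("1")
```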
****************************** Checking IDM component is running ******************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployment -l app=idm -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2
--- stderr ---
-------------- Waiting for 2 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get deployment idm -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:2 replicas:2"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:2 replicas:2
--- stderr ---
***************************** Checking DS-CTS component is running *****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-cts -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
-------------- Waiting for 3 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get statefulsets ds-cts -o jsonpath="current:{.status.currentReplicas} ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "current:3 ready:3 replicas:3"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
current:3 ready:3 replicas:3
--- stderr ---
*************************** Checking DS-IDREPO component is running ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-idrepo -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
-------------- Waiting for 3 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get statefulsets ds-idrepo -o jsonpath="current:{.status.currentReplicas} ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "current:3 ready:3 replicas:3"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
current:3 ready:3 replicas:3
--- stderr ---
************************** Checking END-USER-UI component is running **************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
-------------- Waiting for 1 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get deployments end-user-ui -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:1 replicas:1"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:1 replicas:1
--- stderr ---
**************************** Checking LOGIN-UI component is running ****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
-------------- Waiting for 1 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get deployments login-ui -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:1 replicas:1"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:1 replicas:1
--- stderr ---
**************************** Checking ADMIN-UI component is running ****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
-------------- Waiting for 1 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get deployments admin-ui -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:1 replicas:1"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:1 replicas:1
--- stderr ---
****************************** Livecheck stage: After deployment ******************************
------------------------ Running AM livecheck ------------------------
Livecheck to https://xlou.iam.xlou-cdm.engineeringpit.com/am/json/health/ready
[http_cmd]: curl -L -X GET "https://xlou.iam.xlou-cdm.engineeringpit.com/am/json/health/ready"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
[loop_until]: kubectl --namespace=xlou get secret am-env-secrets -o jsonpath="{.data.AM_PASSWORDS_AMADMIN_CLEAR}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ZzlNZWR6Z2JnSVZwSk13NnFPNXJFQm51
--- stderr ---
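kubectl prints secret data base64-encoded, so the stdout above has to be decoded before use. Decoding it yields exactly the amadmin password used in the authentication request that follows:

```python
import base64

# value printed by the kubectl get secret command above
encoded = "ZzlNZWR6Z2JnSVZwSk13NnFPNXJFQm51"
# base64-decode to recover the clear-text amadmin password
password = base64.b64decode(encoded).decode("utf-8")
# → g9MedzgbgIVpJMw6qO5rEBnu
```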
Authenticate user via REST
[http_cmd]: curl -H "X-OpenAM-Username: amadmin" -H "X-OpenAM-Password: g9MedzgbgIVpJMw6qO5rEBnu" -H "Content-Type: application/json" -H "Accept-API-Version: resource=2.0, protocol=1.0" -L -X POST "https://xlou.iam.xlou-cdm.engineeringpit.com/am/json/authenticate?realm=/"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
{
"tokenId": "sWY8WTRCeUMsHqpjA30PsDvwVAA.*AAJTSQACMDIAAlNLABxTdFgzZzkwc2phZVJnRW4xUUUxWmQ5aDM2YUU9AAR0eXBlAANDVFMAAlMxAAIwMQ..*",
"successUrl": "/am/console",
"realm": "/"
}
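The SSO token returned here is what later requests present as the `iPlanetDirectoryPro` cookie. Extracting it is a plain JSON lookup (the token value below is a placeholder, not a real session token):

```python
import json

# shape of the authenticate response body, with a placeholder tokenId
body = '{"tokenId": "EXAMPLE-SSO-TOKEN", "successUrl": "/am/console", "realm": "/"}'
# the tokenId field carries the SSO session token
token = json.loads(body)["tokenId"]
```

In the log, the real token is then passed via `--cookie "iPlanetDirectoryPro=..."` on the serverinfo request further down.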
---------------------- Running AMSTER livecheck ----------------------
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou --field-selector status.phase!=Failed get pods -l app=amster -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
amster-fw654
--- stderr ---
Amster import completed. AM is now configured
Amster livecheck passed
------------------------ Running IDM livecheck ------------------------
Livecheck to https://xlou.iam.xlou-cdm.engineeringpit.com/openidm/info/ping
[loop_until]: kubectl --namespace=xlou get secret idm-env-secrets -o jsonpath="{.data.OPENIDM_ADMIN_PASSWORD}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
dXo0MGp4OUpXdVlFSW9QbkJ5d1ozdDRo
--- stderr ---
Set admin password: uz40jx9JWuYEIoPnBywZ3t4h
[http_cmd]: curl -H "X-OpenIDM-Username: anonymous" -H "X-OpenIDM-Password: anonymous" -L -X GET "https://xlou.iam.xlou-cdm.engineeringpit.com/openidm/info/ping"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
{
"_id": "",
"_rev": "",
"shortDesc": "OpenIDM ready",
"state": "ACTIVE_READY"
}
---------------------- Running DS-CTS livecheck ----------------------
Livecheck to ds-cts-0
[loop_until]: kubectl --namespace=xlou get secret ds-passwords -o jsonpath="{.data.dirmanager\.pw}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ZVdRblN6bkFIaTRnUHNKODhtdHpmS3AxRFBNc2xRdXg=
--- stderr ---
[run_command]: kubectl --namespace=xlou exec ds-cts-0 -c ds -- ldapsearch --noPropertiesFile -p 1389 --useStartTls --trustAll -D "uid=admin" -w "eWQnSznAHi4gPsJ88mtzfKp1DPMslQux" -b "" -s base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
Livecheck to ds-cts-1
[run_command]: kubectl --namespace=xlou exec ds-cts-1 -c ds -- ldapsearch --noPropertiesFile -p 1389 --useStartTls --trustAll -D "uid=admin" -w "eWQnSznAHi4gPsJ88mtzfKp1DPMslQux" -b "" -s base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
Livecheck to ds-cts-2
[run_command]: kubectl --namespace=xlou exec ds-cts-2 -c ds -- ldapsearch --noPropertiesFile -p 1389 --useStartTls --trustAll -D "uid=admin" -w "eWQnSznAHi4gPsJ88mtzfKp1DPMslQux" -b "" -s base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
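Each DS livecheck above runs an `ldapsearch` against the root DSE and succeeds when the output contains `alive: true`. A sketch of the check applied to the command's stdout (helper name is illustrative):

```python
def ds_alive(ldapsearch_stdout: str) -> bool:
    """Return True when a DS root-DSE search reports `alive: true`,
    as in the ldapsearch output shown in the log."""
    return any(line.strip() == "alive: true"
               for line in ldapsearch_stdout.splitlines())
```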
--------------------- Running DS-IDREPO livecheck ---------------------
Livecheck to ds-idrepo-0
[loop_until]: kubectl --namespace=xlou get secret ds-passwords -o jsonpath="{.data.dirmanager\.pw}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ZVdRblN6bkFIaTRnUHNKODhtdHpmS3AxRFBNc2xRdXg=
--- stderr ---
[run_command]: kubectl --namespace=xlou exec ds-idrepo-0 -c ds -- ldapsearch --noPropertiesFile -p 1389 --useStartTls --trustAll -D "uid=admin" -w "eWQnSznAHi4gPsJ88mtzfKp1DPMslQux" -b "" -s base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
Livecheck to ds-idrepo-1
[run_command]: kubectl --namespace=xlou exec ds-idrepo-1 -c ds -- ldapsearch --noPropertiesFile -p 1389 --useStartTls --trustAll -D "uid=admin" -w "eWQnSznAHi4gPsJ88mtzfKp1DPMslQux" -b "" -s base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
Livecheck to ds-idrepo-2
[run_command]: kubectl --namespace=xlou exec ds-idrepo-2 -c ds -- ldapsearch --noPropertiesFile -p 1389 --useStartTls --trustAll -D "uid=admin" -w "eWQnSznAHi4gPsJ88mtzfKp1DPMslQux" -b "" -s base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
-------------------- Running END-USER-UI livecheck --------------------
Livecheck to https://xlou.iam.xlou-cdm.engineeringpit.com/enduser
[http_cmd]: curl -L -X GET "https://xlou.iam.xlou-cdm.engineeringpit.com/enduser"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
Identity Management
[]
--------------------- Running LOGIN-UI livecheck ---------------------
Livecheck to https://xlou.iam.xlou-cdm.engineeringpit.com/am/XUI
[http_cmd]: curl -L -X GET "https://xlou.iam.xlou-cdm.engineeringpit.com/am/XUI"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
Login
[]
--------------------- Running ADMIN-UI livecheck ---------------------
Livecheck to https://xlou.iam.xlou-cdm.engineeringpit.com/platform
[http_cmd]: curl -L -X GET "https://xlou.iam.xlou-cdm.engineeringpit.com/platform"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
Platform Admin
[]
LIVECHECK SUCCEEDED
****************************** Initializing component pods for AM ******************************
----------------------- Get AM software version -----------------------
Getting product version from https://xlou.iam.xlou-cdm.engineeringpit.com/am/json/serverinfo/version
- Login amadmin to get token
Authenticate user via REST
[http_cmd]: curl -H "X-OpenAM-Username: amadmin" -H "X-OpenAM-Password: g9MedzgbgIVpJMw6qO5rEBnu" -H "Content-Type: application/json" -H "Accept-API-Version: resource=2.0, protocol=1.0" -L -X POST "https://xlou.iam.xlou-cdm.engineeringpit.com/am/json/authenticate?realm=/"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
{
"tokenId": "YmkSpA1bIxS7fnHmWa5VW1D8a5A.*AAJTSQACMDIAAlNLABxncmFqaSthUmxNbDZZaVNuVXNpN013ZWU4bXc9AAR0eXBlAANDVFMAAlMxAAIwMQ..*",
"successUrl": "/am/console",
"realm": "/"
}
[http_cmd]: curl -L -X GET --cookie "amlbcookie=01" --cookie "iPlanetDirectoryPro=YmkSpA1bIxS7fnHmWa5VW1D8a5A.*AAJTSQACMDIAAlNLABxncmFqaSthUmxNbDZZaVNuVXNpN013ZWU4bXc9AAR0eXBlAANDVFMAAlMxAAIwMQ..*" --cookie "route=1663708784.048.20251.183706|f60edb382037eb2df1e800d563ad78a7" "https://xlou.iam.xlou-cdm.engineeringpit.com/am/json/serverinfo/version"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
{
"_id": "version",
"_rev": "-189581226",
"version": "7.3.0-SNAPSHOT",
"fullVersion": "ForgeRock Access Management 7.3.0-SNAPSHOT Build 88b44ce08784f4cc4fb13a9daa02952942b9b12b (2022-September-20 13:43)",
"revision": "88b44ce08784f4cc4fb13a9daa02952942b9b12b",
"date": "2022-September-20 13:43"
}
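The serverinfo payload above carries the AM version and build revision. One way such a response might be condensed into a one-line version summary (abridged to the fields used; the summary format is an assumption):

```python
import json

# serverinfo fields as logged above, reduced to what the summary needs
info = json.loads(
    '{"version": "7.3.0-SNAPSHOT", '
    '"revision": "88b44ce08784f4cc4fb13a9daa02952942b9b12b"}'
)
# short form: product version plus abbreviated git revision
summary = f"AM {info['version']} ({info['revision'][:7]})"
```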
**************************** Initializing component pods for AMSTER ****************************
***************************** Initializing component pods for IDM *****************************
---------------------- Get IDM software version ----------------------
Getting product version from https://xlou.iam.xlou-cdm.engineeringpit.com/openidm/info/version
[http_cmd]: curl -H "X-OpenIDM-Username: anonymous" -H "X-OpenIDM-Password: anonymous" -L -X GET "https://xlou.iam.xlou-cdm.engineeringpit.com/openidm/info/version"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
{
"_id": "version",
"productVersion": "7.3.0-SNAPSHOT",
"productBuildDate": "20220912204640",
"productRevision": "0382f37"
}
**************************** Initializing component pods for DS-CTS ****************************
--------------------- Get DS-CTS software version ---------------------
[loop_until]: kubectl --namespace=xlou exec ds-cts-0 -c ds -- find /opt/opendj -name opendj-core.jar
[loop_until]: (max_time=30, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
/opt/opendj/lib/opendj-core.jar
--- stderr ---
[loop_until]: kubectl --namespace=xlou cp ds-cts-0:/opt/opendj/lib/opendj-core.jar /tmp/ds-cts_info/opendj-core.jar -c ds
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
************************** Initializing component pods for DS-IDREPO **************************
------------------- Get DS-IDREPO software version -------------------
[loop_until]: kubectl --namespace=xlou exec ds-idrepo-0 -c ds -- find /opt/opendj -name opendj-core.jar
[loop_until]: (max_time=30, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
/opt/opendj/lib/opendj-core.jar
--- stderr ---
[loop_until]: kubectl --namespace=xlou cp ds-idrepo-0:/opt/opendj/lib/opendj-core.jar /tmp/ds-idrepo_info/opendj-core.jar -c ds
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
************************* Initializing component pods for END-USER-UI *************************
------------------ Get END-USER-UI software version ------------------
[loop_until]: kubectl --namespace=xlou exec end-user-ui-58f7b744b5-lw6lc -c end-user-ui -- find /usr/share/nginx/html -name chunk-vendors.*.js
[loop_until]: (max_time=30, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
/usr/share/nginx/html/js/chunk-vendors.f08372bd.js
--- stderr ---
[loop_until]: kubectl --namespace=xlou cp end-user-ui-58f7b744b5-lw6lc:/usr/share/nginx/html/js/chunk-vendors.f08372bd.js /tmp/end-user-ui_info/chunk-vendors.f08372bd.js -c end-user-ui
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
*************************** Initializing component pods for LOGIN-UI ***************************
-------------------- Get LOGIN-UI software version --------------------
[loop_until]: kubectl --namespace=xlou exec login-ui-7678dc66c-ld6xb -c login-ui -- find /usr/share/nginx/html -name chunk-vendors.*.js
[loop_until]: (max_time=30, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
/usr/share/nginx/html/js/chunk-vendors.a2f4657e.js
--- stderr ---
[loop_until]: kubectl --namespace=xlou cp login-ui-7678dc66c-ld6xb:/usr/share/nginx/html/js/chunk-vendors.a2f4657e.js /tmp/login-ui_info/chunk-vendors.a2f4657e.js -c login-ui
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
*************************** Initializing component pods for ADMIN-UI ***************************
-------------------- Get ADMIN-UI software version --------------------
[loop_until]: kubectl --namespace=xlou exec admin-ui-9d465f5b9-wkdt8 -c admin-ui -- find /usr/share/nginx/html -name chunk-vendors.*.js
[loop_until]: (max_time=30, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
/usr/share/nginx/html/js/chunk-vendors.90a388ad.js
--- stderr ---
[loop_until]: kubectl --namespace=xlou cp admin-ui-9d465f5b9-wkdt8:/usr/share/nginx/html/js/chunk-vendors.90a388ad.js /tmp/admin-ui_info/chunk-vendors.90a388ad.js -c admin-ui
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
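The DS and UI version steps above only show the artifact being located and copied out of the pod; the extraction itself is not logged. For a jar like opendj-core.jar, a plausible follow-up is to read `Implementation-Version` from its manifest (this mechanism is assumed, not shown in the log):

```python
import io
import zipfile

def jar_implementation_version(jar_bytes: bytes) -> str:
    """Read Implementation-Version from a jar's META-INF/MANIFEST.MF.
    Illustrative only: the log does not show how the version is read."""
    with zipfile.ZipFile(io.BytesIO(jar_bytes)) as jar:
        manifest = jar.read("META-INF/MANIFEST.MF").decode("utf-8")
    for line in manifest.splitlines():
        if line.startswith("Implementation-Version:"):
            return line.split(":", 1)[1].strip()
    raise KeyError("Implementation-Version not found in manifest")
```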
====================================================================================================
====================== Admin password for AM is: g9MedzgbgIVpJMw6qO5rEBnu ======================
====================================================================================================
====================================================================================================
===================== Admin password for IDM is: uz40jx9JWuYEIoPnBywZ3t4h =====================
====================================================================================================
====================================================================================================
================ Admin password for DS-CTS is: eWQnSznAHi4gPsJ88mtzfKp1DPMslQux ================
====================================================================================================
====================================================================================================
============== Admin password for DS-IDREPO is: eWQnSznAHi4gPsJ88mtzfKp1DPMslQux ==============
====================================================================================================
*************************************** Dumping pod list ***************************************
Dumping pod list to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/authn_rest/pod-logs/stack/20220920-211953-after-deployment/_pod-list.txt
****************************** Initializing component pods for AM ******************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app=am -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app=am -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
am-bb5f7795c-26mmt am-bb5f7795c-bzfpt am-bb5f7795c-wfmph
--- stderr ---
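The two jsonpath queries above return space-separated values, so pod discovery reduces to splitting stdout and comparing the count against the deployment's replica spec:

```python
# stdout of the replica-count and pod-list queries above
replicas = int("3")
pod_names = "am-bb5f7795c-26mmt am-bb5f7795c-bzfpt am-bb5f7795c-wfmph".split()

# all expected AM pods were found
assert len(pod_names) == replicas
```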
**************************** Initializing component pods for AMSTER ****************************
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app=amster -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
amster-fw654
--- stderr ---
***************************** Initializing component pods for IDM *****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployment -l app=idm -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app=idm -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
idm-56b6859478-24l6h idm-56b6859478-xzhz5
--- stderr ---
**************************** Initializing component pods for DS-CTS ****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-cts -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app=ds-cts -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ds-cts-0 ds-cts-1 ds-cts-2
--- stderr ---
************************** Initializing component pods for DS-IDREPO **************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-idrepo -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app=ds-idrepo -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ds-idrepo-0 ds-idrepo-1 ds-idrepo-2
--- stderr ---
************************* Initializing component pods for END-USER-UI *************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
end-user-ui-58f7b744b5-lw6lc
--- stderr ---
*************************** Initializing component pods for LOGIN-UI ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
login-ui-7678dc66c-ld6xb
--- stderr ---
*************************** Initializing component pods for ADMIN-UI ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
admin-ui-9d465f5b9-wkdt8
--- stderr ---
*********************************** Dumping components logs ***********************************
------------------------- Dumping logs for AM -------------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/authn_rest/pod-logs/stack/20220920-211953-after-deployment/am-bb5f7795c-26mmt.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/authn_rest/pod-logs/stack/20220920-211953-after-deployment/am-bb5f7795c-bzfpt.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/authn_rest/pod-logs/stack/20220920-211953-after-deployment/am-bb5f7795c-wfmph.txt
Check pod logs for errors
----------------------- Dumping logs for AMSTER -----------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/authn_rest/pod-logs/stack/20220920-211953-after-deployment/amster-fw654.txt
Check pod logs for errors
------------------------ Dumping logs for IDM ------------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/authn_rest/pod-logs/stack/20220920-211953-after-deployment/idm-56b6859478-24l6h.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/authn_rest/pod-logs/stack/20220920-211953-after-deployment/idm-56b6859478-xzhz5.txt
Check pod logs for errors
----------------------- Dumping logs for DS-CTS -----------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/authn_rest/pod-logs/stack/20220920-211953-after-deployment/ds-cts-0.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/authn_rest/pod-logs/stack/20220920-211953-after-deployment/ds-cts-1.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/authn_rest/pod-logs/stack/20220920-211953-after-deployment/ds-cts-2.txt
Check pod logs for errors
--------------------- Dumping logs for DS-IDREPO ---------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/authn_rest/pod-logs/stack/20220920-211953-after-deployment/ds-idrepo-0.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/authn_rest/pod-logs/stack/20220920-211953-after-deployment/ds-idrepo-1.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/authn_rest/pod-logs/stack/20220920-211953-after-deployment/ds-idrepo-2.txt
Check pod logs for errors
-------------------- Dumping logs for END-USER-UI --------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/authn_rest/pod-logs/stack/20220920-211953-after-deployment/end-user-ui-58f7b744b5-lw6lc.txt
Check pod logs for errors
---------------------- Dumping logs for LOGIN-UI ----------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/authn_rest/pod-logs/stack/20220920-211953-after-deployment/login-ui-7678dc66c-ld6xb.txt
Check pod logs for errors
---------------------- Dumping logs for ADMIN-UI ----------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/authn_rest/pod-logs/stack/20220920-211953-after-deployment/admin-ui-9d465f5b9-wkdt8.txt
Check pod logs for errors
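Each "Check pod logs for errors" step presumably scans the dumped log text for error markers before the task declares success. A sketch under that assumption (the marker list is illustrative, not the tool's actual configuration):

```python
# illustrative error markers; the real check's marker list is not shown in the log
ERROR_MARKERS = ("ERROR", "SEVERE", "Exception")

def find_log_errors(log_text: str):
    """Return lines from a dumped pod log that contain an error marker."""
    return [line for line in log_text.splitlines()
            if any(marker in line for marker in ERROR_MARKERS)]
```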
[20/Sep/2022 21:20:15] - INFO: Deployment successful
________________________________________________________________________________
[20/Sep/2022 21:20:15] Deploy_all_forgerock_components post : Post method
________________________________________________________________________________
Setting result to PASS
Task has been successfully stopped