--Task--
name: Deploy_all_forgerock_components
enabled: True
class_name: DeployComponentsTask
source_name: controller
source_namespace: >default<
target_name: controller
target_namespace: >default<
start: 0
stop: None
timeout: no timeout
loop: False
interval: None
dependencies: ['Enable_prometheus_admin_api']
wait_for: []
options: {}
group_name: None
Current dir: /mnt/disks/data/xslou/lodestar-fork/pyrock
________________________________________________________________________________
[21/Feb/2023 00:34:34] Deploy_all_forgerock_components pre : Initialising task parameters
________________________________________________________________________________
task will be executed on controller (localhost)
________________________________________________________________________________
[21/Feb/2023 00:34:34] Deploy_all_forgerock_components step1 : Deploy components
________________________________________________________________________________
******************************** Cleaning up existing namespace ********************************
------------------ Deleting secret agent controller ------------------
[loop_until]: kubectl --namespace=xlou delete sac --all
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secretagentconfiguration.secret-agent.secrets.forgerock.io "forgerock-sac" deleted
--- stderr ---
----------------------- Deleting all resources -----------------------
[loop_until]: kubectl --namespace=xlou delete all --all --grace-period=0 --force
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
pod "admin-ui-796fdc7d9d-4b582" force deleted
pod "am-685d4f4864-87fzd" force deleted
pod "am-685d4f4864-cnzx5" force deleted
pod "am-685d4f4864-rlt4n" force deleted
pod "amster-fcrhl" force deleted
pod "ds-cts-0" force deleted
pod "ds-cts-1" force deleted
pod "ds-cts-2" force deleted
pod "ds-idrepo-0" force deleted
pod "ds-idrepo-1" force deleted
pod "ds-idrepo-2" force deleted
pod "end-user-ui-5bd969d66b-zh7nn" force deleted
pod "idm-6ddf478c88-2zbbh" force deleted
pod "idm-6ddf478c88-df6x9" force deleted
pod "ldif-importer-phb4r" force deleted
pod "login-ui-6799664bf6-b9dd2" force deleted
pod "overseer-0-66fbf64c6d-rd2s6" force deleted
service "admin-ui" force deleted
service "am" force deleted
service "ds-cts" force deleted
service "ds-idrepo" force deleted
service "end-user-ui" force deleted
service "idm" force deleted
service "login-ui" force deleted
service "overseer-0" force deleted
deployment.apps "admin-ui" force deleted
deployment.apps "am" force deleted
deployment.apps "end-user-ui" force deleted
deployment.apps "idm" force deleted
deployment.apps "login-ui" force deleted
deployment.apps "overseer-0" force deleted
statefulset.apps "ds-cts" force deleted
statefulset.apps "ds-idrepo" force deleted
job.batch "amster" force deleted
job.batch "ldif-importer" force deleted
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
[loop_until]: kubectl -n xlou get pods | grep "No resources found"
[loop_until]: (max_time=360, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 10s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 21s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 31s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 41s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 52s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 1m 02s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 1m 13s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 1m 23s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 1m 33s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 1m 44s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 1m 54s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 2m 04s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
No resources found in xlou namespace.
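The wait loop above retries `kubectl get pods` until the output contains "No resources found" or the 360s budget runs out. A minimal sketch of that `loop_until` pattern (the function name and structure are illustrative, not the actual lodestar implementation):

```shell
# Illustrative retry helper: run a command repeatedly until its combined
# output matches a pattern, or give up after max_time seconds.
loop_until() {
    max_time=$1; interval=$2; pattern=$3; shift 3
    elapsed=0
    while [ "$elapsed" -lt "$max_time" ]; do
        if "$@" 2>&1 | grep -q "$pattern"; then
            return 0            # expected pattern found
        fi
        sleep "$interval"       # not found yet - retry
        elapsed=$((elapsed + interval))
    done
    return 1                    # timed out
}

# e.g. wait up to 360s, polling every 10s, for the namespace to be empty:
# loop_until 360 10 "No resources found" kubectl -n xlou get pods
```

Note that the pattern lands on stderr here ("No resources found in xlou namespace."), which is why the sketch merges stderr into the grep with `2>&1`.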
------------------------- Deleting configmap -------------------------
[loop_until]: kubectl --namespace=xlou get configmap -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
amster-files amster-retain dev-utils idm idm-logging-properties kube-root-ca.crt overseer-config-0 platform-config
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap amster-files --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "amster-files" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap amster-retain --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "amster-retain" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap dev-utils --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "dev-utils" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap idm --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "idm" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap idm-logging-properties --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "idm-logging-properties" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap kube-root-ca.crt --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "kube-root-ca.crt" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap overseer-config-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "overseer-config-0" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete configmap platform-config --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "platform-config" deleted
--- stderr ---
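The configmap pass above first fetches every name as one space-separated jsonpath string, then deletes them one at a time. The iteration itself is plain shell word-splitting; a sketch with the kubectl call left as a comment so the loop stays runnable on its own:

```shell
# Names come back from the jsonpath query as one space-separated string:
names="amster-files amster-retain dev-utils idm idm-logging-properties kube-root-ca.crt overseer-config-0 platform-config"

# Delete one at a time; --ignore-not-found keeps the pass idempotent.
deleted=""
for cm in $names; do    # intentionally unquoted: split on whitespace
    # kubectl --namespace=xlou delete configmap "$cm" --ignore-not-found
    deleted="$deleted$cm "
done
```

The same fetch-then-iterate shape is reused below for secrets, ingresses, and PVCs.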
--------------------------- Deleting secret ---------------------------
[loop_until]: kubectl --namespace=xlou get secret -o jsonpath='{.items[?(@.type=="Opaque")].metadata.name}'
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
amster-env-secrets cloud-storage-credentials-cts cloud-storage-credentials-idrepo
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete secret amster-env-secrets --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secret "amster-env-secrets" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete secret cloud-storage-credentials-cts --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secret "cloud-storage-credentials-cts" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete secret cloud-storage-credentials-idrepo --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secret "cloud-storage-credentials-idrepo" deleted
--- stderr ---
-------------------------- Deleting ingress --------------------------
[loop_until]: kubectl --namespace=xlou get ingress -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
forgerock ig overseer-0
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete ingress forgerock --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ingress.networking.k8s.io "forgerock" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete ingress ig --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ingress.networking.k8s.io "ig" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete ingress overseer-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ingress.networking.k8s.io "overseer-0" deleted
--- stderr ---
---------------------------- Deleting pvc ----------------------------
[loop_until]: kubectl --namespace=xlou get pvc -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
data-ds-cts-0 data-ds-cts-1 data-ds-cts-2 data-ds-idrepo-0 data-ds-idrepo-1 data-ds-idrepo-2 overseer-0
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-cts-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-cts-0" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-cts-1 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-cts-1" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-cts-2 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-cts-2" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-idrepo-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-idrepo-0" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-idrepo-1 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-idrepo-1" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc data-ds-idrepo-2 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-idrepo-2" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pvc overseer-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "overseer-0" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pv ds-backup-xlou --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
warning: deleting cluster-scoped resources, not scoped to the provided namespace
----------------- Deleting admin clusterrolebindings -----------------
[loop_until]: kubectl get clusterrolebinding -o jsonpath="{range .items[?(@.subjects[0].namespace=='xlou')]}{.metadata.name} {end}"
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
k8s-svc-acct-crb-xlou-0
--- stderr ---
Deleting clusterrolebinding k8s-svc-acct-crb-xlou-0 associated with xlou namespace
[loop_until]: kubectl delete clusterrolebinding k8s-svc-acct-crb-xlou-0
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
clusterrolebinding.rbac.authorization.k8s.io "k8s-svc-acct-crb-xlou-0" deleted
--- stderr ---
------------------------- Deleting namespace -------------------------
[loop_until]: kubectl delete namespaces xlou --ignore-not-found --grace-period=0 --force
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
namespace "xlou" force deleted
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
[loop_until]: awk -F" " "{print NF}" <<< `kubectl get namespace xlou --ignore-not-found` | grep 0
[loop_until]: (max_time=600, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
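The namespace-gone check above is worth unpacking: the backtick substitution flattens the `kubectl get namespace xlou --ignore-not-found` output onto one line, awk prints its whitespace-separated field count (`NF`), and `grep 0` succeeds only when that output was empty, i.e. the namespace no longer exists. The same field count demonstrated on plain strings (a `printf` pipe stands in for bash's `<<<` herestring):

```shell
# Count whitespace-separated fields on a line, as the awk step above does.
count_fields() {
    printf '%s\n' "$1" | awk '{print NF}'
}

count_fields ""                    # empty kubectl output -> 0 -> gone
count_fields "xlou Active 5d"      # namespace still listed -> 3
```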
------------------ Deleting secret agent controller ------------------
[loop_until]: kubectl --namespace=xlou-rcs delete sac --all
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secretagentconfiguration.secret-agent.secrets.forgerock.io "forgerock-sac" deleted
--- stderr ---
----------------------- Deleting all resources -----------------------
[loop_until]: kubectl --namespace=xlou-rcs delete all --all --grace-period=0 --force
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
pod "ds-cts-0" force deleted
pod "ds-idrepo-0" force deleted
pod "ldif-importer-ghwgt" force deleted
pod "rcs-84d8994499-w5bvs" force deleted
service "ds-cts" force deleted
service "ds-idrepo" force deleted
service "rcs-service" force deleted
deployment.apps "rcs" force deleted
statefulset.apps "ds-cts" force deleted
statefulset.apps "ds-idrepo" force deleted
job.batch "ldif-importer" force deleted
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
[loop_until]: kubectl -n xlou-rcs get pods | grep "No resources found"
[loop_until]: (max_time=360, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 10s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 20s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 31s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 41s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 51s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 1m 02s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 1m 12s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 1m 22s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 1m 33s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 1m 43s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 1m 54s (rc=0) - failed to find expected output: No resources found - retry
[loop_until]: Function succeeded after 2m 04s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
No resources found in xlou-rcs namespace.
------------------------- Deleting configmap -------------------------
[loop_until]: kubectl --namespace=xlou-rcs get configmap -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
dev-utils kube-root-ca.crt platform-config rcs-deployment-config-856hdhf5k4 rcsprops
--- stderr ---
[loop_until]: kubectl --namespace=xlou-rcs delete configmap dev-utils --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "dev-utils" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou-rcs delete configmap kube-root-ca.crt --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "kube-root-ca.crt" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou-rcs delete configmap platform-config --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "platform-config" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou-rcs delete configmap rcs-deployment-config-856hdhf5k4 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "rcs-deployment-config-856hdhf5k4" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou-rcs delete configmap rcsprops --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
configmap "rcsprops" deleted
--- stderr ---
--------------------------- Deleting secret ---------------------------
[loop_until]: kubectl --namespace=xlou-rcs get secret -o jsonpath='{.items[?(@.type=="Opaque")].metadata.name}'
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
cloud-storage-credentials-cts cloud-storage-credentials-idrepo ds-passwords
--- stderr ---
[loop_until]: kubectl --namespace=xlou-rcs delete secret cloud-storage-credentials-cts --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secret "cloud-storage-credentials-cts" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou-rcs delete secret cloud-storage-credentials-idrepo --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secret "cloud-storage-credentials-idrepo" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou-rcs delete secret ds-passwords --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secret "ds-passwords" deleted
--- stderr ---
-------------------------- Deleting ingress --------------------------
[loop_until]: kubectl --namespace=xlou-rcs get ingress -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
forgerock ig rcs-ingress
--- stderr ---
[loop_until]: kubectl --namespace=xlou-rcs delete ingress forgerock --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ingress.networking.k8s.io "forgerock" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou-rcs delete ingress ig --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ingress.networking.k8s.io "ig" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou-rcs delete ingress rcs-ingress --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ingress.networking.k8s.io "rcs-ingress" deleted
--- stderr ---
---------------------------- Deleting pvc ----------------------------
[loop_until]: kubectl --namespace=xlou-rcs get pvc -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
data-ds-cts-0 data-ds-idrepo-0
--- stderr ---
[loop_until]: kubectl --namespace=xlou-rcs delete pvc data-ds-cts-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-cts-0" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou-rcs delete pvc data-ds-idrepo-0 --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
persistentvolumeclaim "data-ds-idrepo-0" deleted
--- stderr ---
[loop_until]: kubectl --namespace=xlou-rcs delete pv ds-backup-xlou-rcs --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
warning: deleting cluster-scoped resources, not scoped to the provided namespace
----------------- Deleting admin clusterrolebindings -----------------
[loop_until]: kubectl get clusterrolebinding -o jsonpath="{range .items[?(@.subjects[0].namespace=='xlou-rcs')]}{.metadata.name} {end}"
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
------------------------- Deleting namespace -------------------------
[loop_until]: kubectl delete namespaces xlou-rcs --ignore-not-found --grace-period=0 --force
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
namespace "xlou-rcs" force deleted
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
[loop_until]: awk -F" " "{print NF}" <<< `kubectl get namespace xlou-rcs --ignore-not-found` | grep 0
[loop_until]: (max_time=600, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
************************************* Creating deployment *************************************
Creating normal (forgeops) type deployment for deployment: stack
------- Custom component configuration present. Loading values -------
------------------ Deleting secret agent controller ------------------
[loop_until]: kubectl --namespace=xlou delete sac --all
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
No resources found
--- stderr ---
----------------------- Deleting all resources -----------------------
[loop_until]: kubectl --namespace=xlou delete all --all --grace-period=0 --force
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
No resources found
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
[loop_until]: kubectl -n xlou get pods | grep "No resources found"
[loop_until]: (max_time=360, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
No resources found in xlou namespace.
------------------------- Deleting configmap -------------------------
[loop_until]: kubectl --namespace=xlou get configmap -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
--------------------------- Deleting secret ---------------------------
[loop_until]: kubectl --namespace=xlou get secret -o jsonpath='{.items[?(@.type=="Opaque")].metadata.name}'
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
-------------------------- Deleting ingress --------------------------
[loop_until]: kubectl --namespace=xlou get ingress -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
---------------------------- Deleting pvc ----------------------------
[loop_until]: kubectl --namespace=xlou get pvc -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
[loop_until]: kubectl --namespace=xlou delete pv ds-backup-xlou --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
warning: deleting cluster-scoped resources, not scoped to the provided namespace
----------------- Deleting admin clusterrolebindings -----------------
[loop_until]: kubectl get clusterrolebinding -o jsonpath="{range .items[?(@.subjects[0].namespace=='xlou')]}{.metadata.name} {end}"
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
------------------------- Deleting namespace -------------------------
[loop_until]: kubectl delete namespaces xlou --ignore-not-found --grace-period=0 --force
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
[loop_until]: awk -F" " "{print NF}" <<< `kubectl get namespace xlou --ignore-not-found` | grep 0
[loop_until]: (max_time=600, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
[loop_until]: kubectl create namespace xlou
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
namespace/xlou created
--- stderr ---
[loop_until]: kubectl label namespace xlou self-service=false timeout=48
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
namespace/xlou labeled
--- stderr ---
************************************ Configuring components ************************************
Applying custom configuration and dockerfiles to the deployment, plus custom lodestar component configuration
Copying /mnt/disks/data/xslou/lodestar-fork/shared/config/custom/pta_cdm/docker/am to /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am
Copying /mnt/disks/data/xslou/lodestar-fork/shared/config/custom/pta_cdm/docker/idm to /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm
Copying /mnt/disks/data/xslou/lodestar-fork/shared/config/custom/pta_cdm/docker/amster to /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster
Copying /mnt/disks/data/xslou/lodestar-fork/shared/config/custom/pta_cdm/kustomize/base to /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base
No custom features provided. Nothing to do.
---- Updating components image tag/repo from platform-images repo ----
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --clean
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Cleaning up.
[WARNING] Found nothing to clean.
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref master-ready-for-dev-pipelines --products ds
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images.
[INFO] Repo is at c447dfd1ca0f291f841da8677b4eafb41c901c2f on branch HEAD
[INFO] Updating products ds
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/dsutil/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-b492db3c2465a45eac7e7cbb8af094f4e03404cb
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/ds-new/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-b492db3c2465a45eac7e7cbb8af094f4e03404cb
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/proxy/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-b492db3c2465a45eac7e7cbb8af094f4e03404cb
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-b492db3c2465a45eac7e7cbb8af094f4e03404cb
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-b492db3c2465a45eac7e7cbb8af094f4e03404cb
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref master-ready-for-dev-pipelines --products am
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images.
[INFO] Found existing files, attempting to not clone.
[INFO] Repo is at c447dfd1ca0f291f841da8677b4eafb41c901c2f on branch HEAD
[INFO] Updating products am
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am/Dockerfile to use gcr.io/forgerock-io/am-cdk/pit1:7.3.0-b4ef885d337fd02a1123f20359a23fe51f7131c8
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref master-ready-for-dev-pipelines --products amster
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images.
[INFO] Found existing files, attempting to not clone.
[INFO] Repo is at c447dfd1ca0f291f841da8677b4eafb41c901c2f on branch HEAD
[INFO] Updating products amster
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster/Dockerfile to use gcr.io/forgerock-io/amster/pit1:7.3.0-b4ef885d337fd02a1123f20359a23fe51f7131c8
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref master-ready-for-dev-pipelines --products idm
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images.
[INFO] Found existing files, attempting to not clone.
[INFO] Repo is at c447dfd1ca0f291f841da8677b4eafb41c901c2f on branch HEAD
[INFO] Updating products idm
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm/Dockerfile to use gcr.io/forgerock-io/idm-cdk/pit1:7.3.0-1c6dbcf77830663a27ee09c2d7a47bb991ce195a
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref master-ready-for-dev-pipelines --products ui
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images.
[INFO] Found existing files, attempting to not clone.
[INFO] Repo is at c447dfd1ca0f291f841da8677b4eafb41c901c2f on branch HEAD
[INFO] Updating products ui
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui to use gcr.io/forgerock-io/platform-admin-ui/docker-build:7.2.0-d04e0042b22789615a62d45f68cb8da6f82e986a
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui to use gcr.io/forgerock-io/platform-enduser-ui/docker-build:7.2.0-d04e0042b22789615a62d45f68cb8da6f82e986a
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui to use gcr.io/forgerock-io/platform-login-ui/docker-build:7.2.0-d04e0042b22789615a62d45f68cb8da6f82e986a
--- stderr ---
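The five `set-images` runs above differ only in the `--products` argument; the step is effectively one loop over the product list. A sketch (the real `bin/set-images` invocation is left commented, so only the loop itself runs):

```shell
# Same --ref for every product; only --products varies.
ref=master-ready-for-dev-pipelines
products=""
for p in ds am amster idm ui; do
    # bin/set-images --ref "$ref" --products "$p"
    products="$products$p "
done
```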
- Checking if component Dockerfile/kustomize needs additional update -
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker ds cts
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts
--- stderr ---
Checking if ds-cts dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts/Dockerfile: gcr.io/forgerock-io/ds/pit1:7.3.0-b492db3c2465a45eac7e7cbb8af094f4e03404cb
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker ds idrepo
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo
--- stderr ---
Checking if ds-idrepo dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo/Dockerfile: gcr.io/forgerock-io/ds/pit1:7.3.0-b492db3c2465a45eac7e7cbb8af094f4e03404cb
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker am
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am
--- stderr ---
Checking if am dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am/Dockerfile: gcr.io/forgerock-io/am-cdk/pit1:7.3.0-b4ef885d337fd02a1123f20359a23fe51f7131c8
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker amster
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster
--- stderr ---
Checking if amster dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster/Dockerfile: gcr.io/forgerock-io/amster/pit1:7.3.0-b4ef885d337fd02a1123f20359a23fe51f7131c8
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker idm
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm
--- stderr ---
Checking if idm dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm/Dockerfile: gcr.io/forgerock-io/idm-cdk/pit1:7.3.0-1c6dbcf77830663a27ee09c2d7a47bb991ce195a
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path kustomize base end-user-ui
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui
--- stderr ---
Checking if end-user-ui kustomize file /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui/kustomization.yaml needs additional update (custom tag/repo from config.yaml or image name resolution)
Read image line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui/kustomization.yaml: gcr.io/forgerock-io/platform-enduser-ui/docker-build:7.2.0-d04e0042b22789615a62d45f68cb8da6f82e986a
No need to update the image line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui/kustomization.yaml
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path kustomize base login-ui
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui
--- stderr ---
Checking if login-ui kustomize file /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui/kustomization.yaml needs additional update (custom tag/repo from config.yaml or image name resolution)
Read image line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui/kustomization.yaml: gcr.io/forgerock-io/platform-login-ui/docker-build:7.2.0-d04e0042b22789615a62d45f68cb8da6f82e986a
No need to update the image line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui/kustomization.yaml
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path kustomize base admin-ui
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui
--- stderr ---
Checking if admin-ui kustomize file /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui/kustomization.yaml needs additional update (custom tag/repo from config.yaml or image name resolution)
Read image line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui/kustomization.yaml: gcr.io/forgerock-io/platform-admin-ui/docker-build:7.2.0-d04e0042b22789615a62d45f68cb8da6f82e986a
No need to update the image line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui/kustomization.yaml
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path kustomize overlay medium
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/overlay/medium
--- stderr ---
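For reference, the "needs additional update" checks above all follow the same pattern: read the image out of the component's `FROM` (or kustomize image) line and only rewrite the file when it differs from the target. A minimal sketch with illustrative paths and image names (the real logic lives in the pyrock/set-images tooling and may differ):

```shell
# Hypothetical re-creation of the FROM-line check; dockerfile path and
# image names are made up for illustration.
dockerfile=$(mktemp)
printf 'FROM gcr.io/example/ds/pit1:7.3.0-abc\nUSER forgerock\n' > "$dockerfile"
target="gcr.io/example/ds/pit1:7.3.0-abc"

# Read the image from the first FROM line.
current=$(awk '/^FROM /{print $2; exit}' "$dockerfile")

if [ "$current" = "$target" ]; then
  echo "No need to update FROM line for $dockerfile"
else
  # Rewrite the FROM line in place only when the image actually changed.
  sed -i "s|^FROM .*|FROM $target|" "$dockerfile"
fi
```

Skipping the rewrite when the image already matches is what keeps docker's build cache warm in the later `forgeops build` steps.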
[loop_until]: kubectl --namespace=xlou delete -f /tmp/tmps3at1ikc
[loop_until]: (max_time=180, interval=5, expected_rc=[0, 1])
[loop_until]: OK (rc = 1)
--- stdout ---
--- stderr ---
Error from server (NotFound): error when deleting "/tmp/tmps3at1ikc": secrets "sslcert" not found
[loop_until]: kubectl --namespace=xlou apply -f /tmp/tmps3at1ikc
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secret/sslcert created
--- stderr ---
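The `[loop_until]` wrapper seen throughout this log reruns a command on an interval until its exit code lands in an expected set or a deadline passes (note the delete steps accept rc 1, so "NotFound" is not treated as failure). A small shell sketch of the assumed semantics; the real helper is part of pyrock and may differ:

```shell
# loop_until MAX_TIME INTERVAL "EXPECTED_RCS" CMD...
# Retry CMD every INTERVAL seconds until its exit code is one of
# EXPECTED_RCS (space-separated) or MAX_TIME seconds have elapsed.
loop_until() {
  local max_time=$1 interval=$2 expected_rc=$3; shift 3
  local elapsed=0 rc
  while :; do
    "$@"; rc=$?
    # Succeed as soon as the exit code is in the expected set.
    case " $expected_rc " in *" $rc "*) return 0;; esac
    elapsed=$((elapsed + interval))
    [ "$elapsed" -ge "$max_time" ] && return 1
    sleep "$interval"
  done
}
```

Usage matching the log above would look like `loop_until 180 5 "0 1" kubectl --namespace=xlou delete -f /tmp/...` followed by `loop_until 180 5 "0" kubectl --namespace=xlou apply -f /tmp/...`.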
************************************* Creating deployment *************************************
Creating normal (forgeops) type deployment for deployment: rcs
------- Custom component configuration present. Loading values -------
------------------ Deleting secret agent controller ------------------
[loop_until]: kubectl --namespace=xlou-rcs delete sac --all
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
No resources found
--- stderr ---
----------------------- Deleting all resources -----------------------
[loop_until]: kubectl --namespace=xlou-rcs delete all --all --grace-period=0 --force
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
No resources found
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
[loop_until]: kubectl -n xlou-rcs get pods | grep "No resources found"
[loop_until]: (max_time=360, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
No resources found in xlou-rcs namespace.
------------------------- Deleting configmap -------------------------
[loop_until]: kubectl --namespace=xlou-rcs get configmap -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
--------------------------- Deleting secret ---------------------------
[loop_until]: kubectl --namespace=xlou-rcs get secret -o jsonpath='{.items[?(@.type=="Opaque")].metadata.name}'
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
-------------------------- Deleting ingress --------------------------
[loop_until]: kubectl --namespace=xlou-rcs get ingress -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
---------------------------- Deleting pvc ----------------------------
[loop_until]: kubectl --namespace=xlou-rcs get pvc -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
[loop_until]: kubectl --namespace=xlou-rcs delete pv ds-backup-xlou-rcs --ignore-not-found
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
warning: deleting cluster-scoped resources, not scoped to the provided namespace
----------------- Deleting admin clusterrolebindings -----------------
[loop_until]: kubectl get clusterrolebinding -o jsonpath="{range .items[?(@.subjects[0].namespace=='xlou-rcs')]}{.metadata.name} {end}"
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
------------------------- Deleting namespace -------------------------
[loop_until]: kubectl delete namespaces xlou-rcs --ignore-not-found --grace-period=0 --force
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
[loop_until]: awk -F" " "{print NF}" <<< `kubectl get namespace xlou-rcs --ignore-not-found` | grep 0
[loop_until]: (max_time=600, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
--- stderr ---
[loop_until]: kubectl create namespace xlou-rcs
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
namespace/xlou-rcs created
--- stderr ---
[loop_until]: kubectl label namespace xlou-rcs self-service=false timeout=48
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
namespace/xlou-rcs labeled
--- stderr ---
************************************ Configuring components ************************************
No custom config provided. Nothing to do.
No custom features provided. Nothing to do.
---- Updating components image tag/repo from platform-images repo ----
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_rcs/bin/set-images --clean
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Cleaning up.
[WARNING] Found nothing to clean.
--- stderr ---
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_rcs/bin/set-images --ref master-ready-for-dev-pipelines --products ds
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
[INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_rcs/build/platform-images.
[INFO] Repo is at c447dfd1ca0f291f841da8677b4eafb41c901c2f on branch HEAD
[INFO] Updating products ds
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_rcs/docker/ds/dsutil/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-b492db3c2465a45eac7e7cbb8af094f4e03404cb
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_rcs/docker/ds/ds-new/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-b492db3c2465a45eac7e7cbb8af094f4e03404cb
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_rcs/docker/ds/proxy/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-b492db3c2465a45eac7e7cbb8af094f4e03404cb
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_rcs/docker/ds/cts/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-b492db3c2465a45eac7e7cbb8af094f4e03404cb
[INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_rcs/docker/ds/idrepo/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.0-b492db3c2465a45eac7e7cbb8af094f4e03404cb
--- stderr ---
- Checking if component Dockerfile/kustomize needs additional update -
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_rcs/bin/config path docker ds cts
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_rcs/docker/ds/cts
--- stderr ---
Checking if ds-cts dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_rcs/docker/ds/cts/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_rcs/docker/ds/cts/Dockerfile: gcr.io/forgerock-io/ds/pit1:7.3.0-b492db3c2465a45eac7e7cbb8af094f4e03404cb
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_rcs/docker/ds/cts/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_rcs/bin/config path docker ds idrepo
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_rcs/docker/ds/idrepo
--- stderr ---
Checking if ds-idrepo dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_rcs/docker/ds/idrepo/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution)
Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_rcs/docker/ds/idrepo/Dockerfile: gcr.io/forgerock-io/ds/pit1:7.3.0-b492db3c2465a45eac7e7cbb8af094f4e03404cb
No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_rcs/docker/ds/idrepo/Dockerfile
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_rcs/bin/config path kustomize overlay internal-profiles/ds-only
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
/mnt/disks/data/xslou/lodestar-fork/ext/forgeops_rcs/kustomize/overlay/internal-profiles/ds-only
--- stderr ---
[loop_until]: kubectl --namespace=xlou-rcs delete -f /tmp/tmp14qkgula
[loop_until]: (max_time=180, interval=5, expected_rc=[0, 1])
[loop_until]: OK (rc = 1)
--- stdout ---
--- stderr ---
Error from server (NotFound): error when deleting "/tmp/tmp14qkgula": secrets "sslcert" not found
[loop_until]: kubectl --namespace=xlou-rcs apply -f /tmp/tmp14qkgula
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secret/sslcert created
--- stderr ---
Creating Amster secrets in CDM namespace
[loop_until]: kubectl --namespace=xlou delete -f /mnt/disks/data/xslou/lodestar-fork/shared/keys/amster-secret.yaml
[loop_until]: (max_time=180, interval=5, expected_rc=[0, 1])
[loop_until]: OK (rc = 1)
--- stdout ---
--- stderr ---
Error from server (NotFound): error when deleting "/mnt/disks/data/xslou/lodestar-fork/shared/keys/amster-secret.yaml": secrets "amster-env-secrets" not found
[loop_until]: kubectl --namespace=xlou create -f /mnt/disks/data/xslou/lodestar-fork/shared/keys/amster-secret.yaml
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secret/amster-env-secrets created
--- stderr ---
Creating RCS instance and adding it to the RCS namespace
Creating DS secret in RCS namespace
[loop_until]: kubectl --namespace=xlou-rcs delete -f /mnt/disks/data/xslou/lodestar-fork/shared/keys/ds-rcs-secret.yaml
[loop_until]: (max_time=180, interval=5, expected_rc=[0, 1])
[loop_until]: OK (rc = 1)
--- stdout ---
--- stderr ---
Error from server (NotFound): error when deleting "/mnt/disks/data/xslou/lodestar-fork/shared/keys/ds-rcs-secret.yaml": secrets "ds-passwords" not found
[loop_until]: kubectl --namespace=xlou-rcs create -f /mnt/disks/data/xslou/lodestar-fork/shared/keys/ds-rcs-secret.yaml
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
secret/ds-passwords created
--- stderr ---
Modifying rcs yaml files
The following components will be deployed:
- ds-cts (DS)
- ds-idrepo (DS)
- am (AM)
- amster (Amster)
- idm (IDM)
- end-user-ui (EndUserUi)
- login-ui (LoginUi)
- admin-ui (AdminUi)
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/forgeops build all --config-profile=cdk --push-to gcr.io/engineeringpit/lodestar-images --tag=xlou
[run_command]: env={'HOME': '/home/xslou', 'PATH': '/mnt/disks/data/xslou/lodestar-fork/ext/bin:/home/xslou/.local/bin:/home/xslou/bin:~/bin:~/workshop/lodestar-fork/ext/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin', 'DOCKER_SCAN_SUGGEST': 'false'}
Sending build context to Docker daemon 49.15kB
Step 1/6 : FROM gcr.io/forgerock-io/am-cdk/pit1:7.3.0-b4ef885d337fd02a1123f20359a23fe51f7131c8
---> e55d812c2651
Step 2/6 : ARG CONFIG_PROFILE=cdk
---> Using cache
---> b3d744c83061
Step 3/6 : RUN echo "\033[0;36m*** Building '${CONFIG_PROFILE}' profile ***\033[0m"
---> Using cache
---> 958aa2395816
Step 4/6 : COPY --chown=forgerock:root config-profiles/${CONFIG_PROFILE}/ /home/forgerock/openam/
---> Using cache
---> b95fb14f7c83
Step 5/6 : COPY --chown=forgerock:root *.sh /home/forgerock/
---> Using cache
---> 36cc5f5f337d
Step 6/6 : WORKDIR /home/forgerock
---> Using cache
---> b58ec37037cb
Successfully built b58ec37037cb
Successfully tagged gcr.io/engineeringpit/lodestar-images/am:xlou
The push refers to repository [gcr.io/engineeringpit/lodestar-images/am]
3bc39bbdac58: Preparing
a59adb894edc: Preparing
9a37e0a3c706: Preparing
b5f32481a57d: Preparing
d73947d3b778: Preparing
f7df2a68a894: Preparing
9ad671647b63: Preparing
0e6ce930e26e: Preparing
11a75fb91db3: Preparing
42be0c963b4f: Preparing
3d3fe57df701: Preparing
8f43106719af: Preparing
9b7bb923fd6c: Preparing
d4ab02e938e2: Preparing
57aa91529615: Preparing
ae4b0bb9bf8f: Preparing
ca327693031d: Preparing
52da242e3d96: Preparing
2ca090905e90: Preparing
42ea7a549632: Preparing
8a793c5d0ef4: Preparing
1b0ec256b7d8: Preparing
4ea488ed4421: Preparing
21ad03d1c8b2: Preparing
add8abb20da6: Preparing
948b01392fa5: Preparing
8878ab435c3c: Preparing
4f5f6b573582: Preparing
71b38085acd2: Preparing
eb6ee5b9581f: Preparing
e3abdc2e9252: Preparing
eafe6e032dbd: Preparing
92a4e8a3140f: Preparing
f7df2a68a894: Waiting
9ad671647b63: Waiting
0e6ce930e26e: Waiting
11a75fb91db3: Waiting
42be0c963b4f: Waiting
8f43106719af: Waiting
9b7bb923fd6c: Waiting
d4ab02e938e2: Waiting
57aa91529615: Waiting
ae4b0bb9bf8f: Waiting
ca327693031d: Waiting
52da242e3d96: Waiting
2ca090905e90: Waiting
42ea7a549632: Waiting
8a793c5d0ef4: Waiting
1b0ec256b7d8: Waiting
4ea488ed4421: Waiting
21ad03d1c8b2: Waiting
add8abb20da6: Waiting
948b01392fa5: Waiting
8878ab435c3c: Waiting
4f5f6b573582: Waiting
71b38085acd2: Waiting
eb6ee5b9581f: Waiting
e3abdc2e9252: Waiting
eafe6e032dbd: Waiting
92a4e8a3140f: Waiting
3d3fe57df701: Waiting
d73947d3b778: Layer already exists
b5f32481a57d: Layer already exists
a59adb894edc: Layer already exists
3bc39bbdac58: Layer already exists
9a37e0a3c706: Layer already exists
f7df2a68a894: Layer already exists
0e6ce930e26e: Layer already exists
9ad671647b63: Layer already exists
42be0c963b4f: Layer already exists
11a75fb91db3: Layer already exists
8f43106719af: Layer already exists
9b7bb923fd6c: Layer already exists
57aa91529615: Layer already exists
3d3fe57df701: Layer already exists
d4ab02e938e2: Layer already exists
ca327693031d: Layer already exists
52da242e3d96: Layer already exists
ae4b0bb9bf8f: Layer already exists
2ca090905e90: Layer already exists
42ea7a549632: Layer already exists
8a793c5d0ef4: Layer already exists
1b0ec256b7d8: Layer already exists
4ea488ed4421: Layer already exists
21ad03d1c8b2: Layer already exists
add8abb20da6: Layer already exists
948b01392fa5: Layer already exists
eb6ee5b9581f: Layer already exists
71b38085acd2: Layer already exists
8878ab435c3c: Layer already exists
4f5f6b573582: Layer already exists
e3abdc2e9252: Layer already exists
eafe6e032dbd: Layer already exists
92a4e8a3140f: Layer already exists
xlou: digest: sha256:928eda365ef1cead91d586bbc0a893158d095d3444455ddb4bfe6b2b910358b5 size: 7222
Sending build context to Docker daemon 337.9kB
Step 1/8 : FROM gcr.io/forgerock-io/idm-cdk/pit1:7.3.0-1c6dbcf77830663a27ee09c2d7a47bb991ce195a
7.3.0-1c6dbcf77830663a27ee09c2d7a47bb991ce195a: Pulling from forgerock-io/idm-cdk/pit1
29cd48154c03: Already exists
689f65be3144: Pulling fs layer
268c04fc188f: Pulling fs layer
672c42d6d183: Pulling fs layer
bcdf54c535eb: Pulling fs layer
7f9026da1513: Pulling fs layer
0078e578aac6: Pulling fs layer
4f4fb700ef54: Pulling fs layer
7f9026da1513: Waiting
0078e578aac6: Waiting
4f4fb700ef54: Waiting
bcdf54c535eb: Waiting
268c04fc188f: Verifying Checksum
268c04fc188f: Download complete
672c42d6d183: Download complete
689f65be3144: Verifying Checksum
689f65be3144: Download complete
0078e578aac6: Verifying Checksum
0078e578aac6: Download complete
bcdf54c535eb: Verifying Checksum
bcdf54c535eb: Download complete
4f4fb700ef54: Verifying Checksum
4f4fb700ef54: Download complete
689f65be3144: Pull complete
268c04fc188f: Pull complete
672c42d6d183: Pull complete
bcdf54c535eb: Pull complete
7f9026da1513: Verifying Checksum
7f9026da1513: Download complete
7f9026da1513: Pull complete
0078e578aac6: Pull complete
4f4fb700ef54: Pull complete
Digest: sha256:fbba92ab503c096407d09eb75523ea153b6b5d1215dfc91c294da4c141300edf
Status: Downloaded newer image for gcr.io/forgerock-io/idm-cdk/pit1:7.3.0-1c6dbcf77830663a27ee09c2d7a47bb991ce195a
---> e23619a582b3
Step 2/8 : COPY debian-buster-sources.list /etc/apt/sources.list
---> 91dc9bc1f57b
Step 3/8 : RUN rm -f bundle/org.apache.felix.webconsole*.jar && rm -f bundle/openidm-felix-webconsole-*.jar
---> Running in 05b290fbe0d8
Removing intermediate container 05b290fbe0d8
---> 683de1d997df
Step 4/8 : ENV JAVA_OPTS -XX:MaxRAMPercentage=65 -XX:InitialRAMPercentage=65 -XX:MaxTenuringThreshold=1 -Djava.security.egd=file:/dev/urandom -XshowSettings:vm -XX:+PrintFlagsFinal
---> Running in 50a0570cf550
Removing intermediate container 50a0570cf550
---> 923ddca74869
Step 5/8 : ARG CONFIG_PROFILE=cdk
---> Running in c873597888d4
Removing intermediate container c873597888d4
---> 2c19520f3309
Step 6/8 : RUN echo "\033[0;36m*** Building '${CONFIG_PROFILE}' profile ***\033[0m"
---> Running in 250153138cea
*** Building 'cdk' profile ***
Removing intermediate container 250153138cea
---> b2970ebbfab1
Step 7/8 : COPY --chown=forgerock:root config-profiles/${CONFIG_PROFILE}/ /opt/openidm
---> 3f2af8cac975
Step 8/8 : COPY --chown=forgerock:root . /opt/openidm
---> fd793793ef1c
Successfully built fd793793ef1c
Successfully tagged gcr.io/engineeringpit/lodestar-images/idm:xlou
The push refers to repository [gcr.io/engineeringpit/lodestar-images/idm]
cf0125d7380b: Preparing
c81ffe751c35: Preparing
d1a210f2b35b: Preparing
5f70bf18a086: Preparing
cdbfc109ee3a: Preparing
8739142ffd3a: Preparing
23a9e180a94a: Preparing
0450d4d54cdc: Preparing
8a3484bc10b0: Preparing
694b49f0d8b6: Preparing
63b3cf45ece8: Preparing
23a9e180a94a: Waiting
8739142ffd3a: Waiting
0450d4d54cdc: Waiting
8a3484bc10b0: Waiting
694b49f0d8b6: Waiting
63b3cf45ece8: Waiting
5f70bf18a086: Layer already exists
cdbfc109ee3a: Layer already exists
23a9e180a94a: Layer already exists
8739142ffd3a: Layer already exists
0450d4d54cdc: Layer already exists
8a3484bc10b0: Layer already exists
694b49f0d8b6: Layer already exists
63b3cf45ece8: Layer already exists
c81ffe751c35: Pushed
cf0125d7380b: Pushed
d1a210f2b35b: Pushed
xlou: digest: sha256:b8eab6e098204c00c17beb230d42ec9e3d1eb2977428bdebf739284c406cd294 size: 2622
Sending build context to Docker daemon 129kB
Step 1/11 : FROM gcr.io/forgerock-io/ds/pit1:7.3.0-b492db3c2465a45eac7e7cbb8af094f4e03404cb
---> f71df78fcefd
Step 2/11 : USER root
---> Using cache
---> d974e47edf59
Step 3/11 : RUN apt-get update && apt-get install -y --no-install-recommends vim ncat dnsutils
---> Using cache
---> 049062855dc8
Step 4/11 : USER forgerock
---> Using cache
---> 6942823be0eb
Step 5/11 : ENV DS_DATA_DIR /opt/opendj/data
---> Using cache
---> 164b4ee3cb5e
Step 6/11 : ENV PEM_KEYS_DIRECTORY "/var/run/secrets/keys/ds"
---> Using cache
---> 6a7d9ba7d3f5
Step 7/11 : ENV PEM_TRUSTSTORE_DIRECTORY "/var/run/secrets/keys/truststore"
---> Using cache
---> 8438989a8f62
Step 8/11 : COPY --chown=forgerock:root default-scripts /opt/opendj/default-scripts
---> Using cache
---> 36c7f618cb0b
Step 9/11 : COPY --chown=forgerock:root ldif-ext /opt/opendj/ldif-ext
---> Using cache
---> b9141c9b6535
Step 10/11 : COPY --chown=forgerock:root *.sh /opt/opendj/
---> Using cache
---> a8edc087ad87
Step 11/11 : RUN ./ds-setup.sh && rm ./ds-setup.sh && rm -fr ldif-ext
---> Using cache
---> 7ca830a08f29
[Warning] One or more build-args [CONFIG_PROFILE] were not consumed
Successfully built 7ca830a08f29
Successfully tagged gcr.io/engineeringpit/lodestar-images/ds:xlou
The push refers to repository [gcr.io/engineeringpit/lodestar-images/ds]
f99e45e8e7b8: Preparing
6b5e793daa5e: Preparing
f16e3aacea1f: Preparing
87bd7c624328: Preparing
9b387f959806: Preparing
3346cc91698b: Preparing
5f70bf18a086: Preparing
3d7213eaa37a: Preparing
a3a748981c9a: Preparing
3fbde426b270: Preparing
d5bb1c7df85f: Preparing
ddb448a88819: Preparing
4695cdfb426a: Preparing
3d7213eaa37a: Waiting
a3a748981c9a: Waiting
3fbde426b270: Waiting
d5bb1c7df85f: Waiting
ddb448a88819: Waiting
4695cdfb426a: Waiting
3346cc91698b: Waiting
5f70bf18a086: Waiting
9b387f959806: Layer already exists
87bd7c624328: Layer already exists
f99e45e8e7b8: Layer already exists
6b5e793daa5e: Layer already exists
f16e3aacea1f: Layer already exists
5f70bf18a086: Layer already exists
3d7213eaa37a: Layer already exists
a3a748981c9a: Layer already exists
3346cc91698b: Layer already exists
3fbde426b270: Layer already exists
ddb448a88819: Layer already exists
4695cdfb426a: Layer already exists
d5bb1c7df85f: Layer already exists
xlou: digest: sha256:9434ac60613ec15ba479d8b23d78a64f9f4f7a2061ab9a35661d4c6e8be9526a size: 3046
Sending build context to Docker daemon 293.4kB
Step 1/10 : FROM gcr.io/forgerock-io/ds/pit1:7.3.0-b492db3c2465a45eac7e7cbb8af094f4e03404cb
---> f71df78fcefd
Step 2/10 : COPY debian-buster-sources.list /etc/apt/sources.list
---> Using cache
---> 1bc560ee265d
Step 3/10 : WORKDIR /opt/opendj
---> Using cache
---> ea16b179d707
Step 4/10 : COPY --chown=forgerock:root common /opt/opendj/
---> Using cache
---> 2a7fd3877f17
Step 5/10 : COPY --chown=forgerock:root idrepo /opt/opendj/
---> Using cache
---> 879400bc305f
Step 6/10 : COPY --chown=forgerock:root scripts /opt/opendj/scripts
---> Using cache
---> 2c6b925db1cb
Step 7/10 : COPY --chown=forgerock:root uma /opt/opendj/uma
---> Using cache
---> 17842a34fd1b
Step 8/10 : COPY --chown=forgerock:root idrepo/*.ldif /var/tmp/
---> Using cache
---> 52a555642a4d
Step 9/10 : RUN chmod +w template/setup-profiles/AM/config/6.5/base-entries.ldif && cat scripts/external-am-datastore.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_audit.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_pending_requests.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_resource_set_labels.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_resource_sets.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat /var/tmp/alpha_bravo.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && chmod +w template/setup-profiles/AM/identity-store/7.0/base-entries.ldif && cat /var/tmp/orgs.ldif >> template/setup-profiles/AM/identity-store/7.0/base-entries.ldif && rm /var/tmp/*ldif
---> Using cache
---> 48cb2de10bc9
Step 10/10 : RUN bin/setup.sh && bin/relax-security-settings.sh && rm bin/setup.sh bin/relax-security-settings.sh
---> Using cache
---> 53aefd12e9ee
[Warning] One or more build-args [CONFIG_PROFILE] were not consumed
Successfully built 53aefd12e9ee
Successfully tagged gcr.io/engineeringpit/lodestar-images/ds-idrepo:xlou
The push refers to repository [gcr.io/engineeringpit/lodestar-images/ds-idrepo]
1bc8d64f5ab0: Preparing
dafef41b09f0: Preparing
7cfee2b44ff4: Preparing
236901dbde0f: Preparing
fe7ff685f93b: Preparing
b43c83e57e72: Preparing
466107172135: Preparing
f449b914c2d8: Preparing
3346cc91698b: Preparing
5f70bf18a086: Preparing
3d7213eaa37a: Preparing
a3a748981c9a: Preparing
3fbde426b270: Preparing
d5bb1c7df85f: Preparing
ddb448a88819: Preparing
4695cdfb426a: Preparing
3346cc91698b: Waiting
5f70bf18a086: Waiting
3d7213eaa37a: Waiting
a3a748981c9a: Waiting
3fbde426b270: Waiting
d5bb1c7df85f: Waiting
ddb448a88819: Waiting
4695cdfb426a: Waiting
b43c83e57e72: Waiting
466107172135: Waiting
f449b914c2d8: Waiting
fe7ff685f93b: Layer already exists
7cfee2b44ff4: Layer already exists
dafef41b09f0: Layer already exists
236901dbde0f: Layer already exists
1bc8d64f5ab0: Layer already exists
b43c83e57e72: Layer already exists
f449b914c2d8: Layer already exists
5f70bf18a086: Layer already exists
3346cc91698b: Layer already exists
466107172135: Layer already exists
3d7213eaa37a: Layer already exists
a3a748981c9a: Layer already exists
3fbde426b270: Layer already exists
d5bb1c7df85f: Layer already exists
ddb448a88819: Layer already exists
4695cdfb426a: Layer already exists
xlou: digest: sha256:e10a242e226236f88d2a666b7ee2ab1954b1284ca7aee6d7eaa1280bcf600946 size: 3662
Sending build context to Docker daemon 293.4kB
Step 1/10 : FROM gcr.io/forgerock-io/ds/pit1:7.3.0-b492db3c2465a45eac7e7cbb8af094f4e03404cb
---> f71df78fcefd
Step 2/10 : USER root
---> Using cache
---> d974e47edf59
Step 3/10 : COPY debian-buster-sources.list /etc/apt/sources.list
---> Using cache
---> ce4fcbd2aa50
Step 4/10 : RUN chown -R forgerock:root /opt/opendj
---> Using cache
---> 7c26a7f5848d
Step 5/10 : USER forgerock
---> Using cache
---> d10e935a345c
Step 6/10 : COPY --chown=forgerock:root common /opt/opendj/
---> Using cache
---> bd296766967a
Step 7/10 : COPY --chown=forgerock:root cts /opt/opendj/
---> Using cache
---> c99aa3ad534d
Step 8/10 : COPY --chown=forgerock:root scripts /opt/opendj/scripts
---> Using cache
---> 46091432c5ce
Step 9/10 : ARG profile_version
---> Using cache
---> 7d678469c9a4
Step 10/10 : RUN bin/setup.sh && bin/relax-security-settings.sh && rm bin/setup.sh bin/relax-security-settings.sh
---> Using cache
---> 3e99157076d6
[Warning] One or more build-args [CONFIG_PROFILE] were not consumed
Successfully built 3e99157076d6
Successfully tagged gcr.io/engineeringpit/lodestar-images/ds-cts:xlou
The push refers to repository [gcr.io/engineeringpit/lodestar-images/ds-cts]
b1acdfea606e: Preparing
d77dae138d1c: Preparing
03c38b905ad3: Preparing
b25f9af393a0: Preparing
d474d8757c07: Preparing
60e56eb0fae5: Preparing
3346cc91698b: Preparing
5f70bf18a086: Preparing
3d7213eaa37a: Preparing
a3a748981c9a: Preparing
3fbde426b270: Preparing
d5bb1c7df85f: Preparing
ddb448a88819: Preparing
4695cdfb426a: Preparing
5f70bf18a086: Waiting
3d7213eaa37a: Waiting
a3a748981c9a: Waiting
3fbde426b270: Waiting
d5bb1c7df85f: Waiting
ddb448a88819: Waiting
4695cdfb426a: Waiting
60e56eb0fae5: Waiting
3346cc91698b: Waiting
b25f9af393a0: Layer already exists
d474d8757c07: Layer already exists
b1acdfea606e: Layer already exists
03c38b905ad3: Layer already exists
d77dae138d1c: Layer already exists
60e56eb0fae5: Layer already exists
3346cc91698b: Layer already exists
a3a748981c9a: Layer already exists
5f70bf18a086: Layer already exists
3d7213eaa37a: Layer already exists
3fbde426b270: Layer already exists
d5bb1c7df85f: Layer already exists
ddb448a88819: Layer already exists
4695cdfb426a: Layer already exists
xlou: digest: sha256:1d6266ba93f7b4e51b54d944476ff1a7b6a102bb0f30b52f1c49f3079ff5bc44 size: 3251
Sending build context to Docker daemon 34.3kB
Step 1/6 : FROM gcr.io/forgerock-io/ig/pit1:7.3.0-latest-postcommit
---> b97c2a3010cf
Step 2/6 : COPY debian-buster-sources.list /etc/apt/sources.list
---> Using cache
---> 6f1f8dfb827f
Step 3/6 : ARG CONFIG_PROFILE=cdk
---> Using cache
---> 1ea3d9405c1c
Step 4/6 : RUN echo "\033[0;36m*** Building '${CONFIG_PROFILE}' profile ***\033[0m"
---> Using cache
---> 2c46bb490dfd
Step 5/6 : COPY --chown=forgerock:root config-profiles/${CONFIG_PROFILE}/ /var/ig
---> Using cache
---> 1fa17c930470
Step 6/6 : COPY --chown=forgerock:root . /var/ig
---> Using cache
---> f91a15d0eb36
Successfully built f91a15d0eb36
Successfully tagged gcr.io/engineeringpit/lodestar-images/ig:xlou
The push refers to repository [gcr.io/engineeringpit/lodestar-images/ig]
902bdba5abb8: Preparing
9de06495f434: Preparing
bf96000f7582: Preparing
4fb17506c7d6: Preparing
5696f243e6cc: Preparing
964c1eecc7f5: Preparing
ab8038891451: Preparing
c6f8bfcecf05: Preparing
315cd8c5da97: Preparing
d456513ae67c: Preparing
67a4178b7d47: Preparing
ab8038891451: Waiting
c6f8bfcecf05: Waiting
315cd8c5da97: Waiting
d456513ae67c: Waiting
67a4178b7d47: Waiting
964c1eecc7f5: Waiting
9de06495f434: Layer already exists
902bdba5abb8: Layer already exists
bf96000f7582: Layer already exists
4fb17506c7d6: Layer already exists
5696f243e6cc: Layer already exists
c6f8bfcecf05: Layer already exists
964c1eecc7f5: Layer already exists
315cd8c5da97: Layer already exists
d456513ae67c: Layer already exists
ab8038891451: Layer already exists
67a4178b7d47: Layer already exists
xlou: digest: sha256:91c570f2b277b1f7f2f6dee6d07e94121a7f9703dc7964abd12ed996abf848b5 size: 2621
Updated the image_defaulter with your new image for am: "gcr.io/engineeringpit/lodestar-images/am:xlou".
Updated the image_defaulter with your new image for idm: "gcr.io/engineeringpit/lodestar-images/idm:xlou".
Updated the image_defaulter with your new image for ds: "gcr.io/engineeringpit/lodestar-images/ds:xlou".
Updated the image_defaulter with your new image for ds-idrepo: "gcr.io/engineeringpit/lodestar-images/ds-idrepo:xlou".
Updated the image_defaulter with your new image for ds-cts: "gcr.io/engineeringpit/lodestar-images/ds-cts:xlou".
Updated the image_defaulter with your new image for ig: "gcr.io/engineeringpit/lodestar-images/ig:xlou".
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/forgeops install --namespace=xlou --fqdn xlou.iam.xlou-bsln.engineeringpit.com --custom /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/overlay/internal-profiles/medium-old --legacy all
[run_command]: env={'HOME': '/home/xslou', 'PATH': '/mnt/disks/data/xslou/lodestar-fork/ext/bin:/home/xslou/.local/bin:/home/xslou/bin:~/bin:~/workshop/lodestar-fork/ext/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin', 'DOCKER_SCAN_SUGGEST': 'false'}
customresourcedefinition.apiextensions.k8s.io/secretagentconfigurations.secret-agent.secrets.forgerock.io condition met
deployment.apps/secret-agent-controller-manager condition met
NAME READY STATUS RESTARTS AGE
secret-agent-controller-manager-75c755487b-ftnr6 2/2 Running 0 9d
configmap/dev-utils created
configmap/platform-config created
ingress.networking.k8s.io/forgerock created
ingress.networking.k8s.io/ig created
certificate.cert-manager.io/ds-master-cert created
certificate.cert-manager.io/ds-ssl-cert created
issuer.cert-manager.io/selfsigned-issuer created
secretagentconfiguration.secret-agent.secrets.forgerock.io/forgerock-sac created
secret/cloud-storage-credentials-cts created
secret/cloud-storage-credentials-idrepo created
service/ds-cts created
service/ds-idrepo created
statefulset.apps/ds-cts created
statefulset.apps/ds-idrepo created
job.batch/ldif-importer created
Checking cert-manager and related CRDs: cert-manager CRD found in cluster.
Checking secret-agent operator and related CRDs: secret-agent CRD found in cluster.
Checking secret-agent operator is running...
secret-agent operator is running
Installing component(s): ['all'] platform: "custom-old" in namespace: "xlou".
Deploying base.yaml. This is a one time activity.
Deploying ds.yaml. This includes all directory resources.
Waiting for DS deployment. This can take a few minutes. First installation takes longer.
Waiting for statefulset "ds-idrepo" to exist in the cluster: Waiting for 3 pods to be ready...
Waiting for 2 pods to be ready...
Waiting for 1 pods to be ready...
statefulset rolling update complete 3 pods at revision ds-idrepo-7b446fff4d...
done
Waiting for Service Account Password Update: done
Waiting for statefulset "ds-cts" to exist in the cluster: statefulset rolling update complete 3 pods at revision ds-cts-87b85b6bd...
done
Waiting for Service Account Password Update: configmap/amster-files created
configmap/idm created
configmap/idm-logging-properties created
service/am created
service/idm created
deployment.apps/am created
deployment.apps/idm created
job.batch/amster created
done
Cleaning up amster components.
Deploying apps.
Waiting for AM deployment. This can take a few minutes. First installation takes longer.
Waiting for deployment "am" to exist in the cluster: deployment.apps/am condition met
configmap/amster-retain created
done
Waiting for amster job to complete. This can take several minutes.
Waiting for job "amster" to exist in the cluster: job.batch/amster condition met
done
Waiting for IDM deployment. This can take a few minutes. First installation takes longer.
Waiting for deployment "idm" to exist in the cluster: pod/idm-6ddf478c88-gtnkp condition met
pod/idm-6ddf478c88-sg5n9 condition met
service/admin-ui created
service/end-user-ui created
service/login-ui created
deployment.apps/admin-ui created
deployment.apps/end-user-ui created
deployment.apps/login-ui created
done
Deploying UI.
Waiting for K8s secrets.
Waiting for secret "am-env-secrets" to exist in the cluster: done
Waiting for secret "idm-env-secrets" to exist in the cluster: done
Waiting for secret "ds-passwords" to exist in the cluster: done
Waiting for secret "ds-env-secrets" to exist in the cluster: done
Relevant passwords:
WqvW4NP4iYpBdoVOxlGe51yP (amadmin user)
GSjqN2wc7hyl16OQj13R3XXWwFTWMC0K (uid=admin user)
jiL3gtHBGAI9cHKa8RHv2yM9maOOlqrK (App str svc acct (uid=am-config,ou=admins,ou=am-config))
vFg6UyffWMQTaf2QB6HhkcgHBQ44kihP (CTS svc acct (uid=openam_cts,ou=admins,ou=famrecords,ou=openam-session,ou=tokens))
oVCY5tF7iqNKeyUJ2nqOW6Yw7tJcd8Av (ID repo svc acct (uid=am-identity-bind-account,ou=admins,ou=identities))
Relevant URLs:
https://xlou.iam.xlou-bsln.engineeringpit.com/platform
https://xlou.iam.xlou-bsln.engineeringpit.com/admin
https://xlou.iam.xlou-bsln.engineeringpit.com/am
https://xlou.iam.xlou-bsln.engineeringpit.com/enduser
Enjoy your deployment!
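The four "Relevant URLs" printed above are all derived from the `--fqdn` value passed to `forgeops install`. A small sketch of that mapping (illustrative helper, not the installer's actual code):

```python
def platform_urls(fqdn: str) -> dict:
    """Build the console URLs the installer prints from the deployment FQDN:
    /platform, /admin, /am and /enduser under https://<fqdn>."""
    return {path: f"https://{fqdn}/{path}"
            for path in ("platform", "admin", "am", "enduser")}
```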
**************************** Initializing component pods for DS-CTS ****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-cts -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
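Every `[loop_until]` block in this log follows the same pattern: run a command, and retry on an interval until the return code matches an expected value or a deadline passes. A minimal sketch of that retry loop (illustrative names and signature, not the framework's actual API):

```python
import time

def loop_until(check, max_time=180, interval=5, expected_rc=(0,)):
    """Poll `check` (a callable returning an exit code) until it returns an
    expected code, or raise TimeoutError once max_time seconds elapse."""
    deadline = time.monotonic() + max_time
    while True:
        rc = check()
        if rc in expected_rc:
            return rc
        if time.monotonic() >= deadline:
            raise TimeoutError(f"no expected rc within {max_time}s (last rc={rc})")
        time.sleep(interval)
```

In the log, `check` would be a `kubectl` invocation; the "Function succeeded after 0s" lines mean the first iteration already returned an expected code.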
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app=ds-cts -o jsonpath={.items[*].metadata.name}` | grep 3
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
ds-cts-0 ds-cts-1 ds-cts-2
--- stderr ---
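The `awk -F" " "{print NF}" ... | grep 3` pipeline above simply counts the whitespace-separated pod names returned by the jsonpath query and checks the count against the replica count. The same check in Python (illustrative equivalent, not the framework's code):

```python
def expected_pod_count(jsonpath_output: str, expected: int) -> bool:
    """Count whitespace-separated pod names from a kubectl jsonpath query
    and compare against the StatefulSet/Deployment replica count."""
    return len(jsonpath_output.split()) == expected
```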
-------------------- Check pod ds-cts-0 is running --------------------
[loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-cts-0 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2023-02-21T00:43:08Z
--- stderr ---
------------- Check pod ds-cts-0 filesystem is accessible -------------
[loop_until]: kubectl --namespace=xlou exec ds-cts-0 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------------ Check pod ds-cts-0 restart count ------------------
[loop_until]: kubectl --namespace=xlou get pod ds-cts-0 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-cts-0 has been restarted 0 times.
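The per-pod checks above poll two jsonpath fields: `.status.phase` must be `Running`, and every entry in `.status.containerStatuses[*].ready` must be `true`. Combined into one predicate over the parsed pod status (a sketch assuming the standard Kubernetes pod status shape):

```python
def pod_is_ready(status: dict) -> bool:
    """True when the pod phase is Running and every container reports ready,
    mirroring the two grep checks run against each pod in the log."""
    if status.get("phase") != "Running":
        return False
    containers = status.get("containerStatuses", [])
    return bool(containers) and all(c.get("ready") for c in containers)
```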
-------------------- Check pod ds-cts-1 is running --------------------
[loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-cts-1 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2023-02-21T00:43:30Z
--- stderr ---
------------- Check pod ds-cts-1 filesystem is accessible -------------
[loop_until]: kubectl --namespace=xlou exec ds-cts-1 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------------ Check pod ds-cts-1 restart count ------------------
[loop_until]: kubectl --namespace=xlou get pod ds-cts-1 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-cts-1 has been restarted 0 times.
-------------------- Check pod ds-cts-2 is running --------------------
[loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-cts-2 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2023-02-21T00:43:53Z
--- stderr ---
------------- Check pod ds-cts-2 filesystem is accessible -------------
[loop_until]: kubectl --namespace=xlou exec ds-cts-2 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------------ Check pod ds-cts-2 restart count ------------------
[loop_until]: kubectl --namespace=xlou get pod ds-cts-2 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-cts-2 has been restarted 0 times.
************************** Initializing component pods for DS-IDREPO **************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-idrepo -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app=ds-idrepo -o jsonpath={.items[*].metadata.name}` | grep 3
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
ds-idrepo-0 ds-idrepo-1 ds-idrepo-2
--- stderr ---
------------------ Check pod ds-idrepo-0 is running ------------------
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2023-02-21T00:43:08Z
--- stderr ---
----------- Check pod ds-idrepo-0 filesystem is accessible -----------
[loop_until]: kubectl --namespace=xlou exec ds-idrepo-0 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
----------------- Check pod ds-idrepo-0 restart count -----------------
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-idrepo-0 has been restarted 0 times.
------------------ Check pod ds-idrepo-1 is running ------------------
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2023-02-21T00:43:45Z
--- stderr ---
----------- Check pod ds-idrepo-1 filesystem is accessible -----------
[loop_until]: kubectl --namespace=xlou exec ds-idrepo-1 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
----------------- Check pod ds-idrepo-1 restart count -----------------
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-idrepo-1 has been restarted 0 times.
------------------ Check pod ds-idrepo-2 is running ------------------
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2023-02-21T00:44:22Z
--- stderr ---
----------- Check pod ds-idrepo-2 filesystem is accessible -----------
[loop_until]: kubectl --namespace=xlou exec ds-idrepo-2 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
----------------- Check pod ds-idrepo-2 restart count -----------------
[loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-idrepo-2 has been restarted 0 times.
****************************** Initializing component pods for AM ******************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app=am -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app=am -o jsonpath={.items[*].metadata.name}` | grep 3
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
am-685d4f4864-frq8d am-685d4f4864-h58rx am-685d4f4864-kgqpx
--- stderr ---
-------------- Check pod am-685d4f4864-frq8d is running --------------
[loop_until]: kubectl --namespace=xlou get pods am-685d4f4864-frq8d -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods am-685d4f4864-frq8d -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod am-685d4f4864-frq8d -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2023-02-21T00:44:57Z
--- stderr ---
------- Check pod am-685d4f4864-frq8d filesystem is accessible -------
[loop_until]: kubectl --namespace=xlou exec am-685d4f4864-frq8d -c openam -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------- Check pod am-685d4f4864-frq8d restart count -------------
[loop_until]: kubectl --namespace=xlou get pod am-685d4f4864-frq8d -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod am-685d4f4864-frq8d has been restarted 0 times.
-------------- Check pod am-685d4f4864-h58rx is running --------------
[loop_until]: kubectl --namespace=xlou get pods am-685d4f4864-h58rx -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods am-685d4f4864-h58rx -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod am-685d4f4864-h58rx -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2023-02-21T00:44:57Z
--- stderr ---
------- Check pod am-685d4f4864-h58rx filesystem is accessible -------
[loop_until]: kubectl --namespace=xlou exec am-685d4f4864-h58rx -c openam -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------- Check pod am-685d4f4864-h58rx restart count -------------
[loop_until]: kubectl --namespace=xlou get pod am-685d4f4864-h58rx -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod am-685d4f4864-h58rx has been restarted 0 times.
-------------- Check pod am-685d4f4864-kgqpx is running --------------
[loop_until]: kubectl --namespace=xlou get pods am-685d4f4864-kgqpx -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods am-685d4f4864-kgqpx -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod am-685d4f4864-kgqpx -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2023-02-21T00:44:57Z
--- stderr ---
------- Check pod am-685d4f4864-kgqpx filesystem is accessible -------
[loop_until]: kubectl --namespace=xlou exec am-685d4f4864-kgqpx -c openam -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------- Check pod am-685d4f4864-kgqpx restart count -------------
[loop_until]: kubectl --namespace=xlou get pod am-685d4f4864-kgqpx -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod am-685d4f4864-kgqpx has been restarted 0 times.
**************************** Initializing component pods for AMSTER ****************************
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou --field-selector status.phase!=Failed get pods -l app=amster -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
amster-k2tq8
--- stderr ---
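The amster lookup differs from the others: amster is a one-shot job, so the query adds `--field-selector status.phase!=Failed` to skip failed pods rather than requiring a Running replica count. A client-side sketch of that filter over a parsed pod list (illustrative, assuming the standard pod object shape):

```python
def non_failed_pods(pods: list) -> list:
    """Keep pod names whose phase is anything but Failed, the client-side
    equivalent of kubectl's --field-selector status.phase!=Failed."""
    return [p["metadata"]["name"] for p in pods
            if p.get("status", {}).get("phase") != "Failed"]
```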
***************************** Initializing component pods for IDM *****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployment -l app=idm -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app=idm -o jsonpath={.items[*].metadata.name}` | grep 2
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
idm-6ddf478c88-gtnkp idm-6ddf478c88-sg5n9
--- stderr ---
-------------- Check pod idm-6ddf478c88-gtnkp is running --------------
[loop_until]: kubectl --namespace=xlou get pods idm-6ddf478c88-gtnkp -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods idm-6ddf478c88-gtnkp -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod idm-6ddf478c88-gtnkp -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2023-02-21T00:44:57Z
--- stderr ---
------- Check pod idm-6ddf478c88-gtnkp filesystem is accessible -------
[loop_until]: kubectl --namespace=xlou exec idm-6ddf478c88-gtnkp -c openidm -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-11
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------ Check pod idm-6ddf478c88-gtnkp restart count ------------
[loop_until]: kubectl --namespace=xlou get pod idm-6ddf478c88-gtnkp -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod idm-6ddf478c88-gtnkp has been restarted 0 times.
-------------- Check pod idm-6ddf478c88-sg5n9 is running --------------
[loop_until]: kubectl --namespace=xlou get pods idm-6ddf478c88-sg5n9 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods idm-6ddf478c88-sg5n9 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod idm-6ddf478c88-sg5n9 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2023-02-21T00:44:57Z
--- stderr ---
------- Check pod idm-6ddf478c88-sg5n9 filesystem is accessible -------
[loop_until]: kubectl --namespace=xlou exec idm-6ddf478c88-sg5n9 -c openidm -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-11
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------ Check pod idm-6ddf478c88-sg5n9 restart count ------------
[loop_until]: kubectl --namespace=xlou get pod idm-6ddf478c88-sg5n9 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod idm-6ddf478c88-sg5n9 has been restarted 0 times.
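The restart-count check reads `.status.containerStatuses[*].restartCount`, which yields one space-separated number per container; for a multi-container pod the values need to be summed before reporting. A sketch of that aggregation (illustrative helper):

```python
def total_restarts(jsonpath_output: str) -> int:
    """Sum the space-separated restartCount values kubectl's jsonpath
    query returns, one per container in the pod."""
    return sum(int(n) for n in jsonpath_output.split())
```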
************************* Initializing component pods for END-USER-UI *************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].metadata.name}` | grep 1
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
end-user-ui-5bd969d66b-wf2ph
--- stderr ---
---------- Check pod end-user-ui-5bd969d66b-wf2ph is running ----------
[loop_until]: kubectl --namespace=xlou get pods end-user-ui-5bd969d66b-wf2ph -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods end-user-ui-5bd969d66b-wf2ph -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod end-user-ui-5bd969d66b-wf2ph -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2023-02-21T00:46:00Z
--- stderr ---
--- Check pod end-user-ui-5bd969d66b-wf2ph filesystem is accessible ---
[loop_until]: kubectl --namespace=xlou exec end-user-ui-5bd969d66b-wf2ph -c end-user-ui -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin
boot
dev
docker-entrypoint.d
docker-entrypoint.sh
entrypoint.sh
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
variable_replacement.sh
--- stderr ---
-------- Check pod end-user-ui-5bd969d66b-wf2ph restart count --------
[loop_until]: kubectl --namespace=xlou get pod end-user-ui-5bd969d66b-wf2ph -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod end-user-ui-5bd969d66b-wf2ph has been restarted 0 times.
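Each `[loop_until]` entry in this log retries a shell command until its return code is in `expected_rc` or `max_time` seconds elapse, sleeping `interval` seconds between attempts. A rough, self-contained sketch of that retry loop — the name and parameters mirror the log output, not pyrock's actual implementation:

```python
import subprocess
import time

def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,)):
    """Re-run `cmd` every `interval` seconds until its return code
    is in `expected_rc` or `max_time` seconds have elapsed."""
    deadline = time.monotonic() + max_time
    while True:
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        if result.returncode in expected_rc:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"command did not succeed within {max_time}s: {cmd}")
        time.sleep(interval)
```

Usage matching the entries above would be something like `loop_until("kubectl --namespace=xlou get pods ...", max_time=180, interval=5)`.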
*************************** Initializing component pods for LOGIN-UI ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].metadata.name}` | grep 1
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
login-ui-6799664bf6-pxn7l
--- stderr ---
----------- Check pod login-ui-6799664bf6-pxn7l is running -----------
[loop_until]: kubectl --namespace=xlou get pods login-ui-6799664bf6-pxn7l -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods login-ui-6799664bf6-pxn7l -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod login-ui-6799664bf6-pxn7l -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2023-02-21T00:46:01Z
--- stderr ---
---- Check pod login-ui-6799664bf6-pxn7l filesystem is accessible ----
[loop_until]: kubectl --namespace=xlou exec login-ui-6799664bf6-pxn7l -c login-ui -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin
boot
dev
docker-entrypoint.d
docker-entrypoint.sh
entrypoint.sh
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
variable_replacement.sh
--- stderr ---
---------- Check pod login-ui-6799664bf6-pxn7l restart count ----------
[loop_until]: kubectl --namespace=xlou get pod login-ui-6799664bf6-pxn7l -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod login-ui-6799664bf6-pxn7l has been restarted 0 times.
*************************** Initializing component pods for ADMIN-UI ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].metadata.name}` | grep 1
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
admin-ui-796fdc7d9d-x59kp
--- stderr ---
----------- Check pod admin-ui-796fdc7d9d-x59kp is running -----------
[loop_until]: kubectl --namespace=xlou get pods admin-ui-796fdc7d9d-x59kp -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pods admin-ui-796fdc7d9d-x59kp -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou get pod admin-ui-796fdc7d9d-x59kp -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2023-02-21T00:46:00Z
--- stderr ---
---- Check pod admin-ui-796fdc7d9d-x59kp filesystem is accessible ----
[loop_until]: kubectl --namespace=xlou exec admin-ui-796fdc7d9d-x59kp -c admin-ui -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
bin
boot
dev
docker-entrypoint.d
docker-entrypoint.sh
entrypoint.sh
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
variable_replacement.sh
--- stderr ---
---------- Check pod admin-ui-796fdc7d9d-x59kp restart count ----------
[loop_until]: kubectl --namespace=xlou get pod admin-ui-796fdc7d9d-x59kp -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod admin-ui-796fdc7d9d-x59kp has been restarted 0 times.
***************************** Checking DS-CTS component is running *****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-cts -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
-------------- Waiting for 3 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get statefulsets ds-cts -o jsonpath="current:{.status.currentReplicas} ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "current:3 ready:3 replicas:3"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
current:3 ready:3 replicas:3
--- stderr ---
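The readiness probe above folds several status fields into one string (`current:3 ready:3 replicas:3`) and greps for the expected values. For illustration, parsing that output back into a dict (the helper is hypothetical, not part of the tooling):

```python
def parse_status(line: str) -> dict:
    """Turn 'current:3 ready:3 replicas:3' into {'current': 3, ...}."""
    return {k: int(v) for k, v in (field.split(":") for field in line.split())}

status = parse_status("current:3 ready:3 replicas:3")
assert status == {"current": 3, "ready": 3, "replicas": 3}
```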
*************************** Checking DS-IDREPO component is running ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-idrepo -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
-------------- Waiting for 3 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get statefulsets ds-idrepo -o jsonpath="current:{.status.currentReplicas} ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "current:3 ready:3 replicas:3"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
current:3 ready:3 replicas:3
--- stderr ---
******************************* Checking AM component is running *******************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app=am -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
-------------- Waiting for 3 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get deployments am -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:3 replicas:3"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:3 replicas:3
--- stderr ---
***************************** Checking AMSTER component is running *****************************
------------------ Waiting for Amster job to finish ------------------
--------------------- Get expected number of pods ---------------------
-------------- Waiting for 1 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get jobs amster -o jsonpath="{.status.succeeded}" | grep "1"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
****************************** Checking IDM component is running ******************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployment -l app=idm -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2
--- stderr ---
-------------- Waiting for 2 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get deployment idm -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:2 replicas:2"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:2 replicas:2
--- stderr ---
************************** Checking END-USER-UI component is running **************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
-------------- Waiting for 1 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get deployments end-user-ui -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:1 replicas:1"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:1 replicas:1
--- stderr ---
**************************** Checking LOGIN-UI component is running ****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
-------------- Waiting for 1 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get deployments login-ui -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:1 replicas:1"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:1 replicas:1
--- stderr ---
**************************** Checking ADMIN-UI component is running ****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
-------------- Waiting for 1 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou get deployments admin-ui -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:1 replicas:1"
[loop_until]: (max_time=900, interval=30, expected_rc=[0])
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:1 replicas:1
--- stderr ---
**************************** Initializing component pods for DS-CTS ****************************
--------------------- Get DS-CTS software version ---------------------
[loop_until]: kubectl --namespace=xlou exec ds-cts-0 -c ds -- find /opt/opendj -name opendj-core.jar
[loop_until]: (max_time=30, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
/opt/opendj/lib/opendj-core.jar
--- stderr ---
[loop_until]: kubectl --namespace=xlou cp ds-cts-0:/opt/opendj/lib/opendj-core.jar /tmp/ds-cts_info/opendj-core.jar -c ds
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
************************** Initializing component pods for DS-IDREPO **************************
------------------- Get DS-IDREPO software version -------------------
[loop_until]: kubectl --namespace=xlou exec ds-idrepo-0 -c ds -- find /opt/opendj -name opendj-core.jar
[loop_until]: (max_time=30, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
/opt/opendj/lib/opendj-core.jar
--- stderr ---
[loop_until]: kubectl --namespace=xlou cp ds-idrepo-0:/opt/opendj/lib/opendj-core.jar /tmp/ds-idrepo_info/opendj-core.jar -c ds
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
****************************** Initializing component pods for AM ******************************
----------------------- Get AM software version -----------------------
Getting product version from https://xlou.iam.xlou-bsln.engineeringpit.com/am/json/serverinfo/version
- Login amadmin to get token
[loop_until]: kubectl --namespace=xlou get secret am-env-secrets -o jsonpath="{.data.AM_PASSWORDS_AMADMIN_CLEAR}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
V3F2VzROUDRpWXBCZG9WT3hsR2U1MXlQ
--- stderr ---
Authenticate user via REST
[http_cmd]: curl -H "X-OpenAM-Username: amadmin" -H "X-OpenAM-Password: WqvW4NP4iYpBdoVOxlGe51yP" -H "Content-Type: application/json" -H "Accept-API-Version: resource=2.0, protocol=1.0" -L -X POST "https://xlou.iam.xlou-bsln.engineeringpit.com/am/json/authenticate?realm=/"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
{
"tokenId": "TSV1Ryei7wf4ljHqg47On8IUvGg.*AAJTSQACMDIAAlNLABw3NjRCL3FXWGVKQ3hsejh3TE9JUGFWS1Q5U2M9AAR0eXBlAANDVFMAAlMxAAIwMQ..*",
"successUrl": "/am/console",
"realm": "/"
}
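The follow-up request reuses the `tokenId` from this authenticate response as the `iPlanetDirectoryPro` cookie, alongside `amlbcookie`. A small sketch of extracting the token and assembling those cookies — the cookie names match the log, but the helper itself is hypothetical:

```python
import json

def session_cookies(auth_response_body: str, lb_cookie: str = "01") -> dict:
    """Build the cookies a follow-up AM request needs from the
    /am/json/authenticate response body."""
    token = json.loads(auth_response_body)["tokenId"]
    return {"amlbcookie": lb_cookie, "iPlanetDirectoryPro": token}

body = '{"tokenId": "TSV1...", "successUrl": "/am/console", "realm": "/"}'
assert session_cookies(body)["iPlanetDirectoryPro"] == "TSV1..."
```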
[http_cmd]: curl -L -X GET --cookie "amlbcookie=01" --cookie "iPlanetDirectoryPro=TSV1Ryei7wf4ljHqg47On8IUvGg.*AAJTSQACMDIAAlNLABw3NjRCL3FXWGVKQ3hsejh3TE9JUGFWS1Q5U2M9AAR0eXBlAANDVFMAAlMxAAIwMQ..*" --cookie "route=1676940407.854.320728.901354|f60edb382037eb2df1e800d563ad78a7" "https://xlou.iam.xlou-bsln.engineeringpit.com/am/json/serverinfo/version"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
{
"_id": "version",
"_rev": "-180086376",
"version": "7.3.0-SNAPSHOT",
"fullVersion": "ForgeRock Access Management 7.3.0-SNAPSHOT Build b4ef885d337fd02a1123f20359a23fe51f7131c8 (2023-February-17 01:01)",
"revision": "b4ef885d337fd02a1123f20359a23fe51f7131c8",
"date": "2023-February-17 01:01"
}
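The version payload above is plain JSON, so the fields of interest pull out directly (the body here is abbreviated from the log):

```python
import json

response = json.loads("""{
  "_id": "version",
  "version": "7.3.0-SNAPSHOT",
  "revision": "b4ef885d337fd02a1123f20359a23fe51f7131c8"
}""")
assert response["version"] == "7.3.0-SNAPSHOT"
assert response["revision"].startswith("b4ef885d")
```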
**************************** Initializing component pods for AMSTER ****************************
***************************** Initializing component pods for IDM *****************************
---------------------- Get IDM software version ----------------------
Getting product version from https://xlou.iam.xlou-bsln.engineeringpit.com/openidm/info/version
[http_cmd]: curl -H "X-OpenIDM-Username: anonymous" -H "X-OpenIDM-Password: anonymous" -L -X GET "https://xlou.iam.xlou-bsln.engineeringpit.com/openidm/info/version"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
{
"_id": "version",
"productVersion": "7.3.0-SNAPSHOT",
"productBuildDate": "20230220124858",
"productRevision": "1c6dbcf"
}
************************* Initializing component pods for END-USER-UI *************************
------------------ Get END-USER-UI software version ------------------
[loop_until]: kubectl --namespace=xlou exec end-user-ui-5bd969d66b-wf2ph -c end-user-ui -- find /usr/share/nginx/html -name chunk-vendors.*.js
[loop_until]: (max_time=30, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
/usr/share/nginx/html/js/chunk-vendors.4ef6655e.js
--- stderr ---
[loop_until]: kubectl --namespace=xlou cp end-user-ui-5bd969d66b-wf2ph:/usr/share/nginx/html/js/chunk-vendors.4ef6655e.js /tmp/end-user-ui_info/chunk-vendors.4ef6655e.js -c end-user-ui
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
*************************** Initializing component pods for LOGIN-UI ***************************
-------------------- Get LOGIN-UI software version --------------------
[loop_until]: kubectl --namespace=xlou exec login-ui-6799664bf6-pxn7l -c login-ui -- find /usr/share/nginx/html -name chunk-vendors.*.js
[loop_until]: (max_time=30, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
/usr/share/nginx/html/js/chunk-vendors.115c8fe0.js
--- stderr ---
[loop_until]: kubectl --namespace=xlou cp login-ui-6799664bf6-pxn7l:/usr/share/nginx/html/js/chunk-vendors.115c8fe0.js /tmp/login-ui_info/chunk-vendors.115c8fe0.js -c login-ui
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
*************************** Initializing component pods for ADMIN-UI ***************************
-------------------- Get ADMIN-UI software version --------------------
[loop_until]: kubectl --namespace=xlou exec admin-ui-796fdc7d9d-x59kp -c admin-ui -- find /usr/share/nginx/html -name chunk-vendors.*.js
[loop_until]: (max_time=30, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
/usr/share/nginx/html/js/chunk-vendors.5e44264c.js
--- stderr ---
[loop_until]: kubectl --namespace=xlou cp admin-ui-796fdc7d9d-x59kp:/usr/share/nginx/html/js/chunk-vendors.5e44264c.js /tmp/admin-ui_info/chunk-vendors.5e44264c.js -c admin-ui
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
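For the UI components there is no version endpoint to query; the content hash embedded in the `chunk-vendors.<hash>.js` filename is used as the build fingerprint instead. Extracting it could look like this (the helper is illustrative, not the tool's code):

```python
import re

def vendor_hash(path: str) -> str:
    """Pull the webpack content hash out of a chunk-vendors filename."""
    match = re.search(r"chunk-vendors\.([0-9a-f]+)\.js$", path)
    if match is None:
        raise ValueError(f"not a chunk-vendors bundle: {path}")
    return match.group(1)

assert vendor_hash("/usr/share/nginx/html/js/chunk-vendors.5e44264c.js") == "5e44264c"
```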
[loop_until]: kubectl --namespace=xlou get secret ds-passwords -o jsonpath="{.data.dirmanager\.pw}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
R1NqcU4yd2M3aHlsMTZPUWoxM1IzWFhXd0ZUV01DMEs=
--- stderr ---
====================================================================================================
================ Admin password for DS-CTS is: GSjqN2wc7hyl16OQj13R3XXWwFTWMC0K ================
====================================================================================================
[loop_until]: kubectl --namespace=xlou get secret ds-passwords -o jsonpath="{.data.dirmanager\.pw}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
R1NqcU4yd2M3aHlsMTZPUWoxM1IzWFhXd0ZUV01DMEs=
--- stderr ---
====================================================================================================
============== Admin password for DS-IDREPO is: GSjqN2wc7hyl16OQj13R3XXWwFTWMC0K ==============
====================================================================================================
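Kubernetes stores secret data base64-encoded, which is why each password dump above first fetches the encoded value (note the escaped key in `{.data.dirmanager\.pw}`, needed because the key contains a dot) and then decodes it. The decode step in Python, using the value from the log:

```python
import base64

encoded = "R1NqcU4yd2M3aHlsMTZPUWoxM1IzWFhXd0ZUV01DMEs="
decoded = base64.b64decode(encoded).decode("ascii")
assert decoded == "GSjqN2wc7hyl16OQj13R3XXWwFTWMC0K"
```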
====================================================================================================
====================== Admin password for AM is: WqvW4NP4iYpBdoVOxlGe51yP ======================
====================================================================================================
[loop_until]: kubectl --namespace=xlou get secret idm-env-secrets -o jsonpath="{.data.OPENIDM_ADMIN_PASSWORD}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
QWJhTHVmN0ZXcVpFa1NjRHBYb29FYjZ2
--- stderr ---
====================================================================================================
===================== Admin password for IDM is: AbaLuf7FWqZEkScDpXooEb6v =====================
====================================================================================================
*************************************** Dumping pod list ***************************************
Dumping pod list to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/platform_login_pta/pod-logs/stack/20230221-004654-after-deployment/_pod-list.txt
**************************** Initializing component pods for DS-CTS ****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-cts -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app=ds-cts -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ds-cts-0 ds-cts-1 ds-cts-2
--- stderr ---
************************** Initializing component pods for DS-IDREPO **************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-idrepo -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app=ds-idrepo -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ds-idrepo-0 ds-idrepo-1 ds-idrepo-2
--- stderr ---
****************************** Initializing component pods for AM ******************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app=am -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app=am -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
am-685d4f4864-frq8d am-685d4f4864-h58rx am-685d4f4864-kgqpx
--- stderr ---
**************************** Initializing component pods for AMSTER ****************************
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app=amster -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
amster-k2tq8
--- stderr ---
***************************** Initializing component pods for IDM *****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployment -l app=idm -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
2
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app=idm -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
idm-6ddf478c88-gtnkp idm-6ddf478c88-sg5n9
--- stderr ---
************************* Initializing component pods for END-USER-UI *************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
end-user-ui-5bd969d66b-wf2ph
--- stderr ---
*************************** Initializing component pods for LOGIN-UI ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
login-ui-6799664bf6-pxn7l
--- stderr ---
*************************** Initializing component pods for ADMIN-UI ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
admin-ui-796fdc7d9d-x59kp
--- stderr ---
*********************************** Dumping components logs ***********************************
----------------------- Dumping logs for DS-CTS -----------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/platform_login_pta/pod-logs/stack/20230221-004654-after-deployment/ds-cts-0.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/platform_login_pta/pod-logs/stack/20230221-004654-after-deployment/ds-cts-1.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/platform_login_pta/pod-logs/stack/20230221-004654-after-deployment/ds-cts-2.txt
Check pod logs for errors
--------------------- Dumping logs for DS-IDREPO ---------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/platform_login_pta/pod-logs/stack/20230221-004654-after-deployment/ds-idrepo-0.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/platform_login_pta/pod-logs/stack/20230221-004654-after-deployment/ds-idrepo-1.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/platform_login_pta/pod-logs/stack/20230221-004654-after-deployment/ds-idrepo-2.txt
Check pod logs for errors
------------------------- Dumping logs for AM -------------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/platform_login_pta/pod-logs/stack/20230221-004654-after-deployment/am-685d4f4864-frq8d.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/platform_login_pta/pod-logs/stack/20230221-004654-after-deployment/am-685d4f4864-h58rx.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/platform_login_pta/pod-logs/stack/20230221-004654-after-deployment/am-685d4f4864-kgqpx.txt
Check pod logs for errors
----------------------- Dumping logs for AMSTER -----------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/platform_login_pta/pod-logs/stack/20230221-004654-after-deployment/amster-k2tq8.txt
Check pod logs for errors
------------------------ Dumping logs for IDM ------------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/platform_login_pta/pod-logs/stack/20230221-004654-after-deployment/idm-6ddf478c88-gtnkp.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/platform_login_pta/pod-logs/stack/20230221-004654-after-deployment/idm-6ddf478c88-sg5n9.txt
Check pod logs for errors
-------------------- Dumping logs for END-USER-UI --------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/platform_login_pta/pod-logs/stack/20230221-004654-after-deployment/end-user-ui-5bd969d66b-wf2ph.txt
Check pod logs for errors
---------------------- Dumping logs for LOGIN-UI ----------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/platform_login_pta/pod-logs/stack/20230221-004654-after-deployment/login-ui-6799664bf6-pxn7l.txt
Check pod logs for errors
---------------------- Dumping logs for ADMIN-UI ----------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/platform_login_pta/pod-logs/stack/20230221-004654-after-deployment/admin-ui-796fdc7d9d-x59kp.txt
Check pod logs for errors
The following components will be deployed:
- ds-cts (DS)
- ds-idrepo (DS)
- rcs (Rcs)
Building docker image: -t gcr.io/engineeringpit/lodestar-images/rcs:xlou-rcs from dir: /mnt/disks/data/xslou/lodestar-fork/ext/rcs/docker
[run_command]: docker build --no-cache -t gcr.io/engineeringpit/lodestar-images/rcs:xlou-rcs .
[run_command]: env={'HOME': '/home/xslou', 'PATH': '/mnt/disks/data/xslou/lodestar-fork/ext/bin:/home/xslou/.local/bin:/home/xslou/bin:~/bin:~/workshop/lodestar-fork/ext/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin', 'DOCKER_SCAN_SUGGEST': 'false'}
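From the numbered build steps that follow, the first part of the Dockerfile being built can be read back roughly as (only steps 1–4 of 7 appear in this excerpt; the remaining steps are not shown):

```dockerfile
FROM gcr.io/forgerock-io/rcs/pit1:latest
USER root
ENV SCRIPT="./rcs-probe.sh"
RUN apt update && apt install curl -y
```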
Sending build context to Docker daemon 4.096kB
Step 1/7 : FROM gcr.io/forgerock-io/rcs/pit1:latest
---> 6df60fc6205c
Step 2/7 : USER root
---> Running in a4235ffe72c1
Removing intermediate container a4235ffe72c1
---> c9c7cc17a22f
Step 3/7 : ENV SCRIPT="./rcs-probe.sh"
---> Running in 2d97a6049f25
Removing intermediate container 2d97a6049f25
---> 1e053ed7aa75
Step 4/7 : RUN apt update && apt install curl -y
---> Running in dd1620a97f7f
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
Get:1 http://deb.debian.org/debian buster InRelease [122 kB]
Get:2 http://deb.debian.org/debian-security buster/updates InRelease [34.8 kB]
Get:3 http://deb.debian.org/debian buster-updates InRelease [56.6 kB]
Get:4 http://deb.debian.org/debian buster/main amd64 Packages [7909 kB]
Get:5 http://deb.debian.org/debian-security buster/updates/main amd64 Packages [433 kB]
Get:6 http://deb.debian.org/debian buster-updates/main amd64 Packages [8788 B]
Fetched 8564 kB in 2s (5630 kB/s)
Reading package lists...
Building dependency tree...
Reading state information...
3 packages can be upgraded. Run 'apt list --upgradable' to see them.
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
Reading package lists...
Building dependency tree...
Reading state information...
The following additional packages will be installed:
krb5-locales libcurl4 libgssapi-krb5-2 libk5crypto3 libkeyutils1 libkrb5-3
libkrb5support0 libldap-2.4-2 libldap-common libnghttp2-14 libpsl5 librtmp1
libsasl2-2 libsasl2-modules libsasl2-modules-db libssh2-1 publicsuffix
Suggested packages:
krb5-doc krb5-user libsasl2-modules-gssapi-mit
| libsasl2-modules-gssapi-heimdal libsasl2-modules-ldap libsasl2-modules-otp
libsasl2-modules-sql
The following NEW packages will be installed:
curl krb5-locales libcurl4 libgssapi-krb5-2 libk5crypto3 libkeyutils1
libkrb5-3 libkrb5support0 libldap-2.4-2 libldap-common libnghttp2-14 libpsl5
librtmp1 libsasl2-2 libsasl2-modules libsasl2-modules-db libssh2-1
publicsuffix
0 upgraded, 18 newly installed, 0 to remove and 3 not upgraded.
Need to get 2486 kB of archives.
After this operation, 5874 kB of additional disk space will be used.
Get:1 http://deb.debian.org/debian-security buster/updates/main amd64 krb5-locales all 1.17-3+deb10u5 [95.7 kB]
Get:2 http://deb.debian.org/debian buster/main amd64 libkeyutils1 amd64 1.6-6 [15.0 kB]
Get:3 http://deb.debian.org/debian-security buster/updates/main amd64 libkrb5support0 amd64 1.17-3+deb10u5 [66.0 kB]
Get:4 http://deb.debian.org/debian-security buster/updates/main amd64 libk5crypto3 amd64 1.17-3+deb10u5 [122 kB]
Get:5 http://deb.debian.org/debian-security buster/updates/main amd64 libkrb5-3 amd64 1.17-3+deb10u5 [369 kB]
Get:6 http://deb.debian.org/debian-security buster/updates/main amd64 libgssapi-krb5-2 amd64 1.17-3+deb10u5 [159 kB]
Get:7 http://deb.debian.org/debian buster/main amd64 libsasl2-modules-db amd64 2.1.27+dfsg-1+deb10u2 [69.2 kB]
Get:8 http://deb.debian.org/debian buster/main amd64 libsasl2-2 amd64 2.1.27+dfsg-1+deb10u2 [106 kB]
Get:9 http://deb.debian.org/debian buster/main amd64 libldap-common all 2.4.47+dfsg-3+deb10u7 [90.1 kB]
Get:10 http://deb.debian.org/debian buster/main amd64 libldap-2.4-2 amd64 2.4.47+dfsg-3+deb10u7 [224 kB]
Get:11 http://deb.debian.org/debian buster/main amd64 libnghttp2-14 amd64 1.36.0-2+deb10u1 [85.0 kB]
Get:12 http://deb.debian.org/debian buster/main amd64 libpsl5 amd64 0.20.2-2 [53.7 kB]
Get:13 http://deb.debian.org/debian buster/main amd64 librtmp1 amd64 2.4+20151223.gitfa8646d.1-2 [60.5 kB]
Get:14 http://deb.debian.org/debian buster/main amd64 libssh2-1 amd64 1.8.0-2.1 [140 kB]
Get:15 http://deb.debian.org/debian-security buster/updates/main amd64 libcurl4 amd64 7.64.0-4+deb10u4 [334 kB]
Get:16 http://deb.debian.org/debian-security buster/updates/main amd64 curl amd64 7.64.0-4+deb10u4 [265 kB]
Get:17 http://deb.debian.org/debian buster/main amd64 libsasl2-modules amd64 2.1.27+dfsg-1+deb10u2 [104 kB]
Get:18 http://deb.debian.org/debian buster/main amd64 publicsuffix all 20220811.1734-0+deb10u1 [127 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 2486 kB in 0s (30.4 MB/s)
Selecting previously unselected package krb5-locales.
(Reading database ... 6928 files and directories currently installed.)
Preparing to unpack .../00-krb5-locales_1.17-3+deb10u5_all.deb ...
Unpacking krb5-locales (1.17-3+deb10u5) ...
Selecting previously unselected package libkeyutils1:amd64.
Preparing to unpack .../01-libkeyutils1_1.6-6_amd64.deb ...
Unpacking libkeyutils1:amd64 (1.6-6) ...
Selecting previously unselected package libkrb5support0:amd64.
Preparing to unpack .../02-libkrb5support0_1.17-3+deb10u5_amd64.deb ...
Unpacking libkrb5support0:amd64 (1.17-3+deb10u5) ...
Selecting previously unselected package libk5crypto3:amd64.
Preparing to unpack .../03-libk5crypto3_1.17-3+deb10u5_amd64.deb ...
Unpacking libk5crypto3:amd64 (1.17-3+deb10u5) ...
Selecting previously unselected package libkrb5-3:amd64.
Preparing to unpack .../04-libkrb5-3_1.17-3+deb10u5_amd64.deb ...
Unpacking libkrb5-3:amd64 (1.17-3+deb10u5) ...
Selecting previously unselected package libgssapi-krb5-2:amd64.
Preparing to unpack .../05-libgssapi-krb5-2_1.17-3+deb10u5_amd64.deb ...
Unpacking libgssapi-krb5-2:amd64 (1.17-3+deb10u5) ...
Selecting previously unselected package libsasl2-modules-db:amd64.
Preparing to unpack .../06-libsasl2-modules-db_2.1.27+dfsg-1+deb10u2_amd64.deb ...
Unpacking libsasl2-modules-db:amd64 (2.1.27+dfsg-1+deb10u2) ...
Selecting previously unselected package libsasl2-2:amd64.
Preparing to unpack .../07-libsasl2-2_2.1.27+dfsg-1+deb10u2_amd64.deb ...
Unpacking libsasl2-2:amd64 (2.1.27+dfsg-1+deb10u2) ...
Selecting previously unselected package libldap-common.
Preparing to unpack .../08-libldap-common_2.4.47+dfsg-3+deb10u7_all.deb ...
Unpacking libldap-common (2.4.47+dfsg-3+deb10u7) ...
Selecting previously unselected package libldap-2.4-2:amd64.
Preparing to unpack .../09-libldap-2.4-2_2.4.47+dfsg-3+deb10u7_amd64.deb ...
Unpacking libldap-2.4-2:amd64 (2.4.47+dfsg-3+deb10u7) ...
Selecting previously unselected package libnghttp2-14:amd64.
Preparing to unpack .../10-libnghttp2-14_1.36.0-2+deb10u1_amd64.deb ...
Unpacking libnghttp2-14:amd64 (1.36.0-2+deb10u1) ...
Selecting previously unselected package libpsl5:amd64.
Preparing to unpack .../11-libpsl5_0.20.2-2_amd64.deb ...
Unpacking libpsl5:amd64 (0.20.2-2) ...
Selecting previously unselected package librtmp1:amd64.
Preparing to unpack .../12-librtmp1_2.4+20151223.gitfa8646d.1-2_amd64.deb ...
Unpacking librtmp1:amd64 (2.4+20151223.gitfa8646d.1-2) ...
Selecting previously unselected package libssh2-1:amd64.
Preparing to unpack .../13-libssh2-1_1.8.0-2.1_amd64.deb ...
Unpacking libssh2-1:amd64 (1.8.0-2.1) ...
Selecting previously unselected package libcurl4:amd64.
Preparing to unpack .../14-libcurl4_7.64.0-4+deb10u4_amd64.deb ...
Unpacking libcurl4:amd64 (7.64.0-4+deb10u4) ...
Selecting previously unselected package curl.
Preparing to unpack .../15-curl_7.64.0-4+deb10u4_amd64.deb ...
Unpacking curl (7.64.0-4+deb10u4) ...
Selecting previously unselected package libsasl2-modules:amd64.
Preparing to unpack .../16-libsasl2-modules_2.1.27+dfsg-1+deb10u2_amd64.deb ...
Unpacking libsasl2-modules:amd64 (2.1.27+dfsg-1+deb10u2) ...
Selecting previously unselected package publicsuffix.
Preparing to unpack .../17-publicsuffix_20220811.1734-0+deb10u1_all.deb ...
Unpacking publicsuffix (20220811.1734-0+deb10u1) ...
Setting up libkeyutils1:amd64 (1.6-6) ...
Setting up libpsl5:amd64 (0.20.2-2) ...
Setting up libsasl2-modules:amd64 (2.1.27+dfsg-1+deb10u2) ...
Setting up libnghttp2-14:amd64 (1.36.0-2+deb10u1) ...
Setting up krb5-locales (1.17-3+deb10u5) ...
Setting up libldap-common (2.4.47+dfsg-3+deb10u7) ...
Setting up libkrb5support0:amd64 (1.17-3+deb10u5) ...
Setting up libsasl2-modules-db:amd64 (2.1.27+dfsg-1+deb10u2) ...
Setting up librtmp1:amd64 (2.4+20151223.gitfa8646d.1-2) ...
Setting up libk5crypto3:amd64 (1.17-3+deb10u5) ...
Setting up libsasl2-2:amd64 (2.1.27+dfsg-1+deb10u2) ...
Setting up libssh2-1:amd64 (1.8.0-2.1) ...
Setting up libkrb5-3:amd64 (1.17-3+deb10u5) ...
Setting up publicsuffix (20220811.1734-0+deb10u1) ...
Setting up libldap-2.4-2:amd64 (2.4.47+dfsg-3+deb10u7) ...
Setting up libgssapi-krb5-2:amd64 (1.17-3+deb10u5) ...
Setting up libcurl4:amd64 (7.64.0-4+deb10u4) ...
Setting up curl (7.64.0-4+deb10u4) ...
Processing triggers for libc-bin (2.28-10+deb10u2) ...
Removing intermediate container dd1620a97f7f
---> 7bb6d9e7388c
Step 5/7 : WORKDIR /opt/openicf
---> Running in 257ef5980f7e
Removing intermediate container 257ef5980f7e
---> 07e1a6a4dd1a
Step 6/7 : COPY $SCRIPT .
---> 12091d11e024
Step 7/7 : RUN chmod +x $SCRIPT && chown forgerock $SCRIPT
---> Running in 33d587deb481
Removing intermediate container 33d587deb481
---> 9dff4aa4928d
Successfully built 9dff4aa4928d
Successfully tagged gcr.io/engineeringpit/lodestar-images/rcs:xlou-rcs
[run_command]: docker push gcr.io/engineeringpit/lodestar-images/rcs:xlou-rcs
The push refers to repository [gcr.io/engineeringpit/lodestar-images/rcs]
7536e20cfb41: Preparing
111673665b3a: Preparing
a5df37528161: Preparing
5e815022b9c7: Preparing
6f6866f0eced: Preparing
14155e5d1df6: Preparing
0b4bdaa4a165: Preparing
63b3cf45ece8: Preparing
14155e5d1df6: Waiting
0b4bdaa4a165: Waiting
63b3cf45ece8: Waiting
5e815022b9c7: Layer already exists
6f6866f0eced: Layer already exists
14155e5d1df6: Layer already exists
0b4bdaa4a165: Layer already exists
63b3cf45ece8: Layer already exists
111673665b3a: Pushed
7536e20cfb41: Pushed
a5df37528161: Pushed
xlou-rcs: digest: sha256:c94b9f22717ca8a42af45ada59163d7f468b8e9a50a7ab8a618c12a679e61231 size: 1998
----------------------------- Deploy rcs -----------------------------
[run_command]: skaffold deploy --status-check=true --config=/tmp/tmpluidl7bu --namespace=xlou-rcs --platform=linux/amd64
Tags used in deployment:
Starting deploy...
- configmap/rcs-deployment-config-856hdhf5k4 created
- configmap/rcsprops created
- service/rcs-service created
- deployment.apps/rcs created
- ingress.networking.k8s.io/rcs-ingress created
Waiting for deployments to stabilize...
- xlou-rcs:deployment/rcs is ready.
Deployments stabilized in 3.703 seconds
There is a new version (2.1.0) of Skaffold available. Download it from:
https://github.com/GoogleContainerTools/skaffold/releases/tag/v2.1.0
Help improve Skaffold with our 2-minute anonymous survey: run 'skaffold survey'
To help improve the quality of this product, we collect anonymized usage data; for details on what is tracked and how we use this data, visit . This data is handled in accordance with our privacy policy.
You may choose to opt out of this collection by running the following command:
skaffold config set --global collect-metrics false
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou-rcs get pods -l app=rcs -o jsonpath={.items[*].metadata.name}` | grep 1
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
rcs-5bc979fb85-hslws
--- stderr ---
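The pod-list check above pipes kubectl's JSONPath output through `awk "{print NF}"` and greps for the expected count; the pod names arrive as a single whitespace-separated string, so counting fields is equivalent to counting pods. A minimal Python sketch of that field-count logic (the function name is an illustration, not the harness's API):

```python
def count_fields(jsonpath_output: str) -> int:
    """Mimic awk '{print NF}': count whitespace-separated fields,
    i.e. the number of pod names returned by the kubectl JSONPath query."""
    return len(jsonpath_output.split())

# One rcs pod, as in the log above:
count_fields("rcs-5bc979fb85-hslws")          # -> 1
# Three ds-cts replicas would yield 3:
count_fields("ds-cts-0 ds-cts-1 ds-cts-2")    # -> 3
```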
-------------- Check pod rcs-5bc979fb85-hslws is running --------------
[loop_until]: kubectl --namespace=xlou-rcs get pods rcs-5bc979fb85-hslws -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou-rcs get pods rcs-5bc979fb85-hslws -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
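The readiness check above greps the `{.status.containerStatuses[*].ready}` JSONPath output for "true". Note that kubectl prints one flag per container, so for multi-container pods a stricter check would require every flag to be "true" rather than any. A small sketch of that stricter variant (function name assumed for illustration):

```python
def all_containers_ready(jsonpath_output: str) -> bool:
    """True only if every container in the pod reports ready.
    Input mimics kubectl's '{.status.containerStatuses[*].ready}'
    output, e.g. 'true true false' for a three-container pod."""
    flags = jsonpath_output.split()
    return bool(flags) and all(f == "true" for f in flags)

all_containers_ready("true")         # -> True  (single-container rcs pod)
all_containers_ready("true false")   # -> False (a simple grep would still match)
```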
[loop_until]: kubectl --namespace=xlou-rcs get pod rcs-5bc979fb85-hslws -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2023-02-21T00:47:47Z
--- stderr ---
------- Check pod rcs-5bc979fb85-hslws filesystem is accessible -------
[loop_until]: kubectl --namespace=xlou-rcs exec rcs-5bc979fb85-hslws -c rcs -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-11
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------ Check pod rcs-5bc979fb85-hslws restart count ------------
[loop_until]: kubectl --namespace=xlou-rcs get pod rcs-5bc979fb85-hslws -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod rcs-5bc979fb85-hslws has been restarted 0 times.
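The `[loop_until]` wrapper seen throughout this log retries a shell command at a fixed interval until it returns one of the expected return codes or the time budget is exhausted. A minimal Python sketch of that behaviour, assuming the `(max_time, interval, expected_rc)` semantics shown in the log lines (the function signature is an assumption, not the harness's actual API):

```python
import subprocess
import time


def loop_until(cmd: str, max_time: float = 180, interval: float = 5,
               expected_rc: tuple = (0,)) -> int:
    """Retry `cmd` until its return code is in expected_rc or max_time elapses."""
    deadline = time.monotonic() + max_time
    while True:
        rc = subprocess.run(cmd, shell=True, capture_output=True).returncode
        if rc in expected_rc:
            return rc
        if time.monotonic() >= deadline:
            raise TimeoutError(f"{cmd!r} did not return {expected_rc} within {max_time}s")
        time.sleep(interval)
```

For example, `loop_until('kubectl --namespace=xlou-rcs get pods ds-cts-0 ...', max_time=360, interval=5)` would keep polling for up to six minutes, matching the `(max_time=360, interval=5, expected_rc=[0]` banners above.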
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_rcs/bin/forgeops build ds-cts ds-idrepo --config-profile=ds-only --push-to gcr.io/engineeringpit/lodestar-images --tag=xlou-rcs
[run_command]: env={'HOME': '/home/xslou', 'PATH': '/mnt/disks/data/xslou/lodestar-fork/ext/bin:/home/xslou/.local/bin:/home/xslou/bin:~/bin:~/workshop/lodestar-fork/ext/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin', 'DOCKER_SCAN_SUGGEST': 'false'}
Sending build context to Docker daemon 293.4kB
Step 1/10 : FROM gcr.io/forgerock-io/ds/pit1:7.3.0-b492db3c2465a45eac7e7cbb8af094f4e03404cb
---> f71df78fcefd
Step 2/10 : USER root
---> Using cache
---> d974e47edf59
Step 3/10 : COPY debian-buster-sources.list /etc/apt/sources.list
---> Using cache
---> ce4fcbd2aa50
Step 4/10 : RUN chown -R forgerock:root /opt/opendj
---> Using cache
---> 7c26a7f5848d
Step 5/10 : USER forgerock
---> Using cache
---> d10e935a345c
Step 6/10 : COPY --chown=forgerock:root common /opt/opendj/
---> Using cache
---> bd296766967a
Step 7/10 : COPY --chown=forgerock:root cts /opt/opendj/
---> Using cache
---> c99aa3ad534d
Step 8/10 : COPY --chown=forgerock:root scripts /opt/opendj/scripts
---> Using cache
---> 46091432c5ce
Step 9/10 : ARG profile_version
---> Using cache
---> 7d678469c9a4
Step 10/10 : RUN bin/setup.sh && bin/relax-security-settings.sh && rm bin/setup.sh bin/relax-security-settings.sh
---> Using cache
---> 3e99157076d6
[Warning] One or more build-args [CONFIG_PROFILE] were not consumed
Successfully built 3e99157076d6
Successfully tagged gcr.io/engineeringpit/lodestar-images/ds-cts:xlou-rcs
The push refers to repository [gcr.io/engineeringpit/lodestar-images/ds-cts]
b1acdfea606e: Preparing
d77dae138d1c: Preparing
03c38b905ad3: Preparing
b25f9af393a0: Preparing
d474d8757c07: Preparing
60e56eb0fae5: Preparing
3346cc91698b: Preparing
5f70bf18a086: Preparing
3d7213eaa37a: Preparing
a3a748981c9a: Preparing
3fbde426b270: Preparing
d5bb1c7df85f: Preparing
ddb448a88819: Preparing
4695cdfb426a: Preparing
5f70bf18a086: Waiting
3d7213eaa37a: Waiting
a3a748981c9a: Waiting
3fbde426b270: Waiting
d5bb1c7df85f: Waiting
ddb448a88819: Waiting
4695cdfb426a: Waiting
60e56eb0fae5: Waiting
3346cc91698b: Waiting
03c38b905ad3: Layer already exists
b25f9af393a0: Layer already exists
d77dae138d1c: Layer already exists
d474d8757c07: Layer already exists
b1acdfea606e: Layer already exists
60e56eb0fae5: Layer already exists
5f70bf18a086: Layer already exists
3346cc91698b: Layer already exists
a3a748981c9a: Layer already exists
3d7213eaa37a: Layer already exists
3fbde426b270: Layer already exists
d5bb1c7df85f: Layer already exists
ddb448a88819: Layer already exists
4695cdfb426a: Layer already exists
xlou-rcs: digest: sha256:1d6266ba93f7b4e51b54d944476ff1a7b6a102bb0f30b52f1c49f3079ff5bc44 size: 3251
Sending build context to Docker daemon 293.4kB
Step 1/10 : FROM gcr.io/forgerock-io/ds/pit1:7.3.0-b492db3c2465a45eac7e7cbb8af094f4e03404cb
---> f71df78fcefd
Step 2/10 : COPY debian-buster-sources.list /etc/apt/sources.list
---> Using cache
---> 1bc560ee265d
Step 3/10 : WORKDIR /opt/opendj
---> Using cache
---> ea16b179d707
Step 4/10 : COPY --chown=forgerock:root common /opt/opendj/
---> Using cache
---> 2a7fd3877f17
Step 5/10 : COPY --chown=forgerock:root idrepo /opt/opendj/
---> Using cache
---> 879400bc305f
Step 6/10 : COPY --chown=forgerock:root scripts /opt/opendj/scripts
---> Using cache
---> 2c6b925db1cb
Step 7/10 : COPY --chown=forgerock:root uma /opt/opendj/uma
---> Using cache
---> 17842a34fd1b
Step 8/10 : COPY --chown=forgerock:root idrepo/*.ldif /var/tmp/
---> Using cache
---> 52a555642a4d
Step 9/10 : RUN chmod +w template/setup-profiles/AM/config/6.5/base-entries.ldif && cat scripts/external-am-datastore.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_audit.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_pending_requests.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_resource_set_labels.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_resource_sets.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat /var/tmp/alpha_bravo.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && chmod +w template/setup-profiles/AM/identity-store/7.0/base-entries.ldif && cat /var/tmp/orgs.ldif >> template/setup-profiles/AM/identity-store/7.0/base-entries.ldif && rm /var/tmp/*ldif
---> Using cache
---> 48cb2de10bc9
Step 10/10 : RUN bin/setup.sh && bin/relax-security-settings.sh && rm bin/setup.sh bin/relax-security-settings.sh
---> Using cache
---> 53aefd12e9ee
[Warning] One or more build-args [CONFIG_PROFILE] were not consumed
Successfully built 53aefd12e9ee
Successfully tagged gcr.io/engineeringpit/lodestar-images/ds-idrepo:xlou-rcs
The push refers to repository [gcr.io/engineeringpit/lodestar-images/ds-idrepo]
1bc8d64f5ab0: Preparing
dafef41b09f0: Preparing
7cfee2b44ff4: Preparing
236901dbde0f: Preparing
fe7ff685f93b: Preparing
b43c83e57e72: Preparing
466107172135: Preparing
f449b914c2d8: Preparing
3346cc91698b: Preparing
5f70bf18a086: Preparing
3d7213eaa37a: Preparing
a3a748981c9a: Preparing
3fbde426b270: Preparing
d5bb1c7df85f: Preparing
ddb448a88819: Preparing
4695cdfb426a: Preparing
466107172135: Waiting
f449b914c2d8: Waiting
3346cc91698b: Waiting
5f70bf18a086: Waiting
3d7213eaa37a: Waiting
a3a748981c9a: Waiting
3fbde426b270: Waiting
d5bb1c7df85f: Waiting
ddb448a88819: Waiting
4695cdfb426a: Waiting
b43c83e57e72: Waiting
fe7ff685f93b: Layer already exists
7cfee2b44ff4: Layer already exists
dafef41b09f0: Layer already exists
236901dbde0f: Layer already exists
1bc8d64f5ab0: Layer already exists
b43c83e57e72: Layer already exists
466107172135: Layer already exists
3346cc91698b: Layer already exists
f449b914c2d8: Layer already exists
5f70bf18a086: Layer already exists
3d7213eaa37a: Layer already exists
3fbde426b270: Layer already exists
ddb448a88819: Layer already exists
a3a748981c9a: Layer already exists
d5bb1c7df85f: Layer already exists
4695cdfb426a: Layer already exists
xlou-rcs: digest: sha256:e10a242e226236f88d2a666b7ee2ab1954b1284ca7aee6d7eaa1280bcf600946 size: 3662
Updated the image_defaulter with your new image for ds-cts: "gcr.io/engineeringpit/lodestar-images/ds-cts:xlou-rcs".
Updated the image_defaulter with your new image for ds-idrepo: "gcr.io/engineeringpit/lodestar-images/ds-idrepo:xlou-rcs".
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_rcs/bin/forgeops install --namespace=xlou-rcs --fqdn xlou-rcs.iam.xlou-bsln.engineeringpit.com --custom /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_rcs/kustomize/overlay/internal-profiles/ds-only --legacy base secrets
[run_command]: env={'HOME': '/home/xslou', 'PATH': '/mnt/disks/data/xslou/lodestar-fork/ext/bin:/home/xslou/.local/bin:/home/xslou/bin:~/bin:~/workshop/lodestar-fork/ext/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin', 'DOCKER_SCAN_SUGGEST': 'false'}
customresourcedefinition.apiextensions.k8s.io/secretagentconfigurations.secret-agent.secrets.forgerock.io condition met
deployment.apps/secret-agent-controller-manager condition met
NAME READY STATUS RESTARTS AGE
secret-agent-controller-manager-75c755487b-ftnr6 2/2 Running 0 9d
certificate.cert-manager.io/ds-master-cert created
certificate.cert-manager.io/ds-ssl-cert created
issuer.cert-manager.io/selfsigned-issuer created
secretagentconfiguration.secret-agent.secrets.forgerock.io/forgerock-sac created
configmap/dev-utils created
configmap/platform-config created
ingress.networking.k8s.io/forgerock created
ingress.networking.k8s.io/ig created
certificate.cert-manager.io/ds-master-cert configured
certificate.cert-manager.io/ds-ssl-cert configured
issuer.cert-manager.io/selfsigned-issuer unchanged
secretagentconfiguration.secret-agent.secrets.forgerock.io/forgerock-sac configured
Checking cert-manager and related CRDs: cert-manager CRD found in cluster.
Checking secret-agent operator and related CRDs: secret-agent CRD found in cluster.

Checking secret-agent operator is running...
secret-agent operator is running
Installing component(s): ['secrets', 'base', 'secrets'] platform: "custom-old" in namespace: "xlou-rcs".

Waiting for K8s secrets.
Waiting for secret "am-env-secrets" to exist in the cluster: done
Waiting for secret "idm-env-secrets" to exist in the cluster: ...done
Waiting for secret "ds-passwords" to exist in the cluster: done
Waiting for secret "ds-env-secrets" to exist in the cluster: done
Relevant passwords:
sYAeiAxoRpqfubIqBpZGujAu (amadmin user)
TnGs4INNRqsuvuBcvewyK9O1XFqpTmKK (uid=admin user)
8TppMU3Tmr0Ub4j3JcIegAxRhyUHEPAJ (App str svc acct (uid=am-config,ou=admins,ou=am-config))
mCI2veeithmdL8tc2rmOz5oLtomM6VcK (CTS svc acct (uid=openam_cts,ou=admins,ou=famrecords,ou=openam-session,ou=tokens))
tybZjlMFwLzFWYaqfBEovqpZ5U3sc0Tv (ID repo svc acct (uid=am-identity-bind-account,ou=admins,ou=identities))
Relevant URLs:
https://xlou-rcs.iam.xlou-bsln.engineeringpit.com/platform
https://xlou-rcs.iam.xlou-bsln.engineeringpit.com/admin
https://xlou-rcs.iam.xlou-bsln.engineeringpit.com/am
https://xlou-rcs.iam.xlou-bsln.engineeringpit.com/enduser
Enjoy your deployment!
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_rcs/bin/forgeops install --namespace=xlou-rcs --fqdn xlou-rcs.iam.xlou-bsln.engineeringpit.com --custom /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_rcs/kustomize/overlay/internal-profiles/ds-only --legacy ds
[run_command]: env={'HOME': '/home/xslou', 'PATH': '/mnt/disks/data/xslou/lodestar-fork/ext/bin:/home/xslou/.local/bin:/home/xslou/bin:~/bin:~/workshop/lodestar-fork/ext/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin', 'DOCKER_SCAN_SUGGEST': 'false'}
customresourcedefinition.apiextensions.k8s.io/secretagentconfigurations.secret-agent.secrets.forgerock.io condition met
deployment.apps/secret-agent-controller-manager condition met
NAME READY STATUS RESTARTS AGE
secret-agent-controller-manager-75c755487b-ftnr6 2/2 Running 0 9d
secret/cloud-storage-credentials-cts created
secret/cloud-storage-credentials-idrepo created
service/ds-cts created
service/ds-idrepo created
statefulset.apps/ds-cts created
statefulset.apps/ds-idrepo created
job.batch/ldif-importer created
Checking cert-manager and related CRDs: cert-manager CRD found in cluster.
Checking secret-agent operator and related CRDs: secret-agent CRD found in cluster.

Checking secret-agent operator is running...
secret-agent operator is running
Installing component(s): ['ds'] platform: "custom-old" in namespace: "xlou-rcs".

Enjoy your deployment!
[run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_rcs/bin/forgeops wait --namespace=xlou-rcs --legacy ds
[run_command]: env={'HOME': '/home/xslou', 'PATH': '/mnt/disks/data/xslou/lodestar-fork/ext/bin:/home/xslou/.local/bin:/home/xslou/bin:~/bin:~/workshop/lodestar-fork/ext/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin', 'DOCKER_SCAN_SUGGEST': 'false'}
Waiting for DS deployment.
Waiting for statefulset "ds-idrepo" to exist in the cluster: Waiting for 1 pods to be ready...
statefulset rolling update complete 1 pods at revision ds-idrepo-747777bbb5...
done
Waiting for Service Account Password Update: done
Waiting for statefulset "ds-cts" to exist in the cluster: statefulset rolling update complete 1 pods at revision ds-cts-fb4d5c7b5...
done
Waiting for Service Account Password Update: done
**************************** Initializing component pods for DS-CTS ****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-rcs get statefulsets -l app=ds-cts -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou-rcs get pods -l app=ds-cts -o jsonpath={.items[*].metadata.name}` | grep 1
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
ds-cts-0
--- stderr ---
-------------------- Check pod ds-cts-0 is running --------------------
[loop_until]: kubectl --namespace=xlou-rcs get pods ds-cts-0 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou-rcs get pods ds-cts-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou-rcs get pod ds-cts-0 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2023-02-21T00:48:41Z
--- stderr ---
------------- Check pod ds-cts-0 filesystem is accessible -------------
[loop_until]: kubectl --namespace=xlou-rcs exec ds-cts-0 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
------------------ Check pod ds-cts-0 restart count ------------------
[loop_until]: kubectl --namespace=xlou-rcs get pod ds-cts-0 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-cts-0 has been restarted 0 times.
************************** Initializing component pods for DS-IDREPO **************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-rcs get statefulsets -l app=ds-idrepo -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou-rcs get pods -l app=ds-idrepo -o jsonpath={.items[*].metadata.name}` | grep 1
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
ds-idrepo-0
--- stderr ---
------------------ Check pod ds-idrepo-0 is running ------------------
[loop_until]: kubectl --namespace=xlou-rcs get pods ds-idrepo-0 -o=jsonpath={.status.phase} | grep "Running"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Running
--- stderr ---
[loop_until]: kubectl --namespace=xlou-rcs get pods ds-idrepo-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
true
--- stderr ---
[loop_until]: kubectl --namespace=xlou-rcs get pod ds-idrepo-0 -o jsonpath={.status.startTime}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2023-02-21T00:48:42Z
--- stderr ---
----------- Check pod ds-idrepo-0 filesystem is accessible -----------
[loop_until]: kubectl --namespace=xlou-rcs exec ds-idrepo-0 -c ds -- ls / | grep "bin"
[loop_until]: (max_time=360, interval=5, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
Dockerfile.java-17
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
--- stderr ---
----------------- Check pod ds-idrepo-0 restart count -----------------
[loop_until]: kubectl --namespace=xlou-rcs get pod ds-idrepo-0 -o jsonpath={.status.containerStatuses[*].restartCount}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
0
--- stderr ---
Pod ds-idrepo-0 has been restarted 0 times.
***************************** Initializing component pods for RCS *****************************
---------------------------- Get pod list ----------------------------
[loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou-rcs get pods -l app=rcs -o jsonpath={.items[*].metadata.name}` | grep 1
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found
[loop_until]: OK (rc = 0)
--- stdout ---
rcs-5bc979fb85-hslws
--- stderr ---
***************************** Checking DS-CTS component is running *****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-rcs get statefulsets -l app=ds-cts -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
-------------- Waiting for 1 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou-rcs get statefulsets ds-cts -o jsonpath="current:{.status.currentReplicas} ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "current:1 ready:1 replicas:1"
[loop_until]: (max_time=900, interval=30, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
current:1 ready:1 replicas:1
--- stderr ---
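The statefulset readiness gate above formats three status counters into a single `current:X ready:Y replicas:Z` string and greps for the fully-ready pattern. A sketch of the equivalent parse-and-compare in Python (function name assumed for illustration; a missing counter in the kubectl output would surface as a KeyError here):

```python
def statefulset_ready(status_summary: str, expected: int) -> bool:
    """Parse a 'current:X ready:Y replicas:Z' summary and check that all
    three counters equal the expected replica count."""
    counters = dict(part.split(":") for part in status_summary.split())
    return all(int(counters[key]) == expected
               for key in ("current", "ready", "replicas"))

statefulset_ready("current:1 ready:1 replicas:1", 1)  # -> True
statefulset_ready("current:1 ready:0 replicas:1", 1)  # -> False (still rolling out)
```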
*************************** Checking DS-IDREPO component is running ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-rcs get statefulsets -l app=ds-idrepo -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
-------------- Waiting for 1 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou-rcs get statefulsets ds-idrepo -o jsonpath="current:{.status.currentReplicas} ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "current:1 ready:1 replicas:1"
[loop_until]: (max_time=900, interval=30, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
current:1 ready:1 replicas:1
--- stderr ---
****************************** Checking RCS component is running ******************************
-------------- Waiting for 1 expected pod(s) to be ready --------------
[loop_until]: kubectl --namespace=xlou-rcs get deployments rcs -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:1 replicas:1"
[loop_until]: (max_time=900, interval=30, expected_rc=[0]
[loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
[loop_until]: OK (rc = 0)
--- stdout ---
ready:1 replicas:1
--- stderr ---
**************************** Initializing component pods for DS-CTS ****************************
--------------------- Get DS-CTS software version ---------------------
[loop_until]: kubectl --namespace=xlou-rcs exec ds-cts-0 -c ds -- find /opt/opendj -name opendj-core.jar
[loop_until]: (max_time=30, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
/opt/opendj/lib/opendj-core.jar
--- stderr ---
[loop_until]: kubectl --namespace=xlou-rcs cp ds-cts-0:/opt/opendj/lib/opendj-core.jar /tmp/ds-cts_info/opendj-core.jar -c ds
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
************************** Initializing component pods for DS-IDREPO **************************
------------------- Get DS-IDREPO software version -------------------
[loop_until]: kubectl --namespace=xlou-rcs exec ds-idrepo-0 -c ds -- find /opt/opendj -name opendj-core.jar
[loop_until]: (max_time=30, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
/opt/opendj/lib/opendj-core.jar
--- stderr ---
[loop_until]: kubectl --namespace=xlou-rcs cp ds-idrepo-0:/opt/opendj/lib/opendj-core.jar /tmp/ds-idrepo_info/opendj-core.jar -c ds
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
***************************** Initializing component pods for RCS *****************************
---------------------- Get RCS software version ----------------------
[loop_until]: kubectl --namespace=xlou-rcs exec rcs-5bc979fb85-hslws -c rcs -- find /opt/openicf/lib/framework -name connector-framework-server-*
[loop_until]: (max_time=30, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
/opt/openicf/lib/framework/connector-framework-server-1.5.20.14-SNAPSHOT.jar
--- stderr ---
[loop_until]: kubectl --namespace=xlou-rcs cp rcs-5bc979fb85-hslws:/opt/openicf/lib/framework/connector-framework-server-1.5.20.14-SNAPSHOT.jar /tmp/rcs_info/connector-framework-server-1.5.20.14-SNAPSHOT.jar -c rcs
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
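The version-gathering steps above copy a known jar out of each pod with `kubectl cp`; the component version can then be read straight from the jar filename (e.g. `connector-framework-server-1.5.20.14-SNAPSHOT.jar`). A hedged sketch of that extraction, assuming the version is always a dotted numeric suffix with an optional `-SNAPSHOT` qualifier (the helper name and regex are illustrative, not the tool's actual code):

```python
import re

def version_from_jar(filename):
    """Pull the version suffix out of a jar filename such as
    connector-framework-server-1.5.20.14-SNAPSHOT.jar.
    Returns None for unversioned jars like opendj-core.jar."""
    match = re.search(r"-(\d[\w.]*(?:-SNAPSHOT)?)\.jar$", filename)
    return match.group(1) if match else None
```

Note that `opendj-core.jar` carries no version in its name, which is presumably why it is copied out whole rather than parsed.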
[loop_until]: kubectl --namespace=xlou-rcs get secret ds-passwords -o jsonpath="{.data.dirmanager\.pw}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
VG5HczRJTk5ScXN1dnVCY3Zld3lLOU8xWEZxcFRtS0s=
--- stderr ---
====================================================================================================
================ Admin password for DS-CTS is: TnGs4INNRqsuvuBcvewyK9O1XFqpTmKK ================
====================================================================================================
[loop_until]: kubectl --namespace=xlou-rcs get secret ds-passwords -o jsonpath="{.data.dirmanager\.pw}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
VG5HczRJTk5ScXN1dnVCY3Zld3lLOU8xWEZxcFRtS0s=
--- stderr ---
====================================================================================================
============== Admin password for DS-IDREPO is: TnGs4INNRqsuvuBcvewyK9O1XFqpTmKK ==============
====================================================================================================
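The `dirmanager.pw` value returned by `kubectl get secret` is base64 encoded, as all Kubernetes secret data is; the banners above print the decoded form. The decoding step can be reproduced with:

```python
import base64

# Secret data as returned by kubectl (base64 encoded).
encoded = "VG5HczRJTk5ScXN1dnVCY3Zld3lLOU8xWEZxcFRtS0s="
password = base64.b64decode(encoded).decode("ascii")
print(password)  # TnGs4INNRqsuvuBcvewyK9O1XFqpTmKK
```

Both components read the same `ds-passwords` secret here, which is why the DS-CTS and DS-IDREPO admin passwords are identical.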
*************************************** Dumping pod list ***************************************
Dumping pod list to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/platform_login_pta/pod-logs/rcs/20230221-004951-after-deployment/_pod-list.txt
**************************** Initializing component pods for DS-CTS ****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-rcs get statefulsets -l app=ds-cts -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou-rcs get pods -l app=ds-cts -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ds-cts-0
--- stderr ---
************************** Initializing component pods for DS-IDREPO **************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou-rcs get statefulsets -l app=ds-idrepo -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou-rcs get pods -l app=ds-idrepo -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
ds-idrepo-0
--- stderr ---
***************************** Initializing component pods for RCS *****************************
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou-rcs get pods -l app=rcs -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0])
[loop_until]: OK (rc = 0)
--- stdout ---
rcs-5bc979fb85-hslws
--- stderr ---
*********************************** Dumping components logs ***********************************
----------------------- Dumping logs for DS-CTS -----------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/platform_login_pta/pod-logs/rcs/20230221-004951-after-deployment/ds-cts-0.txt
Check pod logs for errors
--------------------- Dumping logs for DS-IDREPO ---------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/platform_login_pta/pod-logs/rcs/20230221-004951-after-deployment/ds-idrepo-0.txt
Check pod logs for errors
------------------------ Dumping logs for RCS ------------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/platform_login_pta/pod-logs/rcs/20230221-004951-after-deployment/rcs-5bc979fb85-hslws.txt
Check pod logs for errors
[21/Feb/2023 00:49:56] - INFO: Deployment successful
________________________________________________________________________________
[21/Feb/2023 00:49:56] Deploy_all_forgerock_components post : Post method
________________________________________________________________________________
Setting result to PASS
Task has been successfully stopped