--Task--
name: Deploy_all_forgerock_components
enabled: True
class_name: DeployComponentsTask
source_name: controller
source_namespace: >default<
target_name: controller
target_namespace: >default<
start: 0
stop: None
timeout: no timeout
loop: False
interval: None
dependencies: ['Enable_prometheus_admin_api']
wait_for: []
options: {}
group_name: None
Current dir: /mnt/disks/data/xslou/lodestar-fork/pyrock
________________________________________________________________________________
[18/May/2023 16:44:55] Deploy_all_forgerock_components pre : Initialising task parameters
________________________________________________________________________________
task will be executed on controller (localhost)
________________________________________________________________________________
[18/May/2023 16:44:55] Deploy_all_forgerock_components step1 : Deploy components
________________________________________________________________________________
******************************** Cleaning up existing namespace ********************************
------------------ Deleting secret agent controller ------------------
[loop_until]: kubectl --namespace=xlou delete sac --all
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
secretagentconfiguration.secret-agent.secrets.forgerock.io "forgerock-sac" deleted
--- stderr ---
----------------------- Deleting all resources -----------------------
[loop_until]: kubectl --namespace=xlou delete all --all --grace-period=0 --force
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
pod "admin-ui-75d697b658-xfzlw" force deleted
pod "am-7849cf7bdb-5dxkg" force deleted
pod "am-7849cf7bdb-5hhzp" force deleted
pod "am-7849cf7bdb-kj6wh" force deleted
pod "amster-zkkfs" force deleted
pod "ds-cts-0" force deleted
pod "ds-cts-1" force deleted
pod "ds-cts-2" force deleted
pod "ds-idrepo-0" force deleted
pod "ds-idrepo-1" force deleted
pod "ds-idrepo-2" force deleted
pod "end-user-ui-5cc9cc9c78-4w8z8" force deleted
pod "idm-58dc667486-kbvnp" force deleted
pod "idm-58dc667486-nw7v6" force deleted
pod "ldif-importer-jkhz4" force deleted
pod "lodemon-7ff6576895-7ljhm" force deleted
pod "login-ui-5bf9475b44-xmlpt" force deleted
pod "overseer-0-54ddf4cbd7-rrqbb" force deleted
service "admin-ui" force deleted
service "am" force deleted
service "ds-cts" force deleted
service "ds-idrepo" force deleted
service "end-user-ui" force deleted
service "idm" force deleted
service "login-ui" force deleted
service "overseer-0" force deleted
deployment.apps "admin-ui" force deleted
deployment.apps "am" force deleted
deployment.apps "end-user-ui" force deleted
deployment.apps "idm" force deleted
deployment.apps "lodemon" force deleted
deployment.apps "login-ui" force deleted
deployment.apps "overseer-0" force deleted
statefulset.apps "ds-cts" force deleted
statefulset.apps "ds-idrepo" force deleted
job.batch "amster" force deleted
job.batch "ldif-importer" force deleted
--- stderr ---
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
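For readers unfamiliar with the [loop_until] entries in this log: each one retries a shell command until its return code is in expected_rc (and, where a grep pattern is given, until that pattern appears in the output), giving up after max_time seconds and sleeping interval seconds between attempts. A minimal Python sketch of that behaviour follows; it is illustrative only, not the actual pyrock implementation, and the helper name is hypothetical.

import subprocess
import time

def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,), expected_output=None):
    """Retry `cmd` until its return code is in expected_rc (and, optionally,
    until expected_output appears in stdout/stderr), or until max_time elapses."""
    deadline = time.monotonic() + max_time
    while True:
        proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        ok = proc.returncode in expected_rc
        if ok and expected_output is not None:
            ok = expected_output in proc.stdout or expected_output in proc.stderr
        if ok:
            return proc
        if time.monotonic() >= deadline:
            raise TimeoutError(f"gave up after {max_time}s: {cmd}")
        time.sleep(interval)

# e.g. the first cleanup step above:
loop_until("kubectl --namespace=xlou delete sac --all", max_time=180, interval=5)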
[loop_until]: kubectl -n xlou get pods | grep "No resources found" [loop_until]: (max_time=360, interval=10, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - failed to find expected output: No resources found - retry [loop_until]: Function succeeded after 10s (rc=0) - failed to find expected output: No resources found - retry [loop_until]: Function succeeded after 21s (rc=0) - failed to find expected output: No resources found - retry [loop_until]: Function succeeded after 31s (rc=0) - failed to find expected output: No resources found - retry [loop_until]: Function succeeded after 42s (rc=0) - failed to find expected output: No resources found - retry [loop_until]: Function succeeded after 52s (rc=0) - failed to find expected output: No resources found - retry [loop_until]: Function succeeded after 1m 03s (rc=0) - failed to find expected output: No resources found - retry [loop_until]: Function succeeded after 1m 14s (rc=0) - failed to find expected output: No resources found - retry [loop_until]: Function succeeded after 1m 24s (rc=0) - failed to find expected output: No resources found - retry [loop_until]: Function succeeded after 1m 34s (rc=0) - failed to find expected output: No resources found - retry [loop_until]: Function succeeded after 1m 45s (rc=0) - failed to find expected output: No resources found - retry [loop_until]: Function succeeded after 1m 55s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- --- stderr --- No resources found in xlou namespace. ------------------------- Deleting configmap ------------------------- [loop_until]: kubectl --namespace=xlou get configmap -o jsonpath={.items[*].metadata.name} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- amster-files amster-retain dev-utils idm idm-logging-properties kube-root-ca.crt lodemon-config overseer-config-0 platform-config --- stderr --- [loop_until]: kubectl --namespace=xlou delete configmap amster-files --ignore-not-found [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- configmap "amster-files" deleted --- stderr --- [loop_until]: kubectl --namespace=xlou delete configmap amster-retain --ignore-not-found [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- configmap "amster-retain" deleted --- stderr --- [loop_until]: kubectl --namespace=xlou delete configmap dev-utils --ignore-not-found [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- configmap "dev-utils" deleted --- stderr --- [loop_until]: kubectl --namespace=xlou delete configmap idm --ignore-not-found [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- configmap "idm" deleted --- stderr --- [loop_until]: kubectl --namespace=xlou delete configmap idm-logging-properties --ignore-not-found [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- configmap "idm-logging-properties" deleted --- stderr --- [loop_until]: kubectl --namespace=xlou delete configmap kube-root-ca.crt --ignore-not-found [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- configmap "kube-root-ca.crt" deleted --- stderr --- [loop_until]: kubectl --namespace=xlou delete configmap lodemon-config --ignore-not-found [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- configmap 
"lodemon-config" deleted --- stderr --- [loop_until]: kubectl --namespace=xlou delete configmap overseer-config-0 --ignore-not-found [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- configmap "overseer-config-0" deleted --- stderr --- [loop_until]: kubectl --namespace=xlou delete configmap platform-config --ignore-not-found [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- configmap "platform-config" deleted --- stderr --- --------------------------- Deleting secret --------------------------- [loop_until]: kubectl --namespace=xlou get secret -o jsonpath='{.items[?(@.type=="Opaque")].metadata.name}' [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- cloud-storage-credentials-cts cloud-storage-credentials-idrepo ds truststore truststore-pem --- stderr --- [loop_until]: kubectl --namespace=xlou delete secret cloud-storage-credentials-cts --ignore-not-found [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- secret "cloud-storage-credentials-cts" deleted --- stderr --- [loop_until]: kubectl --namespace=xlou delete secret cloud-storage-credentials-idrepo --ignore-not-found [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- secret "cloud-storage-credentials-idrepo" deleted --- stderr --- [loop_until]: kubectl --namespace=xlou delete secret ds --ignore-not-found [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- secret "ds" deleted --- stderr --- [loop_until]: kubectl --namespace=xlou delete secret truststore --ignore-not-found [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- secret "truststore" deleted --- stderr --- [loop_until]: kubectl --namespace=xlou delete secret truststore-pem --ignore-not-found [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- secret "truststore-pem" deleted --- stderr --- -------------------------- Deleting ingress -------------------------- [loop_until]: kubectl --namespace=xlou get ingress -o jsonpath={.items[*].metadata.name} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- forgerock ig overseer-0 --- stderr --- [loop_until]: kubectl --namespace=xlou delete ingress forgerock --ignore-not-found [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- ingress.networking.k8s.io "forgerock" deleted --- stderr --- [loop_until]: kubectl --namespace=xlou delete ingress ig --ignore-not-found [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- ingress.networking.k8s.io "ig" deleted --- stderr --- [loop_until]: kubectl --namespace=xlou delete ingress overseer-0 --ignore-not-found [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- ingress.networking.k8s.io "overseer-0" deleted --- stderr --- ---------------------------- Deleting pvc ---------------------------- [loop_until]: kubectl --namespace=xlou get pvc -o jsonpath={.items[*].metadata.name} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- data-ds-cts-0 data-ds-cts-1 data-ds-cts-2 data-ds-idrepo-0 data-ds-idrepo-1 data-ds-idrepo-2 overseer-0 --- stderr --- [loop_until]: kubectl --namespace=xlou delete pvc data-ds-cts-0 
--ignore-not-found [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- persistentvolumeclaim "data-ds-cts-0" deleted --- stderr --- [loop_until]: kubectl --namespace=xlou delete pvc data-ds-cts-1 --ignore-not-found [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- persistentvolumeclaim "data-ds-cts-1" deleted --- stderr --- [loop_until]: kubectl --namespace=xlou delete pvc data-ds-cts-2 --ignore-not-found [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- persistentvolumeclaim "data-ds-cts-2" deleted --- stderr --- [loop_until]: kubectl --namespace=xlou delete pvc data-ds-idrepo-0 --ignore-not-found [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- persistentvolumeclaim "data-ds-idrepo-0" deleted --- stderr --- [loop_until]: kubectl --namespace=xlou delete pvc data-ds-idrepo-1 --ignore-not-found [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- persistentvolumeclaim "data-ds-idrepo-1" deleted --- stderr --- [loop_until]: kubectl --namespace=xlou delete pvc data-ds-idrepo-2 --ignore-not-found [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- persistentvolumeclaim "data-ds-idrepo-2" deleted --- stderr --- [loop_until]: kubectl --namespace=xlou delete pvc overseer-0 --ignore-not-found [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- persistentvolumeclaim "overseer-0" deleted --- stderr --- [loop_until]: kubectl --namespace=xlou delete pv ds-backup-xlou --ignore-not-found [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- --- stderr --- warning: deleting cluster-scoped resources, not scoped to the provided namespace ----------------- Deleting admin clusterrolebindings ----------------- [loop_until]: kubectl get clusterrolebinding -o jsonpath="{range .items[?(@.subjects[0].namespace=='xlou')]}{.metadata.name} {end}" [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- k8s-svc-acct-crb-xlou k8s-svc-acct-crb-xlou-0 --- stderr --- Deleting clusterrolebinding k8s-svc-acct-crb-xlou associated with xlou namespace [loop_until]: kubectl delete clusterrolebinding k8s-svc-acct-crb-xlou [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- clusterrolebinding.rbac.authorization.k8s.io "k8s-svc-acct-crb-xlou" deleted --- stderr --- Deleting clusterrolebinding k8s-svc-acct-crb-xlou-0 associated with xlou namespace [loop_until]: kubectl delete clusterrolebinding k8s-svc-acct-crb-xlou-0 [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- clusterrolebinding.rbac.authorization.k8s.io "k8s-svc-acct-crb-xlou-0" deleted --- stderr --- ------------------------- Deleting namespace ------------------------- [loop_until]: kubectl delete namespaces xlou --ignore-not-found --grace-period=0 --force [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- namespace "xlou" force deleted --- stderr --- warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
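The configmap, secret, ingress and PVC cleanup above all follow the same list-then-delete pattern: fetch the resource names with a JSONPath query, then delete each one individually with --ignore-not-found so a rerun stays idempotent. A rough self-contained sketch of that pattern (illustrative only, not pyrock's code; note the real task additionally filters secrets to type Opaque):

import subprocess

def kubectl(args, namespace="xlou"):
    """Run kubectl against the test namespace and return stripped stdout."""
    result = subprocess.run(
        ["kubectl", f"--namespace={namespace}", *args],
        check=True, capture_output=True, text=True,
    )
    return result.stdout.strip()

def delete_all(kind, jsonpath="{.items[*].metadata.name}"):
    """List resources of `kind` by name, then delete each one idempotently."""
    for name in kubectl(["get", kind, "-o", f"jsonpath={jsonpath}"]).split():
        kubectl(["delete", kind, name, "--ignore-not-found"])

for kind in ("configmap", "secret", "ingress", "pvc"):
    delete_all(kind)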
[loop_until]: awk -F" " "{print NF}" <<< `kubectl get namespace xlou --ignore-not-found` | grep 0 [loop_until]: (max_time=600, interval=10, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found [loop_until]: OK (rc = 0) --- stdout --- --- stderr --- ************************************* Creating deployment ************************************* Creating normal (forgeops) type deployment for deployment: stack ------- Custom component configuration present. Loading values ------- ------------------ Deleting secret agent controller ------------------ [loop_until]: kubectl --namespace=xlou delete sac --all [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- No resources found --- stderr --- ----------------------- Deleting all resources ----------------------- [loop_until]: kubectl --namespace=xlou delete all --all --grace-period=0 --force [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- No resources found --- stderr --- warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. [loop_until]: kubectl -n xlou get pods | grep "No resources found" [loop_until]: (max_time=360, interval=10, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- --- stderr --- No resources found in xlou namespace. ------------------------- Deleting configmap ------------------------- [loop_until]: kubectl --namespace=xlou get configmap -o jsonpath={.items[*].metadata.name} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- --- stderr --- --------------------------- Deleting secret --------------------------- [loop_until]: kubectl --namespace=xlou get secret -o jsonpath='{.items[?(@.type=="Opaque")].metadata.name}' [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- --- stderr --- -------------------------- Deleting ingress -------------------------- [loop_until]: kubectl --namespace=xlou get ingress -o jsonpath={.items[*].metadata.name} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- --- stderr --- ---------------------------- Deleting pvc ---------------------------- [loop_until]: kubectl --namespace=xlou get pvc -o jsonpath={.items[*].metadata.name} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- --- stderr --- [loop_until]: kubectl --namespace=xlou delete pv ds-backup-xlou --ignore-not-found [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- --- stderr --- warning: deleting cluster-scoped resources, not scoped to the provided namespace ----------------- Deleting admin clusterrolebindings ----------------- [loop_until]: kubectl get clusterrolebinding -o jsonpath="{range .items[?(@.subjects[0].namespace=='xlou')]}{.metadata.name} {end}" [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- --- stderr --- ------------------------- Deleting namespace ------------------------- [loop_until]: kubectl delete namespaces xlou --ignore-not-found --grace-period=0 --force [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- --- stderr --- warning: Immediate deletion does not wait for 
confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. [loop_until]: awk -F" " "{print NF}" <<< `kubectl get namespace xlou --ignore-not-found` | grep 0 [loop_until]: (max_time=600, interval=10, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found [loop_until]: OK (rc = 0) --- stdout --- --- stderr --- [loop_until]: kubectl create namespace xlou [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- namespace/xlou created --- stderr --- [loop_until]: kubectl label namespace xlou self-service=false timeout=48 [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- namespace/xlou labeled --- stderr --- ************************************ Configuring components ************************************ No custom config provided. Nothing to do. No custom features provided. Nothing to do. ---- Updating components image tag/repo from platform-images repo ---- [run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --clean [run_command]: OK (rc = 0 - expected to be in [0]) --- stdout --- [INFO] Cleaning up. [WARNING] Found nothing to clean. --- stderr --- [run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref sustaining/7.3.x --products ds [run_command]: OK (rc = 0 - expected to be in [0]) --- stdout --- [INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images. [INFO] Repo is at 9f1e91a0c2f091a0cb3351da792a9ee14e61cd82 on branch sustaining/7.3.x [INFO] Updating products ds [INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/dsutil/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.1-e49604b398e05e2a526179e932642f0a54637f2c [INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/ds-new/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.1-e49604b398e05e2a526179e932642f0a54637f2c [INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/proxy/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.1-e49604b398e05e2a526179e932642f0a54637f2c [INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.1-e49604b398e05e2a526179e932642f0a54637f2c [INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo/Dockerfile to use gcr.io/forgerock-io/ds/pit1:7.3.1-e49604b398e05e2a526179e932642f0a54637f2c --- stderr --- [run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref sustaining/7.3.x --products am [run_command]: OK (rc = 0 - expected to be in [0]) --- stdout --- [INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images. [INFO] Found existing files, attempting to not clone. 
[INFO] Repo is at 9f1e91a0c2f091a0cb3351da792a9ee14e61cd82 on branch sustaining/7.3.x [INFO] Updating products am [INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am/Dockerfile to use gcr.io/forgerock-io/am-cdk/pit1:7.3.1-2199bb185f3287050d915730f821400e00b2f8fe --- stderr --- [run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref sustaining/7.3.x --products amster [run_command]: OK (rc = 0 - expected to be in [0]) --- stdout --- [INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images. [INFO] Found existing files, attempting to not clone. [INFO] Repo is at 9f1e91a0c2f091a0cb3351da792a9ee14e61cd82 on branch sustaining/7.3.x [INFO] Updating products amster [INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster/Dockerfile to use gcr.io/forgerock-io/amster/pit1:7.3.1-2199bb185f3287050d915730f821400e00b2f8fe --- stderr --- [run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref sustaining/7.3.x --products idm [run_command]: OK (rc = 0 - expected to be in [0]) --- stdout --- [INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images. [INFO] Found existing files, attempting to not clone. [INFO] Repo is at 9f1e91a0c2f091a0cb3351da792a9ee14e61cd82 on branch sustaining/7.3.x [INFO] Updating products idm [INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm/Dockerfile to use gcr.io/forgerock-io/idm-cdk/pit1:7.3.0-ed278902ceea12882b9d14775050c5120defecb9 --- stderr --- [run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/set-images --ref sustaining/7.3.x --products ui [run_command]: OK (rc = 0 - expected to be in [0]) --- stdout --- [INFO] Setting repo up in /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/build/platform-images. [INFO] Found existing files, attempting to not clone. 
[INFO] Repo is at 9f1e91a0c2f091a0cb3351da792a9ee14e61cd82 on branch sustaining/7.3.x [INFO] Updating products ui [INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui to use gcr.io/forgerock-io/platform-admin-ui/docker-build:7.3.0-e96ad80e3d4cf78373be2e14905f8c4e4454e22d [INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui to use gcr.io/forgerock-io/platform-enduser-ui/docker-build:7.3.0-e96ad80e3d4cf78373be2e14905f8c4e4454e22d [INFO] Updating /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui to use gcr.io/forgerock-io/platform-login-ui/docker-build:7.3.0-e96ad80e3d4cf78373be2e14905f8c4e4454e22d --- stderr --- - Checking if component Dockerfile/kustomize needs additional update - [run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker ds cts [run_command]: OK (rc = 0 - expected to be in [0]) --- stdout --- /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts --- stderr --- Checking if ds-cts dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution) Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts/Dockerfile: gcr.io/forgerock-io/ds/pit1:7.3.1-e49604b398e05e2a526179e932642f0a54637f2c No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/cts/Dockerfile [run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker ds idrepo [run_command]: OK (rc = 0 - expected to be in [0]) --- stdout --- /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo --- stderr --- Checking if ds-idrepo dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution) Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo/Dockerfile: gcr.io/forgerock-io/ds/pit1:7.3.1-e49604b398e05e2a526179e932642f0a54637f2c No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/ds/idrepo/Dockerfile [run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker am [run_command]: OK (rc = 0 - expected to be in [0]) --- stdout --- /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am --- stderr --- Checking if am dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution) Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am/Dockerfile: gcr.io/forgerock-io/am-cdk/pit1:7.3.1-2199bb185f3287050d915730f821400e00b2f8fe No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/am/Dockerfile [run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker amster [run_command]: OK (rc = 0 - expected to be in [0]) --- stdout --- /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster --- stderr --- Checking if amster dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution) Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster/Dockerfile: 
gcr.io/forgerock-io/amster/pit1:7.3.1-2199bb185f3287050d915730f821400e00b2f8fe No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/amster/Dockerfile [run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path docker idm [run_command]: OK (rc = 0 - expected to be in [0]) --- stdout --- /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm --- stderr --- Checking if idm dockerfile /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm/Dockerfile needs additional update (custom tag/repo from config.yaml or image name resolution) Read FROM line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm/Dockerfile: gcr.io/forgerock-io/idm-cdk/pit1:7.3.0-ed278902ceea12882b9d14775050c5120defecb9 No need to update FROM line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/docker/idm/Dockerfile [run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path kustomize base end-user-ui [run_command]: OK (rc = 0 - expected to be in [0]) --- stdout --- /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui --- stderr --- Checking if end-user-ui kustomize file /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui/kustomization.yaml needs additional update (custom tag/repo from config.yaml or image name resolution) Read image line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui/kustomization.yaml: gcr.io/forgerock-io/platform-enduser-ui/docker-build:7.3.0-e96ad80e3d4cf78373be2e14905f8c4e4454e22d No need to update the image line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/end-user-ui/kustomization.yaml [run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path kustomize base login-ui [run_command]: OK (rc = 0 - expected to be in [0]) --- stdout --- /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui --- stderr --- Checking if login-ui kustomize file /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui/kustomization.yaml needs additional update (custom tag/repo from config.yaml or image name resolution) Read image line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui/kustomization.yaml: gcr.io/forgerock-io/platform-login-ui/docker-build:7.3.0-e96ad80e3d4cf78373be2e14905f8c4e4454e22d No need to update the image line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/login-ui/kustomization.yaml [run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path kustomize base admin-ui [run_command]: OK (rc = 0 - expected to be in [0]) --- stdout --- /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui --- stderr --- Checking if admin-ui kustomize file /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui/kustomization.yaml needs additional update (custom tag/repo from config.yaml or image name resolution) Read image line from /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui/kustomization.yaml: gcr.io/forgerock-io/platform-admin-ui/docker-build:7.3.0-e96ad80e3d4cf78373be2e14905f8c4e4454e22d No need to update the image line for /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/base/admin-ui/kustomization.yaml [run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/config path kustomize 
overlay medium [run_command]: OK (rc = 0 - expected to be in [0]) --- stdout --- /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/overlay/medium --- stderr --- [loop_until]: kubectl --namespace=xlou delete -f /tmp/tmpgvba2lag [loop_until]: (max_time=180, interval=5, expected_rc=[0, 1] [loop_until]: OK (rc = 1) --- stdout --- --- stderr --- Error from server (NotFound): error when deleting "/tmp/tmpgvba2lag": secrets "sslcert" not found [loop_until]: kubectl --namespace=xlou apply -f /tmp/tmpgvba2lag [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- secret/sslcert created --- stderr --- The following components will be deployed: - ds-cts (DS) - ds-idrepo (DS) - am (AM) - amster (Amster) - idm (IDM) - end-user-ui (EndUserUi) - login-ui (LoginUi) - admin-ui (AdminUi) [run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/forgeops build all --config-profile=cdk --push-to gcr.io/engineeringpit/lodestar-images --tag=xlou [run_command]: env={'HOME': '/home/xslou', 'PATH': '/mnt/disks/data/xslou/lodestar-fork/ext/bin:/home/xslou/.local/bin:/home/xslou/bin:~/bin:~/workshop/lodestar-fork/ext/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin', 'DOCKER_SCAN_SUGGEST': 'false'} The push refers to repository [gcr.io/engineeringpit/lodestar-images/am] 5f70bf18a086: Preparing 3f97014c18d1: Preparing dc257f9adc01: Preparing 5f70bf18a086: Preparing 523d5e3623d8: Preparing 268480f6d22b: Preparing e17447a559f1: Preparing 99b6517dba47: Preparing 1c9096e9ec5e: Preparing 0ddd94d60f0d: Preparing 5cbd2cbda272: Preparing 5c6ae26aafd8: Preparing dbff58cb3ef2: Preparing be0e6014e150: Preparing 23717ad40607: Preparing 663f79a0b679: Preparing 4727058900f7: Preparing 3c440e9f6c08: Preparing 19256c5292c6: Preparing 7daa752a9117: Preparing 4a037d502638: Preparing ae09cda9eeec: Preparing a4cace215362: Preparing ddc861348ed2: Preparing 1ef0ca4f4f0e: Preparing 790f83844c03: Preparing 21491f979be2: Preparing a808c8d1e663: Preparing ab9c297d25c9: Preparing 141711e8ec34: Preparing fd64773fde86: Preparing 15e4b3240a56: Preparing 12c7083a16b4: Preparing a3c06f196aa3: Preparing d20ece5534b3: Preparing 5049b77a7211: Preparing c45c46033aa8: Preparing b8a36d10656a: Preparing e17447a559f1: Waiting 99b6517dba47: Waiting 1c9096e9ec5e: Waiting 0ddd94d60f0d: Waiting 5cbd2cbda272: Waiting 5c6ae26aafd8: Waiting dbff58cb3ef2: Waiting be0e6014e150: Waiting 23717ad40607: Waiting 663f79a0b679: Waiting 4727058900f7: Waiting 3c440e9f6c08: Waiting 19256c5292c6: Waiting 7daa752a9117: Waiting 4a037d502638: Waiting ae09cda9eeec: Waiting a4cace215362: Waiting ddc861348ed2: Waiting 1ef0ca4f4f0e: Waiting 790f83844c03: Waiting 21491f979be2: Waiting a808c8d1e663: Waiting ab9c297d25c9: Waiting 141711e8ec34: Waiting fd64773fde86: Waiting 15e4b3240a56: Waiting 12c7083a16b4: Waiting a3c06f196aa3: Waiting d20ece5534b3: Waiting 5049b77a7211: Waiting c45c46033aa8: Waiting b8a36d10656a: Waiting 523d5e3623d8: Layer already exists 5f70bf18a086: Layer already exists 268480f6d22b: Layer already exists 1c9096e9ec5e: Layer already exists 99b6517dba47: Layer already exists e17447a559f1: Layer already exists 0ddd94d60f0d: Layer already exists 5c6ae26aafd8: Layer already exists 5cbd2cbda272: Layer already exists dbff58cb3ef2: Layer already exists be0e6014e150: Layer already exists 23717ad40607: Layer already exists 663f79a0b679: Layer already exists 4727058900f7: Layer already exists 3c440e9f6c08: Layer already exists 19256c5292c6: Layer 
already exists 7daa752a9117: Layer already exists 4a037d502638: Layer already exists ae09cda9eeec: Layer already exists ddc861348ed2: Layer already exists a4cace215362: Layer already exists 1ef0ca4f4f0e: Layer already exists 790f83844c03: Layer already exists 21491f979be2: Layer already exists ab9c297d25c9: Layer already exists a808c8d1e663: Layer already exists 141711e8ec34: Layer already exists fd64773fde86: Layer already exists 15e4b3240a56: Layer already exists 12c7083a16b4: Layer already exists a3c06f196aa3: Layer already exists d20ece5534b3: Layer already exists 5049b77a7211: Layer already exists c45c46033aa8: Layer already exists b8a36d10656a: Layer already exists dc257f9adc01: Pushed 3f97014c18d1: Pushed xlou: digest: sha256:d6f0735f48fed35cb18fae0298c8d9a9283657bde26c5e0d0bdaba38c2303c83 size: 8255 The push refers to repository [gcr.io/engineeringpit/lodestar-images/idm] 7dec10041425: Preparing b2f4cb4b6ac8: Preparing c1494a331c26: Preparing 49920ff8904a: Preparing 4e0e9bca7040: Preparing 5f70bf18a086: Preparing 38fe551f2463: Preparing 68884240a8b2: Preparing 5f70bf18a086: Waiting 38fe551f2463: Waiting 1ae67d4978a1: Preparing 68884240a8b2: Waiting 1ae67d4978a1: Waiting 8d20f850e8e0: Preparing 09308431d152: Preparing fd535567db7f: Preparing 8d20f850e8e0: Waiting 09308431d152: Waiting c9182c130984: Preparing fd535567db7f: Waiting c9182c130984: Waiting 7dec10041425: Pushed b2f4cb4b6ac8: Pushed c1494a331c26: Pushed 4e0e9bca7040: Pushed 5f70bf18a086: Layer already exists 49920ff8904a: Pushed 68884240a8b2: Layer already exists 38fe551f2463: Layer already exists 8d20f850e8e0: Layer already exists 1ae67d4978a1: Layer already exists 09308431d152: Layer already exists fd535567db7f: Layer already exists c9182c130984: Layer already exists xlou: digest: sha256:7faaf37e641fd1cf2989bf13b10d2510429ed433228c2fd475cffa8e411e6d58 size: 3033 The push refers to repository [gcr.io/engineeringpit/lodestar-images/ds] e82c532d94ae: Preparing aa18f9e60abc: Preparing f615b4197b38: Preparing 7f10273d069c: Preparing 48f581d44401: Preparing 8ff497f397d2: Preparing 5f70bf18a086: Preparing f3430397c16e: Preparing 64be7cf3f0e2: Preparing 6ddc7a0fef96: Preparing b463b6a867c7: Preparing ae45309184d4: Preparing 8553b91047da: Preparing 8ff497f397d2: Waiting 5f70bf18a086: Waiting f3430397c16e: Waiting 64be7cf3f0e2: Waiting 6ddc7a0fef96: Waiting b463b6a867c7: Waiting ae45309184d4: Waiting 8553b91047da: Waiting aa18f9e60abc: Pushed f615b4197b38: Pushed 5f70bf18a086: Layer already exists 8ff497f397d2: Layer already exists 7f10273d069c: Pushed f3430397c16e: Layer already exists 64be7cf3f0e2: Layer already exists 6ddc7a0fef96: Layer already exists ae45309184d4: Layer already exists b463b6a867c7: Layer already exists 8553b91047da: Layer already exists e82c532d94ae: Pushed 48f581d44401: Pushed xlou: digest: sha256:67c624e9ea7b34cce6db9047818c9bc07868323e5b963333950346113340bd7b size: 3046 The push refers to repository [gcr.io/engineeringpit/lodestar-images/ds-idrepo] 9228a3bf8326: Preparing 067e3b73a900: Preparing f63fc328d48b: Preparing c1b3b8ac0420: Preparing 9e0e4f858006: Preparing f993b8b80d0a: Preparing 9b785ce61580: Preparing 5f70bf18a086: Preparing a177d496af94: Preparing 8ff497f397d2: Preparing 5f70bf18a086: Preparing f3430397c16e: Preparing 64be7cf3f0e2: Preparing 6ddc7a0fef96: Preparing b463b6a867c7: Preparing f993b8b80d0a: Waiting 9b785ce61580: Waiting 5f70bf18a086: Waiting a177d496af94: Waiting 8ff497f397d2: Waiting f3430397c16e: Waiting 64be7cf3f0e2: Waiting 6ddc7a0fef96: Waiting b463b6a867c7: Waiting 
ae45309184d4: Preparing 8553b91047da: Preparing 8553b91047da: Waiting ae45309184d4: Waiting 067e3b73a900: Pushed c1b3b8ac0420: Pushed 9228a3bf8326: Pushed 9e0e4f858006: Pushed f63fc328d48b: Pushed 5f70bf18a086: Layer already exists f3430397c16e: Layer already exists 8ff497f397d2: Layer already exists 64be7cf3f0e2: Layer already exists 6ddc7a0fef96: Layer already exists b463b6a867c7: Layer already exists ae45309184d4: Layer already exists 8553b91047da: Layer already exists 9b785ce61580: Pushed f993b8b80d0a: Pushed a177d496af94: Pushed xlou: digest: sha256:3e36e587bd789a669d40e361caea544c0023fc13f2dfd669e110d7f842cd4d63 size: 3868 The push refers to repository [gcr.io/engineeringpit/lodestar-images/ds-cts] 89b0a02917d9: Preparing fe41464a8a13: Preparing 05a862bb9e31: Preparing 9b785ce61580: Preparing 8951dedf5b3a: Preparing a177d496af94: Preparing 8ff497f397d2: Preparing 5f70bf18a086: Preparing f3430397c16e: Preparing a177d496af94: Waiting 8ff497f397d2: Waiting 5f70bf18a086: Waiting 64be7cf3f0e2: Preparing 6ddc7a0fef96: Preparing b463b6a867c7: Preparing f3430397c16e: Waiting 64be7cf3f0e2: Waiting 6ddc7a0fef96: Waiting b463b6a867c7: Waiting ae45309184d4: Preparing 8553b91047da: Preparing ae45309184d4: Waiting 8553b91047da: Waiting 9b785ce61580: Layer already exists a177d496af94: Layer already exists 8ff497f397d2: Layer already exists 5f70bf18a086: Layer already exists f3430397c16e: Layer already exists 64be7cf3f0e2: Layer already exists 6ddc7a0fef96: Layer already exists b463b6a867c7: Layer already exists ae45309184d4: Layer already exists 8553b91047da: Layer already exists fe41464a8a13: Pushed 05a862bb9e31: Pushed 89b0a02917d9: Pushed 8951dedf5b3a: Pushed xlou: digest: sha256:6b231abc26a026df67ee4786c023696c74cc9ebb0d0896e65435a63e950a7dc3 size: 3251 The push refers to repository [gcr.io/engineeringpit/lodestar-images/ig] f32ddc8d674f: Preparing e1e64a249e0a: Preparing 74eeec92b392: Preparing 10a523df8618: Preparing 4fb17506c7d6: Preparing 5696f243e6cc: Preparing 964c1eecc7f5: Preparing ab8038891451: Preparing c6f8bfcecf05: Preparing 315cd8c5da97: Preparing d456513ae67c: Preparing 67a4178b7d47: Preparing 964c1eecc7f5: Waiting ab8038891451: Waiting c6f8bfcecf05: Waiting 315cd8c5da97: Waiting d456513ae67c: Waiting 67a4178b7d47: Waiting 5696f243e6cc: Waiting 4fb17506c7d6: Layer already exists 5696f243e6cc: Layer already exists 964c1eecc7f5: Layer already exists ab8038891451: Layer already exists c6f8bfcecf05: Layer already exists 315cd8c5da97: Layer already exists d456513ae67c: Layer already exists 67a4178b7d47: Layer already exists f32ddc8d674f: Pushed 10a523df8618: Pushed e1e64a249e0a: Pushed 74eeec92b392: Pushed xlou: digest: sha256:c7c6bcae94d0bd089c6ed0efc947db82c1c2cee379d1539af9ec805dea9bb1d8 size: 2827 Updated the image_defaulter with your new image for am: "gcr.io/engineeringpit/lodestar-images/am:xlou". Updated the image_defaulter with your new image for idm: "gcr.io/engineeringpit/lodestar-images/idm:xlou". Updated the image_defaulter with your new image for ds: "gcr.io/engineeringpit/lodestar-images/ds:xlou". Updated the image_defaulter with your new image for ds-idrepo: "gcr.io/engineeringpit/lodestar-images/ds-idrepo:xlou". Updated the image_defaulter with your new image for ds-cts: "gcr.io/engineeringpit/lodestar-images/ds-cts:xlou". Updated the image_defaulter with your new image for ig: "gcr.io/engineeringpit/lodestar-images/ig:xlou". 
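One way to sanity-check the push phase above, should the digests need confirming later, is to compare each locally tagged image's RepoDigests entry against the sha256 digests reported in the push log. This is a purely illustrative sketch using standard docker inspect, not part of the task itself; registry, tag and component names are taken from the log above.

import json
import subprocess

REGISTRY = "gcr.io/engineeringpit/lodestar-images"
TAG = "xlou"
COMPONENTS = ["am", "idm", "ds", "ds-idrepo", "ds-cts", "ig"]

for component in COMPONENTS:
    image = f"{REGISTRY}/{component}:{TAG}"
    raw = subprocess.run(
        ["docker", "inspect", "--format", "{{json .RepoDigests}}", image],
        check=True, capture_output=True, text=True,
    ).stdout
    # Keep only digests that point at the push registry used above.
    digests = [d for d in json.loads(raw) if d.startswith(REGISTRY)]
    print(component, "->", digests[0] if digests else "no digest for this registry")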
#1 [internal] load .dockerignore #1 transferring context: 2B done #1 DONE 0.0s #2 [internal] load build definition from Dockerfile #2 transferring dockerfile: 489B done #2 DONE 0.0s #3 [internal] load metadata for gcr.io/forgerock-io/am-cdk/pit1:7.3.1-2199bb185f3287050d915730f821400e00b2f8fe #3 DONE 1.1s #4 [internal] load build context #4 transferring context: 1.13kB done #4 DONE 0.0s #5 [1/5] FROM gcr.io/forgerock-io/am-cdk/pit1:7.3.1-2199bb185f3287050d915730f821400e00b2f8fe@sha256:6d248240fe003cbc7f47058e16c5da6c1ebdf47303b23368c53b385f7fce5d29 #5 resolve gcr.io/forgerock-io/am-cdk/pit1:7.3.1-2199bb185f3287050d915730f821400e00b2f8fe@sha256:6d248240fe003cbc7f47058e16c5da6c1ebdf47303b23368c53b385f7fce5d29 done #5 sha256:1bc677758ad7fa4503417ae5be18809c5a8679b5b36fcd1464d5a8e41cb13305 0B / 30.43MB 0.1s #5 sha256:5e416359c20e796138006a21e5106cafacb040e4a290d6582c3991967341e2d5 24.38kB / 24.38kB done #5 sha256:6d248240fe003cbc7f47058e16c5da6c1ebdf47303b23368c53b385f7fce5d29 7.43kB / 7.43kB done #5 sha256:458b02b5411a07f3b354cde2b461caffc1bb184a3413b5736a9e67ee87cb28b2 0B / 12.50MB 0.1s #5 sha256:47b2ae38d777aa426d64784e1e39103a1ee45b03af552058a8df3422b0dcfa7f 0B / 198.55MB 0.1s #5 sha256:1bc677758ad7fa4503417ae5be18809c5a8679b5b36fcd1464d5a8e41cb13305 7.34MB / 30.43MB 0.3s #5 sha256:458b02b5411a07f3b354cde2b461caffc1bb184a3413b5736a9e67ee87cb28b2 7.34MB / 12.50MB 0.3s #5 sha256:1bc677758ad7fa4503417ae5be18809c5a8679b5b36fcd1464d5a8e41cb13305 17.53MB / 30.43MB 0.4s #5 sha256:458b02b5411a07f3b354cde2b461caffc1bb184a3413b5736a9e67ee87cb28b2 12.50MB / 12.50MB 0.4s #5 sha256:1bc677758ad7fa4503417ae5be18809c5a8679b5b36fcd1464d5a8e41cb13305 28.31MB / 30.43MB 0.5s #5 sha256:458b02b5411a07f3b354cde2b461caffc1bb184a3413b5736a9e67ee87cb28b2 12.50MB / 12.50MB 0.4s done #5 sha256:47b2ae38d777aa426d64784e1e39103a1ee45b03af552058a8df3422b0dcfa7f 15.73MB / 198.55MB 0.5s #5 sha256:00c7d406486b30dd399da8c9274b671589aa514d03cd6d60a6bac92f34b8ad17 0B / 173B 0.5s #5 sha256:1bc677758ad7fa4503417ae5be18809c5a8679b5b36fcd1464d5a8e41cb13305 30.43MB / 30.43MB 0.6s done #5 sha256:47b2ae38d777aa426d64784e1e39103a1ee45b03af552058a8df3422b0dcfa7f 30.58MB / 198.55MB 0.6s #5 sha256:459bad8f72529f7efdb39da0b1a49a6bda0284746cd389597a5137d69e935410 0B / 171B 0.6s #5 sha256:47b2ae38d777aa426d64784e1e39103a1ee45b03af552058a8df3422b0dcfa7f 46.14MB / 198.55MB 0.8s #5 sha256:00c7d406486b30dd399da8c9274b671589aa514d03cd6d60a6bac92f34b8ad17 173B / 173B 0.6s done #5 sha256:459bad8f72529f7efdb39da0b1a49a6bda0284746cd389597a5137d69e935410 171B / 171B 0.8s done #5 sha256:4f4ecc4c8313b5fad7878839867473953fc1a04c14b516d43e65e67a4142b16d 0B / 12.71MB 0.8s #5 sha256:26b2279a7737ba8c0c98ba8b443532591930d8455ea487c7b75dd9160999d38d 0B / 130B 0.8s #5 sha256:47b2ae38d777aa426d64784e1e39103a1ee45b03af552058a8df3422b0dcfa7f 58.72MB / 198.55MB 1.0s #5 sha256:4f4ecc4c8313b5fad7878839867473953fc1a04c14b516d43e65e67a4142b16d 5.24MB / 12.71MB 1.0s #5 extracting sha256:1bc677758ad7fa4503417ae5be18809c5a8679b5b36fcd1464d5a8e41cb13305 #5 sha256:4f4ecc4c8313b5fad7878839867473953fc1a04c14b516d43e65e67a4142b16d 12.71MB / 12.71MB 1.1s #5 sha256:26b2279a7737ba8c0c98ba8b443532591930d8455ea487c7b75dd9160999d38d 130B / 130B 1.0s done #5 sha256:5084cc66430d96e09d4353470044d1629e38fc5620a48785a0c27f5980f81644 0B / 7.68kB 1.1s #5 sha256:47b2ae38d777aa426d64784e1e39103a1ee45b03af552058a8df3422b0dcfa7f 69.21MB / 198.55MB 1.2s #5 sha256:4f4ecc4c8313b5fad7878839867473953fc1a04c14b516d43e65e67a4142b16d 12.71MB / 12.71MB 1.1s done #5 
sha256:71c69704f1a68a449ca510ba5c2ae5f28af8c1c756129bb96445b3850d25a920 0B / 29.48MB 1.2s #5 sha256:47b2ae38d777aa426d64784e1e39103a1ee45b03af552058a8df3422b0dcfa7f 84.26MB / 198.55MB 1.4s #5 sha256:5084cc66430d96e09d4353470044d1629e38fc5620a48785a0c27f5980f81644 7.68kB / 7.68kB 1.4s done #5 sha256:7ee4efcdd3d57330afbfbfc96f28827d7cf4bd265184eae43bdc89ab325454b4 0B / 12.24MB 1.4s #5 sha256:47b2ae38d777aa426d64784e1e39103a1ee45b03af552058a8df3422b0dcfa7f 97.77MB / 198.55MB 1.6s #5 sha256:47b2ae38d777aa426d64784e1e39103a1ee45b03af552058a8df3422b0dcfa7f 109.19MB / 198.55MB 1.8s #5 sha256:71c69704f1a68a449ca510ba5c2ae5f28af8c1c756129bb96445b3850d25a920 3.81MB / 29.48MB 1.8s #5 sha256:71c69704f1a68a449ca510ba5c2ae5f28af8c1c756129bb96445b3850d25a920 8.39MB / 29.48MB 1.9s #5 sha256:47b2ae38d777aa426d64784e1e39103a1ee45b03af552058a8df3422b0dcfa7f 119.54MB / 198.55MB 2.0s #5 sha256:7ee4efcdd3d57330afbfbfc96f28827d7cf4bd265184eae43bdc89ab325454b4 5.24MB / 12.24MB 2.0s #5 sha256:71c69704f1a68a449ca510ba5c2ae5f28af8c1c756129bb96445b3850d25a920 12.58MB / 29.48MB 2.1s #5 sha256:7ee4efcdd3d57330afbfbfc96f28827d7cf4bd265184eae43bdc89ab325454b4 10.49MB / 12.24MB 2.1s #5 sha256:47b2ae38d777aa426d64784e1e39103a1ee45b03af552058a8df3422b0dcfa7f 131.07MB / 198.55MB 2.2s #5 sha256:71c69704f1a68a449ca510ba5c2ae5f28af8c1c756129bb96445b3850d25a920 15.40MB / 29.48MB 2.2s #5 sha256:7ee4efcdd3d57330afbfbfc96f28827d7cf4bd265184eae43bdc89ab325454b4 12.24MB / 12.24MB 2.2s done #5 sha256:0d2502a4f060ef202f612161fede1cd6d41988086880f341c2b67a838518cbcc 0B / 115.63kB 2.2s #5 sha256:47b2ae38d777aa426d64784e1e39103a1ee45b03af552058a8df3422b0dcfa7f 142.85MB / 198.55MB 2.4s #5 sha256:71c69704f1a68a449ca510ba5c2ae5f28af8c1c756129bb96445b3850d25a920 24.76MB / 29.48MB 2.4s #5 sha256:71c69704f1a68a449ca510ba5c2ae5f28af8c1c756129bb96445b3850d25a920 27.89MB / 29.48MB 2.5s #5 sha256:0d2502a4f060ef202f612161fede1cd6d41988086880f341c2b67a838518cbcc 115.63kB / 115.63kB 2.4s done #5 sha256:c4fafe308f97612f3a9687f3ec0f674618e86531195f0fc47d25a6e43ff9c773 0B / 203B 2.5s #5 sha256:71c69704f1a68a449ca510ba5c2ae5f28af8c1c756129bb96445b3850d25a920 29.48MB / 29.48MB 2.6s #5 sha256:47b2ae38d777aa426d64784e1e39103a1ee45b03af552058a8df3422b0dcfa7f 166.09MB / 198.55MB 2.8s #5 sha256:71c69704f1a68a449ca510ba5c2ae5f28af8c1c756129bb96445b3850d25a920 29.48MB / 29.48MB 2.6s done #5 sha256:c4fafe308f97612f3a9687f3ec0f674618e86531195f0fc47d25a6e43ff9c773 203B / 203B 2.8s done #5 sha256:82360b4919a737a7e342f51e6440dd50c0fab13b6aeee4a23b82f2840d1df7cf 0B / 172B 2.8s #5 sha256:57bb401ddab313bb680ae6cb45551d6dbcb24ece00d0a6372a59b2ae915f4df9 0B / 4.41MB 2.8s #5 sha256:47b2ae38d777aa426d64784e1e39103a1ee45b03af552058a8df3422b0dcfa7f 177.21MB / 198.55MB 3.0s #5 sha256:82360b4919a737a7e342f51e6440dd50c0fab13b6aeee4a23b82f2840d1df7cf 172B / 172B 2.9s done #5 sha256:2ecc0108a184c1afedf60e973b281e64bc04b49cb6498633aac06444176938ba 0B / 106.45kB 3.0s #5 sha256:47b2ae38d777aa426d64784e1e39103a1ee45b03af552058a8df3422b0dcfa7f 195.50MB / 198.55MB 3.3s #5 sha256:57bb401ddab313bb680ae6cb45551d6dbcb24ece00d0a6372a59b2ae915f4df9 4.41MB / 4.41MB 3.3s done #5 sha256:3050d07b12d59ac088dbaedda82c7f488584ccbcf9dcb7e5c9ea392faa76d98d 0B / 229.92MB 3.3s #5 sha256:2ecc0108a184c1afedf60e973b281e64bc04b49cb6498633aac06444176938ba 106.45kB / 106.45kB 3.3s done #5 sha256:5ebf33929c3db777dc8b1edee0d57ae85594af1a35d41ededf597d7a423f8307 0B / 2.08kB 3.5s #5 extracting sha256:1bc677758ad7fa4503417ae5be18809c5a8679b5b36fcd1464d5a8e41cb13305 2.9s done #5 
sha256:5ebf33929c3db777dc8b1edee0d57ae85594af1a35d41ededf597d7a423f8307 2.08kB / 2.08kB 4.0s #5 sha256:47b2ae38d777aa426d64784e1e39103a1ee45b03af552058a8df3422b0dcfa7f 198.55MB / 198.55MB 4.1s done #5 sha256:5ebf33929c3db777dc8b1edee0d57ae85594af1a35d41ededf597d7a423f8307 2.08kB / 2.08kB 4.1s done #5 extracting sha256:458b02b5411a07f3b354cde2b461caffc1bb184a3413b5736a9e67ee87cb28b2 #5 sha256:4b447a5854798bac0e7b547e52f0e987aced881d3405c414e8ac07aa51584dfc 0B / 462B 4.2s #5 sha256:1071ee62813c9df48237ec9186e135bf0fcb7b170c0b57302a8a639a329b7680 0B / 819B 4.2s #5 sha256:4b447a5854798bac0e7b547e52f0e987aced881d3405c414e8ac07aa51584dfc 462B / 462B 4.5s done #5 sha256:1071ee62813c9df48237ec9186e135bf0fcb7b170c0b57302a8a639a329b7680 819B / 819B 4.4s done #5 sha256:9b9a3d3e7311d9895b69beb67208dc6e5f454678a0213d89fc255230d5f1c41c 0B / 450.54kB 4.6s #5 sha256:bff4d746ae990c48e3b00bdc157a476b4594597dd180af22789613be23ec2217 0B / 210B 4.6s #5 sha256:3050d07b12d59ac088dbaedda82c7f488584ccbcf9dcb7e5c9ea392faa76d98d 15.79MB / 229.92MB 4.7s #5 sha256:9b9a3d3e7311d9895b69beb67208dc6e5f454678a0213d89fc255230d5f1c41c 450.54kB / 450.54kB 4.8s done #5 sha256:bff4d746ae990c48e3b00bdc157a476b4594597dd180af22789613be23ec2217 210B / 210B 4.8s done #5 sha256:587f8a2f392482835ce50876dd7512118b4b3b6b5137a19a60518a840d0e9b90 0B / 81.02kB 4.9s #5 sha256:2809dea9ee74b544ea1cd635df0e5ae4c57e2f4661dee1963dbcd91642fcb79a 0B / 79.13kB 4.9s #5 sha256:3050d07b12d59ac088dbaedda82c7f488584ccbcf9dcb7e5c9ea392faa76d98d 31.46MB / 229.92MB 5.0s #5 sha256:2809dea9ee74b544ea1cd635df0e5ae4c57e2f4661dee1963dbcd91642fcb79a 79.13kB / 79.13kB 5.2s done #5 sha256:73f4e381cfba2097b496a912843af387139d2f7944210864bf3a0cc64ddd3766 0B / 19.70MB 5.2s #5 sha256:3050d07b12d59ac088dbaedda82c7f488584ccbcf9dcb7e5c9ea392faa76d98d 46.47MB / 229.92MB 5.3s #5 sha256:587f8a2f392482835ce50876dd7512118b4b3b6b5137a19a60518a840d0e9b90 81.02kB / 81.02kB 5.2s done #5 sha256:4fe0c6aa60c09779528fc3ab1f466fc3982f2e5fe0803b48fdaee67908ce8d28 0B / 2.56kB 5.4s #5 sha256:3050d07b12d59ac088dbaedda82c7f488584ccbcf9dcb7e5c9ea392faa76d98d 63.13MB / 229.92MB 5.6s #5 sha256:4fe0c6aa60c09779528fc3ab1f466fc3982f2e5fe0803b48fdaee67908ce8d28 2.56kB / 2.56kB 5.5s done #5 sha256:9625948ebaed29455d7a37fc504f4b52fed7f9aad4e2354389b923d45c1181d8 0B / 894B 5.6s #5 sha256:3050d07b12d59ac088dbaedda82c7f488584ccbcf9dcb7e5c9ea392faa76d98d 75.50MB / 229.92MB 5.8s #5 extracting sha256:458b02b5411a07f3b354cde2b461caffc1bb184a3413b5736a9e67ee87cb28b2 1.7s done #5 sha256:3050d07b12d59ac088dbaedda82c7f488584ccbcf9dcb7e5c9ea392faa76d98d 92.27MB / 229.92MB 6.0s #5 sha256:9625948ebaed29455d7a37fc504f4b52fed7f9aad4e2354389b923d45c1181d8 894B / 894B 5.8s done #5 sha256:d24f0315daf2df245c0429b1e618642763e90168848fd9792e1c6ad938d5915e 0B / 288B 6.0s #5 sha256:d24f0315daf2df245c0429b1e618642763e90168848fd9792e1c6ad938d5915e 288B / 288B 6.1s done #5 extracting sha256:47b2ae38d777aa426d64784e1e39103a1ee45b03af552058a8df3422b0dcfa7f #5 sha256:86a9bea027c0553b67ed6c22db14a9c857a92bc5d742ea8b6b3603dc09e4943e 0B / 290B 6.2s #5 sha256:73f4e381cfba2097b496a912843af387139d2f7944210864bf3a0cc64ddd3766 1.05MB / 19.70MB 6.3s #5 sha256:3050d07b12d59ac088dbaedda82c7f488584ccbcf9dcb7e5c9ea392faa76d98d 111.41MB / 229.92MB 6.4s #5 sha256:73f4e381cfba2097b496a912843af387139d2f7944210864bf3a0cc64ddd3766 8.39MB / 19.70MB 6.4s #5 sha256:86a9bea027c0553b67ed6c22db14a9c857a92bc5d742ea8b6b3603dc09e4943e 290B / 290B 6.5s done #5 sha256:e5d9cf3b4244263f61efb4764ee9225428fa70d3c82dfa545467d0d5400fbfaf 0B / 285B 6.6s #5 
[... per-layer download/extraction progress for the AM base image trimmed; all layers pulled and extracted ...]
#5 DONE 27.2s
#6 [2/5] RUN echo "\033[0;36m*** Building 'cdk' profile ***\033[0m"
#6 0.574 *** Building 'cdk' profile ***
#6 DONE 4.7s
#7 [3/5] COPY --chown=forgerock:root config-profiles/cdk/ /home/forgerock/openam/
#7 DONE 0.2s
#8 [4/5] COPY --chown=forgerock:root *.sh /home/forgerock/
#8 DONE 0.1s
#9 [5/5] WORKDIR /home/forgerock
#9 DONE 0.1s
#10 exporting to image
#10 exporting layers 0.1s done
#10 writing image sha256:769adb21e9af44254da60955843c457af623757a1f2253e9ea3adfb73f299439 done
#10 naming to gcr.io/engineeringpit/lodestar-images/am:xlou done
#10 DONE 0.2s
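Note: the am:xlou tag written above exists only in the local build daemon at this point. If a manual sanity check is wanted before the run continues, one hedged option (not a step the task itself performs) is to inspect the tag and compare the image ID against the digest reported by the exporter:

    docker image inspect gcr.io/engineeringpit/lodestar-images/am:xlou --format '{{.Id}}'
    # expected to print the sha256:769adb21e9af... ID written by the "exporting to image" step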
#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.1s
#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 1.07kB done
#2 DONE 0.1s
#3 [internal] load metadata for gcr.io/forgerock-io/idm-cdk/pit1:7.3.0-ed278902ceea12882b9d14775050c5120defecb9
#3 DONE 1.4s
#4 [1/6] FROM gcr.io/forgerock-io/idm-cdk/pit1:7.3.0-ed278902ceea12882b9d14775050c5120defecb9@sha256:cdeaa966c6d8a59cd7da18e762f4d34e3715d8f8f461e3c6dadebde247bdf6c7
#4 resolve gcr.io/forgerock-io/idm-cdk/pit1:7.3.0-ed278902ceea12882b9d14775050c5120defecb9@sha256:cdeaa966c6d8a59cd7da18e762f4d34e3715d8f8f461e3c6dadebde247bdf6c7 0.1s done
#5 [internal] load build context
#5 transferring context: 279.76kB 0.1s done
#5 DONE 0.1s
[... per-layer download/extraction progress for the IDM base image trimmed; all layers pulled and extracted ...]
#4 DONE 34.1s
#6 [2/6] COPY debian-buster-sources.list /etc/apt/sources.list
#6 DONE 1.6s
#7 [3/6] RUN rm -f bundle/org.apache.felix.webconsole*.jar && rm -f bundle/openidm-felix-webconsole-*.jar
#7 DONE 1.0s
#8 [4/6] RUN echo "\033[0;36m*** Building 'cdk' profile ***\033[0m"
#8 1.249 *** Building 'cdk' profile ***
#8 DONE 1.3s
#9 [5/6] COPY --chown=forgerock:root config-profiles/cdk/ /opt/openidm
#9 DONE 0.2s
#10 [6/6] COPY --chown=forgerock:root . /opt/openidm
#10 DONE 0.2s
#11 exporting to image
#11 exporting layers 0.2s done
#11 writing image sha256:c16cf7b9757f39c7ab2220c56ec227e7db1bdd742d35aa8d5fe1cb5141131652 done
#11 naming to gcr.io/engineeringpit/lodestar-images/idm:xlou done
#11 DONE 0.2s
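Note: the am:xlou and idm:xlou tags are, so far, local to the machine running the task. For pods in the xlou namespace to pull them, they would normally be pushed to the gcr.io registry; the log does not show where in the run that happens, so the commands below are only a hedged sketch of that step:

    docker push gcr.io/engineeringpit/lodestar-images/am:xlou
    docker push gcr.io/engineeringpit/lodestar-images/idm:xlou
    # the ds:xlou image built next would be pushed the same way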
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 1.44kB done
#1 DONE 0.0s
#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.0s
#3 [internal] load metadata for gcr.io/forgerock-io/ds/pit1:7.3.1-e49604b398e05e2a526179e932642f0a54637f2c
#3 DONE 1.5s
#4 [1/6] FROM gcr.io/forgerock-io/ds/pit1:7.3.1-e49604b398e05e2a526179e932642f0a54637f2c@sha256:b42c3ab8ae8db1c5939d8ad3c06c8f9123b01727344fc66bee6cfa93e16168dd
#4 resolve gcr.io/forgerock-io/ds/pit1:7.3.1-e49604b398e05e2a526179e932642f0a54637f2c@sha256:b42c3ab8ae8db1c5939d8ad3c06c8f9123b01727344fc66bee6cfa93e16168dd 0.0s done
#5 [internal] load build context
#5 transferring context: 106.47kB 0.0s done
#5 DONE 0.0s
[... per-layer download/extraction progress for the DS base image trimmed; all layers pulled and extracted ...]
#4 DONE 7.1s
#6 [2/6] RUN apt-get update && apt-get install -y --no-install-recommends vim ncat dnsutils
#6 0.485 Get:1 http://deb.debian.org/debian bullseye InRelease [116 kB]
#6 0.513 Get:2 http://deb.debian.org/debian-security bullseye-security InRelease [48.4 kB]
#6 0.516 Get:3 http://deb.debian.org/debian bullseye-updates InRelease [44.1 kB]
#6 0.696 Get:4 http://deb.debian.org/debian bullseye/main amd64 Packages [8183 kB]
#6 0.915 Get:5 http://deb.debian.org/debian-security bullseye-security/main amd64 Packages [240 kB]
#6 1.116 Get:6 http://deb.debian.org/debian bullseye-updates/main amd64 Packages [14.6 kB]
#6 2.360 Fetched 8646 kB in 2s (4561 kB/s)
#6 2.360 Reading package lists...
#6 3.155 Reading package lists...
#6 3.960 Building dependency tree...
#6 4.213 Reading state information...
#6 4.539 The following additional packages will be installed:
#6 4.541 bind9-dnsutils bind9-host bind9-libs libbsd0 libdbus-1-3 libedit2 libfstrm0
#6 4.542 libicu67 libjson-c5 liblmdb0 liblua5.3-0 libmaxminddb0 libmd0 libpcap0.8
#6 4.543 libprotobuf-c1 libuv1 libxml2 vim-common vim-runtime xxd
#6 4.547 Suggested packages:
#6 4.547 mmdb-bin ctags vim-doc vim-scripts
#6 4.547 Recommended packages:
#6 4.547 dbus
#6 4.810 The following NEW packages will be installed:
#6 4.811 bind9-dnsutils bind9-host bind9-libs dnsutils libbsd0 libdbus-1-3 libedit2
#6 4.812 libfstrm0 libicu67 libjson-c5 liblmdb0 liblua5.3-0 libmaxminddb0 libmd0
#6 4.813 libpcap0.8 libprotobuf-c1 libuv1 libxml2 ncat vim vim-common vim-runtime xxd
#6 4.867 0 upgraded, 23 newly installed, 0 to remove and 0 not upgraded.
#6 4.867 Need to get 21.3 MB of archives.
#6 4.867 After this operation, 81.2 MB of additional disk space will be used.
#6 4.867 Get:1 http://deb.debian.org/debian bullseye/main amd64 xxd amd64 2:8.2.2434-3+deb11u1 [192 kB] #6 4.893 Get:2 http://deb.debian.org/debian bullseye/main amd64 vim-common all 2:8.2.2434-3+deb11u1 [226 kB] #6 4.901 Get:3 http://deb.debian.org/debian bullseye/main amd64 libuv1 amd64 1.40.0-2 [132 kB] #6 4.904 Get:4 http://deb.debian.org/debian bullseye/main amd64 libfstrm0 amd64 0.6.0-1+b1 [21.5 kB] #6 4.905 Get:5 http://deb.debian.org/debian bullseye/main amd64 libjson-c5 amd64 0.15-2 [42.8 kB] #6 4.906 Get:6 http://deb.debian.org/debian bullseye/main amd64 liblmdb0 amd64 0.9.24-1 [45.0 kB] #6 4.908 Get:7 http://deb.debian.org/debian bullseye/main amd64 libmaxminddb0 amd64 1.5.2-1 [29.8 kB] #6 4.909 Get:8 http://deb.debian.org/debian bullseye/main amd64 libprotobuf-c1 amd64 1.3.3-1+b2 [27.0 kB] #6 4.911 Get:9 http://deb.debian.org/debian bullseye/main amd64 libicu67 amd64 67.1-7 [8622 kB] #6 4.997 Get:10 http://deb.debian.org/debian bullseye/main amd64 libxml2 amd64 2.9.10+dfsg-6.7+deb11u4 [693 kB] #6 5.005 Get:11 http://deb.debian.org/debian bullseye/main amd64 bind9-libs amd64 1:9.16.37-1~deb11u1 [1424 kB] #6 5.017 Get:12 http://deb.debian.org/debian bullseye/main amd64 bind9-host amd64 1:9.16.37-1~deb11u1 [308 kB] #6 5.020 Get:13 http://deb.debian.org/debian bullseye/main amd64 libmd0 amd64 1.0.3-3 [28.0 kB] #6 5.022 Get:14 http://deb.debian.org/debian bullseye/main amd64 libbsd0 amd64 0.11.3-1 [108 kB] #6 5.024 Get:15 http://deb.debian.org/debian bullseye/main amd64 libedit2 amd64 3.1-20191231-2+b1 [96.7 kB] #6 5.026 Get:16 http://deb.debian.org/debian bullseye/main amd64 bind9-dnsutils amd64 1:9.16.37-1~deb11u1 [404 kB] #6 5.032 Get:17 http://deb.debian.org/debian bullseye/main amd64 dnsutils all 1:9.16.37-1~deb11u1 [267 kB] #6 5.036 Get:18 http://deb.debian.org/debian bullseye/main amd64 libdbus-1-3 amd64 1.12.24-0+deb11u1 [222 kB] #6 5.040 Get:19 http://deb.debian.org/debian bullseye/main amd64 liblua5.3-0 amd64 5.3.3-1.1+b1 [120 kB] #6 5.042 Get:20 http://deb.debian.org/debian bullseye/main amd64 libpcap0.8 amd64 1.10.0-2 [159 kB] #6 5.044 Get:21 http://deb.debian.org/debian bullseye/main amd64 ncat amd64 7.91+dfsg1+really7.80+dfsg1-2 [383 kB] #6 5.048 Get:22 http://deb.debian.org/debian bullseye/main amd64 vim-runtime all 2:8.2.2434-3+deb11u1 [6226 kB] #6 5.102 Get:23 http://deb.debian.org/debian bullseye/main amd64 vim amd64 2:8.2.2434-3+deb11u1 [1494 kB] #6 5.291 debconf: delaying package configuration, since apt-utils is not installed #6 5.349 Fetched 21.3 MB in 0s (73.3 MB/s) #6 5.389 Selecting previously unselected package xxd. #6 5.389 (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 7980 files and directories currently installed.) #6 5.406 Preparing to unpack .../00-xxd_2%3a8.2.2434-3+deb11u1_amd64.deb ... #6 5.413 Unpacking xxd (2:8.2.2434-3+deb11u1) ... #6 5.500 Selecting previously unselected package vim-common. #6 5.502 Preparing to unpack .../01-vim-common_2%3a8.2.2434-3+deb11u1_all.deb ... #6 5.517 Unpacking vim-common (2:8.2.2434-3+deb11u1) ... 
#6 5.604 Selecting previously unselected package libuv1:amd64. #6 5.604 Preparing to unpack .../02-libuv1_1.40.0-2_amd64.deb ... #6 5.617 Unpacking libuv1:amd64 (1.40.0-2) ... #6 5.692 Selecting previously unselected package libfstrm0:amd64. #6 5.696 Preparing to unpack .../03-libfstrm0_0.6.0-1+b1_amd64.deb ... #6 5.702 Unpacking libfstrm0:amd64 (0.6.0-1+b1) ... #6 5.768 Selecting previously unselected package libjson-c5:amd64. #6 5.770 Preparing to unpack .../04-libjson-c5_0.15-2_amd64.deb ... #6 5.777 Unpacking libjson-c5:amd64 (0.15-2) ... #6 5.837 Selecting previously unselected package liblmdb0:amd64. #6 5.840 Preparing to unpack .../05-liblmdb0_0.9.24-1_amd64.deb ... #6 5.846 Unpacking liblmdb0:amd64 (0.9.24-1) ... #6 5.905 Selecting previously unselected package libmaxminddb0:amd64. #6 5.909 Preparing to unpack .../06-libmaxminddb0_1.5.2-1_amd64.deb ... #6 5.916 Unpacking libmaxminddb0:amd64 (1.5.2-1) ... #6 5.980 Selecting previously unselected package libprotobuf-c1:amd64. #6 5.984 Preparing to unpack .../07-libprotobuf-c1_1.3.3-1+b2_amd64.deb ... #6 5.991 Unpacking libprotobuf-c1:amd64 (1.3.3-1+b2) ... #6 6.056 Selecting previously unselected package libicu67:amd64. #6 6.060 Preparing to unpack .../08-libicu67_67.1-7_amd64.deb ... #6 6.067 Unpacking libicu67:amd64 (67.1-7) ... #6 7.333 Selecting previously unselected package libxml2:amd64. #6 7.337 Preparing to unpack .../09-libxml2_2.9.10+dfsg-6.7+deb11u4_amd64.deb ... #6 7.344 Unpacking libxml2:amd64 (2.9.10+dfsg-6.7+deb11u4) ... #6 7.490 Selecting previously unselected package bind9-libs:amd64. #6 7.490 Preparing to unpack .../10-bind9-libs_1%3a9.16.37-1~deb11u1_amd64.deb ... #6 7.495 Unpacking bind9-libs:amd64 (1:9.16.37-1~deb11u1) ... #6 7.700 Selecting previously unselected package bind9-host. #6 7.703 Preparing to unpack .../11-bind9-host_1%3a9.16.37-1~deb11u1_amd64.deb ... #6 7.708 Unpacking bind9-host (1:9.16.37-1~deb11u1) ... #6 7.785 Selecting previously unselected package libmd0:amd64. #6 7.785 Preparing to unpack .../12-libmd0_1.0.3-3_amd64.deb ... #6 7.791 Unpacking libmd0:amd64 (1.0.3-3) ... #6 7.853 Selecting previously unselected package libbsd0:amd64. #6 7.857 Preparing to unpack .../13-libbsd0_0.11.3-1_amd64.deb ... #6 7.864 Unpacking libbsd0:amd64 (0.11.3-1) ... #6 7.932 Selecting previously unselected package libedit2:amd64. #6 7.935 Preparing to unpack .../14-libedit2_3.1-20191231-2+b1_amd64.deb ... #6 7.942 Unpacking libedit2:amd64 (3.1-20191231-2+b1) ... #6 8.008 Selecting previously unselected package bind9-dnsutils. #6 8.008 Preparing to unpack .../15-bind9-dnsutils_1%3a9.16.37-1~deb11u1_amd64.deb ... #6 8.014 Unpacking bind9-dnsutils (1:9.16.37-1~deb11u1) ... #6 8.092 Selecting previously unselected package dnsutils. #6 8.092 Preparing to unpack .../16-dnsutils_1%3a9.16.37-1~deb11u1_all.deb ... #6 8.099 Unpacking dnsutils (1:9.16.37-1~deb11u1) ... #6 8.174 Selecting previously unselected package libdbus-1-3:amd64. #6 8.178 Preparing to unpack .../17-libdbus-1-3_1.12.24-0+deb11u1_amd64.deb ... #6 8.186 Unpacking libdbus-1-3:amd64 (1.12.24-0+deb11u1) ... #6 8.273 Selecting previously unselected package liblua5.3-0:amd64. #6 8.277 Preparing to unpack .../18-liblua5.3-0_5.3.3-1.1+b1_amd64.deb ... #6 8.284 Unpacking liblua5.3-0:amd64 (5.3.3-1.1+b1) ... #6 8.360 Selecting previously unselected package libpcap0.8:amd64. #6 8.365 Preparing to unpack .../19-libpcap0.8_1.10.0-2_amd64.deb ... #6 8.371 Unpacking libpcap0.8:amd64 (1.10.0-2) ... #6 8.450 Selecting previously unselected package ncat. 
#6 8.454 Preparing to unpack .../20-ncat_7.91+dfsg1+really7.80+dfsg1-2_amd64.deb ... #6 8.461 Unpacking ncat (7.91+dfsg1+really7.80+dfsg1-2) ... #6 8.556 Selecting previously unselected package vim-runtime. #6 8.556 Preparing to unpack .../21-vim-runtime_2%3a8.2.2434-3+deb11u1_all.deb ... #6 8.571 Adding 'diversion of /usr/share/vim/vim82/doc/help.txt to /usr/share/vim/vim82/doc/help.txt.vim-tiny by vim-runtime' #6 8.587 Adding 'diversion of /usr/share/vim/vim82/doc/tags to /usr/share/vim/vim82/doc/tags.vim-tiny by vim-runtime' #6 8.594 Unpacking vim-runtime (2:8.2.2434-3+deb11u1) ... #6 9.723 Selecting previously unselected package vim. #6 9.729 Preparing to unpack .../22-vim_2%3a8.2.2434-3+deb11u1_amd64.deb ... #6 9.745 Unpacking vim (2:8.2.2434-3+deb11u1) ... #6 9.983 Setting up liblmdb0:amd64 (0.9.24-1) ... #6 9.999 Setting up libicu67:amd64 (67.1-7) ... #6 10.01 Setting up libmaxminddb0:amd64 (1.5.2-1) ... #6 10.03 Setting up libfstrm0:amd64 (0.6.0-1+b1) ... #6 10.04 Setting up libprotobuf-c1:amd64 (1.3.3-1+b2) ... #6 10.06 Setting up xxd (2:8.2.2434-3+deb11u1) ... #6 10.07 Setting up libuv1:amd64 (1.40.0-2) ... #6 10.08 Setting up vim-common (2:8.2.2434-3+deb11u1) ... #6 10.12 Setting up libdbus-1-3:amd64 (1.12.24-0+deb11u1) ... #6 10.13 Setting up libmd0:amd64 (1.0.3-3) ... #6 10.14 Setting up liblua5.3-0:amd64 (5.3.3-1.1+b1) ... #6 10.16 Setting up vim-runtime (2:8.2.2434-3+deb11u1) ... #6 10.29 Setting up libbsd0:amd64 (0.11.3-1) ... #6 10.31 Setting up libjson-c5:amd64 (0.15-2) ... #6 10.33 Setting up libxml2:amd64 (2.9.10+dfsg-6.7+deb11u4) ... #6 10.35 Setting up vim (2:8.2.2434-3+deb11u1) ... #6 10.36 update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/vim (vim) in auto mode #6 10.37 update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/vimdiff (vimdiff) in auto mode #6 10.37 update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/rvim (rvim) in auto mode #6 10.38 update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/rview (rview) in auto mode #6 10.39 update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/vi (vi) in auto mode #6 10.39 update-alternatives: warning: skip creation of /usr/share/man/da/man1/vi.1.gz because associated file /usr/share/man/da/man1/vim.1.gz (of link group vi) doesn't exist #6 10.39 update-alternatives: warning: skip creation of /usr/share/man/de/man1/vi.1.gz because associated file /usr/share/man/de/man1/vim.1.gz (of link group vi) doesn't exist #6 10.39 update-alternatives: warning: skip creation of /usr/share/man/fr/man1/vi.1.gz because associated file /usr/share/man/fr/man1/vim.1.gz (of link group vi) doesn't exist #6 10.39 update-alternatives: warning: skip creation of /usr/share/man/it/man1/vi.1.gz because associated file /usr/share/man/it/man1/vim.1.gz (of link group vi) doesn't exist #6 10.39 update-alternatives: warning: skip creation of /usr/share/man/ja/man1/vi.1.gz because associated file /usr/share/man/ja/man1/vim.1.gz (of link group vi) doesn't exist #6 10.39 update-alternatives: warning: skip creation of /usr/share/man/pl/man1/vi.1.gz because associated file /usr/share/man/pl/man1/vim.1.gz (of link group vi) doesn't exist #6 10.39 update-alternatives: warning: skip creation of /usr/share/man/ru/man1/vi.1.gz because associated file /usr/share/man/ru/man1/vim.1.gz (of link group vi) doesn't exist #6 10.39 update-alternatives: warning: skip creation of /usr/share/man/man1/vi.1.gz because associated file /usr/share/man/man1/vim.1.gz (of link group vi) doesn't exist #6 10.39 
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/view (view) in auto mode #6 10.39 update-alternatives: warning: skip creation of /usr/share/man/da/man1/view.1.gz because associated file /usr/share/man/da/man1/vim.1.gz (of link group view) doesn't exist #6 10.39 update-alternatives: warning: skip creation of /usr/share/man/de/man1/view.1.gz because associated file /usr/share/man/de/man1/vim.1.gz (of link group view) doesn't exist #6 10.39 update-alternatives: warning: skip creation of /usr/share/man/fr/man1/view.1.gz because associated file /usr/share/man/fr/man1/vim.1.gz (of link group view) doesn't exist #6 10.39 update-alternatives: warning: skip creation of /usr/share/man/it/man1/view.1.gz because associated file /usr/share/man/it/man1/vim.1.gz (of link group view) doesn't exist #6 10.39 update-alternatives: warning: skip creation of /usr/share/man/ja/man1/view.1.gz because associated file /usr/share/man/ja/man1/vim.1.gz (of link group view) doesn't exist #6 10.39 update-alternatives: warning: skip creation of /usr/share/man/pl/man1/view.1.gz because associated file /usr/share/man/pl/man1/vim.1.gz (of link group view) doesn't exist #6 10.39 update-alternatives: warning: skip creation of /usr/share/man/ru/man1/view.1.gz because associated file /usr/share/man/ru/man1/vim.1.gz (of link group view) doesn't exist #6 10.39 update-alternatives: warning: skip creation of /usr/share/man/man1/view.1.gz because associated file /usr/share/man/man1/vim.1.gz (of link group view) doesn't exist #6 10.40 update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/ex (ex) in auto mode #6 10.40 update-alternatives: warning: skip creation of /usr/share/man/da/man1/ex.1.gz because associated file /usr/share/man/da/man1/vim.1.gz (of link group ex) doesn't exist #6 10.40 update-alternatives: warning: skip creation of /usr/share/man/de/man1/ex.1.gz because associated file /usr/share/man/de/man1/vim.1.gz (of link group ex) doesn't exist #6 10.40 update-alternatives: warning: skip creation of /usr/share/man/fr/man1/ex.1.gz because associated file /usr/share/man/fr/man1/vim.1.gz (of link group ex) doesn't exist #6 10.40 update-alternatives: warning: skip creation of /usr/share/man/it/man1/ex.1.gz because associated file /usr/share/man/it/man1/vim.1.gz (of link group ex) doesn't exist #6 10.40 update-alternatives: warning: skip creation of /usr/share/man/ja/man1/ex.1.gz because associated file /usr/share/man/ja/man1/vim.1.gz (of link group ex) doesn't exist #6 10.40 update-alternatives: warning: skip creation of /usr/share/man/pl/man1/ex.1.gz because associated file /usr/share/man/pl/man1/vim.1.gz (of link group ex) doesn't exist #6 10.40 update-alternatives: warning: skip creation of /usr/share/man/ru/man1/ex.1.gz because associated file /usr/share/man/ru/man1/vim.1.gz (of link group ex) doesn't exist #6 10.40 update-alternatives: warning: skip creation of /usr/share/man/man1/ex.1.gz because associated file /usr/share/man/man1/vim.1.gz (of link group ex) doesn't exist #6 10.41 update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/editor (editor) in auto mode #6 10.41 update-alternatives: warning: skip creation of /usr/share/man/da/man1/editor.1.gz because associated file /usr/share/man/da/man1/vim.1.gz (of link group editor) doesn't exist #6 10.41 update-alternatives: warning: skip creation of /usr/share/man/de/man1/editor.1.gz because associated file /usr/share/man/de/man1/vim.1.gz (of link group editor) doesn't exist #6 10.41 update-alternatives: warning: skip creation of 
/usr/share/man/fr/man1/editor.1.gz because associated file /usr/share/man/fr/man1/vim.1.gz (of link group editor) doesn't exist #6 10.41 update-alternatives: warning: skip creation of /usr/share/man/it/man1/editor.1.gz because associated file /usr/share/man/it/man1/vim.1.gz (of link group editor) doesn't exist #6 10.41 update-alternatives: warning: skip creation of /usr/share/man/ja/man1/editor.1.gz because associated file /usr/share/man/ja/man1/vim.1.gz (of link group editor) doesn't exist #6 10.41 update-alternatives: warning: skip creation of /usr/share/man/pl/man1/editor.1.gz because associated file /usr/share/man/pl/man1/vim.1.gz (of link group editor) doesn't exist #6 10.41 update-alternatives: warning: skip creation of /usr/share/man/ru/man1/editor.1.gz because associated file /usr/share/man/ru/man1/vim.1.gz (of link group editor) doesn't exist #6 10.41 update-alternatives: warning: skip creation of /usr/share/man/man1/editor.1.gz because associated file /usr/share/man/man1/vim.1.gz (of link group editor) doesn't exist #6 10.43 Setting up bind9-libs:amd64 (1:9.16.37-1~deb11u1) ... #6 10.45 Setting up libedit2:amd64 (3.1-20191231-2+b1) ... #6 10.46 Setting up libpcap0.8:amd64 (1.10.0-2) ... #6 10.48 Setting up ncat (7.91+dfsg1+really7.80+dfsg1-2) ... #6 10.49 update-alternatives: using /usr/bin/ncat to provide /bin/nc (nc) in auto mode #6 10.49 update-alternatives: warning: skip creation of /usr/share/man/man1/nc.1.gz because associated file /usr/share/man/man1/ncat.1.gz (of link group nc) doesn't exist #6 10.49 update-alternatives: warning: skip creation of /usr/share/man/man1/netcat.1.gz because associated file /usr/share/man/man1/ncat.1.gz (of link group nc) doesn't exist #6 10.50 Setting up bind9-host (1:9.16.37-1~deb11u1) ... #6 10.52 Setting up bind9-dnsutils (1:9.16.37-1~deb11u1) ... #6 10.53 Setting up dnsutils (1:9.16.37-1~deb11u1) ... #6 10.55 Processing triggers for libc-bin (2.31-13+deb11u6) ... #6 DONE 11.9s #7 [3/6] COPY --chown=forgerock:root default-scripts /opt/opendj/default-scripts #7 DONE 0.1s #8 [4/6] COPY --chown=forgerock:root ldif-ext /opt/opendj/ldif-ext #8 DONE 0.1s #9 [5/6] COPY --chown=forgerock:root *.sh /opt/opendj/ #9 DONE 0.1s #10 [6/6] RUN ./ds-setup.sh && rm ./ds-setup.sh && rm -fr ldif-ext #10 0.440 + rm -f template/config/tools.properties #10 0.444 + rm -rf -- README* bat *.zip *.png *.bat setup.sh #10 0.447 + ./bin/dskeymgr create-deployment-key --deploymentKeyPassword password #10 8.994 + deploymentKey=AH2DzEIHhJDkvlgRXhqbGbriB8UCxQ5CBVN1bkVDAJeBvnZHYh9bLVc #10 8.994 + ./setup --instancePath /opt/opendj/data --serverId docker --hostname localhost --deploymentKey AH2DzEIHhJDkvlgRXhqbGbriB8UCxQ5CBVN1bkVDAJeBvnZHYh9bLVc --deploymentKeyPassword password --rootUserPassword password --adminConnectorPort 4444 --ldapPort 1389 --enableStartTls --ldapsPort 1636 --httpPort 8080 --httpsPort 8443 --replicationPort 8989 --rootUserDn uid=admin --monitorUserDn uid=monitor --monitorUserPassword password --acceptLicense #10 11.50 READ THIS SOFTWARE LICENSE AGREEMENT CAREFULLY. BY DOWNLOADING OR INSTALLING #10 11.50 THE FORGEROCK SOFTWARE, YOU, ON BEHALF OF YOURSELF AND YOUR COMPANY, AGREE TO #10 11.50 BE BOUND BY THIS SOFTWARE LICENSE AGREEMENT. IF YOU DO NOT AGREE TO THESE #10 11.50 TERMS, DO NOT DOWNLOAD OR INSTALL THE FORGEROCK SOFTWARE. #10 11.50 #10 11.50 1. Software License. #10 11.50 #10 11.50 1.1. Development Right to Use. 
[... remainder of the ForgeRock Software License Agreement printed by setup --acceptLicense (sections 1.1 through 7.7) trimmed ...]
#10 19.18 Validating parameters..... Done
#10 20.31 Configuring certificates..... Done
#10 20.54 Configuring server.....
Done #10 21.39 #10 21.39 To see basic server status and configuration, you can launch #10 21.39 /opt/opendj/bin/status #10 21.39 #10 21.45 + ./bin/dsconfig --offline --no-prompt --batch #10 23.39 set-global-configuration-prop --set "server-id:&{ds.server.id|docker}" #10 24.49 #10 24.49 set-global-configuration-prop --set "group-id:&{ds.group.id|default}" #10 24.83 #10 24.84 set-global-configuration-prop --set "advertised-listen-address:&{ds.advertised.listen.address|localhost}" #10 25.11 #10 25.11 set-global-configuration-prop --advanced --set "trust-transaction-ids:&{platform.trust.transaction.header|false}" #10 25.35 #10 25.35 delete-log-publisher --publisher-name "File-Based Error Logger" #10 25.51 #10 25.52 The Log Publisher was deleted successfully #10 25.52 #10 25.52 delete-log-publisher --publisher-name "File-Based Access Logger" #10 25.70 #10 25.70 The Log Publisher was deleted successfully #10 25.70 #10 25.70 delete-log-publisher --publisher-name "File-Based Audit Logger " #10 25.86 #10 25.86 The Log Publisher was deleted successfully #10 25.86 #10 25.86 delete-log-publisher --publisher-name "File-Based HTTP Access Logger" #10 26.05 #10 26.05 The Log Publisher was deleted successfully #10 26.05 #10 26.05 delete-log-publisher --publisher-name "Json File-Based Access Logger" #10 26.20 #10 26.20 The Log Publisher was deleted successfully #10 26.20 #10 26.20 delete-log-publisher --publisher-name "Json File-Based HTTP Access Logger" #10 26.35 #10 26.35 The Log Publisher was deleted successfully #10 26.35 #10 26.35 create-log-publisher --type console-error --publisher-name "Console Error Logger" --set enabled:true --set default-severity:error --set default-severity:warning --set default-severity:notice --set override-severity:SYNC=INFO,ERROR,WARNING,NOTICE #10 26.53 #10 26.53 The Console Error Log Publisher was created successfully #10 26.53 #10 26.53 create-log-publisher --type external-access --publisher-name "Console LDAP Access Logger" --set enabled:true --set config-file:config/audit-handlers/ldap-access-stdout.json --set "filtering-policy:&{ds.log.filtering.policy|inclusive}" #10 26.65 #10 26.66 The External Access Log Publisher was created successfully #10 26.66 #10 26.66 create-log-publisher --type external-http-access --publisher-name "Console HTTP Access Logger" --set enabled:true --set config-file:config/audit-handlers/http-access-stdout.json #10 26.80 #10 26.80 The External HTTP Access Log Publisher was created successfully #10 26.80 #10 26.80 delete-sasl-mechanism-handler --handler-name "GSSAPI" #10 27.00 #10 27.00 The SASL Mechanism Handler was deleted successfully #10 27.00 #10 27.00 set-synchronization-provider-prop --provider-name "Multimaster synchronization" --set "bootstrap-replication-server:&{ds.bootstrap.replication.servers|localhost:8989}" #10 27.14 #10 27.14 set-synchronization-provider-prop --provider-name "Multimaster synchronization" --set "replication-purge-delay:86400 s" #10 27.26 #10 27.35 + dsconfig --offline --no-prompt --batch #10 29.11 set-global-configuration-prop --set "unauthenticated-requests-policy:allow" #10 30.22 #10 30.22 set-password-policy-prop --policy-name "Default Password Policy" --set "require-secure-authentication:false" --set "require-secure-password-changes:false" --reset "password-validator" #10 30.52 #10 30.53 set-password-policy-prop --policy-name "Root Password Policy" --set "require-secure-authentication:false" --set "require-secure-password-changes:false" --reset "password-validator" #10 30.82 #10 30.85 + dsconfig --offline 
--no-prompt --batch #10 32.76 create-trust-manager-provider --provider-name "PEM Trust Manager" --type pem --set enabled:true --set pem-directory:/var/run/secrets/keys/truststore #10 33.87 #10 33.87 The Pem Trust Manager Provider was created successfully #10 33.88 #10 33.88 set-connection-handler-prop --handler-name https --set trust-manager-provider:"PEM Trust Manager" #10 34.29 #10 34.29 set-connection-handler-prop --handler-name ldap --set trust-manager-provider:"PEM Trust Manager" #10 34.54 #10 34.54 set-connection-handler-prop --handler-name ldaps --set trust-manager-provider:"PEM Trust Manager" #10 34.80 #10 34.80 set-synchronization-provider-prop --provider-name "Multimaster Synchronization" --set trust-manager-provider:"PEM Trust Manager" #10 35.01 #10 35.01 set-administration-connector-prop --set trust-manager-provider:"PEM Trust Manager" #10 35.19 #10 35.19 delete-trust-manager-provider --provider-name "PKCS12" #10 35.59 #10 35.60 The Trust Manager Provider was deleted successfully #10 35.60 #10 35.60 create-key-manager-provider --provider-name "PEM Key Manager" --type pem --set enabled:true --set pem-directory:/var/run/secrets/keys/ds #10 35.77 #10 35.77 The Pem Key Manager Provider was created successfully #10 35.77 #10 35.77 set-connection-handler-prop --handler-name https --set key-manager-provider:"PEM Key Manager" #10 35.92 #10 35.92 set-connection-handler-prop --handler-name ldap --set key-manager-provider:"PEM Key Manager" #10 36.08 #10 36.08 set-connection-handler-prop --handler-name ldaps --set key-manager-provider:"PEM Key Manager" #10 36.24 #10 36.24 set-synchronization-provider-prop --provider-name "Multimaster Synchronization" --set key-manager-provider:"PEM Key Manager" #10 36.41 #10 36.41 set-crypto-manager-prop --set key-manager-provider:"PEM Key Manager" #10 36.58 #10 36.58 set-administration-connector-prop --set key-manager-provider:"PEM Key Manager" #10 36.76 #10 36.76 delete-key-manager-provider --provider-name "PKCS12" #10 36.95 #10 36.95 The Key Manager Provider was deleted successfully #10 36.95 #10 37.02 + cd /opt/opendj/data #10 37.02 + rm -fr legal-notices #10 37.02 + rm -fr lib/extensions #10 37.03 + ln -s /opt/opendj/lib/extensions lib/extensions #10 37.03 + ldifmodify config/config.ldif #10 38.54 + mv config/config.ldif.tmp config/config.ldif #10 38.54 + removeUserPassword db/rootUser/rootUser.ldif uid=admin #10 38.54 + file=db/rootUser/rootUser.ldif #10 38.54 + dn=uid=admin #10 38.54 + ../bin/ldifmodify db/rootUser/rootUser.ldif #10 39.73 + rm db/rootUser/rootUser.ldif #10 39.74 + mv db/rootUser/rootUser.ldif.tmp db/rootUser/rootUser.ldif #10 39.74 + removeUserPassword db/monitorUser/monitorUser.ldif uid=monitor #10 39.74 + file=db/monitorUser/monitorUser.ldif #10 39.74 + dn=uid=monitor #10 39.74 + ../bin/ldifmodify db/monitorUser/monitorUser.ldif #10 40.98 + rm db/monitorUser/monitorUser.ldif #10 40.98 + mv db/monitorUser/monitorUser.ldif.tmp db/monitorUser/monitorUser.ldif #10 40.99 + echo source <(/opt/opendj/bin/bash-completion) #10 40.99 + tar cvfz /opt/opendj/data.tar.gz bak changelogDb classes config db extlib import-tmp ldif lib locks logs var #10 41.00 bak/ #10 41.00 changelogDb/ #10 41.00 classes/ #10 41.00 config/ #10 41.00 config/config.ldif #10 41.00 config/snmp/ #10 41.00 config/snmp/security/ #10 41.00 config/snmp/security/opendj-snmp.security #10 41.00 config/messages/ #10 41.00 config/messages/account-enabled.template #10 41.00 config/messages/password-changed.template #10 41.00 config/messages/account-unlocked.template #10 41.00 
config/messages/account-idle-locked.template #10 41.00 config/messages/password-expiring.template #10 41.00 config/messages/account-temporarily-locked.template #10 41.00 config/messages/account-expired.template #10 41.00 config/messages/account-reset-locked.template #10 41.00 config/messages/account-permanently-locked.template #10 41.00 config/messages/password-expired.template #10 41.00 config/messages/password-reset.template #10 41.00 config/messages/account-disabled.template #10 41.00 config/wordlist.txt #10 41.41 config/audit-handlers/ #10 41.41 config/audit-handlers/ldap-access-stdout.json #10 41.41 config/audit-handlers/oracle_tables-example.sql #10 41.41 config/audit-handlers/elasticsearch-config.json-example #10 41.41 config/audit-handlers/jms-config.json-example #10 41.41 config/audit-handlers/postgres_tables-example.sql #10 41.41 config/audit-handlers/elasticsearch-index-setup-example.json #10 41.41 config/audit-handlers/http-access-stdout.json #10 41.41 config/audit-handlers/splunk-config.json-example #10 41.41 config/audit-handlers/syslog-config.json-example #10 41.41 config/audit-handlers/mysql_tables-example.sql #10 41.41 config/audit-handlers/json-stdout-config.json-example #10 41.41 config/audit-handlers/jdbc-config.json-example #10 41.41 config/keystore.pin #10 41.41 config/rest2ldap/ #10 41.41 config/rest2ldap/endpoints/ #10 41.41 config/rest2ldap/endpoints/api/ #10 41.41 config/rest2ldap/endpoints/api/example-v1.json #10 41.42 config/MakeLDIF/ #10 41.42 config/MakeLDIF/example.template #10 41.42 config/MakeLDIF/first.names #10 41.42 config/MakeLDIF/cities #10 41.42 config/MakeLDIF/last.names #10 41.43 config/MakeLDIF/addrate.template #10 41.43 config/MakeLDIF/people_and_groups.template #10 41.43 config/MakeLDIF/states #10 41.43 config/MakeLDIF/streets #10 41.43 config/keystore #10 41.43 config/java.properties #10 41.43 config/common-passwords.txt #10 41.45 db/ #10 41.45 db/schema/ #10 41.45 db/schema/03-keystore.ldif #10 41.45 db/schema/00-core.ldif #10 41.46 db/schema/03-rfc2926.ldif #10 41.46 db/schema/03-changelog.ldif #10 41.46 db/schema/03-rfc2714.ldif #10 41.46 db/schema/02-config.ldif #10 41.47 db/schema/03-uddiv3.ldif #10 41.47 db/schema/01-pwpolicy.ldif #10 41.47 db/schema/03-rfc3112.ldif #10 41.47 db/schema/05-samba.ldif #10 41.47 db/schema/05-rfc4876.ldif #10 41.47 db/schema/03-rfc3712.ldif #10 41.47 db/schema/05-solaris.ldif #10 41.47 db/schema/03-pwpolicyextension.ldif #10 41.47 db/schema/04-rfc2307bis.ldif #10 41.47 db/schema/03-rfc2713.ldif #10 41.48 db/schema/06-compat.ldif #10 41.48 db/schema/03-rfc2739.ldif #10 41.48 db/rootUser/ #10 41.48 db/rootUser/rootUser.ldif #10 41.48 db/tasks/ #10 41.48 db/monitorUser/ #10 41.48 db/monitorUser/monitorUser.ldif #10 41.48 db/adminRoot/ #10 41.48 db/adminRoot/admin-backend.ldif #10 41.48 extlib/ #10 41.48 import-tmp/ #10 41.48 ldif/ #10 41.48 lib/ #10 41.48 lib/extensions #10 41.48 locks/ #10 41.48 locks/server.lock #10 41.48 logs/ #10 41.48 var/ #10 41.48 var/upgrade/ #10 41.48 var/upgrade/schema.ldif.current #10 41.49 var/data.version #10 41.49 + cd /opt/opendj #10 41.49 + chmod -R a+rw template/setup-profiles/AM #10 41.64 + cat ldif-ext/external-am-datastore.ldif ldif-ext/uma/opendj_uma_audit.ldif ldif-ext/uma/opendj_uma_pending_requests.ldif ldif-ext/uma/opendj_uma_resource_set_labels.ldif ldif-ext/uma/opendj_uma_resource_sets.ldif ldif-ext/alpha_bravo.ldif #10 41.64 + cat ldif-ext/orgs.ldif #10 DONE 41.7s #11 exporting to image #11 exporting layers #11 exporting layers 1.0s done #11 writing image 
sha256:5c791797df0b0a8cbc7c0cdb59a62f52985d6ef65d5eb7cc04fa56b1edca53b2 done #11 naming to gcr.io/engineeringpit/lodestar-images/ds:xlou done #11 DONE 1.0s #1 [internal] load .dockerignore #1 transferring context: 2B done #1 DONE 0.0s #2 [internal] load build definition from Dockerfile #2 transferring dockerfile: 2.45kB done #2 DONE 0.0s #3 [internal] load metadata for gcr.io/forgerock-io/ds/pit1:7.3.1-e49604b398e05e2a526179e932642f0a54637f2c #3 DONE 0.3s #4 [ 1/10] FROM gcr.io/forgerock-io/ds/pit1:7.3.1-e49604b398e05e2a526179e932642f0a54637f2c@sha256:b42c3ab8ae8db1c5939d8ad3c06c8f9123b01727344fc66bee6cfa93e16168dd #4 CACHED #5 [internal] load build context #5 transferring context: 103.39kB 0.0s done #5 DONE 0.0s #6 [ 2/10] COPY debian-buster-sources.list /etc/apt/sources.list #6 DONE 0.0s #7 [ 3/10] WORKDIR /opt/opendj #7 DONE 0.0s #8 [ 4/10] COPY --chown=forgerock:root common /opt/opendj/ #8 DONE 0.0s #9 [ 5/10] COPY --chown=forgerock:root idrepo /opt/opendj/ #9 DONE 0.1s #10 [ 6/10] COPY --chown=forgerock:root scripts /opt/opendj/scripts #10 DONE 0.0s #11 [ 7/10] COPY --chown=forgerock:root uma /opt/opendj/uma #11 DONE 0.0s #12 [ 8/10] COPY --chown=forgerock:root idrepo/*.ldif /var/tmp/ #12 DONE 0.1s #13 [ 9/10] RUN chmod +w template/setup-profiles/AM/config/6.5/base-entries.ldif && cat scripts/external-am-datastore.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_audit.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_pending_requests.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_resource_set_labels.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat uma/opendj_uma_resource_sets.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && cat /var/tmp/alpha_bravo.ldif >> template/setup-profiles/AM/config/6.5/base-entries.ldif && chmod +w template/setup-profiles/AM/identity-store/7.0/base-entries.ldif && cat /var/tmp/orgs.ldif >> template/setup-profiles/AM/identity-store/7.0/base-entries.ldif && rm /var/tmp/*ldif #13 DONE 0.5s #14 [10/10] RUN bin/setup.sh && bin/relax-security-settings.sh && rm bin/setup.sh bin/relax-security-settings.sh #14 4.864 #14 4.866 Configuring profile AM configuration data store........ Done #14 20.24 #14 20.24 Configuring profile AM identity data store........ Done #14 34.38 #14 34.38 Configuring profile IDM external repository.......... Done #14 56.04 #14 56.04 Configuring profile AM CTS data store......... Done #14 74.23 #14 74.24 Configuring profile DS proxied server....... 
Done #14 82.94 create-trust-manager-provider --provider-name "PEM Trust Manager" --type pem --set enabled:true --set pem-directory:pem-trust-directory #14 84.20 #14 84.20 The Pem Trust Manager Provider was created successfully #14 84.20 #14 84.20 set-connection-handler-prop --handler-name https --set trust-manager-provider:"PEM Trust Manager" #14 84.60 #14 84.60 set-connection-handler-prop --handler-name ldap --set trust-manager-provider:"PEM Trust Manager" #14 84.93 #14 84.93 set-connection-handler-prop --handler-name ldaps --set trust-manager-provider:"PEM Trust Manager" #14 85.14 #14 85.14 set-synchronization-provider-prop --provider-name "Multimaster Synchronization" --set trust-manager-provider:"PEM Trust Manager" #14 85.36 #14 85.36 set-administration-connector-prop --set trust-manager-provider:"PEM Trust Manager" #14 85.56 #14 85.56 delete-trust-manager-provider --provider-name "PKCS12" #14 86.05 #14 86.05 The Trust Manager Provider was deleted successfully #14 86.05 #14 86.05 create-key-manager-provider --provider-name "PEM Key Manager" --type pem --set enabled:true --set pem-directory:pem-keys-directory #14 86.31 #14 86.31 The Pem Key Manager Provider was created successfully #14 86.32 #14 86.32 set-connection-handler-prop --handler-name https --set key-manager-provider:"PEM Key Manager" #14 86.48 #14 86.48 set-connection-handler-prop --handler-name ldap --set key-manager-provider:"PEM Key Manager" #14 86.64 #14 86.64 set-connection-handler-prop --handler-name ldaps --set key-manager-provider:"PEM Key Manager" #14 86.81 #14 86.81 set-synchronization-provider-prop --provider-name "Multimaster Synchronization" --set key-manager-provider:"PEM Key Manager" #14 87.02 #14 87.02 set-crypto-manager-prop --set key-manager-provider:"PEM Key Manager" #14 87.19 #14 87.19 set-administration-connector-prop --set key-manager-provider:"PEM Key Manager" #14 87.33 #14 87.33 delete-key-manager-provider --provider-name "PKCS12" #14 87.65 #14 87.65 The Key Manager Provider was deleted successfully #14 87.65 #14 87.65 create-backend-index --backend-name amIdentityStore --set index-type:equality --type generic --index-name fr-idm-uuid #14 87.94 #14 87.94 The Backend Index was created successfully #14 87.94 #14 87.94 create-backend-index --backend-name amIdentityStore --set index-type:equality --index-name fr-idm-effectiveApplications #14 88.08 #14 88.08 The Backend Index was created successfully #14 88.09 #14 88.09 create-backend-index --backend-name amIdentityStore --set index-type:equality --index-name fr-idm-effectiveGroup #14 88.23 #14 88.23 The Backend Index was created successfully #14 88.24 #14 88.24 create-backend-index --backend-name amIdentityStore --set index-type:presence --index-name fr-idm-lastSync #14 88.41 #14 88.41 The Backend Index was created successfully #14 88.41 #14 88.41 create-backend-index --backend-name amIdentityStore --set index-type:extensible --index-name fr-idm-managed-user-manager --set index-extensible-matching-rule:1.3.6.1.4.1.36733.2.1.4.7 --set index-extensible-matching-rule:1.3.6.1.4.1.36733.2.1.4.9 #14 88.55 #14 88.55 The Backend Index was created successfully #14 88.55 #14 88.55 create-backend-index --backend-name amIdentityStore --set index-type:extensible --index-name fr-idm-managed-user-meta --set index-extensible-matching-rule:1.3.6.1.4.1.36733.2.1.4.7 --set index-extensible-matching-rule:1.3.6.1.4.1.36733.2.1.4.9 #14 88.73 #14 88.73 The Backend Index was created successfully #14 88.73 #14 88.73 create-backend-index --backend-name amIdentityStore --set 
index-type:extensible --index-name fr-idm-managed-user-notifications --set index-extensible-matching-rule:1.3.6.1.4.1.36733.2.1.4.7 --set index-extensible-matching-rule:1.3.6.1.4.1.36733.2.1.4.9 #14 88.87 #14 88.87 The Backend Index was created successfully #14 88.87 #14 88.87 create-backend-index --backend-name amIdentityStore --set index-type:extensible --index-name fr-idm-managed-user-roles --set index-extensible-matching-rule:1.3.6.1.4.1.36733.2.1.4.7 --set index-extensible-matching-rule:1.3.6.1.4.1.36733.2.1.4.9 #14 89.01 #14 89.01 The Backend Index was created successfully #14 89.01 #14 89.01 create-backend-index --backend-name amIdentityStore --set index-type:extensible --index-name fr-idm-managed-user-authzroles-internal-role --set index-extensible-matching-rule:1.3.6.1.4.1.36733.2.1.4.7 --set index-extensible-matching-rule:1.3.6.1.4.1.36733.2.1.4.9 #14 89.13 #14 89.14 The Backend Index was created successfully #14 89.14 #14 89.14 create-backend-index --backend-name amIdentityStore --set index-type:extensible --index-name fr-idm-managed-user-authzroles-managed-role --set index-extensible-matching-rule:1.3.6.1.4.1.36733.2.1.4.7 --set index-extensible-matching-rule:1.3.6.1.4.1.36733.2.1.4.9 #14 89.26 #14 89.26 The Backend Index was created successfully #14 89.26 #14 89.26 create-backend-index --backend-name amIdentityStore --set index-type:extensible --index-name fr-idm-managed-organization-owner --set index-extensible-matching-rule:1.3.6.1.4.1.36733.2.1.4.7 --set index-extensible-matching-rule:1.3.6.1.4.1.36733.2.1.4.9 #14 89.38 #14 89.39 The Backend Index was created successfully #14 89.39 #14 89.39 create-backend-index --backend-name amIdentityStore --set index-type:extensible --index-name fr-idm-managed-organization-admin --set index-extensible-matching-rule:1.3.6.1.4.1.36733.2.1.4.7 --set index-extensible-matching-rule:1.3.6.1.4.1.36733.2.1.4.9 #14 89.54 #14 89.54 The Backend Index was created successfully #14 89.54 #14 89.54 create-backend-index --backend-name amIdentityStore --set index-type:extensible --index-name fr-idm-managed-organization-member --set index-extensible-matching-rule:1.3.6.1.4.1.36733.2.1.4.7 --set index-extensible-matching-rule:1.3.6.1.4.1.36733.2.1.4.9 #14 89.68 #14 89.68 The Backend Index was created successfully #14 89.68 #14 89.68 create-backend-index --backend-name amIdentityStore --set index-type:ordering --type generic --index-name fr-idm-managed-user-active-date #14 89.81 #14 89.81 The Backend Index was created successfully #14 89.81 #14 89.81 create-backend-index --backend-name amIdentityStore --set index-type:ordering --type generic --index-name fr-idm-managed-user-inactive-date #14 89.93 #14 89.93 The Backend Index was created successfully #14 89.93 #14 89.93 create-backend-index --backend-name amIdentityStore --set index-type:extensible --index-name fr-idm-managed-user-groups --set index-extensible-matching-rule:1.3.6.1.4.1.36733.2.1.4.7 --set index-extensible-matching-rule:1.3.6.1.4.1.36733.2.1.4.9 #14 90.07 #14 90.07 The Backend Index was created successfully #14 90.07 #14 90.08 create-backend-index --backend-name amIdentityStore --set index-type:extensible --index-name fr-idm-managed-assignment-member --set index-extensible-matching-rule:1.3.6.1.4.1.36733.2.1.4.7 --set index-extensible-matching-rule:1.3.6.1.4.1.36733.2.1.4.9 #14 90.20 #14 90.20 The Backend Index was created successfully #14 90.21 #14 90.21 create-backend-index --backend-name amIdentityStore --set index-type:extensible --index-name fr-idm-managed-application-member --set 
index-extensible-matching-rule:1.3.6.1.4.1.36733.2.1.4.7 --set index-extensible-matching-rule:1.3.6.1.4.1.36733.2.1.4.9 #14 90.34 #14 90.35 The Backend Index was created successfully #14 90.35 #14 90.35 create-backend-index --backend-name amIdentityStore --set index-type:extensible --index-name fr-idm-managed-application-owner --set index-extensible-matching-rule:1.3.6.1.4.1.36733.2.1.4.7 --set index-extensible-matching-rule:1.3.6.1.4.1.36733.2.1.4.9 #14 90.50 #14 90.50 The Backend Index was created successfully #14 90.50 #14 90.55 + dsconfig --offline --no-prompt --batch #14 92.50 set-global-configuration-prop --set "unauthenticated-requests-policy:allow" #14 93.77 #14 93.78 set-password-policy-prop --policy-name "Default Password Policy" --set "require-secure-authentication:false" --set "require-secure-password-changes:false" --reset "password-validator" #14 94.21 #14 94.21 set-password-policy-prop --policy-name "Root Password Policy" --set "require-secure-authentication:false" --set "require-secure-password-changes:false" --reset "password-validator" #14 94.45 #14 DONE 94.5s #15 exporting to image #15 exporting layers #15 exporting layers 0.2s done #15 writing image sha256:ae7d27f0b1a3ef0b335bccff9b01ab641f5ff050cac8cda1b3da722eaafeeee1 done #15 naming to gcr.io/engineeringpit/lodestar-images/ds-idrepo:xlou done #15 DONE 0.3s #1 [internal] load build definition from Dockerfile #1 transferring dockerfile: 529B done #1 DONE 0.0s #2 [internal] load .dockerignore #2 transferring context: 2B done #2 DONE 0.0s #3 [internal] load metadata for gcr.io/forgerock-io/ds/pit1:7.3.1-e49604b398e05e2a526179e932642f0a54637f2c #3 DONE 0.3s #4 [1/7] FROM gcr.io/forgerock-io/ds/pit1:7.3.1-e49604b398e05e2a526179e932642f0a54637f2c@sha256:b42c3ab8ae8db1c5939d8ad3c06c8f9123b01727344fc66bee6cfa93e16168dd #4 DONE 0.0s #5 [internal] load build context #5 transferring context: 4.77kB done #5 DONE 0.0s #6 [2/7] COPY debian-buster-sources.list /etc/apt/sources.list #6 CACHED #7 [3/7] RUN chown -R forgerock:root /opt/opendj #7 DONE 1.8s #8 [4/7] COPY --chown=forgerock:root common /opt/opendj/ #8 DONE 0.1s #9 [5/7] COPY --chown=forgerock:root cts /opt/opendj/ #9 DONE 0.1s #10 [6/7] COPY --chown=forgerock:root scripts /opt/opendj/scripts #10 DONE 0.1s #11 [7/7] RUN bin/setup.sh && bin/relax-security-settings.sh && rm bin/setup.sh bin/relax-security-settings.sh #11 5.146 #11 5.148 Configuring profile AM CTS data store......... Done #11 22.02 #11 22.02 Configuring profile DS proxied server...... 
Done #11 32.69 create-trust-manager-provider --provider-name "PEM Trust Manager" --type pem --set enabled:true --set pem-directory:pem-trust-directory #11 33.76 #11 33.76 The Pem Trust Manager Provider was created successfully #11 33.77 #11 33.77 set-connection-handler-prop --handler-name https --set trust-manager-provider:"PEM Trust Manager" #11 34.17 #11 34.18 set-connection-handler-prop --handler-name ldap --set trust-manager-provider:"PEM Trust Manager" #11 34.52 #11 34.52 set-connection-handler-prop --handler-name ldaps --set trust-manager-provider:"PEM Trust Manager" #11 34.72 #11 34.72 set-synchronization-provider-prop --provider-name "Multimaster Synchronization" --set trust-manager-provider:"PEM Trust Manager" #11 34.92 #11 34.92 set-administration-connector-prop --set trust-manager-provider:"PEM Trust Manager" #11 35.12 #11 35.12 delete-trust-manager-provider --provider-name "PKCS12" #11 35.61 #11 35.61 The Trust Manager Provider was deleted successfully #11 35.61 #11 35.61 create-key-manager-provider --provider-name "PEM Key Manager" --type pem --set enabled:true --set pem-directory:pem-keys-directory #11 35.78 #11 35.78 The Pem Key Manager Provider was created successfully #11 35.78 #11 35.78 set-connection-handler-prop --handler-name https --set key-manager-provider:"PEM Key Manager" #11 35.94 #11 35.94 set-connection-handler-prop --handler-name ldap --set key-manager-provider:"PEM Key Manager" #11 36.10 #11 36.10 set-connection-handler-prop --handler-name ldaps --set key-manager-provider:"PEM Key Manager" #11 36.33 #11 36.33 set-synchronization-provider-prop --provider-name "Multimaster Synchronization" --set key-manager-provider:"PEM Key Manager" #11 36.51 #11 36.51 set-crypto-manager-prop --set key-manager-provider:"PEM Key Manager" #11 36.65 #11 36.65 set-administration-connector-prop --set key-manager-provider:"PEM Key Manager" #11 36.84 #11 36.84 delete-key-manager-provider --provider-name "PKCS12" #11 37.12 #11 37.12 The Key Manager Provider was deleted successfully #11 37.13 #11 37.17 + dsconfig --offline --no-prompt --batch #11 38.98 set-global-configuration-prop --set "unauthenticated-requests-policy:allow" #11 40.01 #11 40.01 set-password-policy-prop --policy-name "Default Password Policy" --set "require-secure-authentication:false" --set "require-secure-password-changes:false" --reset "password-validator" #11 40.41 #11 40.41 set-password-policy-prop --policy-name "Root Password Policy" --set "require-secure-authentication:false" --set "require-secure-password-changes:false" --reset "password-validator" #11 40.67 #11 DONE 40.8s #12 exporting to image #12 exporting layers #12 exporting layers 0.6s done #12 writing image sha256:606573bb3bfb0a57ee44e774dad1bd0c9dc04929bbcbeeabdc1c24b92c31906f done #12 naming to gcr.io/engineeringpit/lodestar-images/ds-cts:xlou done #12 DONE 0.6s #1 [internal] load build definition from Dockerfile #1 transferring dockerfile: 504B done #1 DONE 0.0s #2 [internal] load .dockerignore #2 transferring context: 2B done #2 DONE 0.0s #3 [internal] load metadata for gcr.io/forgerock-io/ig/pit1:7.3.0-latest-postcommit #3 DONE 1.7s #4 [internal] load build context #4 transferring context: 18.23kB done #4 DONE 0.0s #5 [1/5] FROM gcr.io/forgerock-io/ig/pit1:7.3.0-latest-postcommit@sha256:6cecde6c0a83637b7fe45bad89e44d9a1d1c7f1465e4fb2b47e5628209f80806 #5 resolve gcr.io/forgerock-io/ig/pit1:7.3.0-latest-postcommit@sha256:6cecde6c0a83637b7fe45bad89e44d9a1d1c7f1465e4fb2b47e5628209f80806 0.0s done #5 
sha256:8740c948ffd4c816ea7ca963f99ca52f4788baa23f228da9581a9ea2edd3fcd7 0B / 31.40MB 0.1s #5 sha256:8345fcb9db6cdc00d8cd20bd594329f9bf2766f6786e065fdb242b5737df8233 0B / 35.65MB 0.1s #5 sha256:b97c2a3010cfee7ba5232faec26220a23620635ea88c30923f2ce48eea26793b 4.17kB / 4.17kB done #5 sha256:15a536f2a58d53ecbd7b366e8b2539a71f8a796747ff53265d475de79467b553 0B / 1.26kB 0.1s #5 sha256:6cecde6c0a83637b7fe45bad89e44d9a1d1c7f1465e4fb2b47e5628209f80806 685B / 685B done #5 sha256:1bb898edbe3d2ef25898791d4709520118515657c01296cac46e35499ab8e191 1.88kB / 1.88kB done #5 sha256:15a536f2a58d53ecbd7b366e8b2539a71f8a796747ff53265d475de79467b553 1.26kB / 1.26kB 0.4s done #5 sha256:4d2476b626761f496c69da75f50c583812b9b9f61271fb3cdbc72ef609c751ae 0B / 1.71MB 0.5s #5 sha256:8345fcb9db6cdc00d8cd20bd594329f9bf2766f6786e065fdb242b5737df8233 7.34MB / 35.65MB 0.6s #5 sha256:8740c948ffd4c816ea7ca963f99ca52f4788baa23f228da9581a9ea2edd3fcd7 8.39MB / 31.40MB 0.9s #5 sha256:8345fcb9db6cdc00d8cd20bd594329f9bf2766f6786e065fdb242b5737df8233 14.68MB / 35.65MB 0.9s #5 sha256:4d2476b626761f496c69da75f50c583812b9b9f61271fb3cdbc72ef609c751ae 1.71MB / 1.71MB 0.8s done #5 sha256:6c46e6923bc2fa8aa85cfe711b3d050d059da3a5b65fb96de439fc31e311d9e6 0B / 61.11MB 0.9s #5 sha256:8345fcb9db6cdc00d8cd20bd594329f9bf2766f6786e065fdb242b5737df8233 16.78MB / 35.65MB 1.0s #5 sha256:8740c948ffd4c816ea7ca963f99ca52f4788baa23f228da9581a9ea2edd3fcd7 31.40MB / 31.40MB 1.2s #5 sha256:8345fcb9db6cdc00d8cd20bd594329f9bf2766f6786e065fdb242b5737df8233 25.17MB / 35.65MB 1.2s #5 sha256:8740c948ffd4c816ea7ca963f99ca52f4788baa23f228da9581a9ea2edd3fcd7 31.40MB / 31.40MB 1.2s done #5 sha256:464d810b5a974aeb5de6edd9dd0fa6ed6e2b3591ed18eb41b5658c267b39a069 0B / 61.11MB 1.3s #5 extracting sha256:8740c948ffd4c816ea7ca963f99ca52f4788baa23f228da9581a9ea2edd3fcd7 #5 sha256:8345fcb9db6cdc00d8cd20bd594329f9bf2766f6786e065fdb242b5737df8233 35.65MB / 35.65MB 1.6s #5 sha256:6c46e6923bc2fa8aa85cfe711b3d050d059da3a5b65fb96de439fc31e311d9e6 8.39MB / 61.11MB 1.6s #5 sha256:8345fcb9db6cdc00d8cd20bd594329f9bf2766f6786e065fdb242b5737df8233 35.65MB / 35.65MB 1.6s done #5 sha256:a3cfa75b4985ac25bee04af883d5b28ee79b1c2cfaec09879ef417b42a99ad86 0B / 753B 1.7s #5 sha256:6c46e6923bc2fa8aa85cfe711b3d050d059da3a5b65fb96de439fc31e311d9e6 16.78MB / 61.11MB 1.9s #5 sha256:6c46e6923bc2fa8aa85cfe711b3d050d059da3a5b65fb96de439fc31e311d9e6 33.55MB / 61.11MB 2.1s #5 sha256:464d810b5a974aeb5de6edd9dd0fa6ed6e2b3591ed18eb41b5658c267b39a069 8.39MB / 61.11MB 2.1s #5 sha256:a3cfa75b4985ac25bee04af883d5b28ee79b1c2cfaec09879ef417b42a99ad86 753B / 753B 2.0s done #5 sha256:cfc943ce45fbce4aa36ef264f204b4443166fbbbdb767ee87e7aade86adcb9d2 0B / 993B 2.1s #5 sha256:464d810b5a974aeb5de6edd9dd0fa6ed6e2b3591ed18eb41b5658c267b39a069 16.78MB / 61.11MB 2.2s #5 sha256:6c46e6923bc2fa8aa85cfe711b3d050d059da3a5b65fb96de439fc31e311d9e6 36.70MB / 61.11MB 2.3s #5 sha256:464d810b5a974aeb5de6edd9dd0fa6ed6e2b3591ed18eb41b5658c267b39a069 26.21MB / 61.11MB 2.3s #5 sha256:cfc943ce45fbce4aa36ef264f204b4443166fbbbdb767ee87e7aade86adcb9d2 993B / 993B 2.3s done #5 sha256:6c46e6923bc2fa8aa85cfe711b3d050d059da3a5b65fb96de439fc31e311d9e6 42.99MB / 61.11MB 2.4s #5 sha256:464d810b5a974aeb5de6edd9dd0fa6ed6e2b3591ed18eb41b5658c267b39a069 34.60MB / 61.11MB 2.4s #5 sha256:6c46e6923bc2fa8aa85cfe711b3d050d059da3a5b65fb96de439fc31e311d9e6 50.33MB / 61.11MB 2.6s #5 sha256:464d810b5a974aeb5de6edd9dd0fa6ed6e2b3591ed18eb41b5658c267b39a069 50.33MB / 61.11MB 2.6s #5 sha256:6c46e6923bc2fa8aa85cfe711b3d050d059da3a5b65fb96de439fc31e311d9e6 57.59MB / 
61.11MB 2.7s #5 sha256:464d810b5a974aeb5de6edd9dd0fa6ed6e2b3591ed18eb41b5658c267b39a069 57.67MB / 61.11MB 2.7s #5 sha256:6c46e6923bc2fa8aa85cfe711b3d050d059da3a5b65fb96de439fc31e311d9e6 61.11MB / 61.11MB 2.8s #5 sha256:464d810b5a974aeb5de6edd9dd0fa6ed6e2b3591ed18eb41b5658c267b39a069 61.11MB / 61.11MB 2.8s #5 sha256:464d810b5a974aeb5de6edd9dd0fa6ed6e2b3591ed18eb41b5658c267b39a069 61.11MB / 61.11MB 2.9s done #5 sha256:6c46e6923bc2fa8aa85cfe711b3d050d059da3a5b65fb96de439fc31e311d9e6 61.11MB / 61.11MB 3.2s done #5 extracting sha256:8740c948ffd4c816ea7ca963f99ca52f4788baa23f228da9581a9ea2edd3fcd7 2.5s done #5 extracting sha256:8345fcb9db6cdc00d8cd20bd594329f9bf2766f6786e065fdb242b5737df8233 #5 extracting sha256:8345fcb9db6cdc00d8cd20bd594329f9bf2766f6786e065fdb242b5737df8233 0.8s done #5 extracting sha256:15a536f2a58d53ecbd7b366e8b2539a71f8a796747ff53265d475de79467b553 done #5 extracting sha256:4d2476b626761f496c69da75f50c583812b9b9f61271fb3cdbc72ef609c751ae #5 extracting sha256:4d2476b626761f496c69da75f50c583812b9b9f61271fb3cdbc72ef609c751ae 0.3s done #5 extracting sha256:6c46e6923bc2fa8aa85cfe711b3d050d059da3a5b65fb96de439fc31e311d9e6 #5 extracting sha256:6c46e6923bc2fa8aa85cfe711b3d050d059da3a5b65fb96de439fc31e311d9e6 1.0s done #5 extracting sha256:464d810b5a974aeb5de6edd9dd0fa6ed6e2b3591ed18eb41b5658c267b39a069 #5 extracting sha256:464d810b5a974aeb5de6edd9dd0fa6ed6e2b3591ed18eb41b5658c267b39a069 0.9s done #5 extracting sha256:a3cfa75b4985ac25bee04af883d5b28ee79b1c2cfaec09879ef417b42a99ad86 #5 extracting sha256:a3cfa75b4985ac25bee04af883d5b28ee79b1c2cfaec09879ef417b42a99ad86 done #5 extracting sha256:cfc943ce45fbce4aa36ef264f204b4443166fbbbdb767ee87e7aade86adcb9d2 done #5 DONE 8.1s #6 [2/5] COPY debian-buster-sources.list /etc/apt/sources.list #6 DONE 1.2s #7 [3/5] RUN echo "\033[0;36m*** Building 'cdk' profile ***\033[0m" #7 0.450 *** Building 'cdk' profile *** #7 DONE 0.5s #8 [4/5] COPY --chown=forgerock:root config-profiles/cdk/ /var/ig #8 DONE 0.1s #9 [5/5] COPY --chown=forgerock:root . 
/var/ig #9 DONE 0.1s #10 exporting to image #10 exporting layers #10 exporting layers 0.1s done #10 writing image sha256:a41b151894fea0001ac2c73d68d87a320b3c67c67661fbed2662a986bd076f2f done #10 naming to gcr.io/engineeringpit/lodestar-images/ig:xlou done #10 DONE 0.1s [run_command]: /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/bin/forgeops install --namespace=xlou --fqdn xlou.iam.xlou-cdm.engineeringpit.com --custom /mnt/disks/data/xslou/lodestar-fork/ext/forgeops_stack/kustomize/overlay/internal-profiles/medium-old --legacy all [run_command]: env={'HOME': '/home/xslou', 'PATH': '/mnt/disks/data/xslou/lodestar-fork/ext/bin:/home/xslou/.local/bin:/home/xslou/bin:~/bin:~/workshop/lodestar-fork/ext/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin', 'DOCKER_SCAN_SUGGEST': 'false'} customresourcedefinition.apiextensions.k8s.io/secretagentconfigurations.secret-agent.secrets.forgerock.io condition met deployment.apps/secret-agent-controller-manager condition met NAME READY STATUS RESTARTS AGE secret-agent-controller-manager-85df555854-jxjg6 2/2 Running 0 6d configmap/dev-utils created configmap/platform-config created ingress.networking.k8s.io/forgerock created ingress.networking.k8s.io/ig created certificate.cert-manager.io/ds-master-cert created certificate.cert-manager.io/ds-ssl-cert created issuer.cert-manager.io/selfsigned-issuer created secretagentconfiguration.secret-agent.secrets.forgerock.io/forgerock-sac created Checking cert-manager and related CRDs: cert-manager CRD found in cluster. Checking secret-agent operator and related CRDs: secret-agent CRD found in cluster.  Checking secret-agent operator is running... secret-agent operator is running Installing component(s): ['all'] platform: "custom-old" in namespace: "xlou".  Deploying base.yaml. This is a one time activity.  Waiting for K8s secrets. Waiting for secret "am-env-secrets" to exist in the cluster: .done Waiting for secret "idm-env-secrets" to exist in the cluster: ...done Waiting for secret "ds-passwords" to exist in the cluster: done Waiting for secret "ds-env-secrets" to exist in the cluster: secret/cloud-storage-credentials-cts created secret/cloud-storage-credentials-idrepo created service/ds-cts created service/ds-idrepo created statefulset.apps/ds-cts created statefulset.apps/ds-idrepo created job.batch/ldif-importer created done  Deploying ds.yaml. This includes all directory resources.  Waiting for DS deployment. This can take a few minutes. First installation takes longer. Waiting for statefulset "ds-idrepo" to exist in the cluster: Waiting for 3 pods to be ready... Waiting for 2 pods to be ready... Waiting for 1 pods to be ready... statefulset rolling update complete 3 pods at revision ds-idrepo-7b446fff4d... done Waiting for Service Account Password Update: done Waiting for statefulset "ds-cts" to exist in the cluster: statefulset rolling update complete 3 pods at revision ds-cts-87b85b6bd... done Waiting for Service Account Password Update: configmap/amster-files created configmap/idm created configmap/idm-logging-properties created service/am created service/idm created deployment.apps/am created deployment.apps/idm created job.batch/amster created done Cleaning up amster components.  Deploying apps.  Waiting for AM deployment. This can take a few minutes. First installation takes longer. Waiting for deployment "am" to exist in the cluster: deployment.apps/am condition met configmap/amster-retain created done  Waiting for amster job to complete. 
This can take several minutes. Waiting for job "amster" to exist in the cluster: job.batch/amster condition met done  Waiting for IDM deployment. This can take a few minutes. First installation takes longer. Waiting for deployment "idm" to exist in the cluster: pod/idm-78f4b47cb9-bksrb condition met pod/idm-78f4b47cb9-gv5tc condition met service/admin-ui created service/end-user-ui created service/login-ui created deployment.apps/admin-ui created deployment.apps/end-user-ui created deployment.apps/login-ui created done  Deploying UI.  Waiting for K8s secrets. Waiting for secret "am-env-secrets" to exist in the cluster: done Waiting for secret "idm-env-secrets" to exist in the cluster: done Waiting for secret "ds-passwords" to exist in the cluster: done Waiting for secret "ds-env-secrets" to exist in the cluster: done  Relevant passwords: SFkrGkagNlOdh96t8ApjUBAB (amadmin user) 9h8JqxeKuyqGII6sWFQX4mjcw2y1tnyz (uid=admin user) SiyEFZjyGKydvN3rIsnhpeRCJ99sItar (App str svc acct (uid=am-config,ou=admins,ou=am-config)) 71vSL9cL0wSAdod91HIdVVqSWuIj0lKi (CTS svc acct (uid=openam_cts,ou=admins,ou=famrecords,ou=openam-session,ou=tokens)) 680C8P9GUX6q6FB0xSMDoGfB6rQn2Uu5 (ID repo svc acct (uid=am-identity-bind-account,ou=admins,ou=identities))  Relevant URLs: https://xlou.iam.xlou-cdm.engineeringpit.com/platform https://xlou.iam.xlou-cdm.engineeringpit.com/admin https://xlou.iam.xlou-cdm.engineeringpit.com/am https://xlou.iam.xlou-cdm.engineeringpit.com/enduser  Enjoy your deployment! **************************** Initializing component pods for DS-CTS **************************** --------------------- Get expected number of pods --------------------- [loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-cts -o jsonpath={.items[*].spec.replicas} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 3 --- stderr --- ---------------------------- Get pod list ---------------------------- [loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app=ds-cts -o jsonpath={.items[*].metadata.name}` | grep 3 [loop_until]: (max_time=180, interval=10, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found [loop_until]: OK (rc = 0) --- stdout --- ds-cts-0 ds-cts-1 ds-cts-2 --- stderr --- -------------------- Check pod ds-cts-0 is running -------------------- [loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.phase} | grep "Running" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Running --- stderr --- [loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- true --- stderr --- [loop_until]: kubectl --namespace=xlou get pod ds-cts-0 -o jsonpath={.status.startTime} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2023-05-18T16:54:27Z --- stderr --- ------------- Check pod ds-cts-0 filesystem is accessible ------------- [loop_until]: kubectl --namespace=xlou exec ds-cts-0 -c ds -- ls / | grep "bin" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- 
Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var --- stderr --- ------------------ Check pod ds-cts-0 restart count ------------------ [loop_until]: kubectl --namespace=xlou get pod ds-cts-0 -o jsonpath={.status.containerStatuses[*].restartCount} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 0 --- stderr --- Pod ds-cts-0 has been restarted 0 times. -------------------- Check pod ds-cts-1 is running -------------------- [loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.phase} | grep "Running" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Running --- stderr --- [loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- true --- stderr --- [loop_until]: kubectl --namespace=xlou get pod ds-cts-1 -o jsonpath={.status.startTime} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2023-05-18T16:54:52Z --- stderr --- ------------- Check pod ds-cts-1 filesystem is accessible ------------- [loop_until]: kubectl --namespace=xlou exec ds-cts-1 -c ds -- ls / | grep "bin" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var --- stderr --- ------------------ Check pod ds-cts-1 restart count ------------------ [loop_until]: kubectl --namespace=xlou get pod ds-cts-1 -o jsonpath={.status.containerStatuses[*].restartCount} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 0 --- stderr --- Pod ds-cts-1 has been restarted 0 times. 
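
Every check in this stage goes through the same loop_until wrapper, which re-runs a shell command until it exits with an expected return code (and, where a grep is piped in, until the expected pattern appears) or until max_time elapses, retrying every interval seconds. The wrapper's real implementation lives in the framework and is not shown in this log; the sketch below is only a minimal approximation built from the parameters visible above (max_time, interval, expected_rc, pattern).

    import subprocess
    import time

    def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,), pattern=None):
        """Re-run `cmd` until its rc is expected and, if given, `pattern`
        appears in its output; give up after max_time seconds."""
        deadline = time.monotonic() + max_time
        while True:
            proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
            output = proc.stdout + proc.stderr
            if proc.returncode in expected_rc and (pattern is None or pattern in output):
                return proc
            if time.monotonic() >= deadline:
                raise TimeoutError(f"no success within {max_time}s: {cmd}")
            time.sleep(interval)

    # Mirrors one of the checks above (the log pipes to grep; here the
    # pattern is matched in Python instead):
    loop_until("kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.phase}",
               max_time=360, interval=5, pattern="Running")
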
-------------------- Check pod ds-cts-2 is running -------------------- [loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.phase} | grep "Running" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Running --- stderr --- [loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- true --- stderr --- [loop_until]: kubectl --namespace=xlou get pod ds-cts-2 -o jsonpath={.status.startTime} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2023-05-18T16:55:19Z --- stderr --- ------------- Check pod ds-cts-2 filesystem is accessible ------------- [loop_until]: kubectl --namespace=xlou exec ds-cts-2 -c ds -- ls / | grep "bin" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var --- stderr --- ------------------ Check pod ds-cts-2 restart count ------------------ [loop_until]: kubectl --namespace=xlou get pod ds-cts-2 -o jsonpath={.status.containerStatuses[*].restartCount} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 0 --- stderr --- Pod ds-cts-2 has been restarted 0 times. ************************** Initializing component pods for DS-IDREPO ************************** --------------------- Get expected number of pods --------------------- [loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-idrepo -o jsonpath={.items[*].spec.replicas} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 3 --- stderr --- ---------------------------- Get pod list ---------------------------- [loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app=ds-idrepo -o jsonpath={.items[*].metadata.name}` | grep 3 [loop_until]: (max_time=180, interval=10, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found [loop_until]: OK (rc = 0) --- stdout --- ds-idrepo-0 ds-idrepo-1 ds-idrepo-2 --- stderr --- ------------------ Check pod ds-idrepo-0 is running ------------------ [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.phase} | grep "Running" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Running --- stderr --- [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- true --- stderr --- [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 -o jsonpath={.status.startTime} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2023-05-18T16:54:27Z --- stderr --- ----------- Check pod ds-idrepo-0 filesystem is accessible ----------- [loop_until]: kubectl --namespace=xlou exec 
ds-idrepo-0 -c ds -- ls / | grep "bin" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var --- stderr --- ----------------- Check pod ds-idrepo-0 restart count ----------------- [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 -o jsonpath={.status.containerStatuses[*].restartCount} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 0 --- stderr --- Pod ds-idrepo-0 has been restarted 0 times. ------------------ Check pod ds-idrepo-1 is running ------------------ [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.phase} | grep "Running" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Running --- stderr --- [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- true --- stderr --- [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 -o jsonpath={.status.startTime} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2023-05-18T16:55:05Z --- stderr --- ----------- Check pod ds-idrepo-1 filesystem is accessible ----------- [loop_until]: kubectl --namespace=xlou exec ds-idrepo-1 -c ds -- ls / | grep "bin" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var --- stderr --- ----------------- Check pod ds-idrepo-1 restart count ----------------- [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 -o jsonpath={.status.containerStatuses[*].restartCount} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 0 --- stderr --- Pod ds-idrepo-1 has been restarted 0 times. 
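
Each pod is inspected with the same four JSONPath queries: phase, per-container readiness, start time and restart count. The log issues them as separate kubectl calls; the hypothetical helper below fetches the pod once as JSON and returns the same fields, which is an equivalent but not identical approach.

    import json
    import subprocess

    def pod_status(namespace, pod):
        """Return the status fields the checks above read for each pod."""
        raw = subprocess.run(
            ["kubectl", "--namespace", namespace, "get", "pod", pod, "-o", "json"],
            capture_output=True, text=True, check=True).stdout
        status = json.loads(raw)["status"]
        return {
            "phase": status["phase"],                    # e.g. "Running"
            "ready": all(c["ready"] for c in status["containerStatuses"]),
            "startTime": status["startTime"],            # e.g. "2023-05-18T16:55:05Z"
            "restarts": sum(c["restartCount"] for c in status["containerStatuses"]),
        }

    print(pod_status("xlou", "ds-idrepo-1"))
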
------------------ Check pod ds-idrepo-2 is running ------------------ [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.phase} | grep "Running" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Running --- stderr --- [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- true --- stderr --- [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 -o jsonpath={.status.startTime} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2023-05-18T16:55:40Z --- stderr --- ----------- Check pod ds-idrepo-2 filesystem is accessible ----------- [loop_until]: kubectl --namespace=xlou exec ds-idrepo-2 -c ds -- ls / | grep "bin" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var --- stderr --- ----------------- Check pod ds-idrepo-2 restart count ----------------- [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 -o jsonpath={.status.containerStatuses[*].restartCount} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 0 --- stderr --- Pod ds-idrepo-2 has been restarted 0 times. ****************************** Initializing component pods for AM ****************************** --------------------- Get expected number of pods --------------------- [loop_until]: kubectl --namespace=xlou get deployments -l app=am -o jsonpath={.items[*].spec.replicas} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 3 --- stderr --- ---------------------------- Get pod list ---------------------------- [loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app=am -o jsonpath={.items[*].metadata.name}` | grep 3 [loop_until]: (max_time=180, interval=10, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found [loop_until]: OK (rc = 0) --- stdout --- am-7849cf7bdb-2vjw5 am-7849cf7bdb-vcmhd am-7849cf7bdb-vnjrd --- stderr --- -------------- Check pod am-7849cf7bdb-2vjw5 is running -------------- [loop_until]: kubectl --namespace=xlou get pods am-7849cf7bdb-2vjw5 -o=jsonpath={.status.phase} | grep "Running" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Running --- stderr --- [loop_until]: kubectl --namespace=xlou get pods am-7849cf7bdb-2vjw5 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- true --- stderr --- [loop_until]: kubectl --namespace=xlou get pod am-7849cf7bdb-2vjw5 -o jsonpath={.status.startTime} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2023-05-18T16:56:17Z --- stderr --- ------- Check pod am-7849cf7bdb-2vjw5 filesystem is accessible 
------- [loop_until]: kubectl --namespace=xlou exec am-7849cf7bdb-2vjw5 -c openam -- ls / | grep "bin" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- bin boot dev etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var --- stderr --- ------------- Check pod am-7849cf7bdb-2vjw5 restart count ------------- [loop_until]: kubectl --namespace=xlou get pod am-7849cf7bdb-2vjw5 -o jsonpath={.status.containerStatuses[*].restartCount} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 0 --- stderr --- Pod am-7849cf7bdb-2vjw5 has been restarted 0 times. -------------- Check pod am-7849cf7bdb-vcmhd is running -------------- [loop_until]: kubectl --namespace=xlou get pods am-7849cf7bdb-vcmhd -o=jsonpath={.status.phase} | grep "Running" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Running --- stderr --- [loop_until]: kubectl --namespace=xlou get pods am-7849cf7bdb-vcmhd -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- true --- stderr --- [loop_until]: kubectl --namespace=xlou get pod am-7849cf7bdb-vcmhd -o jsonpath={.status.startTime} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2023-05-18T16:56:17Z --- stderr --- ------- Check pod am-7849cf7bdb-vcmhd filesystem is accessible ------- [loop_until]: kubectl --namespace=xlou exec am-7849cf7bdb-vcmhd -c openam -- ls / | grep "bin" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- bin boot dev etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var --- stderr --- ------------- Check pod am-7849cf7bdb-vcmhd restart count ------------- [loop_until]: kubectl --namespace=xlou get pod am-7849cf7bdb-vcmhd -o jsonpath={.status.containerStatuses[*].restartCount} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 0 --- stderr --- Pod am-7849cf7bdb-vcmhd has been restarted 0 times. 
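
The "Get expected number of pods" / "Get pod list" steps compare the controller's .spec.replicas with the number of pod names returned for the same label selector; the awk '{print NF}' trick in the log simply counts the whitespace-separated names. A compact Python equivalent, assuming a single controller matches the selector:

    import subprocess

    def expected_vs_actual(namespace, selector, kind="deployments"):
        """Compare the controller's declared replicas with the pods found."""
        def kubectl(*args):
            return subprocess.run(["kubectl", "--namespace", namespace, *args],
                                  capture_output=True, text=True, check=True).stdout
        expected = int(kubectl("get", kind, "-l", selector,
                               "-o", "jsonpath={.items[*].spec.replicas}"))
        pods = kubectl("get", "pods", "-l", selector,
                       "-o", "jsonpath={.items[*].metadata.name}").split()
        return expected, len(pods), pods

    print(expected_vs_actual("xlou", "app=am"))   # e.g. (3, 3, ['am-...', 'am-...', 'am-...'])
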
-------------- Check pod am-7849cf7bdb-vnjrd is running -------------- [loop_until]: kubectl --namespace=xlou get pods am-7849cf7bdb-vnjrd -o=jsonpath={.status.phase} | grep "Running" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Running --- stderr --- [loop_until]: kubectl --namespace=xlou get pods am-7849cf7bdb-vnjrd -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- true --- stderr --- [loop_until]: kubectl --namespace=xlou get pod am-7849cf7bdb-vnjrd -o jsonpath={.status.startTime} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2023-05-18T16:56:17Z --- stderr --- ------- Check pod am-7849cf7bdb-vnjrd filesystem is accessible ------- [loop_until]: kubectl --namespace=xlou exec am-7849cf7bdb-vnjrd -c openam -- ls / | grep "bin" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- bin boot dev etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var --- stderr --- ------------- Check pod am-7849cf7bdb-vnjrd restart count ------------- [loop_until]: kubectl --namespace=xlou get pod am-7849cf7bdb-vnjrd -o jsonpath={.status.containerStatuses[*].restartCount} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 0 --- stderr --- Pod am-7849cf7bdb-vnjrd has been restarted 0 times. **************************** Initializing component pods for AMSTER **************************** ---------------------------- Get pod list ---------------------------- [loop_until]: kubectl --namespace=xlou --field-selector status.phase!=Failed get pods -l app=amster -o jsonpath={.items[*].metadata.name} [loop_until]: (max_time=180, interval=10, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- amster-zkglf --- stderr --- ***************************** Initializing component pods for IDM ***************************** --------------------- Get expected number of pods --------------------- [loop_until]: kubectl --namespace=xlou get deployment -l app=idm -o jsonpath={.items[*].spec.replicas} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2 --- stderr --- ---------------------------- Get pod list ---------------------------- [loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app=idm -o jsonpath={.items[*].metadata.name}` | grep 2 [loop_until]: (max_time=180, interval=10, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found [loop_until]: OK (rc = 0) --- stdout --- idm-78f4b47cb9-bksrb idm-78f4b47cb9-gv5tc --- stderr --- -------------- Check pod idm-78f4b47cb9-bksrb is running -------------- [loop_until]: kubectl --namespace=xlou get pods idm-78f4b47cb9-bksrb -o=jsonpath={.status.phase} | grep "Running" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Running --- stderr --- [loop_until]: kubectl --namespace=xlou get pods idm-78f4b47cb9-bksrb -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" [loop_until]: 
(max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- true --- stderr --- [loop_until]: kubectl --namespace=xlou get pod idm-78f4b47cb9-bksrb -o jsonpath={.status.startTime} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2023-05-18T16:56:17Z --- stderr --- ------- Check pod idm-78f4b47cb9-bksrb filesystem is accessible ------- [loop_until]: kubectl --namespace=xlou exec idm-78f4b47cb9-bksrb -c openidm -- ls / | grep "bin" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Dockerfile.java-11 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var --- stderr --- ------------ Check pod idm-78f4b47cb9-bksrb restart count ------------ [loop_until]: kubectl --namespace=xlou get pod idm-78f4b47cb9-bksrb -o jsonpath={.status.containerStatuses[*].restartCount} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 0 --- stderr --- Pod idm-78f4b47cb9-bksrb has been restarted 0 times. -------------- Check pod idm-78f4b47cb9-gv5tc is running -------------- [loop_until]: kubectl --namespace=xlou get pods idm-78f4b47cb9-gv5tc -o=jsonpath={.status.phase} | grep "Running" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Running --- stderr --- [loop_until]: kubectl --namespace=xlou get pods idm-78f4b47cb9-gv5tc -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- true --- stderr --- [loop_until]: kubectl --namespace=xlou get pod idm-78f4b47cb9-gv5tc -o jsonpath={.status.startTime} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2023-05-18T16:56:17Z --- stderr --- ------- Check pod idm-78f4b47cb9-gv5tc filesystem is accessible ------- [loop_until]: kubectl --namespace=xlou exec idm-78f4b47cb9-gv5tc -c openidm -- ls / | grep "bin" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Dockerfile.java-11 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var --- stderr --- ------------ Check pod idm-78f4b47cb9-gv5tc restart count ------------ [loop_until]: kubectl --namespace=xlou get pod idm-78f4b47cb9-gv5tc -o jsonpath={.status.containerStatuses[*].restartCount} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 0 --- stderr --- Pod idm-78f4b47cb9-gv5tc has been restarted 0 times. 
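
Unlike the other components, amster is a one-shot Job rather than a long-running Deployment, so its pod list is taken with --field-selector status.phase!=Failed to skip any failed attempts, and the job itself is verified later in the log through .status.succeeded. A small sketch combining both queries (helper name is illustrative only):

    import subprocess

    def amster_pods_and_result(namespace="xlou"):
        """Non-failed amster pods plus whether the Job reports success."""
        def kubectl(*args):
            return subprocess.run(["kubectl", "--namespace", namespace, *args],
                                  capture_output=True, text=True, check=True).stdout
        pods = kubectl("get", "pods", "-l", "app=amster",
                       "--field-selector", "status.phase!=Failed",
                       "-o", "jsonpath={.items[*].metadata.name}").split()
        succeeded = kubectl("get", "jobs", "amster",
                            "-o", "jsonpath={.status.succeeded}").strip()
        return pods, succeeded == "1"
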
************************* Initializing component pods for END-USER-UI ************************* --------------------- Get expected number of pods --------------------- [loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].spec.replicas} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 1 --- stderr --- ---------------------------- Get pod list ---------------------------- [loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].metadata.name}` | grep 1 [loop_until]: (max_time=180, interval=10, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found [loop_until]: OK (rc = 0) --- stdout --- end-user-ui-787cb4f6b4-j246m --- stderr --- ---------- Check pod end-user-ui-787cb4f6b4-j246m is running ---------- [loop_until]: kubectl --namespace=xlou get pods end-user-ui-787cb4f6b4-j246m -o=jsonpath={.status.phase} | grep "Running" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Running --- stderr --- [loop_until]: kubectl --namespace=xlou get pods end-user-ui-787cb4f6b4-j246m -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- true --- stderr --- [loop_until]: kubectl --namespace=xlou get pod end-user-ui-787cb4f6b4-j246m -o jsonpath={.status.startTime} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2023-05-18T16:57:26Z --- stderr --- --- Check pod end-user-ui-787cb4f6b4-j246m filesystem is accessible --- [loop_until]: kubectl --namespace=xlou exec end-user-ui-787cb4f6b4-j246m -c end-user-ui -- ls / | grep "bin" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- bin boot dev docker-entrypoint.d docker-entrypoint.sh entrypoint.sh etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var variable_replacement.sh --- stderr --- -------- Check pod end-user-ui-787cb4f6b4-j246m restart count -------- [loop_until]: kubectl --namespace=xlou get pod end-user-ui-787cb4f6b4-j246m -o jsonpath={.status.containerStatuses[*].restartCount} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 0 --- stderr --- Pod end-user-ui-787cb4f6b4-j246m has been restarted 0 times. 
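
The "filesystem is accessible" probes exec `ls /` inside each pod's main container and look for a "bin" entry; only the container name changes per component (ds for the directory pods, openam for AM, openidm for IDM, and the UI pods use their own component name). A sketch under those assumptions:

    import subprocess

    # Main container per component, as used by the exec probes in this log.
    CONTAINERS = {"ds-cts": "ds", "ds-idrepo": "ds", "am": "openam",
                  "idm": "openidm", "end-user-ui": "end-user-ui",
                  "login-ui": "login-ui", "admin-ui": "admin-ui"}

    def filesystem_accessible(namespace, pod, container):
        """True if `ls /` inside the container lists a 'bin' entry."""
        out = subprocess.run(
            ["kubectl", "--namespace", namespace, "exec", pod, "-c", container,
             "--", "ls", "/"],
            capture_output=True, text=True, check=True).stdout
        return "bin" in out.split()

    print(filesystem_accessible("xlou", "end-user-ui-787cb4f6b4-j246m",
                                CONTAINERS["end-user-ui"]))
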
*************************** Initializing component pods for LOGIN-UI *************************** --------------------- Get expected number of pods --------------------- [loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].spec.replicas} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 1 --- stderr --- ---------------------------- Get pod list ---------------------------- [loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].metadata.name}` | grep 1 [loop_until]: (max_time=180, interval=10, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found [loop_until]: OK (rc = 0) --- stdout --- login-ui-57fddf97c8-n79g5 --- stderr --- ----------- Check pod login-ui-57fddf97c8-n79g5 is running ----------- [loop_until]: kubectl --namespace=xlou get pods login-ui-57fddf97c8-n79g5 -o=jsonpath={.status.phase} | grep "Running" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Running --- stderr --- [loop_until]: kubectl --namespace=xlou get pods login-ui-57fddf97c8-n79g5 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- true --- stderr --- [loop_until]: kubectl --namespace=xlou get pod login-ui-57fddf97c8-n79g5 -o jsonpath={.status.startTime} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2023-05-18T16:57:27Z --- stderr --- ---- Check pod login-ui-57fddf97c8-n79g5 filesystem is accessible ---- [loop_until]: kubectl --namespace=xlou exec login-ui-57fddf97c8-n79g5 -c login-ui -- ls / | grep "bin" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- bin boot dev docker-entrypoint.d docker-entrypoint.sh entrypoint.sh etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var variable_replacement.sh --- stderr --- ---------- Check pod login-ui-57fddf97c8-n79g5 restart count ---------- [loop_until]: kubectl --namespace=xlou get pod login-ui-57fddf97c8-n79g5 -o jsonpath={.status.containerStatuses[*].restartCount} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 0 --- stderr --- Pod login-ui-57fddf97c8-n79g5 has been restarted 0 times. 
*************************** Initializing component pods for ADMIN-UI *************************** --------------------- Get expected number of pods --------------------- [loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].spec.replicas} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 1 --- stderr --- ---------------------------- Get pod list ---------------------------- [loop_until]: awk -F" " "{print NF}" <<< `kubectl --namespace=xlou get pods -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].metadata.name}` | grep 1 [loop_until]: (max_time=180, interval=10, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected number of elements found [loop_until]: OK (rc = 0) --- stdout --- admin-ui-8666f85968-f8lws --- stderr --- ----------- Check pod admin-ui-8666f85968-f8lws is running ----------- [loop_until]: kubectl --namespace=xlou get pods admin-ui-8666f85968-f8lws -o=jsonpath={.status.phase} | grep "Running" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- Running --- stderr --- [loop_until]: kubectl --namespace=xlou get pods admin-ui-8666f85968-f8lws -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- true --- stderr --- [loop_until]: kubectl --namespace=xlou get pod admin-ui-8666f85968-f8lws -o jsonpath={.status.startTime} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2023-05-18T16:57:26Z --- stderr --- ---- Check pod admin-ui-8666f85968-f8lws filesystem is accessible ---- [loop_until]: kubectl --namespace=xlou exec admin-ui-8666f85968-f8lws -c admin-ui -- ls / | grep "bin" [loop_until]: (max_time=360, interval=5, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- bin boot dev docker-entrypoint.d docker-entrypoint.sh entrypoint.sh etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var variable_replacement.sh --- stderr --- ---------- Check pod admin-ui-8666f85968-f8lws restart count ---------- [loop_until]: kubectl --namespace=xlou get pod admin-ui-8666f85968-f8lws -o jsonpath={.status.containerStatuses[*].restartCount} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 0 --- stderr --- Pod admin-ui-8666f85968-f8lws has been restarted 0 times. 
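
After the per-pod checks, the next stage verifies each component at the controller level: statefulsets must report current, ready and total replicas all equal to the expected count, while deployments only need ready to match replicas, exactly as the "Checking ... component is running" sections below show. A compact equivalent built on the same JSONPath fields:

    import subprocess

    def controller_ready(namespace, kind, name, expected):
        """True when the controller reports `expected` ready replicas
        (statefulsets additionally expose currentReplicas, which is checked too)."""
        path = ("current:{.status.currentReplicas} "
                "ready:{.status.readyReplicas} replicas:{.status.replicas}"
                if kind == "statefulset" else
                "ready:{.status.readyReplicas} replicas:{.status.replicas}")
        out = subprocess.run(
            ["kubectl", "--namespace", namespace, "get", kind, name,
             "-o", f"jsonpath={path}"],
            capture_output=True, text=True, check=True).stdout.strip()
        want = (f"current:{expected} ready:{expected} replicas:{expected}"
                if kind == "statefulset" else
                f"ready:{expected} replicas:{expected}")
        return out == want

    print(controller_ready("xlou", "statefulset", "ds-cts", 3))   # expect True
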
***************************** Checking DS-CTS component is running ***************************** --------------------- Get expected number of pods --------------------- [loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-cts -o jsonpath={.items[*].spec.replicas} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 3 --- stderr --- -------------- Waiting for 3 expected pod(s) to be ready -------------- [loop_until]: kubectl --namespace=xlou get statefulsets ds-cts -o jsonpath="current:{.status.currentReplicas} ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "current:3 ready:3 replicas:3" [loop_until]: (max_time=900, interval=30, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- current:3 ready:3 replicas:3 --- stderr --- *************************** Checking DS-IDREPO component is running *************************** --------------------- Get expected number of pods --------------------- [loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-idrepo -o jsonpath={.items[*].spec.replicas} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 3 --- stderr --- -------------- Waiting for 3 expected pod(s) to be ready -------------- [loop_until]: kubectl --namespace=xlou get statefulsets ds-idrepo -o jsonpath="current:{.status.currentReplicas} ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "current:3 ready:3 replicas:3" [loop_until]: (max_time=900, interval=30, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- current:3 ready:3 replicas:3 --- stderr --- ******************************* Checking AM component is running ******************************* --------------------- Get expected number of pods --------------------- [loop_until]: kubectl --namespace=xlou get deployments -l app=am -o jsonpath={.items[*].spec.replicas} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 3 --- stderr --- -------------- Waiting for 3 expected pod(s) to be ready -------------- [loop_until]: kubectl --namespace=xlou get deployments am -o jsonpath="ready:{.status.readyReplicas} replicas:{.status.replicas}" | grep "ready:3 replicas:3" [loop_until]: (max_time=900, interval=30, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- ready:3 replicas:3 --- stderr --- ***************************** Checking AMSTER component is running ***************************** --------------------- Get expected number of pods --------------------- -------------- Waiting for 1 expected pod(s) to be ready -------------- [loop_until]: kubectl --namespace=xlou get jobs amster -o jsonpath="{.status.succeeded}" | grep "1" [loop_until]: (max_time=900, interval=30, expected_rc=[0] [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found [loop_until]: OK (rc = 0) --- stdout --- 1 --- stderr --- ****************************** Checking IDM component is running ****************************** --------------------- Get expected number of pods --------------------- [loop_until]: kubectl --namespace=xlou get deployment -l app=idm -o jsonpath={.items[*].spec.replicas} [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- 2 --- stderr --- -------------- Waiting for 2 
****************************** Livecheck stage: After deployment ******************************
---------------------- Running DS-CTS livecheck ----------------------
Livecheck to ds-cts-0
[loop_until]: kubectl --namespace=xlou get secret ds-passwords -o jsonpath="{.data.dirmanager\.pw}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
OWg4SnF4ZUt1eXFHSUk2c1dGUVg0bWpjdzJ5MXRueXo=
--- stderr ---
[run_command]: kubectl --namespace=xlou exec ds-cts-0 -c ds -- ldapsearch --noPropertiesFile --port 1389 --useStartTls --trustAll --bindDn "uid=admin" --bindPassword "9h8JqxeKuyqGII6sWFQX4mjcw2y1tnyz" --baseDn "" --searchScope base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
Livecheck to ds-cts-1
[run_command]: kubectl --namespace=xlou exec ds-cts-1 -c ds -- ldapsearch --noPropertiesFile --port 1389 --useStartTls --trustAll --bindDn "uid=admin" --bindPassword "9h8JqxeKuyqGII6sWFQX4mjcw2y1tnyz" --baseDn "" --searchScope base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
Livecheck to ds-cts-2
[run_command]: kubectl --namespace=xlou exec ds-cts-2 -c ds -- ldapsearch --noPropertiesFile --port 1389 --useStartTls --trustAll --bindDn "uid=admin" --bindPassword "9h8JqxeKuyqGII6sWFQX4mjcw2y1tnyz" --baseDn "" --searchScope base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
--------------------- Running DS-IDREPO livecheck ---------------------
Livecheck to ds-idrepo-0
[loop_until]: kubectl --namespace=xlou get secret ds-passwords -o jsonpath="{.data.dirmanager\.pw}"
[loop_until]: (max_time=60, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
OWg4SnF4ZUt1eXFHSUk2c1dGUVg0bWpjdzJ5MXRueXo=
--- stderr ---
[run_command]: kubectl --namespace=xlou exec ds-idrepo-0 -c ds -- ldapsearch --noPropertiesFile --port 1389 --useStartTls --trustAll --bindDn "uid=admin" --bindPassword "9h8JqxeKuyqGII6sWFQX4mjcw2y1tnyz" --baseDn "" --searchScope base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
Livecheck to ds-idrepo-1
[run_command]: kubectl --namespace=xlou exec ds-idrepo-1 -c ds -- ldapsearch --noPropertiesFile --port 1389 --useStartTls --trustAll --bindDn "uid=admin" --bindPassword "9h8JqxeKuyqGII6sWFQX4mjcw2y1tnyz" --baseDn "" --searchScope base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
Livecheck to ds-idrepo-2
[run_command]: kubectl --namespace=xlou exec ds-idrepo-2 -c ds -- ldapsearch --noPropertiesFile --port 1389 --useStartTls --trustAll --bindDn "uid=admin" --bindPassword "9h8JqxeKuyqGII6sWFQX4mjcw2y1tnyz" --baseDn "" --searchScope base "(&)" alive
[run_command]: OK (rc = 0 - expected to be in [0])
--- stdout ---
dn:
alive: true
--- stderr ---
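The directory livechecks above read the dirmanager password from the ds-passwords secret and run ldapsearch inside each DS pod, expecting "alive: true" in the output. A minimal standalone sketch of the same check, assuming kubectl access to the namespace (the helper names and error handling are illustrative, not the harness API):

import base64, subprocess

NS = "xlou"

def ds_admin_password(namespace=NS):
    # The secret key (dirmanager.pw) matches the jsonpath used in the log.
    b64 = subprocess.run(
        ["kubectl", "-n", namespace, "get", "secret", "ds-passwords",
         "-o", r"jsonpath={.data.dirmanager\.pw}"],
        capture_output=True, text=True, check=True).stdout
    return base64.b64decode(b64).decode()

def ds_livecheck(pod, password, namespace=NS):
    # Same ldapsearch flags as the log: root DSE search asking for "alive".
    out = subprocess.run(
        ["kubectl", "-n", namespace, "exec", pod, "-c", "ds", "--",
         "ldapsearch", "--noPropertiesFile", "--port", "1389",
         "--useStartTls", "--trustAll",
         "--bindDn", "uid=admin", "--bindPassword", password,
         "--baseDn", "", "--searchScope", "base", "(&)", "alive"],
        capture_output=True, text=True, check=True).stdout
    return "alive: true" in out

pw = ds_admin_password()
assert all(ds_livecheck(f"ds-cts-{i}", pw) for i in range(3))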
--bindDn "uid=admin" --bindPassword "9h8JqxeKuyqGII6sWFQX4mjcw2y1tnyz" --baseDn "" --searchScope base "(&)" alive [run_command]: OK (rc = 0 - expected to be in [0]) --- stdout --- dn: alive: true --- stderr --- Livecheck to ds-cts-1 [run_command]: kubectl --namespace=xlou exec ds-cts-1 -c ds -- ldapsearch --noPropertiesFile --port 1389 --useStartTls --trustAll --bindDn "uid=admin" --bindPassword "9h8JqxeKuyqGII6sWFQX4mjcw2y1tnyz" --baseDn "" --searchScope base "(&)" alive [run_command]: OK (rc = 0 - expected to be in [0]) --- stdout --- dn: alive: true --- stderr --- Livecheck to ds-cts-2 [run_command]: kubectl --namespace=xlou exec ds-cts-2 -c ds -- ldapsearch --noPropertiesFile --port 1389 --useStartTls --trustAll --bindDn "uid=admin" --bindPassword "9h8JqxeKuyqGII6sWFQX4mjcw2y1tnyz" --baseDn "" --searchScope base "(&)" alive [run_command]: OK (rc = 0 - expected to be in [0]) --- stdout --- dn: alive: true --- stderr --- --------------------- Running DS-IDREPO livecheck --------------------- Livecheck to ds-idrepo-0 [loop_until]: kubectl --namespace=xlou get secret ds-passwords -o jsonpath="{.data.dirmanager\.pw}" [loop_until]: (max_time=60, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- OWg4SnF4ZUt1eXFHSUk2c1dGUVg0bWpjdzJ5MXRueXo= --- stderr --- [run_command]: kubectl --namespace=xlou exec ds-idrepo-0 -c ds -- ldapsearch --noPropertiesFile --port 1389 --useStartTls --trustAll --bindDn "uid=admin" --bindPassword "9h8JqxeKuyqGII6sWFQX4mjcw2y1tnyz" --baseDn "" --searchScope base "(&)" alive [run_command]: OK (rc = 0 - expected to be in [0]) --- stdout --- dn: alive: true --- stderr --- Livecheck to ds-idrepo-1 [run_command]: kubectl --namespace=xlou exec ds-idrepo-1 -c ds -- ldapsearch --noPropertiesFile --port 1389 --useStartTls --trustAll --bindDn "uid=admin" --bindPassword "9h8JqxeKuyqGII6sWFQX4mjcw2y1tnyz" --baseDn "" --searchScope base "(&)" alive [run_command]: OK (rc = 0 - expected to be in [0]) --- stdout --- dn: alive: true --- stderr --- Livecheck to ds-idrepo-2 [run_command]: kubectl --namespace=xlou exec ds-idrepo-2 -c ds -- ldapsearch --noPropertiesFile --port 1389 --useStartTls --trustAll --bindDn "uid=admin" --bindPassword "9h8JqxeKuyqGII6sWFQX4mjcw2y1tnyz" --baseDn "" --searchScope base "(&)" alive [run_command]: OK (rc = 0 - expected to be in [0]) --- stdout --- dn: alive: true --- stderr --- ------------------------ Running AM livecheck ------------------------ Livecheck to https://xlou.iam.xlou-cdm.engineeringpit.com/am/json/health/ready [http_cmd]: curl -L -X GET "https://xlou.iam.xlou-cdm.engineeringpit.com/am/json/health/ready" [http_cmd]: http status code OK --- status code --- http status code is 200 (expected 200) --- http response --- [loop_until]: kubectl --namespace=xlou get secret am-env-secrets -o jsonpath="{.data.AM_PASSWORDS_AMADMIN_CLEAR}" [loop_until]: (max_time=60, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- U0ZrckdrYWdObE9kaDk2dDhBcGpVQkFC --- stderr --- Authenticate user amadmin via REST [http_cmd]: curl -H "X-OpenAM-Username: amadmin" -H "X-OpenAM-Password: SFkrGkagNlOdh96t8ApjUBAB" -H "Content-Type: application/json" -H "Accept-API-Version: resource=2.0, protocol=1.0" -L -X POST "https://xlou.iam.xlou-cdm.engineeringpit.com/am/json/authenticate?realm=/" [http_cmd]: http status code OK --- status code --- http status code is 200 (expected 200) --- http response --- { "tokenId": "ZHqv7vC-Q1Y-vLbQ2bX21K6K4ns.*AAJTSQACMDIAAlNLABx4VTJ6Q3NqakdTUzZxclNLNEEzNHNER08xNWs9AAR0eXBlAANDVFMAAlMxAAIwMQ..*", 
"successUrl": "/am/console", "realm": "/" } ---------------------- Running AMSTER livecheck ---------------------- ---------------------------- Get pod list ---------------------------- [loop_until]: kubectl --namespace=xlou --field-selector status.phase!=Failed get pods -l app=amster -o jsonpath={.items[*].metadata.name} [loop_until]: (max_time=180, interval=10, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- amster-zkglf --- stderr --- Amster livecheck is passed ------------------------ Running IDM livecheck ------------------------ Livecheck to https://xlou.iam.xlou-cdm.engineeringpit.com/openidm/info/ping [loop_until]: kubectl --namespace=xlou get secret idm-env-secrets -o jsonpath="{.data.OPENIDM_ADMIN_PASSWORD}" [loop_until]: (max_time=60, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- OUpoYmxNSkllTjJxdmRuZHpGVFhoakFV --- stderr --- Set admin password: 9JhblMJIeN2qvdndzFTXhjAU [http_cmd]: curl -H "X-OpenIDM-Username: anonymous" -H "X-OpenIDM-Password: anonymous" -L -X GET "https://xlou.iam.xlou-cdm.engineeringpit.com/openidm/info/ping" [http_cmd]: http status code OK --- status code --- http status code is 200 (expected 200) --- http response --- { "_id": "", "_rev": "", "shortDesc": "OpenIDM ready", "state": "ACTIVE_READY" } -------------------- Running END-USER-UI livecheck -------------------- Livecheck to https://xlou.iam.xlou-cdm.engineeringpit.com/enduser [http_cmd]: curl -L -X GET "https://xlou.iam.xlou-cdm.engineeringpit.com/enduser" [http_cmd]: http status code OK --- status code --- http status code is 200 (expected 200) --- http response --- Identity Management
-------------------- Running END-USER-UI livecheck --------------------
Livecheck to https://xlou.iam.xlou-cdm.engineeringpit.com/enduser
[http_cmd]: curl -L -X GET "https://xlou.iam.xlou-cdm.engineeringpit.com/enduser"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
Identity Management
[]
--------------------- Running LOGIN-UI livecheck ---------------------
Livecheck to https://xlou.iam.xlou-cdm.engineeringpit.com/am/XUI
[http_cmd]: curl -L -X GET "https://xlou.iam.xlou-cdm.engineeringpit.com/am/XUI"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
Login
[]
--------------------- Running ADMIN-UI livecheck ---------------------
Livecheck to https://xlou.iam.xlou-cdm.engineeringpit.com/platform
[http_cmd]: curl -L -X GET "https://xlou.iam.xlou-cdm.engineeringpit.com/platform"
[http_cmd]: http status code OK
--- status code ---
http status code is 200 (expected 200)
--- http response ---
Platform Admin
[]
LIVECHECK SUCCEEDED
**************************** Initializing component pods for DS-CTS ****************************
--------------------- Get DS-CTS software version ---------------------
[loop_until]: kubectl --namespace=xlou exec ds-cts-0 -c ds -- find /opt/opendj -name opendj-core.jar
[loop_until]: (max_time=30, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
/opt/opendj/lib/opendj-core.jar
--- stderr ---
[loop_until]: kubectl --namespace=xlou cp ds-cts-0:/opt/opendj/lib/opendj-core.jar /tmp/ds-cts_info/opendj-core.jar -c ds
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
************************** Initializing component pods for DS-IDREPO **************************
------------------- Get DS-IDREPO software version -------------------
[loop_until]: kubectl --namespace=xlou exec ds-idrepo-0 -c ds -- find /opt/opendj -name opendj-core.jar
[loop_until]: (max_time=30, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
/opt/opendj/lib/opendj-core.jar
--- stderr ---
[loop_until]: kubectl --namespace=xlou cp ds-idrepo-0:/opt/opendj/lib/opendj-core.jar /tmp/ds-idrepo_info/opendj-core.jar -c ds
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
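The DS version steps above only locate opendj-core.jar and copy it out of the pod; how the version is then read from the copied jar is not shown in this log. One plausible approach, sketched here purely as an assumption, is to read Implementation-Version from the jar manifest:

import os, subprocess, zipfile

NS, POD, JAR = "xlou", "ds-cts-0", "/opt/opendj/lib/opendj-core.jar"
LOCAL_DIR = "/tmp/ds-cts_info"
LOCAL = os.path.join(LOCAL_DIR, "opendj-core.jar")

os.makedirs(LOCAL_DIR, exist_ok=True)
# Copy the jar out of the ds container, as in the log above.
subprocess.run(["kubectl", "-n", NS, "cp", f"{POD}:{JAR}", LOCAL, "-c", "ds"], check=True)

# Assumption: the version is exposed as Implementation-Version in the manifest.
with zipfile.ZipFile(LOCAL) as jar:
    manifest = jar.read("META-INF/MANIFEST.MF").decode()
version = next((line.split(":", 1)[1].strip()
                for line in manifest.splitlines()
                if line.startswith("Implementation-Version")), "unknown")
print(f"DS-CTS opendj-core version: {version}")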
"iPlanetDirectoryPro=p_xCSXVMuOrRjJ6lXpO6u5ra1aE.*AAJTSQACMDIAAlNLABxyTEFnRkprMGdOMXF2OHlSOW56RUpmNllXUzg9AAR0eXBlAANDVFMAAlMxAAIwMQ..*" --cookie "route=1684429113.683.129738.149284|f60edb382037eb2df1e800d563ad78a7" "https://xlou.iam.xlou-cdm.engineeringpit.com/am/json/serverinfo/version" [http_cmd]: http status code OK --- status code --- http status code is 200 (expected 200) --- http response --- { "_id": "version", "_rev": "557845788", "version": "7.3.1-SNAPSHOT", "fullVersion": "ForgeRock Access Management 7.3.1-SNAPSHOT Build 2199bb185f3287050d915730f821400e00b2f8fe (2023-May-17 10:32)", "revision": "2199bb185f3287050d915730f821400e00b2f8fe", "date": "2023-May-17 10:32" } **************************** Initializing component pods for AMSTER **************************** ***************************** Initializing component pods for IDM ***************************** ---------------------- Get IDM software version ---------------------- Getting product version from https://xlou.iam.xlou-cdm.engineeringpit.com/openidm/info/version [http_cmd]: curl -H "X-OpenIDM-Username: anonymous" -H "X-OpenIDM-Password: anonymous" -L -X GET "https://xlou.iam.xlou-cdm.engineeringpit.com/openidm/info/version" [http_cmd]: http status code OK --- status code --- http status code is 200 (expected 200) --- http response --- { "_id": "version", "productVersion": "7.3.0-SNAPSHOT", "productBuildDate": "20230330162641", "productRevision": "ed278902ce" } ************************* Initializing component pods for END-USER-UI ************************* ------------------ Get END-USER-UI software version ------------------ [loop_until]: kubectl --namespace=xlou exec end-user-ui-787cb4f6b4-j246m -c end-user-ui -- find /usr/share/nginx/html -name chunk-vendors.*.js [loop_until]: (max_time=30, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- /usr/share/nginx/html/js/chunk-vendors.2f29f86b.js --- stderr --- [loop_until]: kubectl --namespace=xlou cp end-user-ui-787cb4f6b4-j246m:/usr/share/nginx/html/js/chunk-vendors.2f29f86b.js /tmp/end-user-ui_info/chunk-vendors.2f29f86b.js -c end-user-ui [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- tar: Removing leading `/' from member names --- stderr --- *************************** Initializing component pods for LOGIN-UI *************************** -------------------- Get LOGIN-UI software version -------------------- [loop_until]: kubectl --namespace=xlou exec login-ui-57fddf97c8-n79g5 -c login-ui -- find /usr/share/nginx/html -name chunk-vendors.*.js [loop_until]: (max_time=30, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- /usr/share/nginx/html/js/chunk-vendors.c809ddd8.js --- stderr --- [loop_until]: kubectl --namespace=xlou cp login-ui-57fddf97c8-n79g5:/usr/share/nginx/html/js/chunk-vendors.c809ddd8.js /tmp/login-ui_info/chunk-vendors.c809ddd8.js -c login-ui [loop_until]: (max_time=180, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- tar: Removing leading `/' from member names --- stderr --- *************************** Initializing component pods for ADMIN-UI *************************** -------------------- Get ADMIN-UI software version -------------------- [loop_until]: kubectl --namespace=xlou exec admin-ui-8666f85968-f8lws -c admin-ui -- find /usr/share/nginx/html -name chunk-vendors.*.js [loop_until]: (max_time=30, interval=5, expected_rc=[0] [loop_until]: OK (rc = 0) --- stdout --- /usr/share/nginx/html/js/chunk-vendors.2251928a.js --- stderr --- [loop_until]: 
************************* Initializing component pods for END-USER-UI *************************
------------------ Get END-USER-UI software version ------------------
[loop_until]: kubectl --namespace=xlou exec end-user-ui-787cb4f6b4-j246m -c end-user-ui -- find /usr/share/nginx/html -name chunk-vendors.*.js
[loop_until]: (max_time=30, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
/usr/share/nginx/html/js/chunk-vendors.2f29f86b.js
--- stderr ---
[loop_until]: kubectl --namespace=xlou cp end-user-ui-787cb4f6b4-j246m:/usr/share/nginx/html/js/chunk-vendors.2f29f86b.js /tmp/end-user-ui_info/chunk-vendors.2f29f86b.js -c end-user-ui
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
*************************** Initializing component pods for LOGIN-UI ***************************
-------------------- Get LOGIN-UI software version --------------------
[loop_until]: kubectl --namespace=xlou exec login-ui-57fddf97c8-n79g5 -c login-ui -- find /usr/share/nginx/html -name chunk-vendors.*.js
[loop_until]: (max_time=30, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
/usr/share/nginx/html/js/chunk-vendors.c809ddd8.js
--- stderr ---
[loop_until]: kubectl --namespace=xlou cp login-ui-57fddf97c8-n79g5:/usr/share/nginx/html/js/chunk-vendors.c809ddd8.js /tmp/login-ui_info/chunk-vendors.c809ddd8.js -c login-ui
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
*************************** Initializing component pods for ADMIN-UI ***************************
-------------------- Get ADMIN-UI software version --------------------
[loop_until]: kubectl --namespace=xlou exec admin-ui-8666f85968-f8lws -c admin-ui -- find /usr/share/nginx/html -name chunk-vendors.*.js
[loop_until]: (max_time=30, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
/usr/share/nginx/html/js/chunk-vendors.2251928a.js
--- stderr ---
[loop_until]: kubectl --namespace=xlou cp admin-ui-8666f85968-f8lws:/usr/share/nginx/html/js/chunk-vendors.2251928a.js /tmp/admin-ui_info/chunk-vendors.2251928a.js -c admin-ui
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
tar: Removing leading `/' from member names
--- stderr ---
====================================================================================================
================ Admin password for DS-CTS is: 9h8JqxeKuyqGII6sWFQX4mjcw2y1tnyz ================
====================================================================================================
====================================================================================================
============== Admin password for DS-IDREPO is: 9h8JqxeKuyqGII6sWFQX4mjcw2y1tnyz ==============
====================================================================================================
====================================================================================================
====================== Admin password for AM is: SFkrGkagNlOdh96t8ApjUBAB ======================
====================================================================================================
====================================================================================================
===================== Admin password for IDM is: 9JhblMJIeN2qvdndzFTXhjAU =====================
====================================================================================================
*************************************** Dumping pod list ***************************************
Dumping pod list to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/access_token/pod-logs/stack/20230518-165839-after-deployment/_pod-list.txt
**************************** Initializing component pods for DS-CTS ****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-cts -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app=ds-cts -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
ds-cts-0 ds-cts-1 ds-cts-2
--- stderr ---
************************** Initializing component pods for DS-IDREPO **************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get statefulsets -l app=ds-idrepo -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app=ds-idrepo -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
ds-idrepo-0 ds-idrepo-1 ds-idrepo-2
--- stderr ---
****************************** Initializing component pods for AM ******************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app=am -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
3
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app=am -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
am-7849cf7bdb-2vjw5 am-7849cf7bdb-vcmhd am-7849cf7bdb-vnjrd
--- stderr ---
**************************** Initializing component pods for AMSTER ****************************
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app=amster -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
amster-zkglf
--- stderr ---
***************************** Initializing component pods for IDM *****************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployment -l app=idm -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
2
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app=idm -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
idm-78f4b47cb9-bksrb idm-78f4b47cb9-gv5tc
--- stderr ---
************************* Initializing component pods for END-USER-UI *************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app.kubernetes.io/name=end-user-ui -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
end-user-ui-787cb4f6b4-j246m
--- stderr ---
*************************** Initializing component pods for LOGIN-UI ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app.kubernetes.io/name=login-ui -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
login-ui-57fddf97c8-n79g5
--- stderr ---
*************************** Initializing component pods for ADMIN-UI ***************************
--------------------- Get expected number of pods ---------------------
[loop_until]: kubectl --namespace=xlou get deployments -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].spec.replicas}
[loop_until]: (max_time=180, interval=5, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
1
--- stderr ---
---------------------------- Get pod list ----------------------------
[loop_until]: kubectl --namespace=xlou get pods -l app.kubernetes.io/name=admin-ui -o jsonpath={.items[*].metadata.name}
[loop_until]: (max_time=180, interval=10, expected_rc=[0]
[loop_until]: OK (rc = 0)
--- stdout ---
admin-ui-8666f85968-f8lws
--- stderr ---
*********************************** Dumping components logs ***********************************
----------------------- Dumping logs for DS-CTS -----------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/access_token/pod-logs/stack/20230518-165839-after-deployment/ds-cts-0.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/access_token/pod-logs/stack/20230518-165839-after-deployment/ds-cts-1.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/access_token/pod-logs/stack/20230518-165839-after-deployment/ds-cts-2.txt
Check pod logs for errors
--------------------- Dumping logs for DS-IDREPO ---------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/access_token/pod-logs/stack/20230518-165839-after-deployment/ds-idrepo-0.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/access_token/pod-logs/stack/20230518-165839-after-deployment/ds-idrepo-1.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/access_token/pod-logs/stack/20230518-165839-after-deployment/ds-idrepo-2.txt
Check pod logs for errors
------------------------- Dumping logs for AM -------------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/access_token/pod-logs/stack/20230518-165839-after-deployment/am-7849cf7bdb-2vjw5.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/access_token/pod-logs/stack/20230518-165839-after-deployment/am-7849cf7bdb-vcmhd.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/access_token/pod-logs/stack/20230518-165839-after-deployment/am-7849cf7bdb-vnjrd.txt
Check pod logs for errors
----------------------- Dumping logs for AMSTER -----------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/access_token/pod-logs/stack/20230518-165839-after-deployment/amster-zkglf.txt
Check pod logs for errors
------------------------ Dumping logs for IDM ------------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/access_token/pod-logs/stack/20230518-165839-after-deployment/idm-78f4b47cb9-bksrb.txt
Check pod logs for errors
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/access_token/pod-logs/stack/20230518-165839-after-deployment/idm-78f4b47cb9-gv5tc.txt
Check pod logs for errors
-------------------- Dumping logs for END-USER-UI --------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/access_token/pod-logs/stack/20230518-165839-after-deployment/end-user-ui-787cb4f6b4-j246m.txt
Check pod logs for errors
---------------------- Dumping logs for LOGIN-UI ----------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/access_token/pod-logs/stack/20230518-165839-after-deployment/login-ui-57fddf97c8-n79g5.txt
Check pod logs for errors
---------------------- Dumping logs for ADMIN-UI ----------------------
Dumping pod description and logs to /mnt/disks/data/xslou/lodestar-fork/results/pyrock/access_token/pod-logs/stack/20230518-165839-after-deployment/admin-ui-8666f85968-f8lws.txt
Check pod logs for errors
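The log-dump step above writes a per-pod text file and then scans the logs for errors; the exact markers the harness looks for are not shown in this output. An illustrative sketch (kubectl describe plus kubectl logs, with assumed error patterns):

import pathlib, subprocess

def dump_pod(pod, out_dir, namespace="xlou"):
    """Write describe output and container logs to <out_dir>/<pod>.txt, return suspicious lines."""
    out_dir = pathlib.Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    describe = subprocess.run(["kubectl", "-n", namespace, "describe", "pod", pod],
                              capture_output=True, text=True).stdout
    logs = subprocess.run(["kubectl", "-n", namespace, "logs", pod, "--all-containers"],
                          capture_output=True, text=True).stdout
    (out_dir / f"{pod}.txt").write_text(describe + "\n" + logs)
    # Assumed error markers; the real check may differ per component.
    return [line for line in logs.splitlines()
            if any(marker in line for marker in ("ERROR", "SEVERE", "Exception"))]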
[18/May/2023 16:59:02] - INFO: Deployment successful
________________________________________________________________________________
[18/May/2023 16:59:02] Deploy_all_forgerock_components post : Post method
________________________________________________________________________________
Setting result to PASS
Task has been successfully stopped