====================================================================================================
========================================= Pod describe =========================================
====================================================================================================
Name:         lodemon-7655dd7665-d26cm
Namespace:    xlou
Priority:     0
Node:         gke-xlou-cdm-default-pool-f05840a3-2nsn/10.142.0.46
Start Time:   Fri, 11 Aug 2023 21:08:24 +0000
Labels:       app=lodemon
              app.kubernetes.io/name=lodemon
              pod-template-hash=7655dd7665
              skaffold.dev/run-id=3b7e940f-c4fd-4fb5-87d6-f14a309ee008
Annotations:
Status:       Running
IP:           10.106.45.26
IPs:
  IP:           10.106.45.26
Controlled By:  ReplicaSet/lodemon-7655dd7665
Containers:
  lodemon:
    Container ID:  containerd://22b01bf64714bec1bb766c7c43edde3f7b7166a52120078e8b8f2389ffcece76
    Image:         gcr.io/engineeringpit/lodestar-images/lodestarbox:6c23848450de3f8e82f0a619a86abcd91fc890c6
    Image ID:      gcr.io/engineeringpit/lodestar-images/lodestarbox@sha256:f419b98ce988c016f788d178b318b601ed56b4ebb6e1a8df68b3ff2a986af79d
    Port:          8080/TCP
    Host Port:     0/TCP
    Command:
      python3
    Args:
      /lodestar/scripts/lodemon_run.py -W default
    State:          Running
      Started:      Fri, 11 Aug 2023 21:08:25 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  2Gi
    Requests:
      cpu:     1
      memory:  1Gi
    Liveness:   exec [cat /tmp/lodemon_alive] delay=20s timeout=1s period=10s #success=1 #failure=3
    Readiness:  exec [cat /tmp/lodemon_alive] delay=20s timeout=1s period=10s #success=1 #failure=3
    Environment:
      SKAFFOLD_PROFILE:  medium
    Mounts:
      /lodestar/config/config.yaml from config (rw,path="config.yaml")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ljt6h (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      lodemon-config
    Optional:  false
  kube-api-access-ljt6h:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:
    DownwardAPI:             true
QoS Class:       Burstable
Node-Selectors:
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
====================================================================================================
=========================================== Pod logs ===========================================
====================================================================================================
22:08:26 INFO
22:08:26 INFO --------------------- Get expected number of pods ---------------------
22:08:26 INFO
22:08:26 INFO [loop_until]: kubectl --namespace=xlou get deployments --selector app=am --output jsonpath={.items[*].spec.replicas}
22:08:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
22:08:26 INFO [loop_until]: OK (rc = 0)
22:08:26 DEBUG --- stdout ---
22:08:26 DEBUG 3
22:08:26 DEBUG --- stderr ---
22:08:26 DEBUG
22:08:26 INFO
22:08:26 INFO ---------------------------- Get pod list ----------------------------
22:08:26 INFO
22:08:26 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=am --output jsonpath={.items[*].metadata.name}
22:08:26 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0]
22:08:26 INFO [loop_until]: OK (rc = 0)
22:08:26 DEBUG --- stdout ---
22:08:26 DEBUG am-55f77847b7-7kvs5 am-55f77847b7-nhzv4 am-55f77847b7-rpq9w
22:08:26 DEBUG --- stderr ---
22:08:26 DEBUG
22:08:26 INFO
22:08:26 INFO -------------- Check pod am-55f77847b7-7kvs5 is running --------------
22:08:26 INFO
22:08:26 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-7kvs5 -o=jsonpath={.status.phase} | grep "Running"
22:08:26 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
22:08:26 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
22:08:26 INFO [loop_until]: OK (rc = 0)
22:08:26 DEBUG --- stdout ---
22:08:26 DEBUG Running
22:08:26 DEBUG --- stderr ---
22:08:26 DEBUG
22:08:26 INFO
22:08:26 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-7kvs5 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
22:08:26 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
22:08:26 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
22:08:26 INFO [loop_until]: OK (rc = 0)
22:08:26 DEBUG --- stdout ---
22:08:26 DEBUG true
22:08:26 DEBUG --- stderr ---
22:08:26 DEBUG
22:08:26 INFO
22:08:26 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-7kvs5 --output jsonpath={.status.startTime}
22:08:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
22:08:26 INFO [loop_until]: OK (rc = 0)
22:08:26 DEBUG --- stdout ---
22:08:26 DEBUG 2023-08-11T20:58:56Z
22:08:26 DEBUG --- stderr ---
22:08:26 DEBUG
22:08:26 INFO
22:08:26 INFO ------- Check pod am-55f77847b7-7kvs5 filesystem is accessible -------
22:08:26 INFO
22:08:26 INFO [loop_until]: kubectl --namespace=xlou exec am-55f77847b7-7kvs5 --container openam -- ls / | grep "bin"
22:08:26 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
22:08:26 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
22:08:26 INFO [loop_until]: OK (rc = 0)
22:08:26 DEBUG --- stdout ---
22:08:26 DEBUG bin boot dev entrypoint.sh etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var
22:08:26 DEBUG --- stderr ---
22:08:26 DEBUG
22:08:26 INFO
22:08:26 INFO ------------- Check pod am-55f77847b7-7kvs5 restart count -------------
22:08:26 INFO
22:08:26 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-7kvs5 --output jsonpath={.status.containerStatuses[*].restartCount}
22:08:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
22:08:26 INFO [loop_until]: OK (rc = 0)
22:08:26 DEBUG --- stdout ---
22:08:26 DEBUG 0
22:08:26 DEBUG --- stderr ---
22:08:26 DEBUG
22:08:26 INFO Pod am-55f77847b7-7kvs5 has been restarted 0 times.
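Editor's note: every probe above is logged by a [loop_until] wrapper that re-runs a kubectl command until it returns an expected rc, with a max_time and interval taken from the log line. The sketch below shows what such a helper might look like; the name loop_until and its parameters come from the log, but the implementation itself is an assumption, not the actual lodemon code.

```python
import subprocess
import time

def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,)):
    """Re-run a shell command until its return code is in expected_rc,
    or raise once max_time seconds have elapsed (a hypothetical sketch)."""
    deadline = time.monotonic() + max_time
    while True:
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        if result.returncode in expected_rc:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"command did not succeed within {max_time}s: {cmd}")
        time.sleep(interval)

# Example: the first probe from the log, retried every 5s for up to 180s.
replicas = loop_until(
    "kubectl --namespace=xlou get deployments --selector app=am "
    "--output jsonpath={.items[*].spec.replicas}",
    max_time=180, interval=5,
).stdout.strip()
```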
22:08:26 INFO 22:08:26 INFO -------------- Check pod am-55f77847b7-nhzv4 is running -------------- 22:08:26 INFO 22:08:26 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-nhzv4 -o=jsonpath={.status.phase} | grep "Running" 22:08:26 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:08:27 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:08:27 INFO [loop_until]: OK (rc = 0) 22:08:27 DEBUG --- stdout --- 22:08:27 DEBUG Running 22:08:27 DEBUG --- stderr --- 22:08:27 DEBUG 22:08:27 INFO 22:08:27 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-nhzv4 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 22:08:27 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:08:27 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:08:27 INFO [loop_until]: OK (rc = 0) 22:08:27 DEBUG --- stdout --- 22:08:27 DEBUG true 22:08:27 DEBUG --- stderr --- 22:08:27 DEBUG 22:08:27 INFO 22:08:27 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-nhzv4 --output jsonpath={.status.startTime} 22:08:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:08:27 INFO [loop_until]: OK (rc = 0) 22:08:27 DEBUG --- stdout --- 22:08:27 DEBUG 2023-08-11T20:58:56Z 22:08:27 DEBUG --- stderr --- 22:08:27 DEBUG 22:08:27 INFO 22:08:27 INFO ------- Check pod am-55f77847b7-nhzv4 filesystem is accessible ------- 22:08:27 INFO 22:08:27 INFO [loop_until]: kubectl --namespace=xlou exec am-55f77847b7-nhzv4 --container openam -- ls / | grep "bin" 22:08:27 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:08:27 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:08:27 INFO [loop_until]: OK (rc = 0) 22:08:27 DEBUG --- stdout --- 22:08:27 DEBUG bin boot dev entrypoint.sh etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var 22:08:27 DEBUG --- stderr --- 22:08:27 DEBUG 22:08:27 INFO 22:08:27 INFO ------------- Check pod am-55f77847b7-nhzv4 restart count ------------- 22:08:27 INFO 22:08:27 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-nhzv4 --output jsonpath={.status.containerStatuses[*].restartCount} 22:08:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:08:27 INFO [loop_until]: OK (rc = 0) 22:08:27 DEBUG --- stdout --- 22:08:27 DEBUG 0 22:08:27 DEBUG --- stderr --- 22:08:27 DEBUG 22:08:27 INFO Pod am-55f77847b7-nhzv4 has been restarted 0 times. 
22:08:27 INFO 22:08:27 INFO -------------- Check pod am-55f77847b7-rpq9w is running -------------- 22:08:27 INFO 22:08:27 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-rpq9w -o=jsonpath={.status.phase} | grep "Running" 22:08:27 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:08:27 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:08:27 INFO [loop_until]: OK (rc = 0) 22:08:27 DEBUG --- stdout --- 22:08:27 DEBUG Running 22:08:27 DEBUG --- stderr --- 22:08:27 DEBUG 22:08:27 INFO 22:08:27 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-rpq9w -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 22:08:27 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:08:27 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:08:27 INFO [loop_until]: OK (rc = 0) 22:08:27 DEBUG --- stdout --- 22:08:27 DEBUG true 22:08:27 DEBUG --- stderr --- 22:08:27 DEBUG 22:08:27 INFO 22:08:27 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-rpq9w --output jsonpath={.status.startTime} 22:08:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:08:27 INFO [loop_until]: OK (rc = 0) 22:08:27 DEBUG --- stdout --- 22:08:27 DEBUG 2023-08-11T20:58:56Z 22:08:27 DEBUG --- stderr --- 22:08:27 DEBUG 22:08:27 INFO 22:08:27 INFO ------- Check pod am-55f77847b7-rpq9w filesystem is accessible ------- 22:08:27 INFO 22:08:27 INFO [loop_until]: kubectl --namespace=xlou exec am-55f77847b7-rpq9w --container openam -- ls / | grep "bin" 22:08:27 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:08:27 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:08:27 INFO [loop_until]: OK (rc = 0) 22:08:27 DEBUG --- stdout --- 22:08:27 DEBUG bin boot dev entrypoint.sh etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var 22:08:27 DEBUG --- stderr --- 22:08:27 DEBUG 22:08:27 INFO 22:08:27 INFO ------------- Check pod am-55f77847b7-rpq9w restart count ------------- 22:08:27 INFO 22:08:27 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-rpq9w --output jsonpath={.status.containerStatuses[*].restartCount} 22:08:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:08:27 INFO [loop_until]: OK (rc = 0) 22:08:27 DEBUG --- stdout --- 22:08:27 DEBUG 0 22:08:27 DEBUG --- stderr --- 22:08:27 DEBUG 22:08:27 INFO Pod am-55f77847b7-rpq9w has been restarted 0 times. 
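Editor's note: each pod gets the same battery of checks: phase, container readiness, start time, and restart count, all read with `kubectl get pod ... -o jsonpath`. A minimal sketch of that sequence follows; retry handling (the loop_until behaviour above) is omitted for brevity, and the helper names are hypothetical.

```python
import subprocess

def sh(cmd: str) -> str:
    """Run a shell command and return stdout (no retries in this sketch)."""
    return subprocess.run(cmd, shell=True, capture_output=True, text=True, check=True).stdout

def check_pod(namespace: str, pod: str) -> None:
    """Status checks matching the log: phase, readiness, startTime, restartCount."""
    base = f"kubectl --namespace={namespace} get pod {pod} --output"

    phase = sh(f"{base} jsonpath={{.status.phase}}").strip()
    assert phase == "Running", f"{pod} is in phase {phase}"

    ready = sh(f"{base} jsonpath={{.status.containerStatuses[*].ready}}").split()
    assert all(flag == "true" for flag in ready), f"{pod} has unready containers"

    started = sh(f"{base} jsonpath={{.status.startTime}}").strip()
    restarts = sh(f"{base} jsonpath={{.status.containerStatuses[*].restartCount}}").split()
    print(f"Pod {pod} started {started}, restarted {sum(map(int, restarts))} times.")

check_pod("xlou", "am-55f77847b7-rpq9w")
```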
22:08:27 INFO 22:08:27 INFO --------------------- Get expected number of pods --------------------- 22:08:27 INFO 22:08:27 INFO [loop_until]: kubectl --namespace=xlou get deployment --selector app=idm --output jsonpath={.items[*].spec.replicas} 22:08:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:08:27 INFO [loop_until]: OK (rc = 0) 22:08:27 DEBUG --- stdout --- 22:08:27 DEBUG 2 22:08:27 DEBUG --- stderr --- 22:08:27 DEBUG 22:08:27 INFO 22:08:27 INFO ---------------------------- Get pod list ---------------------------- 22:08:27 INFO 22:08:27 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=idm --output jsonpath={.items[*].metadata.name} 22:08:27 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0] 22:08:27 INFO [loop_until]: OK (rc = 0) 22:08:27 DEBUG --- stdout --- 22:08:27 DEBUG idm-65858d8c4c-h7xxp idm-65858d8c4c-v78nh 22:08:27 DEBUG --- stderr --- 22:08:27 DEBUG 22:08:27 INFO 22:08:27 INFO -------------- Check pod idm-65858d8c4c-h7xxp is running -------------- 22:08:27 INFO 22:08:27 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-h7xxp -o=jsonpath={.status.phase} | grep "Running" 22:08:27 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:08:27 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:08:27 INFO [loop_until]: OK (rc = 0) 22:08:27 DEBUG --- stdout --- 22:08:27 DEBUG Running 22:08:27 DEBUG --- stderr --- 22:08:27 DEBUG 22:08:27 INFO 22:08:27 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-h7xxp -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 22:08:27 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:08:27 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:08:27 INFO [loop_until]: OK (rc = 0) 22:08:27 DEBUG --- stdout --- 22:08:27 DEBUG true 22:08:27 DEBUG --- stderr --- 22:08:27 DEBUG 22:08:27 INFO 22:08:27 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-h7xxp --output jsonpath={.status.startTime} 22:08:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:08:27 INFO [loop_until]: OK (rc = 0) 22:08:27 DEBUG --- stdout --- 22:08:27 DEBUG 2023-08-11T20:58:56Z 22:08:27 DEBUG --- stderr --- 22:08:27 DEBUG 22:08:27 INFO 22:08:27 INFO ------- Check pod idm-65858d8c4c-h7xxp filesystem is accessible ------- 22:08:27 INFO 22:08:27 INFO [loop_until]: kubectl --namespace=xlou exec idm-65858d8c4c-h7xxp --container openidm -- ls / | grep "bin" 22:08:27 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:08:27 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:08:27 INFO [loop_until]: OK (rc = 0) 22:08:27 DEBUG --- stdout --- 22:08:27 DEBUG Dockerfile.java-11 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 22:08:27 DEBUG --- stderr --- 22:08:27 DEBUG 22:08:27 INFO 22:08:27 INFO ------------ Check pod idm-65858d8c4c-h7xxp restart count ------------ 22:08:27 INFO 22:08:27 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-h7xxp --output jsonpath={.status.containerStatuses[*].restartCount} 22:08:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:08:27 INFO [loop_until]: OK (rc = 0) 22:08:27 DEBUG --- stdout --- 22:08:27 DEBUG 0 22:08:27 DEBUG --- stderr --- 22:08:27 DEBUG 22:08:27 INFO Pod idm-65858d8c4c-h7xxp has been restarted 0 times. 
22:08:27 INFO 22:08:27 INFO -------------- Check pod idm-65858d8c4c-v78nh is running -------------- 22:08:27 INFO 22:08:27 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-v78nh -o=jsonpath={.status.phase} | grep "Running" 22:08:27 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:08:28 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:08:28 INFO [loop_until]: OK (rc = 0) 22:08:28 DEBUG --- stdout --- 22:08:28 DEBUG Running 22:08:28 DEBUG --- stderr --- 22:08:28 DEBUG 22:08:28 INFO 22:08:28 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-v78nh -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 22:08:28 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:08:28 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:08:28 INFO [loop_until]: OK (rc = 0) 22:08:28 DEBUG --- stdout --- 22:08:28 DEBUG true 22:08:28 DEBUG --- stderr --- 22:08:28 DEBUG 22:08:28 INFO 22:08:28 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-v78nh --output jsonpath={.status.startTime} 22:08:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:08:28 INFO [loop_until]: OK (rc = 0) 22:08:28 DEBUG --- stdout --- 22:08:28 DEBUG 2023-08-11T20:58:56Z 22:08:28 DEBUG --- stderr --- 22:08:28 DEBUG 22:08:28 INFO 22:08:28 INFO ------- Check pod idm-65858d8c4c-v78nh filesystem is accessible ------- 22:08:28 INFO 22:08:28 INFO [loop_until]: kubectl --namespace=xlou exec idm-65858d8c4c-v78nh --container openidm -- ls / | grep "bin" 22:08:28 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:08:28 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:08:28 INFO [loop_until]: OK (rc = 0) 22:08:28 DEBUG --- stdout --- 22:08:28 DEBUG Dockerfile.java-11 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 22:08:28 DEBUG --- stderr --- 22:08:28 DEBUG 22:08:28 INFO 22:08:28 INFO ------------ Check pod idm-65858d8c4c-v78nh restart count ------------ 22:08:28 INFO 22:08:28 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-v78nh --output jsonpath={.status.containerStatuses[*].restartCount} 22:08:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:08:28 INFO [loop_until]: OK (rc = 0) 22:08:28 DEBUG --- stdout --- 22:08:28 DEBUG 0 22:08:28 DEBUG --- stderr --- 22:08:28 DEBUG 22:08:28 INFO Pod idm-65858d8c4c-v78nh has been restarted 0 times. 
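Editor's note: the "filesystem is accessible" step simply execs `ls /` inside the application container and looks for `bin` in the listing. A sketch under the same assumptions as above (container names openam/openidm/ds are taken from the log):

```python
import subprocess

def check_filesystem(namespace: str, pod: str, container: str) -> None:
    """Exec `ls /` in the given container and confirm the root listing contains `bin`."""
    result = subprocess.run(
        f"kubectl --namespace={namespace} exec {pod} --container {container} -- ls /",
        shell=True, capture_output=True, text=True, check=True,
    )
    entries = result.stdout.split()
    if "bin" not in entries:
        raise RuntimeError(f"{pod}/{container}: unexpected root listing: {entries}")

check_filesystem("xlou", "idm-65858d8c4c-v78nh", "openidm")
```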
22:08:28 INFO 22:08:28 INFO --------------------- Get expected number of pods --------------------- 22:08:28 INFO 22:08:28 INFO [loop_until]: kubectl --namespace=xlou get statefulsets --selector app=ds-idrepo --output jsonpath={.items[*].spec.replicas} 22:08:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:08:28 INFO [loop_until]: OK (rc = 0) 22:08:28 DEBUG --- stdout --- 22:08:28 DEBUG 3 22:08:28 DEBUG --- stderr --- 22:08:28 DEBUG 22:08:28 INFO 22:08:28 INFO ---------------------------- Get pod list ---------------------------- 22:08:28 INFO 22:08:28 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=ds-idrepo --output jsonpath={.items[*].metadata.name} 22:08:28 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0] 22:08:28 INFO [loop_until]: OK (rc = 0) 22:08:28 DEBUG --- stdout --- 22:08:28 DEBUG ds-idrepo-0 ds-idrepo-1 ds-idrepo-2 22:08:28 DEBUG --- stderr --- 22:08:28 DEBUG 22:08:28 INFO 22:08:28 INFO ------------------ Check pod ds-idrepo-0 is running ------------------ 22:08:28 INFO 22:08:28 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.phase} | grep "Running" 22:08:28 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:08:28 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:08:28 INFO [loop_until]: OK (rc = 0) 22:08:28 DEBUG --- stdout --- 22:08:28 DEBUG Running 22:08:28 DEBUG --- stderr --- 22:08:28 DEBUG 22:08:28 INFO 22:08:28 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 22:08:28 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:08:28 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:08:28 INFO [loop_until]: OK (rc = 0) 22:08:28 DEBUG --- stdout --- 22:08:28 DEBUG true 22:08:28 DEBUG --- stderr --- 22:08:28 DEBUG 22:08:28 INFO 22:08:28 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 --output jsonpath={.status.startTime} 22:08:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:08:28 INFO [loop_until]: OK (rc = 0) 22:08:28 DEBUG --- stdout --- 22:08:28 DEBUG 2023-08-11T20:25:09Z 22:08:28 DEBUG --- stderr --- 22:08:28 DEBUG 22:08:28 INFO 22:08:28 INFO ----------- Check pod ds-idrepo-0 filesystem is accessible ----------- 22:08:28 INFO 22:08:28 INFO [loop_until]: kubectl --namespace=xlou exec ds-idrepo-0 --container ds -- ls / | grep "bin" 22:08:28 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:08:28 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:08:28 INFO [loop_until]: OK (rc = 0) 22:08:28 DEBUG --- stdout --- 22:08:28 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 22:08:28 DEBUG --- stderr --- 22:08:28 DEBUG 22:08:28 INFO 22:08:28 INFO ----------------- Check pod ds-idrepo-0 restart count ----------------- 22:08:28 INFO 22:08:28 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 --output jsonpath={.status.containerStatuses[*].restartCount} 22:08:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:08:28 INFO [loop_until]: OK (rc = 0) 22:08:28 DEBUG --- stdout --- 22:08:28 DEBUG 0 22:08:28 DEBUG --- stderr --- 22:08:28 DEBUG 22:08:28 INFO Pod ds-idrepo-0 has been restarted 0 times. 
22:08:28 INFO 22:08:28 INFO ------------------ Check pod ds-idrepo-1 is running ------------------ 22:08:28 INFO 22:08:28 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.phase} | grep "Running" 22:08:28 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:08:28 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:08:28 INFO [loop_until]: OK (rc = 0) 22:08:28 DEBUG --- stdout --- 22:08:28 DEBUG Running 22:08:28 DEBUG --- stderr --- 22:08:28 DEBUG 22:08:28 INFO 22:08:28 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 22:08:28 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:08:28 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:08:28 INFO [loop_until]: OK (rc = 0) 22:08:28 DEBUG --- stdout --- 22:08:28 DEBUG true 22:08:28 DEBUG --- stderr --- 22:08:28 DEBUG 22:08:28 INFO 22:08:28 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 --output jsonpath={.status.startTime} 22:08:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:08:28 INFO [loop_until]: OK (rc = 0) 22:08:28 DEBUG --- stdout --- 22:08:28 DEBUG 2023-08-11T20:36:55Z 22:08:28 DEBUG --- stderr --- 22:08:28 DEBUG 22:08:28 INFO 22:08:28 INFO ----------- Check pod ds-idrepo-1 filesystem is accessible ----------- 22:08:28 INFO 22:08:28 INFO [loop_until]: kubectl --namespace=xlou exec ds-idrepo-1 --container ds -- ls / | grep "bin" 22:08:28 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:08:28 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:08:28 INFO [loop_until]: OK (rc = 0) 22:08:28 DEBUG --- stdout --- 22:08:28 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 22:08:28 DEBUG --- stderr --- 22:08:28 DEBUG 22:08:28 INFO 22:08:28 INFO ----------------- Check pod ds-idrepo-1 restart count ----------------- 22:08:28 INFO 22:08:28 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 --output jsonpath={.status.containerStatuses[*].restartCount} 22:08:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:08:28 INFO [loop_until]: OK (rc = 0) 22:08:28 DEBUG --- stdout --- 22:08:28 DEBUG 0 22:08:28 DEBUG --- stderr --- 22:08:28 DEBUG 22:08:28 INFO Pod ds-idrepo-1 has been restarted 0 times. 
22:08:28 INFO 22:08:28 INFO ------------------ Check pod ds-idrepo-2 is running ------------------ 22:08:28 INFO 22:08:28 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.phase} | grep "Running" 22:08:28 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:08:28 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:08:28 INFO [loop_until]: OK (rc = 0) 22:08:28 DEBUG --- stdout --- 22:08:28 DEBUG Running 22:08:28 DEBUG --- stderr --- 22:08:28 DEBUG 22:08:28 INFO 22:08:28 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 22:08:28 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:08:29 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:08:29 INFO [loop_until]: OK (rc = 0) 22:08:29 DEBUG --- stdout --- 22:08:29 DEBUG true 22:08:29 DEBUG --- stderr --- 22:08:29 DEBUG 22:08:29 INFO 22:08:29 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 --output jsonpath={.status.startTime} 22:08:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:08:29 INFO [loop_until]: OK (rc = 0) 22:08:29 DEBUG --- stdout --- 22:08:29 DEBUG 2023-08-11T20:47:58Z 22:08:29 DEBUG --- stderr --- 22:08:29 DEBUG 22:08:29 INFO 22:08:29 INFO ----------- Check pod ds-idrepo-2 filesystem is accessible ----------- 22:08:29 INFO 22:08:29 INFO [loop_until]: kubectl --namespace=xlou exec ds-idrepo-2 --container ds -- ls / | grep "bin" 22:08:29 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:08:29 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:08:29 INFO [loop_until]: OK (rc = 0) 22:08:29 DEBUG --- stdout --- 22:08:29 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 22:08:29 DEBUG --- stderr --- 22:08:29 DEBUG 22:08:29 INFO 22:08:29 INFO ----------------- Check pod ds-idrepo-2 restart count ----------------- 22:08:29 INFO 22:08:29 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 --output jsonpath={.status.containerStatuses[*].restartCount} 22:08:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:08:29 INFO [loop_until]: OK (rc = 0) 22:08:29 DEBUG --- stdout --- 22:08:29 DEBUG 0 22:08:29 DEBUG --- stderr --- 22:08:29 DEBUG 22:08:29 INFO Pod ds-idrepo-2 has been restarted 0 times. 
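Editor's note: for each tier the monitor first reads `.spec.replicas` from the owning Deployment or StatefulSet and then lists the pods matching the same selector; comparing the two is what "Get expected number of pods" / "Get pod list" amounts to. A sketch, with the resource kind switched per tier as in the log (helper names are hypothetical):

```python
import subprocess

def sh(cmd: str) -> str:
    """Run a shell command and return stdout (no retries in this sketch)."""
    return subprocess.run(cmd, shell=True, capture_output=True, text=True, check=True).stdout

def expected_vs_actual(namespace: str, kind: str, selector: str) -> list[str]:
    """kind is 'deployments' or 'statefulsets'; selector e.g. 'app=ds-idrepo'."""
    expected = sh(
        f"kubectl --namespace={namespace} get {kind} --selector {selector} "
        "--output jsonpath={.items[*].spec.replicas}"
    ).split()
    pods = sh(
        f"kubectl --namespace={namespace} get pods --selector {selector} "
        "--output jsonpath={.items[*].metadata.name}"
    ).split()
    total_expected = sum(map(int, expected))
    if len(pods) != total_expected:
        raise RuntimeError(f"{selector}: expected {total_expected} pods, found {len(pods)}")
    return pods

# e.g. the ds-idrepo StatefulSet above: 3 replicas -> ds-idrepo-0..2
pods = expected_vs_actual("xlou", "statefulsets", "app=ds-idrepo")
```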
22:08:29 INFO 22:08:29 INFO --------------------- Get expected number of pods --------------------- 22:08:29 INFO 22:08:29 INFO [loop_until]: kubectl --namespace=xlou get statefulsets --selector app=ds-cts --output jsonpath={.items[*].spec.replicas} 22:08:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:08:29 INFO [loop_until]: OK (rc = 0) 22:08:29 DEBUG --- stdout --- 22:08:29 DEBUG 3 22:08:29 DEBUG --- stderr --- 22:08:29 DEBUG 22:08:29 INFO 22:08:29 INFO ---------------------------- Get pod list ---------------------------- 22:08:29 INFO 22:08:29 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=ds-cts --output jsonpath={.items[*].metadata.name} 22:08:29 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0] 22:08:29 INFO [loop_until]: OK (rc = 0) 22:08:29 DEBUG --- stdout --- 22:08:29 DEBUG ds-cts-0 ds-cts-1 ds-cts-2 22:08:29 DEBUG --- stderr --- 22:08:29 DEBUG 22:08:29 INFO 22:08:29 INFO -------------------- Check pod ds-cts-0 is running -------------------- 22:08:29 INFO 22:08:29 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.phase} | grep "Running" 22:08:29 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:08:29 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:08:29 INFO [loop_until]: OK (rc = 0) 22:08:29 DEBUG --- stdout --- 22:08:29 DEBUG Running 22:08:29 DEBUG --- stderr --- 22:08:29 DEBUG 22:08:29 INFO 22:08:29 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 22:08:29 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:08:29 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:08:29 INFO [loop_until]: OK (rc = 0) 22:08:29 DEBUG --- stdout --- 22:08:29 DEBUG true 22:08:29 DEBUG --- stderr --- 22:08:29 DEBUG 22:08:29 INFO 22:08:29 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-0 --output jsonpath={.status.startTime} 22:08:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:08:29 INFO [loop_until]: OK (rc = 0) 22:08:29 DEBUG --- stdout --- 22:08:29 DEBUG 2023-08-11T20:25:09Z 22:08:29 DEBUG --- stderr --- 22:08:29 DEBUG 22:08:29 INFO 22:08:29 INFO ------------- Check pod ds-cts-0 filesystem is accessible ------------- 22:08:29 INFO 22:08:29 INFO [loop_until]: kubectl --namespace=xlou exec ds-cts-0 --container ds -- ls / | grep "bin" 22:08:29 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:08:29 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:08:29 INFO [loop_until]: OK (rc = 0) 22:08:29 DEBUG --- stdout --- 22:08:29 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 22:08:29 DEBUG --- stderr --- 22:08:29 DEBUG 22:08:29 INFO 22:08:29 INFO ------------------ Check pod ds-cts-0 restart count ------------------ 22:08:29 INFO 22:08:29 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-0 --output jsonpath={.status.containerStatuses[*].restartCount} 22:08:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:08:29 INFO [loop_until]: OK (rc = 0) 22:08:29 DEBUG --- stdout --- 22:08:29 DEBUG 0 22:08:29 DEBUG --- stderr --- 22:08:29 DEBUG 22:08:29 INFO Pod ds-cts-0 has been restarted 0 times. 
22:08:29 INFO 22:08:29 INFO -------------------- Check pod ds-cts-1 is running -------------------- 22:08:29 INFO 22:08:29 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.phase} | grep "Running" 22:08:29 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:08:29 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:08:29 INFO [loop_until]: OK (rc = 0) 22:08:29 DEBUG --- stdout --- 22:08:29 DEBUG Running 22:08:29 DEBUG --- stderr --- 22:08:29 DEBUG 22:08:29 INFO 22:08:29 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 22:08:29 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:08:29 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:08:29 INFO [loop_until]: OK (rc = 0) 22:08:29 DEBUG --- stdout --- 22:08:29 DEBUG true 22:08:29 DEBUG --- stderr --- 22:08:29 DEBUG 22:08:29 INFO 22:08:29 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-1 --output jsonpath={.status.startTime} 22:08:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:08:29 INFO [loop_until]: OK (rc = 0) 22:08:29 DEBUG --- stdout --- 22:08:29 DEBUG 2023-08-11T20:25:36Z 22:08:29 DEBUG --- stderr --- 22:08:29 DEBUG 22:08:29 INFO 22:08:29 INFO ------------- Check pod ds-cts-1 filesystem is accessible ------------- 22:08:29 INFO 22:08:29 INFO [loop_until]: kubectl --namespace=xlou exec ds-cts-1 --container ds -- ls / | grep "bin" 22:08:29 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 22:08:29 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 22:08:29 INFO [loop_until]: OK (rc = 0) 22:08:29 DEBUG --- stdout --- 22:08:29 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 22:08:29 DEBUG --- stderr --- 22:08:29 DEBUG 22:08:29 INFO 22:08:29 INFO ------------------ Check pod ds-cts-1 restart count ------------------ 22:08:29 INFO 22:08:29 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-1 --output jsonpath={.status.containerStatuses[*].restartCount} 22:08:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:08:29 INFO [loop_until]: OK (rc = 0) 22:08:29 DEBUG --- stdout --- 22:08:29 DEBUG 0 22:08:29 DEBUG --- stderr --- 22:08:29 DEBUG 22:08:29 INFO Pod ds-cts-1 has been restarted 0 times. 
22:08:29 INFO
22:08:29 INFO -------------------- Check pod ds-cts-2 is running --------------------
22:08:29 INFO
22:08:29 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.phase} | grep "Running"
22:08:29 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
22:08:29 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
22:08:29 INFO [loop_until]: OK (rc = 0)
22:08:29 DEBUG --- stdout ---
22:08:29 DEBUG Running
22:08:29 DEBUG --- stderr ---
22:08:29 DEBUG
22:08:29 INFO
22:08:29 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
22:08:29 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
22:08:30 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
22:08:30 INFO [loop_until]: OK (rc = 0)
22:08:30 DEBUG --- stdout ---
22:08:30 DEBUG true
22:08:30 DEBUG --- stderr ---
22:08:30 DEBUG
22:08:30 INFO
22:08:30 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-2 --output jsonpath={.status.startTime}
22:08:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
22:08:30 INFO [loop_until]: OK (rc = 0)
22:08:30 DEBUG --- stdout ---
22:08:30 DEBUG 2023-08-11T20:25:59Z
22:08:30 DEBUG --- stderr ---
22:08:30 DEBUG
22:08:30 INFO
22:08:30 INFO ------------- Check pod ds-cts-2 filesystem is accessible -------------
22:08:30 INFO
22:08:30 INFO [loop_until]: kubectl --namespace=xlou exec ds-cts-2 --container ds -- ls / | grep "bin"
22:08:30 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
22:08:30 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
22:08:30 INFO [loop_until]: OK (rc = 0)
22:08:30 DEBUG --- stdout ---
22:08:30 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
22:08:30 DEBUG --- stderr ---
22:08:30 DEBUG
22:08:30 INFO
22:08:30 INFO ------------------ Check pod ds-cts-2 restart count ------------------
22:08:30 INFO
22:08:30 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-2 --output jsonpath={.status.containerStatuses[*].restartCount}
22:08:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
22:08:30 INFO [loop_until]: OK (rc = 0)
22:08:30 DEBUG --- stdout ---
22:08:30 DEBUG 0
22:08:30 DEBUG --- stderr ---
22:08:30 DEBUG
22:08:30 INFO Pod ds-cts-2 has been restarted 0 times.
 * Serving Flask app 'lodemon_run'
 * Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:8080
 * Running on http://10.106.45.26:8080
Press CTRL+C to quit
22:09:01 INFO
22:09:01 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces
22:09:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
22:09:01 INFO [loop_until]: OK (rc = 0)
22:09:01 DEBUG --- stdout ---
22:09:01 DEBUG {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}}
22:09:01 DEBUG --- stderr ---
22:09:01 DEBUG
22:09:01 INFO
22:09:01 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces
22:09:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
22:09:01 INFO [loop_until]: OK (rc = 0)
22:09:01 DEBUG --- stdout ---
22:09:01 DEBUG
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:09:01 DEBUG --- stderr --- 22:09:01 DEBUG 22:09:01 INFO 22:09:01 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 22:09:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:09:01 INFO [loop_until]: OK (rc = 0) 22:09:01 DEBUG --- stdout --- 22:09:01 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:09:01 DEBUG --- stderr --- 22:09:01 DEBUG 22:09:01 INFO 22:09:01 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 22:09:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:09:01 INFO [loop_until]: OK (rc = 0) 22:09:01 DEBUG --- stdout --- 22:09:01 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:09:01 DEBUG --- stderr --- 22:09:01 DEBUG 22:09:01 INFO 22:09:01 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 22:09:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:09:01 INFO [loop_until]: OK (rc = 0) 22:09:01 DEBUG --- stdout --- 22:09:01 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:09:01 DEBUG --- stderr --- 22:09:01 DEBUG 22:09:01 INFO 22:09:01 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 22:09:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:09:02 INFO [loop_until]: OK (rc = 0) 22:09:02 DEBUG --- stdout --- 22:09:02 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:09:02 DEBUG --- stderr --- 22:09:02 DEBUG 22:09:02 INFO 22:09:02 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 22:09:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:09:02 INFO [loop_until]: OK (rc = 0) 22:09:02 DEBUG --- stdout --- 22:09:02 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:09:02 DEBUG --- stderr --- 22:09:02 DEBUG 22:09:02 INFO 22:09:02 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 22:09:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:09:02 INFO [loop_until]: OK (rc = 0) 22:09:02 DEBUG --- stdout --- 22:09:02 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:09:02 DEBUG --- stderr --- 22:09:02 DEBUG 22:09:02 INFO 22:09:02 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 22:09:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:09:02 INFO [loop_until]: OK (rc = 0) 22:09:02 DEBUG --- stdout --- 22:09:02 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:09:02 DEBUG --- stderr --- 22:09:02 DEBUG 22:09:02 INFO 22:09:02 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 22:09:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:09:02 INFO [loop_until]: OK (rc = 0) 22:09:02 DEBUG --- stdout --- 22:09:02 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:09:02 DEBUG --- stderr --- 22:09:02 DEBUG 22:09:02 INFO 22:09:02 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 22:09:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:09:02 INFO [loop_until]: OK (rc = 0) 22:09:02 DEBUG --- stdout --- 22:09:02 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:09:02 DEBUG --- stderr --- 22:09:02 DEBUG 22:09:02 INFO 22:09:02 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 22:09:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:09:02 INFO [loop_until]: OK (rc = 0) 22:09:02 DEBUG --- stdout --- 22:09:02 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:09:02 DEBUG --- stderr --- 22:09:02 DEBUG 22:09:02 INFO 22:09:02 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 22:09:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:09:02 INFO [loop_until]: OK (rc = 0) 22:09:02 DEBUG --- stdout --- 22:09:02 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:09:02 DEBUG --- stderr --- 22:09:02 DEBUG 22:09:03 INFO 22:09:03 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 22:09:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:09:03 INFO [loop_until]: OK (rc = 0) 22:09:03 DEBUG --- stdout --- 22:09:03 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:09:03 DEBUG --- stderr --- 22:09:03 DEBUG 22:09:03 INFO 22:09:03 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 22:09:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:09:03 INFO [loop_until]: OK (rc = 0) 22:09:03 DEBUG --- stdout --- 22:09:03 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:09:03 DEBUG --- stderr --- 22:09:03 DEBUG 22:09:03 INFO 22:09:03 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 22:09:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:09:03 INFO [loop_until]: OK (rc = 0) 22:09:03 DEBUG --- stdout --- 22:09:03 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:09:03 DEBUG --- stderr --- 22:09:03 DEBUG 22:09:03 INFO 22:09:03 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 22:09:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:09:03 INFO [loop_until]: OK (rc = 0) 22:09:03 DEBUG --- stdout --- 22:09:03 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:09:03 DEBUG --- stderr --- 22:09:03 DEBUG 22:09:03 INFO 22:09:03 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 22:09:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:09:03 INFO [loop_until]: OK (rc = 0) 22:09:03 DEBUG --- stdout --- 22:09:03 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:09:03 DEBUG --- stderr --- 22:09:03 DEBUG 22:09:03 INFO 22:09:03 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 22:09:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:09:03 INFO [loop_until]: OK (rc = 0) 22:09:03 DEBUG --- stdout --- 22:09:03 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:09:03 DEBUG --- stderr --- 22:09:03 DEBUG 22:09:03 INFO 22:09:03 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 22:09:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:09:03 INFO [loop_until]: OK (rc = 0) 22:09:03 DEBUG --- stdout --- 22:09:03 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:09:03 DEBUG --- stderr --- 22:09:03 DEBUG 22:09:03 INFO 22:09:03 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 22:09:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:09:04 INFO [loop_until]: OK (rc = 0) 22:09:04 DEBUG --- stdout --- 22:09:04 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:09:04 DEBUG --- stderr --- 22:09:04 DEBUG 22:09:04 INFO 22:09:04 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 22:09:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:09:04 INFO [loop_until]: OK (rc = 0) 22:09:04 DEBUG --- stdout --- 22:09:04 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:09:04 DEBUG --- stderr --- 22:09:04 DEBUG 22:09:04 INFO 22:09:04 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 22:09:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:09:04 INFO [loop_until]: OK (rc = 0) 22:09:04 DEBUG --- stdout --- 22:09:04 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:09:04 DEBUG --- stderr --- 22:09:04 DEBUG 22:09:04 INFO 22:09:04 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 22:09:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:09:04 INFO [loop_until]: OK (rc = 0) 22:09:04 DEBUG --- stdout --- 22:09:04 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:09:04 DEBUG --- stderr --- 22:09:04 DEBUG 22:09:04 INFO 22:09:04 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 22:09:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:09:04 INFO [loop_until]: OK (rc = 0) 22:09:04 DEBUG --- stdout --- 22:09:04 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:09:04 DEBUG --- stderr --- 22:09:04 DEBUG 22:09:04 INFO 22:09:04 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 22:09:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:09:04 INFO [loop_until]: OK (rc = 0) 22:09:04 DEBUG --- stdout --- 22:09:04 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:09:04 DEBUG --- stderr --- 22:09:04 DEBUG 22:09:04 INFO 22:09:04 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 22:09:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:09:04 INFO [loop_until]: OK (rc = 0) 22:09:04 DEBUG --- stdout --- 22:09:04 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 22:09:04 DEBUG --- stderr --- 22:09:04 DEBUG 22:09:04 INFO Initializing monitoring instance threads 22:09:04 DEBUG Monitoring instance thread list: [, , , , , , , , , , , , , , , , , , , , , , , , , , , , ] 22:09:04 INFO Starting instance threads 22:09:04 INFO 22:09:04 INFO Thread started 22:09:04 INFO [loop_until]: kubectl --namespace=xlou top node 22:09:04 INFO 22:09:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:09:04 INFO Thread started 22:09:04 INFO [loop_until]: kubectl --namespace=xlou top pods 22:09:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:09:04 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144" 22:09:04 INFO Thread started 22:09:04 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144" 22:09:04 INFO Thread started 22:09:04 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144" 22:09:04 INFO Thread started 22:09:04 INFO [http_cmd]: curl --insecure -L --request GET 
"http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144" 22:09:04 INFO Thread started 22:09:04 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691788144" 22:09:04 INFO Thread started 22:09:04 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691788144" 22:09:04 INFO Thread started 22:09:04 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691788144" 22:09:04 INFO Thread started 22:09:04 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691788144" 22:09:04 INFO Thread started 22:09:04 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691788144" 22:09:04 INFO Thread started 22:09:04 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144" 22:09:04 INFO Thread started 22:09:04 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144" 22:09:04 INFO Thread started 22:09:04 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144" 22:09:04 INFO Thread started 22:09:04 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144" 22:09:04 INFO Thread started 22:09:04 INFO [http_cmd]: curl --insecure -L --request GET 
"http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144" 22:09:04 INFO Thread started 22:09:04 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144" 22:09:04 INFO Thread started 22:09:04 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144" 22:09:04 INFO Thread started 22:09:04 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144" 22:09:04 INFO Thread started 22:09:04 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144" 22:09:04 INFO Thread started 22:09:04 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144" 22:09:04 INFO Thread started 22:09:04 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144" 22:09:04 INFO Thread started Exception in thread Thread-23: 22:09:04 INFO Thread started Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner 22:09:04 INFO Thread started Exception in thread Thread-24: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner Exception in thread Thread-25: 22:09:04 INFO Thread started Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner 22:09:04 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691788144" self.run() 22:09:04 INFO Thread started File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) 22:09:04 INFO [http_cmd]: curl --insecure -L --request GET 
"http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691788144" File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop 22:09:04 INFO Thread started self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run 22:09:04 INFO Thread started self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop Exception in thread Thread-28: 22:09:04 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691788144" Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() 22:09:04 INFO Thread started 22:09:04 INFO All threads has been started if self.prom_data['functions']: 127.0.0.1 - - [11/Aug/2023 22:09:04] "GET /monitoring/start HTTP/1.1" 200 - instance.run() 22:09:04 INFO [loop_until]: OK (rc = 0) 22:09:04 DEBUG --- stdout --- File "/usr/local/lib/python3.9/threading.py", line 910, in run 22:09:04 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 16m 2271Mi am-55f77847b7-nhzv4 13m 4421Mi am-55f77847b7-rpq9w 23m 4340Mi ds-cts-0 6m 381Mi ds-cts-1 6m 364Mi ds-cts-2 8m 402Mi ds-idrepo-0 27m 10325Mi ds-idrepo-1 25m 10310Mi ds-idrepo-2 38m 10255Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 8m 2729Mi idm-65858d8c4c-v78nh 8m 1291Mi lodemon-7655dd7665-d26cm 272m 60Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 1m 15Mi KeyError: 'functions' 22:09:04 DEBUG --- stderr --- 22:09:04 DEBUG self.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) if self.prom_data['functions']: File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop self._target(*self._args, **self._kwargs) KeyError: 'functions' File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run if self.prom_data['functions']: KeyError: 'functions' if self.prom_data['functions']: KeyError: 'functions' 22:09:04 INFO [loop_until]: OK (rc = 0) 22:09:04 DEBUG --- stdout --- 22:09:04 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 338m 2% 1301Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 79m 0% 5336Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 74m 0% 3381Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 75m 0% 5507Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 78m 0% 2594Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2103Mi 3% 
gke-xlou-cdm-default-pool-f05840a3-tnc9 77m 0% 3971Mi 6% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1095Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1072Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 81m 0% 10973Mi 18% gke-xlou-cdm-ds-32e4dcb1-b374 83m 0% 10879Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 80m 0% 10933Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 66m 0% 1620Mi 2% 22:09:04 DEBUG --- stderr --- 22:09:04 DEBUG 22:09:05 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:09:05 WARNING Response is NONE 22:09:05 DEBUG Exception is preset. Setting retry_loop to true 22:09:05 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:09:07 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:09:07 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:09:07 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:09:07 WARNING Response is NONE 22:09:07 WARNING Response is NONE 22:09:07 WARNING Response is NONE 22:09:07 DEBUG Exception is preset. Setting retry_loop to true 22:09:07 DEBUG Exception is preset. Setting retry_loop to true 22:09:07 DEBUG Exception is preset. Setting retry_loop to true 22:09:07 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:09:07 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:09:07 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
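The Thread-2x tracebacks above all die at the same point: monitoring.py line 285 indexes self.prom_data['functions'] for monitoring instances whose configuration carries no 'functions' key. A minimal illustration of that failure mode and of a defensive lookup that would keep the thread alive (the dictionary and names are illustrative, not the actual lodemon data):

    # Instance configuration without a 'functions' entry, as implied by the tracebacks above
    prom_data = {"query": "sum(rate(am_authentication_count{namespace='xlou'}[60s]))by(pod)"}

    # Direct indexing is what raises KeyError: 'functions' and kills the monitoring thread
    try:
        prom_data["functions"]
    except KeyError as exc:
        print(f"thread dies here: KeyError: {exc}")

    # A .get() with a default tolerates instances that define no post-processing functions
    for fn in prom_data.get("functions", []):
        print("apply", fn)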
22:09:11 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:09:11 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:09:11 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:09:11 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:09:11 WARNING Response is NONE 22:09:11 WARNING Response is NONE 22:09:11 WARNING Response is NONE 22:09:11 WARNING Response is NONE 22:09:11 DEBUG Exception is preset. Setting retry_loop to true 22:09:11 DEBUG Exception is preset. Setting retry_loop to true 22:09:11 DEBUG Exception is preset. Setting retry_loop to true 22:09:11 DEBUG Exception is preset. Setting retry_loop to true 22:09:11 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:09:11 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:09:11 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:09:11 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:09:16 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:09:16 WARNING Response is NONE 22:09:16 DEBUG Exception is preset. Setting retry_loop to true 22:09:16 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
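The kubectl top pods and top node stdout blocks logged at 22:09:04 above are plain fixed-width tables. A minimal sketch of turning the pods table into per-pod usage records (the helper and field names are illustrative; the log does not show how lodemon parses this output):

    def parse_top_pods(output):
        """Parse kubectl top pods text output into {pod: {cpu_millicores, memory_mib}}."""
        usage = {}
        for line in output.strip().splitlines()[1:]:   # skip the NAME CPU(cores) MEMORY(bytes) header
            name, cpu, memory = line.split()
            usage[name] = {"cpu_millicores": int(cpu.rstrip("m")),
                           "memory_mib": int(memory.rstrip("Mi"))}
        return usage

    sample = """NAME CPU(cores) MEMORY(bytes)
    am-55f77847b7-7kvs5 16m 2271Mi
    lodemon-7655dd7665-d26cm 272m 60Mi"""
    print(parse_top_pods(sample))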
22:09:18 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:09:18 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:09:18 WARNING Response is NONE 22:09:18 WARNING Response is NONE 22:09:18 DEBUG Exception is preset. Setting retry_loop to true 22:09:18 DEBUG Exception is preset. Setting retry_loop to true 22:09:18 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:09:18 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:09:20 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:09:20 WARNING Response is NONE 22:09:20 DEBUG Exception is preset. Setting retry_loop to true 22:09:20 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:09:21 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:09:21 WARNING Response is NONE 22:09:21 DEBUG Exception is preset. Setting retry_loop to true 22:09:21 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:09:23 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:09:23 WARNING Response is NONE 22:09:23 DEBUG Exception is preset. Setting retry_loop to true 22:09:23 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
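Every [http_cmd] above, and every URL in the connection warnings, is an instant query against the Prometheus HTTP API with the PromQL expression URL-encoded into the query parameter. A minimal sketch of issuing the same kind of query with requests (the base URL and the example expression come from the log; the helper itself is illustrative):

    import requests

    PROM_URL = "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090"

    def instant_query(promql, ts):
        """Run a PromQL instant query evaluated at unix time ts."""
        # requests URL-encodes the params, producing the same escaped query strings seen above
        resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": promql, "time": ts}, timeout=30)
        resp.raise_for_status()
        body = resp.json()
        if body.get("status") != "success":
            raise RuntimeError(f"query failed: {body}")
        return body["data"]["result"]

    # Per-pod CPU usage rate in the xlou namespace, as requested by the first monitoring thread
    result = instant_query("sum(rate(container_cpu_usage_seconds_total{namespace='xlou'}[60s]))by(pod)", 1691788144)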
22:09:25 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:09:25 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:09:25 WARNING Response is NONE 22:09:25 WARNING Response is NONE 22:09:25 DEBUG Exception is preset. Setting retry_loop to true 22:09:25 DEBUG Exception is preset. Setting retry_loop to true 22:09:25 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:09:25 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:09:28 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:09:28 WARNING Response is NONE 22:09:28 DEBUG Exception is preset. Setting retry_loop to true 22:09:28 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:09:30 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:09:30 WARNING Response is NONE 22:09:30 DEBUG Exception is preset. Setting retry_loop to true 22:09:30 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:09:32 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:09:32 WARNING Response is NONE 22:09:32 DEBUG Exception is preset. Setting retry_loop to true 22:09:32 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
22:09:34 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:09:34 WARNING Response is NONE 22:09:34 DEBUG Exception is preset. Setting retry_loop to true 22:09:34 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:09:36 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:09:36 WARNING Response is NONE 22:09:36 DEBUG Exception is preset. Setting retry_loop to true 22:09:36 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:09:37 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:09:37 WARNING Response is NONE 22:09:37 DEBUG Exception is preset. Setting retry_loop to true 22:09:37 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:09:39 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:09:39 WARNING Response is NONE 22:09:39 DEBUG Exception is preset. Setting retry_loop to true 22:09:39 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:09:41 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:09:41 WARNING Response is NONE 22:09:41 DEBUG Exception is preset. Setting retry_loop to true 22:09:41 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
22:09:43 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:09:43 WARNING Response is NONE 22:09:43 DEBUG Exception is preset. Setting retry_loop to true 22:09:43 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:09:45 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:09:45 WARNING Response is NONE 22:09:45 DEBUG Exception is preset. Setting retry_loop to true 22:09:45 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:09:45 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:09:45 WARNING Response is NONE 22:09:45 DEBUG Exception is preset. Setting retry_loop to true 22:09:45 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:09:47 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:09:47 WARNING Response is NONE 22:09:47 DEBUG Exception is preset. Setting retry_loop to true 22:09:47 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:09:48 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:09:48 WARNING Response is NONE 22:09:48 DEBUG Exception is preset. Setting retry_loop to true 22:09:48 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
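The repeating "We received known exception ... sleeping for 10 secs before retry" and "Hit retry pattern for a 5 time" messages describe the monitor's retry behaviour around each query. A rough stand-in for that pattern, using the 10-second sleep and 5-attempt cap visible in the log (the real logic lives in HttpCmd.request_cmd and monitoring.py and is not reproduced here; fetch is a hypothetical callable):

    import time

    def get_with_retries(fetch, max_attempts=5, sleep_secs=10):
        """Retry a flaky HTTP call, mirroring the log's 10 s back-off and 5-attempt cap."""
        response = None
        for attempt in range(1, max_attempts + 1):
            try:
                response = fetch()
                break
            except ConnectionError:
                # "We received known exception. Trying to recover, sleeping for 10 secs before retry..."
                if attempt == max_attempts:
                    # "Hit retry pattern for a 5 time. Proceeding to check response anyway."
                    break
                time.sleep(sleep_secs)
        if response is None:
            raise RuntimeError('Failed to obtain response from server...')
        return response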
22:09:50 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:09:50 WARNING Response is NONE 22:09:50 DEBUG Exception is preset. Setting retry_loop to true 22:09:50 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:09:52 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:09:52 WARNING Response is NONE 22:09:52 DEBUG Exception is preset. Setting retry_loop to true 22:09:52 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:09:54 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:09:54 WARNING Response is NONE 22:09:54 DEBUG Exception is preset. Setting retry_loop to true 22:09:54 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:09:57 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:09:57 WARNING Response is NONE 22:09:57 DEBUG Exception is preset. Setting retry_loop to true 22:09:57 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:09:58 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:09:58 WARNING Response is NONE 22:09:58 DEBUG Exception is preset. Setting retry_loop to true 22:09:58 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
22:09:59 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
22:09:59 WARNING Response is NONE
22:09:59 DEBUG Exception is preset. Setting retry_loop to true
22:09:59 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
22:10:01 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
22:10:01 WARNING Response is NONE
22:10:01 DEBUG Exception is preset. Setting retry_loop to true
22:10:01 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-6:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
22:10:03 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
22:10:03 WARNING Response is NONE
22:10:03 DEBUG Exception is preset. Setting retry_loop to true
22:10:03 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
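The traceback above is the more interesting failure. The original FailException from HttpCmd is expected while Prometheus is unreachable, but the handler at monitoring.py line 315 then calls self.logger(...) as if the logger were a function; LodestarLogger is not callable, so a TypeError escapes and the monitoring thread dies instead of logging the query failure. The exact fix depends on LodestarLogger's API, which this log does not show; the sketch below is one plausible shape, with standard-logging method names used purely as an assumption:

    import logging

    # Hypothetical reconstruction of the failing handler in monitoring.py;
    # class, method and attribute names here are assumptions, not the real Lodestar code.
    class MonitoringInstance:
        def __init__(self, logger: logging.Logger):
            self.logger = logger            # in the real code this is a LodestarLogger instance

        def run_query(self, http_cmd, url_encoded, query):
            try:
                return http_cmd.get(url=url_encoded, retries=5)
            except Exception as e:
                # Traceback line 315 does:  self.logger(f'Query: {query} failed with: {e}')
                # A logger object is not callable, so the handler itself raises TypeError.
                # Calling a logging *method* avoids that (the method name on LodestarLogger
                # is an assumption here):
                self.logger.warning('Query: %s failed with: %s', query, e)
                return None

An alternative that keeps the existing self.logger(...) call sites working would be to give LodestarLogger a __call__ method that delegates to one of its logging methods.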
Exception in thread Thread-9:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
22:10:04 INFO
22:10:04 INFO [loop_until]: kubectl --namespace=xlou top node
22:10:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
22:10:04 INFO
22:10:04 INFO [loop_until]: kubectl --namespace=xlou top pods
22:10:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
22:10:04 INFO [loop_until]: OK (rc = 0)
22:10:04 DEBUG --- stdout ---
22:10:04 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
               gke-xlou-cdm-default-pool-f05840a3-2nsn   73m          0%     1307Mi          2%
               gke-xlou-cdm-default-pool-f05840a3-5pbc   68m          0%     5334Mi          9%
               gke-xlou-cdm-default-pool-f05840a3-976h   73m          0%     3384Mi          5%
               gke-xlou-cdm-default-pool-f05840a3-9p4b   67m          0%     5507Mi          9%
               gke-xlou-cdm-default-pool-f05840a3-bf2g   73m          0%     2595Mi          4%
               gke-xlou-cdm-default-pool-f05840a3-h81k   128m         0%     2099Mi          3%
               gke-xlou-cdm-default-pool-f05840a3-tnc9   78m          0%     3972Mi          6%
               gke-xlou-cdm-ds-32e4dcb1-1l6p             63m          0%     1097Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-4z9d             58m          0%     1079Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-8bsn             725m         4%     10984Mi         18%
               gke-xlou-cdm-ds-32e4dcb1-b374             160m         1%     10888Mi         18%
               gke-xlou-cdm-ds-32e4dcb1-n920             58m          0%     1081Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-x4wx             95m          0%     10952Mi         18%
               gke-xlou-cdm-frontend-a8771548-k40m       267m         1%     1619Mi          2%
22:10:04 DEBUG --- stderr ---
22:10:04 DEBUG
22:10:04 INFO [loop_until]: OK (rc = 0)
22:10:04 DEBUG --- stdout ---
22:10:04 DEBUG NAME                           CPU(cores)   MEMORY(bytes)
               admin-ui-587fc66dd5-sdd42      1m           4Mi
               am-55f77847b7-7kvs5            20m          2273Mi
               am-55f77847b7-nhzv4            12m          4421Mi
               am-55f77847b7-rpq9w            14m          4341Mi
               ds-cts-0                       10m          391Mi
               ds-cts-1                       7m           368Mi
               ds-cts-2                       65m          404Mi
               ds-idrepo-0                    362m         10334Mi
               ds-idrepo-1                    151m         10317Mi
               ds-idrepo-2                    114m         10267Mi
               end-user-ui-6845bc78c7-kj9rz   1m           4Mi
               idm-65858d8c4c-h7xxp           12m          2730Mi
               idm-65858d8c4c-v78nh           9m           1292Mi
               lodemon-7655dd7665-d26cm       3m           66Mi
               login-ui-74d6fb46c-ncp99       1m           3Mi
               overseer-0-56868bb8f7-f7jz9    142m         48Mi
22:10:04 DEBUG --- stderr ---
22:10:04 DEBUG
22:10:05 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')).
Checking if error is transient one 22:10:05 WARNING Response is NONE 22:10:05 DEBUG Exception is preset. Setting retry_loop to true 22:10:05 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-22: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 22:10:06 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:10:06 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:10:06 WARNING Response is NONE 22:10:06 WARNING Response is NONE 22:10:06 DEBUG Exception is preset. Setting retry_loop to true 22:10:06 DEBUG Exception is preset. Setting retry_loop to true 22:10:06 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:10:06 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:10:08 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:10:08 WARNING Response is NONE 22:10:08 DEBUG Exception is preset. Setting retry_loop to true 22:10:08 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
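Every one of these requests fails before any HTTP exchange happens: the TCP connection to prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090 is refused outright (and later in the log the same calls start timing out), so Prometheus itself is unreachable rather than any individual query being malformed. A quick, hedged way to confirm that from inside the cluster is to hit Prometheus's own readiness endpoint, /-/ready, on the same service name and port the failing queries use; this probe is a diagnostic sketch, not part of lodemon:

    import requests

    PROM = "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090"

    try:
        # Prometheus serves a readiness probe at /-/ready; any HTTP answer at all
        # means the Service resolves and something is listening on port 9090.
        r = requests.get(f"{PROM}/-/ready", timeout=5)
        print(r.status_code, r.text.strip())
    except requests.exceptions.ConnectionError as exc:
        # Matches the failures in this log: nothing is reachable behind the Service.
        print(f"Prometheus unreachable: {exc}")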
22:10:08 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:10:08 WARNING Response is NONE 22:10:08 DEBUG Exception is preset. Setting retry_loop to true 22:10:08 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:10:10 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:10:10 WARNING Response is NONE 22:10:10 DEBUG Exception is preset. Setting retry_loop to true 22:10:10 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:10:11 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:10:11 WARNING Response is NONE 22:10:11 DEBUG Exception is preset. Setting retry_loop to true 22:10:11 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-26: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 22:10:17 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:10:17 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:10:17 WARNING Response is NONE 22:10:17 WARNING Response is NONE 22:10:17 DEBUG Exception is preset. Setting retry_loop to true 22:10:17 DEBUG Exception is preset. Setting retry_loop to true 22:10:17 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:10:17 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:10:19 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:10:19 WARNING Response is NONE 22:10:19 DEBUG Exception is preset. Setting retry_loop to true 22:10:19 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:10:20 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:10:20 WARNING Response is NONE 22:10:20 DEBUG Exception is preset. Setting retry_loop to true 22:10:20 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
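Separately from the Prometheus outage, the resource snapshots at 22:10:04 come from the [loop_until] wrapper polling kubectl top with max_time=180, interval=5 and an expected return code of 0. loop_until is Lodestar's own helper and is not reproduced here; the sketch below is a minimal stand-in showing the same polling contract against the command seen in the log:

    import subprocess
    import time

    def loop_until_ok(cmd, max_time=180, interval=5):
        """Re-run a command until it exits 0 or the time budget runs out (rough stand-in for [loop_until])."""
        deadline = time.monotonic() + max_time
        while True:
            result = subprocess.run(cmd, capture_output=True, text=True)
            if result.returncode == 0:
                return result.stdout
            if time.monotonic() >= deadline:
                raise TimeoutError(f"{' '.join(cmd)} did not succeed within {max_time}s")
            time.sleep(interval)

    # The command polled at 22:10:04 and again at 22:11:05 in this log:
    print(loop_until_ok(["kubectl", "--namespace=xlou", "top", "pods"]))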
22:10:21 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:10:21 WARNING Response is NONE 22:10:21 DEBUG Exception is preset. Setting retry_loop to true 22:10:21 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-18: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 22:10:26 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:10:26 WARNING Response is NONE 22:10:26 DEBUG Exception is preset. Setting retry_loop to true 22:10:26 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:10:29 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:10:29 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 22:10:29 WARNING Response is NONE 22:10:29 WARNING Response is NONE 22:10:29 DEBUG Exception is preset. Setting retry_loop to true 22:10:29 DEBUG Exception is preset. Setting retry_loop to true 22:10:29 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:10:29 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:10:30 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:10:30 WARNING Response is NONE 22:10:30 DEBUG Exception is preset. Setting retry_loop to true 22:10:30 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-5: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 22:10:31 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:10:31 WARNING Response is NONE 22:10:31 DEBUG Exception is preset. Setting retry_loop to true 22:10:31 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:10:37 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:10:37 WARNING Response is NONE 22:10:37 DEBUG Exception is preset. 
Setting retry_loop to true 22:10:37 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-14: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 22:10:40 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:10:40 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:10:40 WARNING Response is NONE 22:10:40 WARNING Response is NONE 22:10:40 DEBUG Exception is preset. Setting retry_loop to true 22:10:40 DEBUG Exception is preset. Setting retry_loop to true 22:10:40 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-27: Traceback (most recent call last): 22:10:40 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
Exception in thread Thread-16:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
22:10:42 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
22:10:42 WARNING Response is NONE
22:10:42 DEBUG Exception is preset. Setting retry_loop to true
22:10:42 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
22:10:53 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
22:10:53 WARNING Response is NONE
22:10:53 DEBUG Exception is preset. Setting retry_loop to true
22:10:53 WARNING We received known exception.
Trying to recover, sleeping for 10 secs before retry...
22:11:04 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
22:11:04 WARNING Response is NONE
22:11:04 DEBUG Exception is preset. Setting retry_loop to true
22:11:04 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-4:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
22:11:05 INFO
22:11:05 INFO [loop_until]: kubectl --namespace=xlou top node
22:11:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
22:11:05 INFO
22:11:05 INFO [loop_until]: kubectl --namespace=xlou top pods
22:11:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
22:11:05 INFO [loop_until]: OK (rc = 0)
22:11:05 DEBUG --- stdout ---
22:11:05 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
               gke-xlou-cdm-default-pool-f05840a3-2nsn   75m          0%     1304Mi          2%
               gke-xlou-cdm-default-pool-f05840a3-5pbc   71m          0%     5333Mi          9%
               gke-xlou-cdm-default-pool-f05840a3-976h   69m          0%     3383Mi          5%
               gke-xlou-cdm-default-pool-f05840a3-9p4b   68m          0%     5505Mi          9%
               gke-xlou-cdm-default-pool-f05840a3-bf2g   72m          0%     2596Mi          4%
               gke-xlou-cdm-default-pool-f05840a3-h81k   122m         0%     2102Mi          3%
               gke-xlou-cdm-default-pool-f05840a3-tnc9   75m          0%     3972Mi          6%
               gke-xlou-cdm-ds-32e4dcb1-1l6p             64m          0%     1109Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-4z9d             63m          0%     1080Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-8bsn             69m          0%     10982Mi         18%
               gke-xlou-cdm-ds-32e4dcb1-b374             64m          0%     10892Mi         18%
               gke-xlou-cdm-ds-32e4dcb1-n920             58m          0%     1085Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-x4wx             72m          0%     10938Mi         18%
               gke-xlou-cdm-frontend-a8771548-k40m       69m          0%     1622Mi          2%
22:11:05 DEBUG --- stderr ---
22:11:05 DEBUG
22:11:05 INFO [loop_until]: OK (rc = 0)
22:11:05 DEBUG --- stdout ---
22:11:05 DEBUG NAME                           CPU(cores)   MEMORY(bytes)
               admin-ui-587fc66dd5-sdd42      1m           4Mi
               am-55f77847b7-7kvs5            12m          2276Mi
               am-55f77847b7-nhzv4            15m          4422Mi
               am-55f77847b7-rpq9w            16m          4341Mi
               ds-cts-0                       13m          391Mi
               ds-cts-1                       8m           368Mi
               ds-cts-2                       8m           404Mi
               ds-idrepo-0                    17m          10334Mi
               ds-idrepo-1                    22m          10317Mi
               ds-idrepo-2                    19m          10268Mi
               end-user-ui-6845bc78c7-kj9rz   1m           4Mi
               idm-65858d8c4c-h7xxp           10m          2730Mi
               idm-65858d8c4c-v78nh           7m           1292Mi
               lodemon-7655dd7665-d26cm       3m           66Mi
               login-ui-74d6fb46c-ncp99       1m           3Mi
               overseer-0-56868bb8f7-f7jz9    1m           48Mi
22:11:05 DEBUG --- stderr ---
22:11:05 DEBUG
22:11:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one
22:11:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
22:11:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one
22:11:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one
22:11:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
22:11:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one
22:11:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')).
Checking if error is transient one 22:11:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 22:11:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 22:11:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 22:11:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 22:11:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:11:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 22:11:15 WARNING Response is NONE 22:11:15 WARNING Response is NONE 22:11:15 WARNING Response is NONE 22:11:15 WARNING Response is NONE 22:11:15 WARNING Response is NONE 22:11:15 WARNING Response is NONE 22:11:15 WARNING Response is NONE 22:11:15 WARNING Response is NONE 22:11:15 WARNING Response is NONE 22:11:15 WARNING Response is NONE 22:11:15 WARNING Response is NONE 22:11:15 WARNING Response is NONE 22:11:15 WARNING Response is NONE 22:11:15 DEBUG Exception is preset. Setting retry_loop to true 22:11:15 DEBUG Exception is preset. 
Setting retry_loop to true 22:11:15 DEBUG Exception is preset. Setting retry_loop to true 22:11:15 DEBUG Exception is preset. Setting retry_loop to true 22:11:15 DEBUG Exception is preset. Setting retry_loop to true 22:11:15 DEBUG Exception is preset. Setting retry_loop to true 22:11:15 DEBUG Exception is preset. Setting retry_loop to true 22:11:15 DEBUG Exception is preset. Setting retry_loop to true 22:11:15 DEBUG Exception is preset. Setting retry_loop to true 22:11:15 DEBUG Exception is preset. Setting retry_loop to true 22:11:15 DEBUG Exception is preset. Setting retry_loop to true 22:11:15 DEBUG Exception is preset. Setting retry_loop to true 22:11:15 DEBUG Exception is preset. Setting retry_loop to true 22:11:15 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:11:15 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:11:15 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:11:15 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:11:15 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:11:15 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:11:15 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:11:15 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:11:15 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:11:15 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:11:15 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:11:15 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:11:15 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:11:26 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:11:26 WARNING Response is NONE 22:11:26 DEBUG Exception is preset. Setting retry_loop to true 22:11:26 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:11:28 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 22:11:28 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:11:28 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:11:28 WARNING Response is NONE 22:11:28 WARNING Response is NONE 22:11:28 WARNING Response is NONE 22:11:28 DEBUG Exception is preset. Setting retry_loop to true 22:11:28 DEBUG Exception is preset. Setting retry_loop to true 22:11:28 DEBUG Exception is preset. Setting retry_loop to true 22:11:28 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:11:28 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:11:28 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:11:32 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:11:32 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:11:32 WARNING Response is NONE 22:11:32 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:11:32 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 22:11:32 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:11:32 WARNING Response is NONE 22:11:32 DEBUG Exception is preset. Setting retry_loop to true 22:11:32 WARNING Response is NONE 22:11:32 WARNING Response is NONE 22:11:32 WARNING Response is NONE 22:11:32 DEBUG Exception is preset. Setting retry_loop to true 22:11:32 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:11:32 DEBUG Exception is preset. Setting retry_loop to true 22:11:32 DEBUG Exception is preset. Setting retry_loop to true 22:11:32 DEBUG Exception is preset. Setting retry_loop to true 22:11:32 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:11:32 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:11:32 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:11:32 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:11:37 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:11:37 WARNING Response is NONE 22:11:37 DEBUG Exception is preset. Setting retry_loop to true 22:11:37 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:11:39 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:11:39 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:11:39 WARNING Response is NONE 22:11:39 WARNING Response is NONE 22:11:39 DEBUG Exception is preset. Setting retry_loop to true 22:11:39 DEBUG Exception is preset. Setting retry_loop to true 22:11:39 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:11:39 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
22:11:41 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:11:41 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:11:41 WARNING Response is NONE 22:11:41 WARNING Response is NONE 22:11:41 DEBUG Exception is preset. Setting retry_loop to true 22:11:41 DEBUG Exception is preset. Setting retry_loop to true 22:11:41 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:11:41 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:11:41 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:11:41 WARNING Response is NONE 22:11:41 DEBUG Exception is preset. Setting retry_loop to true 22:11:41 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:11:43 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:11:43 WARNING Response is NONE 22:11:43 DEBUG Exception is preset. Setting retry_loop to true 22:11:43 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:11:45 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 22:11:45 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:11:45 WARNING Response is NONE 22:11:45 WARNING Response is NONE 22:11:45 DEBUG Exception is preset. Setting retry_loop to true 22:11:45 DEBUG Exception is preset. Setting retry_loop to true 22:11:45 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:11:45 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:11:48 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:11:48 WARNING Response is NONE 22:11:48 DEBUG Exception is preset. Setting retry_loop to true 22:11:48 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:11:50 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:11:50 WARNING Response is NONE 22:11:50 DEBUG Exception is preset. Setting retry_loop to true 22:11:50 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:11:52 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:11:52 WARNING Response is NONE 22:11:52 DEBUG Exception is preset. Setting retry_loop to true 22:11:52 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:11:54 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:11:54 WARNING Response is NONE 22:11:54 DEBUG Exception is preset. 
Setting retry_loop to true 22:11:54 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:11:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:11:56 WARNING Response is NONE 22:11:56 DEBUG Exception is preset. Setting retry_loop to true 22:11:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:11:58 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:11:58 WARNING Response is NONE 22:11:58 DEBUG Exception is preset. Setting retry_loop to true 22:11:58 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:11:59 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:11:59 WARNING Response is NONE 22:11:59 DEBUG Exception is preset. Setting retry_loop to true 22:11:59 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:12:01 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:12:01 WARNING Response is NONE 22:12:01 DEBUG Exception is preset. Setting retry_loop to true 22:12:01 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:12:03 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:12:03 WARNING Response is NONE 22:12:03 DEBUG Exception is preset. Setting retry_loop to true 22:12:03 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
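The PromQL behind each failing request above is URL-encoded, which makes the warnings hard to scan. A small stand-alone snippet (not part of the test harness; Python standard library only) that decodes one of the URLs quoted above into a readable query and evaluation timestamp:

    # Decode one of the failing Prometheus query URLs from the warnings above.
    from datetime import datetime, timezone
    from urllib.parse import parse_qs, urlsplit

    url = ('/api/v1/query?query=sum%28rate%28am_cts_reaper_search_count'
           '%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144')

    params = parse_qs(urlsplit(url).query)
    print(params['query'][0])
    # -> sum(rate(am_cts_reaper_search_count{namespace='xlou'}[60s]))by(pod)
    print(datetime.fromtimestamp(int(params['time'][0]), tz=timezone.utc))
    # -> the evaluation instant; the 'time' parameter is a Unix timestamp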
22:12:05 INFO
22:12:05 INFO
22:12:05 INFO [loop_until]: kubectl --namespace=xlou top node
22:12:05 INFO [loop_until]: kubectl --namespace=xlou top pods
22:12:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
22:12:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
22:12:05 INFO [loop_until]: OK (rc = 0)
22:12:05 DEBUG --- stdout ---
22:12:05 DEBUG NAME  CPU(cores)  MEMORY(bytes)
               admin-ui-587fc66dd5-sdd42  1m  4Mi
               am-55f77847b7-7kvs5  12m  2286Mi
               am-55f77847b7-nhzv4  12m  4422Mi
               am-55f77847b7-rpq9w  16m  4341Mi
               ds-cts-0  6m  391Mi
               ds-cts-1  6m  368Mi
               ds-cts-2  13m  404Mi
               ds-idrepo-0  27m  10334Mi
               ds-idrepo-1  34m  10312Mi
               ds-idrepo-2  30m  10265Mi
               end-user-ui-6845bc78c7-kj9rz  1m  4Mi
               idm-65858d8c4c-h7xxp  8m  2730Mi
               idm-65858d8c4c-v78nh  6m  1292Mi
               lodemon-7655dd7665-d26cm  3m  66Mi
               login-ui-74d6fb46c-ncp99  1m  3Mi
               overseer-0-56868bb8f7-f7jz9  270m  98Mi
22:12:05 DEBUG --- stderr ---
22:12:05 DEBUG
22:12:05 INFO [loop_until]: OK (rc = 0)
22:12:05 DEBUG --- stdout ---
22:12:05 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
               gke-xlou-cdm-default-pool-f05840a3-2nsn  77m  0%  1303Mi  2%
               gke-xlou-cdm-default-pool-f05840a3-5pbc  69m  0%  5335Mi  9%
               gke-xlou-cdm-default-pool-f05840a3-976h  72m  0%  3395Mi  5%
               gke-xlou-cdm-default-pool-f05840a3-9p4b  66m  0%  5504Mi  9%
               gke-xlou-cdm-default-pool-f05840a3-bf2g  72m  0%  2593Mi  4%
               gke-xlou-cdm-default-pool-f05840a3-h81k  123m  0%  2100Mi  3%
               gke-xlou-cdm-default-pool-f05840a3-tnc9  68m  0%  3970Mi  6%
               gke-xlou-cdm-ds-32e4dcb1-1l6p  62m  0%  1101Mi  1%
               gke-xlou-cdm-ds-32e4dcb1-4z9d  54m  0%  1081Mi  1%
               gke-xlou-cdm-ds-32e4dcb1-8bsn  74m  0%  10985Mi  18%
               gke-xlou-cdm-ds-32e4dcb1-b374  75m  0%  10890Mi  18%
               gke-xlou-cdm-ds-32e4dcb1-n920  57m  0%  1083Mi  1%
               gke-xlou-cdm-ds-32e4dcb1-x4wx  76m  0%  10935Mi  18%
               gke-xlou-cdm-frontend-a8771548-k40m  356m  2%  1621Mi  2%
22:12:05 DEBUG --- stderr ---
22:12:05 DEBUG
22:12:05 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
22:12:05 WARNING Response is NONE
22:12:05 DEBUG Exception is preset. Setting retry_loop to true
22:12:05 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
22:12:06 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
22:12:06 WARNING Response is NONE
22:12:06 DEBUG Exception is preset. Setting retry_loop to true
22:12:06 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...

22:12:07 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:12:07 WARNING Response is NONE 22:12:07 DEBUG Exception is preset. Setting retry_loop to true 22:12:07 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:12:09 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:12:09 WARNING Response is NONE 22:12:09 DEBUG Exception is preset. Setting retry_loop to true 22:12:09 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:12:10 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:12:10 WARNING Response is NONE 22:12:10 DEBUG Exception is preset. Setting retry_loop to true 22:12:10 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-20: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 22:12:12 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:12:12 WARNING Response is NONE 22:12:12 DEBUG Exception is preset. Setting retry_loop to true 22:12:12 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-19: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 22:12:14 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:12:14 WARNING Response is NONE 22:12:14 DEBUG Exception is preset. Setting retry_loop to true 22:12:14 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
Exception in thread Thread-17: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 22:12:17 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:12:17 WARNING Response is NONE 22:12:17 DEBUG Exception is preset. Setting retry_loop to true 22:12:17 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:12:19 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:12:19 WARNING Response is NONE 22:12:19 DEBUG Exception is preset. Setting retry_loop to true 22:12:19 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-7: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 22:12:20 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:12:20 WARNING Response is NONE 22:12:20 DEBUG Exception is preset. Setting retry_loop to true 22:12:20 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:12:21 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:12:21 WARNING Response is NONE 22:12:21 DEBUG Exception is preset. Setting retry_loop to true 22:12:21 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-15: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 22:12:23 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:12:23 WARNING Response is NONE 22:12:23 DEBUG Exception is preset. Setting retry_loop to true 22:12:23 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:12:23 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:12:23 WARNING Response is NONE 22:12:23 DEBUG Exception is preset. Setting retry_loop to true 22:12:23 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:12:27 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:12:27 WARNING Response is NONE 22:12:27 DEBUG Exception is preset. Setting retry_loop to true 22:12:27 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:12:28 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:12:28 WARNING Response is NONE 22:12:28 DEBUG Exception is preset. Setting retry_loop to true 22:12:28 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
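All of the tracebacks above fail the same way: once HttpCmd gives up and raises FailException ('Failed to obtain response from server...'), monitoring.py line 315 tries to report it with self.logger(f'Query: {query} failed with: {e}'), but self.logger is a LodestarLogger instance rather than a callable, so the worker thread dies with a TypeError instead of logging the original Prometheus failure. Below is a minimal reproduction of that TypeError and two conventional fixes, using a stand-in class; LodestarLogger's real API is not shown in this log, so the details are assumptions:

    import logging

    logging.basicConfig(level=logging.WARNING)

    class DemoLogger:
        # Stand-in for LodestarLogger: a thin wrapper around a stdlib logger.
        def __init__(self, name: str) -> None:
            self._log = logging.getLogger(name)

        # Fix option 1: call an explicit level method at the call site,
        # e.g. self.logger.warning(...) instead of self.logger(...).
        def warning(self, msg: str) -> None:
            self._log.warning(msg)

        # Fix option 2: make the wrapper callable so existing
        # self.logger(...) call sites keep working.
        def __call__(self, msg: str) -> None:
            self._log.warning(msg)

    logger = DemoLogger('lodemon')
    logger.warning('Query failed')  # works with fix option 1
    logger('Query failed')          # only works because __call__ is defined above;
                                    # without it this is exactly the TypeError in the traceback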
22:12:29 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:12:29 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:12:29 WARNING Response is NONE 22:12:29 WARNING Response is NONE 22:12:29 DEBUG Exception is preset. Setting retry_loop to true 22:12:29 DEBUG Exception is preset. Setting retry_loop to true 22:12:29 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:12:29 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:12:31 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:12:31 WARNING Response is NONE 22:12:31 DEBUG Exception is preset. Setting retry_loop to true 22:12:31 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-11: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 22:12:34 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:12:34 WARNING Response is NONE 22:12:34 DEBUG Exception is preset. Setting retry_loop to true 22:12:34 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:12:34 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:12:34 WARNING Response is NONE 22:12:34 DEBUG Exception is preset. Setting retry_loop to true 22:12:34 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:12:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:12:38 WARNING Response is NONE 22:12:38 DEBUG Exception is preset. Setting retry_loop to true 22:12:38 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:12:39 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:12:39 WARNING Response is NONE 22:12:39 DEBUG Exception is preset. Setting retry_loop to true 22:12:39 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
Exception in thread Thread-21: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 22:12:40 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:12:40 WARNING Response is NONE 22:12:40 DEBUG Exception is preset. Setting retry_loop to true 22:12:40 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:12:42 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:12:42 WARNING Response is NONE 22:12:42 DEBUG Exception is preset. Setting retry_loop to true 22:12:42 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:12:45 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:12:45 WARNING Response is NONE 22:12:45 DEBUG Exception is preset. Setting retry_loop to true 22:12:45 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
22:12:45 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:12:45 WARNING Response is NONE 22:12:45 DEBUG Exception is preset. Setting retry_loop to true 22:12:45 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-13: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 22:12:46 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:12:46 WARNING Response is NONE 22:12:46 DEBUG Exception is preset. Setting retry_loop to true 22:12:46 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:12:49 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:12:49 WARNING Response is NONE 22:12:49 DEBUG Exception is preset. Setting retry_loop to true 22:12:49 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
Exception in thread Thread-29: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 22:12:51 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:12:51 WARNING Response is NONE 22:12:51 DEBUG Exception is preset. Setting retry_loop to true 22:12:51 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:12:53 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:12:53 WARNING Response is NONE 22:12:53 DEBUG Exception is preset. Setting retry_loop to true 22:12:53 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:12:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:12:56 WARNING Response is NONE 22:12:56 DEBUG Exception is preset. Setting retry_loop to true 22:12:56 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
Exception in thread Thread-10: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 22:12:57 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:12:57 WARNING Response is NONE 22:12:57 DEBUG Exception is preset. Setting retry_loop to true 22:12:57 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:13:02 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:13:02 WARNING Response is NONE 22:13:02 DEBUG Exception is preset. Setting retry_loop to true 22:13:02 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 22:13:05 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 22:13:05 WARNING Response is NONE 22:13:05 DEBUG Exception is preset. Setting retry_loop to true 22:13:05 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
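The repeating WARNING/DEBUG sequence above ('Got connection reset error ... Checking if error is transient one', 'Response is NONE', 'Exception is preset. Setting retry_loop to true', 'We received known exception ... sleeping for 10 secs before retry...', and eventually 'Hit retry pattern for a 5 time. Proceeding to check response anyway.') traces a bounded retry loop around each Prometheus query: sleep ten seconds between attempts, give up after the fifth, then fall through with whatever response is available, which here is always None and ends in FailException. The harness source is not included in this log, so the sketch below only mirrors the behaviour those messages describe; the function name and the use of the requests library are assumptions:

    import time
    import requests

    def query_with_retries(url: str, retries: int = 5, pause: int = 10):
        # Hedged sketch of the retry pattern the log messages describe,
        # not the actual HttpCmd implementation.
        response = None
        for attempt in range(1, retries + 1):
            try:
                response = requests.get(url, timeout=30)
                break
            except requests.ConnectionError as exc:
                print(f'WARNING Got connection reset error: {exc}. Checking if error is transient one')
                if attempt == retries:
                    print(f'WARNING Hit retry pattern for a {attempt} time. Proceeding to check response anyway.')
                    break
                print(f'WARNING We received known exception. Trying to recover, sleeping for {pause} secs before retry...')
                time.sleep(pause)
        if response is None:
            # In the real harness this is where FailException('Failed to obtain
            # response from server...') is raised (HttpCmd.py, line 381).
            raise RuntimeError('Failed to obtain response from server...')
        return response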
22:13:05 INFO
22:13:05 INFO [loop_until]: kubectl --namespace=xlou top pods
22:13:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
22:13:05 INFO
22:13:05 INFO [loop_until]: kubectl --namespace=xlou top node
22:13:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
22:13:05 INFO [loop_until]: OK (rc = 0)
22:13:05 DEBUG --- stdout ---
22:13:05 DEBUG NAME  CPU(cores)  MEMORY(bytes)
               admin-ui-587fc66dd5-sdd42  1m  4Mi
               am-55f77847b7-7kvs5  12m  2298Mi
               am-55f77847b7-nhzv4  11m  4423Mi
               am-55f77847b7-rpq9w  11m  4342Mi
               ds-cts-0  7m  391Mi
               ds-cts-1  7m  368Mi
               ds-cts-2  11m  405Mi
               ds-idrepo-0  26m  10338Mi
               ds-idrepo-1  22m  10312Mi
               ds-idrepo-2  19m  10266Mi
               end-user-ui-6845bc78c7-kj9rz  1m  4Mi
               idm-65858d8c4c-h7xxp  9m  2730Mi
               idm-65858d8c4c-v78nh  7m  1292Mi
               lodemon-7655dd7665-d26cm  3m  66Mi
               login-ui-74d6fb46c-ncp99  1m  3Mi
               overseer-0-56868bb8f7-f7jz9  1m  98Mi
22:13:05 DEBUG --- stderr ---
22:13:05 DEBUG
22:13:05 INFO [loop_until]: OK (rc = 0)
22:13:05 DEBUG --- stdout ---
22:13:05 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
               gke-xlou-cdm-default-pool-f05840a3-2nsn  74m  0%  1308Mi  2%
               gke-xlou-cdm-default-pool-f05840a3-5pbc  68m  0%  5337Mi  9%
               gke-xlou-cdm-default-pool-f05840a3-976h  70m  0%  3407Mi  5%
               gke-xlou-cdm-default-pool-f05840a3-9p4b  67m  0%  5509Mi  9%
               gke-xlou-cdm-default-pool-f05840a3-bf2g  74m  0%  2596Mi  4%
               gke-xlou-cdm-default-pool-f05840a3-h81k  127m  0%  2104Mi  3%
               gke-xlou-cdm-default-pool-f05840a3-tnc9  70m  0%  3970Mi  6%
               gke-xlou-cdm-ds-32e4dcb1-1l6p  65m  0%  1097Mi  1%
               gke-xlou-cdm-ds-32e4dcb1-4z9d  57m  0%  1083Mi  1%
               gke-xlou-cdm-ds-32e4dcb1-8bsn  80m  0%  10988Mi  18%
               gke-xlou-cdm-ds-32e4dcb1-b374  66m  0%  10891Mi  18%
               gke-xlou-cdm-ds-32e4dcb1-n920  59m  0%  1081Mi  1%
               gke-xlou-cdm-ds-32e4dcb1-x4wx  77m  0%  10934Mi  18%
               gke-xlou-cdm-frontend-a8771548-k40m  70m  0%  1619Mi  2%
22:13:05 DEBUG --- stderr ---
22:13:05 DEBUG
22:13:08 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
22:13:08 WARNING Response is NONE
22:13:08 DEBUG Exception is preset. Setting retry_loop to true
22:13:08 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
22:13:14 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
22:13:14 WARNING Response is NONE
22:13:14 DEBUG Exception is preset. Setting retry_loop to true
22:13:14 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-8:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable

22:13:16 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
22:13:16 WARNING Response is NONE
22:13:16 DEBUG Exception is preset. Setting retry_loop to true
22:13:16 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.

Exception in thread Thread-12:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable

22:13:19 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691788144 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
22:13:19 WARNING Response is NONE
22:13:19 DEBUG Exception is preset. Setting retry_loop to true
22:13:19 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.

Exception in thread Thread-3:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable

22:14:05 INFO 22:14:05 INFO [loop_until]: kubectl --namespace=xlou top pods 22:14:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:14:05 INFO [loop_until]: OK (rc = 0) 22:14:05 DEBUG --- stdout --- 22:14:05 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 36m 2373Mi am-55f77847b7-nhzv4 10m 4422Mi am-55f77847b7-rpq9w 49m 4357Mi ds-cts-0 77m 392Mi ds-cts-1 74m 369Mi ds-cts-2 66m 405Mi ds-idrepo-0 19m 10338Mi ds-idrepo-1 37m 10320Mi ds-idrepo-2 21m 10269Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 38m 2744Mi idm-65858d8c4c-v78nh 6m 1292Mi lodemon-7655dd7665-d26cm 2m 65Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 2m 98Mi 22:14:05 DEBUG --- stderr --- 22:14:05 DEBUG 22:14:05 INFO 22:14:05 INFO [loop_until]: kubectl --namespace=xlou top node 22:14:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:14:05 INFO [loop_until]: OK (rc = 0) 22:14:05 DEBUG --- stdout --- 22:14:05 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1304Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 108m 0% 5352Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 95m 0% 3484Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 135m 0% 5524Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 2596Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 127m 0% 2107Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 97m 0% 3980Mi 6% gke-xlou-cdm-ds-32e4dcb1-1l6p 137m 0% 1096Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 134m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 422m 2% 10991Mi 18% gke-xlou-cdm-ds-32e4dcb1-b374 74m 0% 10896Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 140m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 155m 0% 10943Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 494m 3% 1863Mi 3% 22:14:05 DEBUG --- stderr --- 22:14:05 DEBUG 22:15:05 INFO 22:15:05 INFO [loop_until]: kubectl --namespace=xlou top pods 22:15:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:15:05 INFO [loop_until]: OK (rc = 0) 22:15:05 DEBUG --- stdout --- 22:15:05 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 11m 2560Mi am-55f77847b7-nhzv4 13m 4435Mi am-55f77847b7-rpq9w 15m 4357Mi ds-cts-0 294m 393Mi ds-cts-1 148m 368Mi ds-cts-2 229m
406Mi ds-idrepo-0 3162m 13163Mi ds-idrepo-1 213m 10323Mi ds-idrepo-2 241m 10276Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 9m 2744Mi idm-65858d8c4c-v78nh 12m 1292Mi lodemon-7655dd7665-d26cm 2m 65Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 1025m 375Mi 22:15:05 DEBUG --- stderr --- 22:15:05 DEBUG 22:15:05 INFO 22:15:05 INFO [loop_until]: kubectl --namespace=xlou top node 22:15:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:15:05 INFO [loop_until]: OK (rc = 0) 22:15:05 DEBUG --- stdout --- 22:15:05 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1304Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 71m 0% 5353Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 3671Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 70m 0% 5517Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 76m 0% 2597Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 122m 0% 2094Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 73m 0% 3981Mi 6% gke-xlou-cdm-ds-32e4dcb1-1l6p 211m 1% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 379m 2% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 3288m 20% 13742Mi 23% gke-xlou-cdm-ds-32e4dcb1-b374 277m 1% 10903Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 200m 1% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 278m 1% 10944Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1142m 7% 1890Mi 3% 22:15:05 DEBUG --- stderr --- 22:15:05 DEBUG 22:16:05 INFO 22:16:05 INFO [loop_until]: kubectl --namespace=xlou top pods 22:16:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:16:05 INFO [loop_until]: OK (rc = 0) 22:16:05 DEBUG --- stdout --- 22:16:05 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 10m 2560Mi am-55f77847b7-nhzv4 11m 4435Mi am-55f77847b7-rpq9w 11m 4358Mi ds-cts-0 9m 389Mi ds-cts-1 6m 368Mi ds-cts-2 7m 406Mi ds-idrepo-0 2559m 13403Mi ds-idrepo-1 33m 10319Mi ds-idrepo-2 14m 10278Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 12m 2745Mi idm-65858d8c4c-v78nh 7m 1293Mi lodemon-7655dd7665-d26cm 2m 65Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 1119m 375Mi 22:16:05 DEBUG --- stderr --- 22:16:05 DEBUG 22:16:05 INFO 22:16:05 INFO [loop_until]: kubectl --namespace=xlou top node 22:16:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:16:05 INFO [loop_until]: OK (rc = 0) 22:16:05 DEBUG --- stdout --- 22:16:05 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1308Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5354Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 3670Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 5518Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 2598Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 120m 0% 2101Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 76m 0% 3992Mi 6% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2729m 17% 13961Mi 23% gke-xlou-cdm-ds-32e4dcb1-b374 65m 0% 10898Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 86m 0% 10939Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1215m 7% 1893Mi 3% 22:16:05 DEBUG --- stderr --- 22:16:05 DEBUG 22:17:05 INFO 22:17:05 INFO [loop_until]: kubectl --namespace=xlou top pods 22:17:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:17:05 INFO [loop_until]: OK (rc = 0) 22:17:05 DEBUG --- stdout --- 22:17:05 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 12m 
2561Mi am-55f77847b7-nhzv4 12m 4435Mi am-55f77847b7-rpq9w 10m 4358Mi ds-cts-0 8m 390Mi ds-cts-1 7m 370Mi ds-cts-2 9m 407Mi ds-idrepo-0 2732m 13381Mi ds-idrepo-1 19m 10319Mi ds-idrepo-2 20m 10281Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 12m 2745Mi idm-65858d8c4c-v78nh 8m 1293Mi lodemon-7655dd7665-d26cm 2m 65Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 1180m 375Mi 22:17:05 DEBUG --- stderr --- 22:17:05 DEBUG 22:17:05 INFO 22:17:05 INFO [loop_until]: kubectl --namespace=xlou top node 22:17:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:17:05 INFO [loop_until]: OK (rc = 0) 22:17:05 DEBUG --- stdout --- 22:17:05 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1307Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5355Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 3670Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5532Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 2601Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2132Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 73m 0% 3984Mi 6% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2814m 17% 13944Mi 23% gke-xlou-cdm-ds-32e4dcb1-b374 70m 0% 10904Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 69m 0% 10942Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1250m 7% 1894Mi 3% 22:17:05 DEBUG --- stderr --- 22:17:05 DEBUG 22:18:05 INFO 22:18:05 INFO [loop_until]: kubectl --namespace=xlou top pods 22:18:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:18:05 INFO [loop_until]: OK (rc = 0) 22:18:05 DEBUG --- stdout --- 22:18:05 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 10m 2560Mi am-55f77847b7-nhzv4 11m 4437Mi am-55f77847b7-rpq9w 10m 4358Mi ds-cts-0 7m 390Mi ds-cts-1 7m 372Mi ds-cts-2 7m 407Mi ds-idrepo-0 3022m 13560Mi ds-idrepo-1 21m 10322Mi ds-idrepo-2 19m 10281Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 11m 2745Mi idm-65858d8c4c-v78nh 8m 1296Mi lodemon-7655dd7665-d26cm 2m 65Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 1233m 375Mi 22:18:05 DEBUG --- stderr --- 22:18:05 DEBUG 22:18:05 INFO 22:18:05 INFO [loop_until]: kubectl --namespace=xlou top node 22:18:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:18:05 INFO [loop_until]: OK (rc = 0) 22:18:05 DEBUG --- stdout --- 22:18:05 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1305Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5363Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 3669Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5519Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 2601Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2131Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 73m 0% 3984Mi 6% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 68m 0% 1074Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 3155m 19% 14116Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 73m 0% 10904Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 79m 0% 10945Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1353m 8% 1893Mi 3% 22:18:05 DEBUG --- stderr --- 22:18:05 DEBUG 22:19:05 INFO 22:19:05 INFO [loop_until]: kubectl --namespace=xlou top pods 22:19:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:19:05 INFO [loop_until]: OK (rc = 0) 22:19:05 DEBUG --- 
stdout --- 22:19:05 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 11m 2560Mi am-55f77847b7-nhzv4 9m 4437Mi am-55f77847b7-rpq9w 9m 4360Mi ds-cts-0 8m 390Mi ds-cts-1 6m 370Mi ds-cts-2 6m 407Mi ds-idrepo-0 3036m 13593Mi ds-idrepo-1 18m 10323Mi ds-idrepo-2 15m 10280Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 16m 2745Mi idm-65858d8c4c-v78nh 10m 1296Mi lodemon-7655dd7665-d26cm 2m 65Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 1344m 375Mi 22:19:05 DEBUG --- stderr --- 22:19:05 DEBUG 22:19:05 INFO 22:19:05 INFO [loop_until]: kubectl --namespace=xlou top node 22:19:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:19:05 INFO [loop_until]: OK (rc = 0) 22:19:05 DEBUG --- stdout --- 22:19:05 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 83m 0% 1303Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5355Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 3670Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 5523Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 83m 0% 2603Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 118m 0% 2120Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 72m 0% 3988Mi 6% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 3088m 19% 14145Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 59m 0% 10905Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 66m 0% 10947Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1259m 7% 1619Mi 2% 22:19:05 DEBUG --- stderr --- 22:19:05 DEBUG 22:20:05 INFO 22:20:05 INFO [loop_until]: kubectl --namespace=xlou top pods 22:20:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:20:05 INFO [loop_until]: OK (rc = 0) 22:20:05 DEBUG --- stdout --- 22:20:05 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 9m 2560Mi am-55f77847b7-nhzv4 10m 4438Mi am-55f77847b7-rpq9w 11m 4360Mi ds-cts-0 8m 390Mi ds-cts-1 5m 371Mi ds-cts-2 7m 408Mi ds-idrepo-0 13m 13592Mi ds-idrepo-1 16m 10323Mi ds-idrepo-2 19m 10284Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 15m 2745Mi idm-65858d8c4c-v78nh 8m 1296Mi lodemon-7655dd7665-d26cm 2m 65Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 1m 98Mi 22:20:05 DEBUG --- stderr --- 22:20:05 DEBUG 22:20:06 INFO 22:20:06 INFO [loop_until]: kubectl --namespace=xlou top node 22:20:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:20:06 INFO [loop_until]: OK (rc = 0) 22:20:06 DEBUG --- stdout --- 22:20:06 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1306Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 5356Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 3670Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 60m 0% 5523Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 2601Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2120Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 78m 0% 3990Mi 6% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 67m 0% 14147Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 70m 0% 10912Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 67m 0% 10949Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 71m 0% 1618Mi 2% 22:20:06 DEBUG --- stderr --- 22:20:06 DEBUG 22:21:06 INFO 22:21:06 INFO [loop_until]: kubectl --namespace=xlou top pods 22:21:06 INFO [loop_until]: 
(max_time=180, interval=5, expected_rc=[0] 22:21:06 INFO [loop_until]: OK (rc = 0) 22:21:06 DEBUG --- stdout --- 22:21:06 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 8m 2561Mi am-55f77847b7-nhzv4 18m 4441Mi am-55f77847b7-rpq9w 11m 4361Mi ds-cts-0 5m 390Mi ds-cts-1 8m 372Mi ds-cts-2 8m 407Mi ds-idrepo-0 16m 13592Mi ds-idrepo-1 2687m 12597Mi ds-idrepo-2 21m 10286Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 9m 2746Mi idm-65858d8c4c-v78nh 6m 1296Mi lodemon-7655dd7665-d26cm 2m 65Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 969m 363Mi 22:21:06 DEBUG --- stderr --- 22:21:06 DEBUG 22:21:06 INFO 22:21:06 INFO [loop_until]: kubectl --namespace=xlou top node 22:21:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:21:06 INFO [loop_until]: OK (rc = 0) 22:21:06 DEBUG --- stdout --- 22:21:06 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1308Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5358Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 62m 0% 3671Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 69m 0% 5528Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 2602Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 125m 0% 2112Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 3991Mi 6% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 67m 0% 14146Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 68m 0% 10913Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2798m 17% 13084Mi 22% gke-xlou-cdm-frontend-a8771548-k40m 1170m 7% 1879Mi 3% 22:21:06 DEBUG --- stderr --- 22:21:06 DEBUG 22:22:06 INFO 22:22:06 INFO [loop_until]: kubectl --namespace=xlou top pods 22:22:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:22:06 INFO [loop_until]: OK (rc = 0) 22:22:06 DEBUG --- stdout --- 22:22:06 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 13m 2558Mi am-55f77847b7-nhzv4 10m 4441Mi am-55f77847b7-rpq9w 8m 4361Mi ds-cts-0 6m 390Mi ds-cts-1 8m 371Mi ds-cts-2 7m 408Mi ds-idrepo-0 13m 13592Mi ds-idrepo-1 2702m 13366Mi ds-idrepo-2 13m 10280Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 9m 2746Mi idm-65858d8c4c-v78nh 8m 1296Mi lodemon-7655dd7665-d26cm 2m 65Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 1141m 363Mi 22:22:06 DEBUG --- stderr --- 22:22:06 DEBUG 22:22:06 INFO 22:22:06 INFO [loop_until]: kubectl --namespace=xlou top node 22:22:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:22:06 INFO [loop_until]: OK (rc = 0) 22:22:06 DEBUG --- stdout --- 22:22:06 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1305Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 61m 0% 5358Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 3666Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 61m 0% 5529Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 76m 0% 2603Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 128m 0% 2104Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 68m 0% 3985Mi 6% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 65m 0% 14146Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 62m 0% 10905Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2759m 17% 13901Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1212m 7% 1882Mi 3% 22:22:06 DEBUG --- stderr --- 22:22:06 DEBUG 
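Each resource snapshot in this log is produced by the same [loop_until] wrapper: it re-runs a kubectl command every interval seconds until the return code is one of the expected values or max_time expires, then dumps stdout and stderr at DEBUG level. A simplified stand-in is sketched below; the real loop_until helper is not shown in this log, so its exact signature and behaviour are assumptions.

import subprocess
import time

def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,)):
    # Re-run cmd every interval seconds until it exits with an expected
    # return code or max_time seconds have elapsed, mirroring the
    # "[loop_until]: (max_time=180, interval=5, expected_rc=[0]" entries.
    deadline = time.time() + max_time
    while True:
        proc = subprocess.run(cmd, capture_output=True, text=True)
        if proc.returncode in expected_rc:
            print(f"INFO [loop_until]: OK (rc = {proc.returncode})")
            print("DEBUG --- stdout ---")
            print(proc.stdout)
            print("DEBUG --- stderr ---")
            print(proc.stderr)
            return proc
        if time.time() >= deadline:
            raise TimeoutError(f"{' '.join(cmd)} still failing after {max_time}s")
        time.sleep(interval)

# One monitoring cycle, as repeated every minute in this log:
loop_until(["kubectl", "--namespace=xlou", "top", "pods"])
loop_until(["kubectl", "--namespace=xlou", "top", "node"])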
22:23:06 INFO 22:23:06 INFO [loop_until]: kubectl --namespace=xlou top pods 22:23:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:23:06 INFO [loop_until]: OK (rc = 0) 22:23:06 DEBUG --- stdout --- 22:23:06 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 11m 2559Mi am-55f77847b7-nhzv4 12m 4441Mi am-55f77847b7-rpq9w 13m 4361Mi ds-cts-0 8m 390Mi ds-cts-1 5m 371Mi ds-cts-2 8m 408Mi ds-idrepo-0 24m 13592Mi ds-idrepo-1 2711m 13346Mi ds-idrepo-2 13m 10280Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 10m 2746Mi idm-65858d8c4c-v78nh 7m 1296Mi lodemon-7655dd7665-d26cm 2m 65Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 1195m 364Mi 22:23:06 DEBUG --- stderr --- 22:23:06 DEBUG 22:23:06 INFO 22:23:06 INFO [loop_until]: kubectl --namespace=xlou top node 22:23:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:23:06 INFO [loop_until]: OK (rc = 0) 22:23:06 DEBUG --- stdout --- 22:23:06 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1305Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5358Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 3666Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5525Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 2606Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 122m 0% 2116Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 74m 0% 3987Mi 6% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 72m 0% 14146Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 63m 0% 10905Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2780m 17% 13883Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1282m 8% 1882Mi 3% 22:23:06 DEBUG --- stderr --- 22:23:06 DEBUG 22:24:06 INFO 22:24:06 INFO [loop_until]: kubectl --namespace=xlou top pods 22:24:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:24:06 INFO [loop_until]: OK (rc = 0) 22:24:06 DEBUG --- stdout --- 22:24:06 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 10m 2559Mi am-55f77847b7-nhzv4 8m 4442Mi am-55f77847b7-rpq9w 10m 4361Mi ds-cts-0 6m 390Mi ds-cts-1 6m 371Mi ds-cts-2 15m 409Mi ds-idrepo-0 14m 13592Mi ds-idrepo-1 3180m 13513Mi ds-idrepo-2 24m 10281Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 8m 2747Mi idm-65858d8c4c-v78nh 9m 1296Mi lodemon-7655dd7665-d26cm 1m 65Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 1295m 364Mi 22:24:06 DEBUG --- stderr --- 22:24:06 DEBUG 22:24:06 INFO 22:24:06 INFO [loop_until]: kubectl --namespace=xlou top node 22:24:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:24:06 INFO [loop_until]: OK (rc = 0) 22:24:06 DEBUG --- stdout --- 22:24:06 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1306Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 5358Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 3666Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 5530Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 79m 0% 2601Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 118m 0% 2113Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 3985Mi 6% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 67m 0% 14148Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 69m 0% 10908Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3218m 20% 
14048Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1368m 8% 1881Mi 3% 22:24:06 DEBUG --- stderr --- 22:24:06 DEBUG 22:25:06 INFO 22:25:06 INFO [loop_until]: kubectl --namespace=xlou top pods 22:25:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:25:06 INFO [loop_until]: OK (rc = 0) 22:25:06 DEBUG --- stdout --- 22:25:06 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 19m 2559Mi am-55f77847b7-nhzv4 9m 4441Mi am-55f77847b7-rpq9w 10m 4361Mi ds-cts-0 6m 391Mi ds-cts-1 5m 371Mi ds-cts-2 6m 409Mi ds-idrepo-0 12m 13591Mi ds-idrepo-1 3112m 13562Mi ds-idrepo-2 19m 10287Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 9m 2747Mi idm-65858d8c4c-v78nh 6m 1297Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 1361m 364Mi 22:25:06 DEBUG --- stderr --- 22:25:06 DEBUG 22:25:06 INFO 22:25:06 INFO [loop_until]: kubectl --namespace=xlou top node 22:25:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:25:06 INFO [loop_until]: OK (rc = 0) 22:25:06 DEBUG --- stdout --- 22:25:06 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1305Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 5358Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 73m 0% 3668Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 61m 0% 5526Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 2601Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 122m 0% 2115Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 3988Mi 6% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 63m 0% 14146Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 65m 0% 10914Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3175m 19% 14091Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1423m 8% 1882Mi 3% 22:25:06 DEBUG --- stderr --- 22:25:06 DEBUG 22:26:06 INFO 22:26:06 INFO [loop_until]: kubectl --namespace=xlou top pods 22:26:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:26:06 INFO [loop_until]: OK (rc = 0) 22:26:06 DEBUG --- stdout --- 22:26:06 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 7m 2559Mi am-55f77847b7-nhzv4 7m 4442Mi am-55f77847b7-rpq9w 8m 4361Mi ds-cts-0 6m 391Mi ds-cts-1 8m 372Mi ds-cts-2 9m 409Mi ds-idrepo-0 12m 13591Mi ds-idrepo-1 15m 13562Mi ds-idrepo-2 16m 10287Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 8m 2747Mi idm-65858d8c4c-v78nh 7m 1297Mi lodemon-7655dd7665-d26cm 2m 65Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 1m 98Mi 22:26:06 DEBUG --- stderr --- 22:26:06 DEBUG 22:26:06 INFO 22:26:06 INFO [loop_until]: kubectl --namespace=xlou top node 22:26:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:26:06 INFO [loop_until]: OK (rc = 0) 22:26:06 DEBUG --- stdout --- 22:26:06 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1304Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 5355Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 61m 0% 3667Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 5526Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 69m 0% 2602Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 120m 0% 2114Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 68m 0% 3990Mi 6% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 64m 0% 14150Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 65m 0% 
10917Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 68m 0% 14092Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 63m 0% 1621Mi 2% 22:26:06 DEBUG --- stderr --- 22:26:06 DEBUG 22:27:06 INFO 22:27:06 INFO [loop_until]: kubectl --namespace=xlou top pods 22:27:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:27:06 INFO [loop_until]: OK (rc = 0) 22:27:06 DEBUG --- stdout --- 22:27:06 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 7m 2559Mi am-55f77847b7-nhzv4 9m 4442Mi am-55f77847b7-rpq9w 8m 4362Mi ds-cts-0 9m 391Mi ds-cts-1 7m 372Mi ds-cts-2 6m 409Mi ds-idrepo-0 15m 13592Mi ds-idrepo-1 14m 13563Mi ds-idrepo-2 2466m 12247Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 8m 2747Mi idm-65858d8c4c-v78nh 6m 1299Mi lodemon-7655dd7665-d26cm 2m 65Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 1301m 379Mi 22:27:06 DEBUG --- stderr --- 22:27:06 DEBUG 22:27:06 INFO 22:27:06 INFO [loop_until]: kubectl --namespace=xlou top node 22:27:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:27:07 INFO [loop_until]: OK (rc = 0) 22:27:07 DEBUG --- stdout --- 22:27:07 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1306Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 60m 0% 5358Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 59m 0% 3670Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 5528Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 69m 0% 2607Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 122m 0% 2103Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 3989Mi 6% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 74m 0% 14148Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2500m 15% 12813Mi 21% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 62m 0% 14094Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1372m 8% 1899Mi 3% 22:27:07 DEBUG --- stderr --- 22:27:07 DEBUG 22:28:06 INFO 22:28:06 INFO [loop_until]: kubectl --namespace=xlou top pods 22:28:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:28:06 INFO [loop_until]: OK (rc = 0) 22:28:06 DEBUG --- stdout --- 22:28:06 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 6m 2559Mi am-55f77847b7-nhzv4 9m 4443Mi am-55f77847b7-rpq9w 8m 4362Mi ds-cts-0 7m 391Mi ds-cts-1 8m 372Mi ds-cts-2 7m 410Mi ds-idrepo-0 13m 13592Mi ds-idrepo-1 14m 13556Mi ds-idrepo-2 2533m 13332Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 9m 2747Mi idm-65858d8c4c-v78nh 7m 1299Mi lodemon-7655dd7665-d26cm 2m 65Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 1204m 379Mi 22:28:06 DEBUG --- stderr --- 22:28:06 DEBUG 22:28:07 INFO 22:28:07 INFO [loop_until]: kubectl --namespace=xlou top node 22:28:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:28:07 INFO [loop_until]: OK (rc = 0) 22:28:07 DEBUG --- stdout --- 22:28:07 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1306Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 5354Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 3668Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 61m 0% 5527Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 2607Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 125m 0% 2109Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 3990Mi 6% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 
1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 67m 0% 14149Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2603m 16% 13877Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 64m 0% 14089Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1284m 8% 1898Mi 3% 22:28:07 DEBUG --- stderr --- 22:28:07 DEBUG 22:29:06 INFO 22:29:06 INFO [loop_until]: kubectl --namespace=xlou top pods 22:29:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:29:06 INFO [loop_until]: OK (rc = 0) 22:29:06 DEBUG --- stdout --- 22:29:06 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 7m 2563Mi am-55f77847b7-nhzv4 9m 4449Mi am-55f77847b7-rpq9w 8m 4362Mi ds-cts-0 8m 391Mi ds-cts-1 8m 372Mi ds-cts-2 6m 409Mi ds-idrepo-0 14m 13594Mi ds-idrepo-1 14m 13557Mi ds-idrepo-2 2690m 13367Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 15m 2748Mi idm-65858d8c4c-v78nh 6m 1300Mi lodemon-7655dd7665-d26cm 2m 65Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 1211m 380Mi 22:29:06 DEBUG --- stderr --- 22:29:06 DEBUG 22:29:07 INFO 22:29:07 INFO [loop_until]: kubectl --namespace=xlou top node 22:29:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:29:07 INFO [loop_until]: OK (rc = 0) 22:29:07 DEBUG --- stdout --- 22:29:07 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1304Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 59m 0% 5357Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 60m 0% 3672Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5537Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 2608Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 125m 0% 2100Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 75m 0% 3993Mi 6% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 66m 0% 14153Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2796m 17% 13928Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 66m 0% 14089Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1304m 8% 1898Mi 3% 22:29:07 DEBUG --- stderr --- 22:29:07 DEBUG 22:30:07 INFO 22:30:07 INFO [loop_until]: kubectl --namespace=xlou top pods 22:30:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:30:07 INFO [loop_until]: OK (rc = 0) 22:30:07 DEBUG --- stdout --- 22:30:07 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 34m 2568Mi am-55f77847b7-nhzv4 12m 4454Mi am-55f77847b7-rpq9w 8m 4362Mi ds-cts-0 7m 391Mi ds-cts-1 10m 372Mi ds-cts-2 8m 409Mi ds-idrepo-0 13m 13592Mi ds-idrepo-1 16m 13558Mi ds-idrepo-2 2869m 13486Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 13m 2748Mi idm-65858d8c4c-v78nh 8m 1300Mi lodemon-7655dd7665-d26cm 5m 65Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 1278m 380Mi 22:30:07 DEBUG --- stderr --- 22:30:07 DEBUG 22:30:07 INFO 22:30:07 INFO [loop_until]: kubectl --namespace=xlou top node 22:30:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:30:07 INFO [loop_until]: OK (rc = 0) 22:30:07 DEBUG --- stdout --- 22:30:07 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1306Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 5356Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 87m 0% 3674Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5540Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 69m 0% 2608Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2111Mi 3% 
gke-xlou-cdm-default-pool-f05840a3-tnc9 78m 0% 3987Mi 6% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1078Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 66m 0% 14150Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2883m 18% 14037Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 65m 0% 14091Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1351m 8% 1899Mi 3% 22:30:07 DEBUG --- stderr --- 22:30:07 DEBUG 22:31:07 INFO 22:31:07 INFO [loop_until]: kubectl --namespace=xlou top pods 22:31:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:31:07 INFO [loop_until]: OK (rc = 0) 22:31:07 DEBUG --- stdout --- 22:31:07 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 10m 2574Mi am-55f77847b7-nhzv4 12m 4455Mi am-55f77847b7-rpq9w 8m 4361Mi ds-cts-0 6m 391Mi ds-cts-1 7m 372Mi ds-cts-2 7m 409Mi ds-idrepo-0 13m 13593Mi ds-idrepo-1 33m 13551Mi ds-idrepo-2 2923m 13499Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 8m 2749Mi idm-65858d8c4c-v78nh 8m 1302Mi lodemon-7655dd7665-d26cm 2m 65Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 1324m 381Mi 22:31:07 DEBUG --- stderr --- 22:31:07 DEBUG 22:31:07 INFO 22:31:07 INFO [loop_until]: kubectl --namespace=xlou top node 22:31:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:31:07 INFO [loop_until]: OK (rc = 0) 22:31:07 DEBUG --- stdout --- 22:31:07 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1304Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 61m 0% 5352Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 3684Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 5541Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 2609Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 125m 0% 2126Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 3993Mi 6% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 67m 0% 14150Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3145m 19% 14049Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 80m 0% 14085Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1417m 8% 1900Mi 3% 22:31:07 DEBUG --- stderr --- 22:31:07 DEBUG 22:32:07 INFO 22:32:07 INFO [loop_until]: kubectl --namespace=xlou top pods 22:32:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:32:07 INFO [loop_until]: OK (rc = 0) 22:32:07 DEBUG --- stdout --- 22:32:07 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 7m 2574Mi am-55f77847b7-nhzv4 8m 4455Mi am-55f77847b7-rpq9w 11m 4372Mi ds-cts-0 6m 391Mi ds-cts-1 5m 372Mi ds-cts-2 7m 410Mi ds-idrepo-0 13m 13593Mi ds-idrepo-1 19m 13556Mi ds-idrepo-2 173m 13679Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 10m 2750Mi idm-65858d8c4c-v78nh 6m 1303Mi lodemon-7655dd7665-d26cm 1m 65Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 53m 98Mi 22:32:07 DEBUG --- stderr --- 22:32:07 DEBUG 22:32:07 INFO 22:32:07 INFO [loop_until]: kubectl --namespace=xlou top node 22:32:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:32:07 INFO [loop_until]: OK (rc = 0) 22:32:07 DEBUG --- stdout --- 22:32:07 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1307Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5364Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 61m 0% 3685Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 5539Mi 9% 
gke-xlou-cdm-default-pool-f05840a3-bf2g 70m 0% 2608Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 125m 0% 2114Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 72m 0% 3993Mi 6% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 65m 0% 14150Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 56m 0% 14211Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 71m 0% 14088Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 66m 0% 1618Mi 2% 22:32:07 DEBUG --- stderr --- 22:32:07 DEBUG 22:33:07 INFO 22:33:07 INFO [loop_until]: kubectl --namespace=xlou top pods 22:33:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:33:07 INFO [loop_until]: OK (rc = 0) 22:33:07 DEBUG --- stdout --- 22:33:07 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 8m 2574Mi am-55f77847b7-nhzv4 8m 4455Mi am-55f77847b7-rpq9w 31m 4394Mi ds-cts-0 6m 391Mi ds-cts-1 5m 372Mi ds-cts-2 9m 411Mi ds-idrepo-0 12m 13592Mi ds-idrepo-1 17m 13558Mi ds-idrepo-2 15m 13680Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 8m 2752Mi idm-65858d8c4c-v78nh 6m 1304Mi lodemon-7655dd7665-d26cm 2m 65Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 1224m 416Mi 22:33:07 DEBUG --- stderr --- 22:33:07 DEBUG 22:33:07 INFO 22:33:07 INFO [loop_until]: kubectl --namespace=xlou top node 22:33:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:33:07 INFO [loop_until]: OK (rc = 0) 22:33:07 DEBUG --- stdout --- 22:33:07 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1305Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 60m 0% 5368Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 3684Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 5541Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 2612Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 127m 0% 2117Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 78m 0% 3998Mi 6% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 72m 0% 14150Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 66m 0% 14211Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 68m 0% 14089Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1887m 11% 1906Mi 3% 22:33:07 DEBUG --- stderr --- 22:33:07 DEBUG 22:34:07 INFO 22:34:07 INFO [loop_until]: kubectl --namespace=xlou top pods 22:34:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:34:07 INFO [loop_until]: OK (rc = 0) 22:34:07 DEBUG --- stdout --- 22:34:07 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 20m 2580Mi am-55f77847b7-nhzv4 27m 4459Mi am-55f77847b7-rpq9w 43m 4388Mi ds-cts-0 7m 393Mi ds-cts-1 6m 374Mi ds-cts-2 6m 411Mi ds-idrepo-0 160m 13594Mi ds-idrepo-1 61m 13564Mi ds-idrepo-2 70m 13680Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 344m 2863Mi idm-65858d8c4c-v78nh 340m 2796Mi lodemon-7655dd7665-d26cm 2m 65Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 121m 475Mi 22:34:07 DEBUG --- stderr --- 22:34:07 DEBUG 22:34:07 INFO 22:34:07 INFO [loop_until]: kubectl --namespace=xlou top node 22:34:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:34:07 INFO [loop_until]: OK (rc = 0) 22:34:07 DEBUG --- stdout --- 22:34:07 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1305Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 85m 0% 5382Mi 9% 
gke-xlou-cdm-default-pool-f05840a3-976h 87m 0% 3693Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 74m 0% 5541Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 417m 2% 4154Mi 7% gke-xlou-cdm-default-pool-f05840a3-h81k 149m 0% 2132Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 407m 2% 4361Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 208m 1% 14152Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 123m 0% 14213Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 115m 0% 14100Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 175m 1% 1979Mi 3% 22:34:07 DEBUG --- stderr --- 22:34:07 DEBUG 22:35:07 INFO 22:35:07 INFO [loop_until]: kubectl --namespace=xlou top pods 22:35:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:35:07 INFO [loop_until]: OK (rc = 0) 22:35:07 DEBUG --- stdout --- 22:35:07 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 27m 2578Mi am-55f77847b7-nhzv4 15m 4459Mi am-55f77847b7-rpq9w 31m 4392Mi ds-cts-0 7m 393Mi ds-cts-1 6m 373Mi ds-cts-2 6m 410Mi ds-idrepo-0 132m 13594Mi ds-idrepo-1 71m 13564Mi ds-idrepo-2 56m 13680Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 135m 3496Mi idm-65858d8c4c-v78nh 153m 3382Mi lodemon-7655dd7665-d26cm 2m 65Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 71m 479Mi 22:35:07 DEBUG --- stderr --- 22:35:07 DEBUG 22:35:08 INFO 22:35:08 INFO [loop_until]: kubectl --namespace=xlou top node 22:35:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:35:08 INFO [loop_until]: OK (rc = 0) 22:35:08 DEBUG --- stdout --- 22:35:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1305Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 77m 0% 5383Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 3687Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 69m 0% 5540Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 216m 1% 4685Mi 7% gke-xlou-cdm-default-pool-f05840a3-h81k 148m 0% 2138Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 233m 1% 4737Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 176m 1% 14153Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 106m 0% 14214Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 116m 0% 14100Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 140m 0% 1995Mi 3% 22:35:08 DEBUG --- stderr --- 22:35:08 DEBUG 22:36:07 INFO 22:36:07 INFO [loop_until]: kubectl --namespace=xlou top pods 22:36:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:36:07 INFO [loop_until]: OK (rc = 0) 22:36:07 DEBUG --- stdout --- 22:36:07 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 10m 2578Mi am-55f77847b7-nhzv4 21m 4461Mi am-55f77847b7-rpq9w 15m 4386Mi ds-cts-0 6m 395Mi ds-cts-1 5m 373Mi ds-cts-2 7m 411Mi ds-idrepo-0 131m 13594Mi ds-idrepo-1 69m 13562Mi ds-idrepo-2 37m 13680Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 144m 3507Mi idm-65858d8c4c-v78nh 144m 3394Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 59m 485Mi 22:36:07 DEBUG --- stderr --- 22:36:07 DEBUG 22:36:08 INFO 22:36:08 INFO [loop_until]: kubectl --namespace=xlou top node 22:36:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:36:08 INFO [loop_until]: OK (rc = 0) 22:36:08 DEBUG --- stdout --- 22:36:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% 
gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1309Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5382Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 3686Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 74m 0% 5546Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 216m 1% 4693Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 154m 0% 2136Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 214m 1% 4746Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 201m 1% 14156Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 85m 0% 14216Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 123m 0% 14100Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 113m 0% 2003Mi 3% 22:36:08 DEBUG --- stderr --- 22:36:08 DEBUG 22:37:07 INFO 22:37:07 INFO [loop_until]: kubectl --namespace=xlou top pods 22:37:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:37:07 INFO [loop_until]: OK (rc = 0) 22:37:07 DEBUG --- stdout --- 22:37:07 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 14m 2578Mi am-55f77847b7-nhzv4 12m 4462Mi am-55f77847b7-rpq9w 12m 4386Mi ds-cts-0 9m 393Mi ds-cts-1 7m 374Mi ds-cts-2 6m 411Mi ds-idrepo-0 106m 13594Mi ds-idrepo-1 52m 13563Mi ds-idrepo-2 63m 13680Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 116m 3509Mi idm-65858d8c4c-v78nh 124m 3396Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 39m 488Mi 22:37:07 DEBUG --- stderr --- 22:37:07 DEBUG 22:37:08 INFO 22:37:08 INFO [loop_until]: kubectl --namespace=xlou top node 22:37:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:37:08 INFO [loop_until]: OK (rc = 0) 22:37:08 DEBUG --- stdout --- 22:37:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1305Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 74m 0% 5383Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 77m 0% 3689Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5546Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 203m 1% 4698Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 147m 0% 2135Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 177m 1% 4751Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 158m 0% 14160Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 111m 0% 14218Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1090Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 101m 0% 14106Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 106m 0% 2005Mi 3% 22:37:08 DEBUG --- stderr --- 22:37:08 DEBUG 22:38:07 INFO 22:38:07 INFO [loop_until]: kubectl --namespace=xlou top pods 22:38:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:38:07 INFO [loop_until]: OK (rc = 0) 22:38:07 DEBUG --- stdout --- 22:38:07 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 12m 2582Mi am-55f77847b7-nhzv4 14m 4462Mi am-55f77847b7-rpq9w 14m 4387Mi ds-cts-0 7m 393Mi ds-cts-1 6m 374Mi ds-cts-2 6m 410Mi ds-idrepo-0 101m 13594Mi ds-idrepo-1 59m 13563Mi ds-idrepo-2 32m 13681Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 132m 3516Mi idm-65858d8c4c-v78nh 117m 3398Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 65m 487Mi 22:38:07 DEBUG --- stderr --- 22:38:07 DEBUG 22:38:08 INFO 22:38:08 INFO [loop_until]: kubectl --namespace=xlou top node 22:38:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:38:08 INFO 
[loop_until]: OK (rc = 0) 22:38:08 DEBUG --- stdout --- 22:38:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1307Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5383Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 71m 0% 3695Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5546Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 176m 1% 4700Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 145m 0% 2142Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 198m 1% 4757Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 161m 1% 14162Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 90m 0% 14219Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1090Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 103m 0% 14105Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 139m 0% 2005Mi 3% 22:38:08 DEBUG --- stderr --- 22:38:08 DEBUG 22:39:07 INFO 22:39:07 INFO [loop_until]: kubectl --namespace=xlou top pods 22:39:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:39:07 INFO [loop_until]: OK (rc = 0) 22:39:07 DEBUG --- stdout --- 22:39:07 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 13m 2584Mi am-55f77847b7-nhzv4 19m 4463Mi am-55f77847b7-rpq9w 16m 4388Mi ds-cts-0 7m 393Mi ds-cts-1 14m 380Mi ds-cts-2 7m 411Mi ds-idrepo-0 111m 13594Mi ds-idrepo-1 37m 13564Mi ds-idrepo-2 37m 13681Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 130m 3518Mi idm-65858d8c4c-v78nh 149m 3400Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 44m 487Mi 22:39:07 DEBUG --- stderr --- 22:39:07 DEBUG 22:39:08 INFO 22:39:08 INFO [loop_until]: kubectl --namespace=xlou top node 22:39:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:39:08 INFO [loop_until]: OK (rc = 0) 22:39:08 DEBUG --- stdout --- 22:39:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1306Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 5381Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 73m 0% 3694Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 77m 0% 5549Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 213m 1% 4704Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 152m 0% 2143Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 186m 1% 4760Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 167m 1% 14163Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 88m 0% 14213Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 1095Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 95m 0% 14102Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 110m 0% 2006Mi 3% 22:39:08 DEBUG --- stderr --- 22:39:08 DEBUG 22:40:08 INFO 22:40:08 INFO [loop_until]: kubectl --namespace=xlou top pods 22:40:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:40:08 INFO [loop_until]: OK (rc = 0) 22:40:08 DEBUG --- stdout --- 22:40:08 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 11m 2584Mi am-55f77847b7-nhzv4 13m 4463Mi am-55f77847b7-rpq9w 24m 4388Mi ds-cts-0 7m 394Mi ds-cts-1 5m 380Mi ds-cts-2 8m 411Mi ds-idrepo-0 106m 13594Mi ds-idrepo-1 38m 13564Mi ds-idrepo-2 30m 13680Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 115m 3512Mi idm-65858d8c4c-v78nh 87m 3402Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 46m 486Mi 22:40:08 DEBUG --- stderr --- 22:40:08 DEBUG 22:40:08 INFO 22:40:08 INFO [loop_until]: kubectl 
--namespace=xlou top node 22:40:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:40:08 INFO [loop_until]: OK (rc = 0) 22:40:08 DEBUG --- stdout --- 22:40:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1309Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 75m 0% 5380Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 3694Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5549Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 163m 1% 4706Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 144m 0% 2140Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 168m 1% 4754Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 157m 0% 14167Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 76m 0% 14222Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 89m 0% 14111Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 116m 0% 2006Mi 3% 22:40:08 DEBUG --- stderr --- 22:40:08 DEBUG 22:41:08 INFO 22:41:08 INFO [loop_until]: kubectl --namespace=xlou top pods 22:41:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:41:08 INFO [loop_until]: OK (rc = 0) 22:41:08 DEBUG --- stdout --- 22:41:08 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 12m 2585Mi am-55f77847b7-nhzv4 13m 4464Mi am-55f77847b7-rpq9w 13m 4389Mi ds-cts-0 6m 393Mi ds-cts-1 6m 380Mi ds-cts-2 6m 411Mi ds-idrepo-0 114m 13594Mi ds-idrepo-1 57m 13560Mi ds-idrepo-2 43m 13671Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 125m 3524Mi idm-65858d8c4c-v78nh 118m 3412Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 40m 486Mi 22:41:08 DEBUG --- stderr --- 22:41:08 DEBUG 22:41:08 INFO 22:41:08 INFO [loop_until]: kubectl --namespace=xlou top node 22:41:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:41:08 INFO [loop_until]: OK (rc = 0) 22:41:08 DEBUG --- stdout --- 22:41:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1305Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5383Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 3693Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5551Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 176m 1% 4717Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 146m 0% 2138Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 187m 1% 4766Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 167m 1% 14170Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 97m 0% 14213Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 98m 0% 14108Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 107m 0% 2013Mi 3% 22:41:08 DEBUG --- stderr --- 22:41:08 DEBUG 22:42:08 INFO 22:42:08 INFO [loop_until]: kubectl --namespace=xlou top pods 22:42:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:42:08 INFO [loop_until]: OK (rc = 0) 22:42:08 DEBUG --- stdout --- 22:42:08 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 9m 2585Mi am-55f77847b7-nhzv4 11m 4463Mi am-55f77847b7-rpq9w 12m 4389Mi ds-cts-0 7m 393Mi ds-cts-1 5m 381Mi ds-cts-2 7m 411Mi ds-idrepo-0 107m 13594Mi ds-idrepo-1 39m 13564Mi ds-idrepo-2 40m 13678Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 143m 3524Mi idm-65858d8c4c-v78nh 147m 3414Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 
47m 487Mi 22:42:08 DEBUG --- stderr --- 22:42:08 DEBUG 22:42:08 INFO 22:42:08 INFO [loop_until]: kubectl --namespace=xlou top node 22:42:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:42:08 INFO [loop_until]: OK (rc = 0) 22:42:08 DEBUG --- stdout --- 22:42:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1308Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5382Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 3694Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5548Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 216m 1% 4716Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 151m 0% 2137Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 216m 1% 4764Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 160m 1% 14168Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 83m 0% 14226Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 91m 0% 14114Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 118m 0% 2002Mi 3% 22:42:08 DEBUG --- stderr --- 22:42:08 DEBUG 22:43:08 INFO 22:43:08 INFO [loop_until]: kubectl --namespace=xlou top pods 22:43:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:43:08 INFO [loop_until]: OK (rc = 0) 22:43:08 DEBUG --- stdout --- 22:43:08 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 9m 2585Mi am-55f77847b7-nhzv4 11m 4464Mi am-55f77847b7-rpq9w 19m 4387Mi ds-cts-0 7m 394Mi ds-cts-1 8m 380Mi ds-cts-2 9m 411Mi ds-idrepo-0 94m 13595Mi ds-idrepo-1 35m 13564Mi ds-idrepo-2 28m 13681Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 110m 3526Mi idm-65858d8c4c-v78nh 106m 3422Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 52m 487Mi 22:43:08 DEBUG --- stderr --- 22:43:08 DEBUG 22:43:09 INFO 22:43:09 INFO [loop_until]: kubectl --namespace=xlou top node 22:43:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:43:09 INFO [loop_until]: OK (rc = 0) 22:43:09 DEBUG --- stdout --- 22:43:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1310Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 75m 0% 5384Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 3698Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5548Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 164m 1% 4720Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 151m 0% 2142Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 160m 1% 4773Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 151m 0% 14180Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 80m 0% 14229Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1097Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 88m 0% 14114Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 115m 0% 2005Mi 3% 22:43:09 DEBUG --- stderr --- 22:43:09 DEBUG 22:44:08 INFO 22:44:08 INFO [loop_until]: kubectl --namespace=xlou top pods 22:44:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:44:08 INFO [loop_until]: OK (rc = 0) 22:44:08 DEBUG --- stdout --- 22:44:08 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 10m 2580Mi am-55f77847b7-nhzv4 15m 4464Mi am-55f77847b7-rpq9w 10m 4387Mi ds-cts-0 9m 394Mi ds-cts-1 6m 381Mi ds-cts-2 7m 411Mi ds-idrepo-0 114m 13595Mi ds-idrepo-1 39m 13564Mi ds-idrepo-2 30m 13681Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 132m 3543Mi idm-65858d8c4c-v78nh 
135m 3417Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 35m 488Mi 22:44:08 DEBUG --- stderr --- 22:44:08 DEBUG 22:44:09 INFO 22:44:09 INFO [loop_until]: kubectl --namespace=xlou top node 22:44:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:44:09 INFO [loop_until]: OK (rc = 0) 22:44:09 DEBUG --- stdout --- 22:44:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1305Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 5384Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 71m 0% 3693Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 69m 0% 5550Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 197m 1% 4714Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 151m 0% 2136Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 200m 1% 4784Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 175m 1% 14172Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 84m 0% 14229Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1096Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 92m 0% 14117Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 104m 0% 2003Mi 3% 22:44:09 DEBUG --- stderr --- 22:44:09 DEBUG 22:45:08 INFO 22:45:08 INFO [loop_until]: kubectl --namespace=xlou top pods 22:45:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:45:08 INFO [loop_until]: OK (rc = 0) 22:45:08 DEBUG --- stdout --- 22:45:08 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 11m 2580Mi am-55f77847b7-nhzv4 13m 4464Mi am-55f77847b7-rpq9w 11m 4387Mi ds-cts-0 6m 394Mi ds-cts-1 5m 381Mi ds-cts-2 6m 411Mi ds-idrepo-0 106m 13595Mi ds-idrepo-1 42m 13564Mi ds-idrepo-2 31m 13681Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 121m 3547Mi idm-65858d8c4c-v78nh 118m 3419Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 33m 487Mi 22:45:08 DEBUG --- stderr --- 22:45:08 DEBUG 22:45:09 INFO 22:45:09 INFO [loop_until]: kubectl --namespace=xlou top node 22:45:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:45:09 INFO [loop_until]: OK (rc = 0) 22:45:09 DEBUG --- stdout --- 22:45:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1304Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 5384Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 3691Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 5549Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 179m 1% 4718Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 149m 0% 2135Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 167m 1% 4784Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 163m 1% 14174Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 82m 0% 14233Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1095Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 96m 0% 14115Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 97m 0% 2005Mi 3% 22:45:09 DEBUG --- stderr --- 22:45:09 DEBUG 22:46:08 INFO 22:46:08 INFO [loop_until]: kubectl --namespace=xlou top pods 22:46:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:46:08 INFO [loop_until]: OK (rc = 0) 22:46:08 DEBUG --- stdout --- 22:46:08 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 10m 2586Mi am-55f77847b7-nhzv4 10m 4464Mi am-55f77847b7-rpq9w 15m 4388Mi ds-cts-0 6m 394Mi ds-cts-1 12m 379Mi ds-cts-2 6m 411Mi ds-idrepo-0 104m 13595Mi ds-idrepo-1 45m 13564Mi 
ds-idrepo-2 28m 13681Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 106m 3559Mi idm-65858d8c4c-v78nh 106m 3421Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 32m 487Mi 22:46:08 DEBUG --- stderr --- 22:46:08 DEBUG 22:46:09 INFO 22:46:09 INFO [loop_until]: kubectl --namespace=xlou top node 22:46:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:46:09 INFO [loop_until]: OK (rc = 0) 22:46:09 DEBUG --- stdout --- 22:46:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1306Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5381Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 3697Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5549Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 179m 1% 4720Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 150m 0% 2142Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 171m 1% 4798Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 169m 1% 14177Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 81m 0% 14233Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1093Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 98m 0% 14116Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 103m 0% 2008Mi 3% 22:46:09 DEBUG --- stderr --- 22:46:09 DEBUG 22:47:08 INFO 22:47:08 INFO [loop_until]: kubectl --namespace=xlou top pods 22:47:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:47:08 INFO [loop_until]: OK (rc = 0) 22:47:08 DEBUG --- stdout --- 22:47:08 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 9m 2586Mi am-55f77847b7-nhzv4 10m 4464Mi am-55f77847b7-rpq9w 15m 4388Mi ds-cts-0 6m 394Mi ds-cts-1 6m 380Mi ds-cts-2 7m 411Mi ds-idrepo-0 110m 13594Mi ds-idrepo-1 40m 13564Mi ds-idrepo-2 31m 13681Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 110m 3560Mi idm-65858d8c4c-v78nh 102m 3428Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 31m 490Mi 22:47:08 DEBUG --- stderr --- 22:47:08 DEBUG 22:47:09 INFO 22:47:09 INFO [loop_until]: kubectl --namespace=xlou top node 22:47:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:47:09 INFO [loop_until]: OK (rc = 0) 22:47:09 DEBUG --- stdout --- 22:47:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1306Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 5382Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 3696Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5547Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 187m 1% 4727Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 154m 0% 2142Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 185m 1% 4802Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 164m 1% 14175Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 83m 0% 14235Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1095Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 92m 0% 14123Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 98m 0% 2006Mi 3% 22:47:09 DEBUG --- stderr --- 22:47:09 DEBUG 22:48:08 INFO 22:48:08 INFO [loop_until]: kubectl --namespace=xlou top pods 22:48:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:48:08 INFO [loop_until]: OK (rc = 0) 22:48:08 DEBUG --- stdout --- 22:48:08 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 10m 2587Mi am-55f77847b7-nhzv4 13m 4464Mi am-55f77847b7-rpq9w 13m 4389Mi 
ds-cts-0 6m 394Mi ds-cts-1 5m 379Mi ds-cts-2 7m 411Mi ds-idrepo-0 107m 13595Mi ds-idrepo-1 36m 13564Mi ds-idrepo-2 28m 13681Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 112m 3562Mi idm-65858d8c4c-v78nh 100m 3432Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 34m 490Mi 22:48:08 DEBUG --- stderr --- 22:48:08 DEBUG 22:48:09 INFO 22:48:09 INFO [loop_until]: kubectl --namespace=xlou top node 22:48:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:48:09 INFO [loop_until]: OK (rc = 0) 22:48:09 DEBUG --- stdout --- 22:48:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1305Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 5383Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 71m 0% 3697Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 5551Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 167m 1% 4731Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 153m 0% 2138Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 183m 1% 4800Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 161m 1% 14179Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 82m 0% 14239Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1092Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 93m 0% 14121Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 105m 0% 2007Mi 3% 22:48:09 DEBUG --- stderr --- 22:48:09 DEBUG 22:49:09 INFO 22:49:09 INFO [loop_until]: kubectl --namespace=xlou top pods 22:49:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:49:09 INFO [loop_until]: OK (rc = 0) 22:49:09 DEBUG --- stdout --- 22:49:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 9m 2587Mi am-55f77847b7-nhzv4 12m 4464Mi am-55f77847b7-rpq9w 11m 4389Mi ds-cts-0 6m 394Mi ds-cts-1 7m 379Mi ds-cts-2 6m 411Mi ds-idrepo-0 99m 13594Mi ds-idrepo-1 38m 13564Mi ds-idrepo-2 29m 13681Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 92m 3563Mi idm-65858d8c4c-v78nh 91m 3434Mi lodemon-7655dd7665-d26cm 1m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 30m 491Mi 22:49:09 DEBUG --- stderr --- 22:49:09 DEBUG 22:49:09 INFO 22:49:09 INFO [loop_until]: kubectl --namespace=xlou top node 22:49:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:49:09 INFO [loop_until]: OK (rc = 0) 22:49:09 DEBUG --- stdout --- 22:49:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1312Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5385Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 71m 0% 3696Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 5551Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 168m 1% 4736Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 151m 0% 2138Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 163m 1% 4804Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 158m 0% 14182Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 78m 0% 14237Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1095Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 92m 0% 14133Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 95m 0% 2006Mi 3% 22:49:09 DEBUG --- stderr --- 22:49:09 DEBUG 22:50:09 INFO 22:50:09 INFO [loop_until]: kubectl --namespace=xlou top pods 22:50:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:50:09 INFO [loop_until]: OK (rc = 0) 22:50:09 DEBUG --- stdout --- 22:50:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 
1m 4Mi am-55f77847b7-7kvs5 9m 2587Mi am-55f77847b7-nhzv4 16m 4464Mi am-55f77847b7-rpq9w 13m 4389Mi ds-cts-0 6m 394Mi ds-cts-1 6m 380Mi ds-cts-2 7m 411Mi ds-idrepo-0 104m 13595Mi ds-idrepo-1 38m 13564Mi ds-idrepo-2 30m 13681Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 112m 3565Mi idm-65858d8c4c-v78nh 149m 3433Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 30m 490Mi 22:50:09 DEBUG --- stderr --- 22:50:09 DEBUG 22:50:09 INFO 22:50:09 INFO [loop_until]: kubectl --namespace=xlou top node 22:50:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:50:09 INFO [loop_until]: OK (rc = 0) 22:50:09 DEBUG --- stdout --- 22:50:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1309Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5385Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 3698Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 68m 0% 5548Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 217m 1% 4732Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 146m 0% 2138Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 174m 1% 4806Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 164m 1% 14182Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 81m 0% 14237Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1097Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 87m 0% 14126Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 94m 0% 2008Mi 3% 22:50:09 DEBUG --- stderr --- 22:50:09 DEBUG 22:51:09 INFO 22:51:09 INFO [loop_until]: kubectl --namespace=xlou top pods 22:51:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:51:09 INFO [loop_until]: OK (rc = 0) 22:51:09 DEBUG --- stdout --- 22:51:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 12m 2587Mi am-55f77847b7-nhzv4 14m 4464Mi am-55f77847b7-rpq9w 12m 4389Mi ds-cts-0 6m 394Mi ds-cts-1 5m 377Mi ds-cts-2 7m 411Mi ds-idrepo-0 103m 13595Mi ds-idrepo-1 54m 13564Mi ds-idrepo-2 31m 13681Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 158m 3568Mi idm-65858d8c4c-v78nh 127m 3435Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 30m 491Mi 22:51:09 DEBUG --- stderr --- 22:51:09 DEBUG 22:51:09 INFO 22:51:09 INFO [loop_until]: kubectl --namespace=xlou top node 22:51:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:51:09 INFO [loop_until]: OK (rc = 0) 22:51:09 DEBUG --- stdout --- 22:51:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1307Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 5384Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 73m 0% 3697Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 69m 0% 5552Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 199m 1% 4736Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 147m 0% 2143Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 231m 1% 4810Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 157m 0% 14184Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 80m 0% 14240Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1091Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 102m 0% 14129Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 96m 0% 2010Mi 3% 22:51:09 DEBUG --- stderr --- 22:51:09 DEBUG 22:52:09 INFO 22:52:09 INFO [loop_until]: kubectl --namespace=xlou top pods 22:52:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:52:09 INFO [loop_until]: OK (rc = 
0) 22:52:09 DEBUG --- stdout --- 22:52:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 9m 2588Mi am-55f77847b7-nhzv4 12m 4464Mi am-55f77847b7-rpq9w 14m 4389Mi ds-cts-0 6m 394Mi ds-cts-1 9m 377Mi ds-cts-2 7m 411Mi ds-idrepo-0 112m 13595Mi ds-idrepo-1 40m 13565Mi ds-idrepo-2 33m 13681Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 112m 3570Mi idm-65858d8c4c-v78nh 115m 3437Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 31m 491Mi 22:52:09 DEBUG --- stderr --- 22:52:09 DEBUG 22:52:09 INFO 22:52:09 INFO [loop_until]: kubectl --namespace=xlou top node 22:52:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:52:09 INFO [loop_until]: OK (rc = 0) 22:52:09 DEBUG --- stdout --- 22:52:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1303Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5382Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 3699Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 5549Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 176m 1% 4735Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 147m 0% 2138Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 197m 1% 4810Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 164m 1% 14187Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 80m 0% 14240Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1096Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 92m 0% 14128Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 98m 0% 2010Mi 3% 22:52:09 DEBUG --- stderr --- 22:52:09 DEBUG 22:53:09 INFO 22:53:09 INFO [loop_until]: kubectl --namespace=xlou top pods 22:53:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:53:09 INFO [loop_until]: OK (rc = 0) 22:53:09 DEBUG --- stdout --- 22:53:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 11m 2587Mi am-55f77847b7-nhzv4 10m 4464Mi am-55f77847b7-rpq9w 12m 4389Mi ds-cts-0 13m 394Mi ds-cts-1 5m 377Mi ds-cts-2 6m 411Mi ds-idrepo-0 104m 13595Mi ds-idrepo-1 33m 13564Mi ds-idrepo-2 41m 13681Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 93m 3564Mi idm-65858d8c4c-v78nh 119m 3441Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 28m 491Mi 22:53:09 DEBUG --- stderr --- 22:53:09 DEBUG 22:53:10 INFO 22:53:10 INFO [loop_until]: kubectl --namespace=xlou top node 22:53:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:53:10 INFO [loop_until]: OK (rc = 0) 22:53:10 DEBUG --- stdout --- 22:53:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1307Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5384Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 3699Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5551Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 172m 1% 4734Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 146m 0% 2144Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 150m 0% 4804Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 66m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 155m 0% 14186Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 90m 0% 14245Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1091Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 87m 0% 14133Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 99m 0% 2010Mi 3% 22:53:10 DEBUG --- stderr --- 22:53:10 DEBUG 22:54:09 INFO 22:54:09 INFO [loop_until]: kubectl --namespace=xlou top pods 
22:54:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:54:09 INFO [loop_until]: OK (rc = 0) 22:54:09 DEBUG --- stdout --- 22:54:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 9m 2587Mi am-55f77847b7-nhzv4 11m 4464Mi am-55f77847b7-rpq9w 10m 4389Mi ds-cts-0 6m 394Mi ds-cts-1 5m 378Mi ds-cts-2 8m 408Mi ds-idrepo-0 111m 13595Mi ds-idrepo-1 40m 13564Mi ds-idrepo-2 31m 13681Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 120m 3566Mi idm-65858d8c4c-v78nh 154m 3446Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 32m 491Mi 22:54:09 DEBUG --- stderr --- 22:54:09 DEBUG 22:54:10 INFO 22:54:10 INFO [loop_until]: kubectl --namespace=xlou top node 22:54:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:54:10 INFO [loop_until]: OK (rc = 0) 22:54:10 DEBUG --- stdout --- 22:54:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1307Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5384Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 3697Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 5559Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 224m 1% 4741Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 158m 0% 2139Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 185m 1% 4805Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1078Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 167m 1% 14189Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 80m 0% 14249Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1091Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 94m 0% 14135Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 101m 0% 2010Mi 3% 22:54:10 DEBUG --- stderr --- 22:54:10 DEBUG 22:55:09 INFO 22:55:09 INFO [loop_until]: kubectl --namespace=xlou top pods 22:55:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:55:09 INFO [loop_until]: OK (rc = 0) 22:55:09 DEBUG --- stdout --- 22:55:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 9m 2587Mi am-55f77847b7-nhzv4 11m 4465Mi am-55f77847b7-rpq9w 11m 4390Mi ds-cts-0 8m 394Mi ds-cts-1 6m 378Mi ds-cts-2 7m 408Mi ds-idrepo-0 104m 13595Mi ds-idrepo-1 39m 13565Mi ds-idrepo-2 28m 13682Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 114m 3570Mi idm-65858d8c4c-v78nh 126m 3455Mi lodemon-7655dd7665-d26cm 6m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 31m 491Mi 22:55:09 DEBUG --- stderr --- 22:55:09 DEBUG 22:55:10 INFO 22:55:10 INFO [loop_until]: kubectl --namespace=xlou top node 22:55:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:55:10 INFO [loop_until]: OK (rc = 0) 22:55:10 DEBUG --- stdout --- 22:55:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1304Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5396Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 3697Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5548Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 187m 1% 4756Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 150m 0% 2142Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 207m 1% 4821Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 156m 0% 14191Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 74m 0% 14250Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1091Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 90m 0% 14139Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 95m 0% 2011Mi 3% 22:55:10 DEBUG --- 
stderr --- 22:55:10 DEBUG 22:56:09 INFO 22:56:09 INFO [loop_until]: kubectl --namespace=xlou top pods 22:56:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:56:09 INFO [loop_until]: OK (rc = 0) 22:56:09 DEBUG --- stdout --- 22:56:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 9m 2587Mi am-55f77847b7-nhzv4 11m 4469Mi am-55f77847b7-rpq9w 10m 4390Mi ds-cts-0 7m 394Mi ds-cts-1 6m 377Mi ds-cts-2 7m 408Mi ds-idrepo-0 98m 13595Mi ds-idrepo-1 37m 13564Mi ds-idrepo-2 29m 13682Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 103m 3591Mi idm-65858d8c4c-v78nh 89m 3457Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 28m 491Mi 22:56:09 DEBUG --- stderr --- 22:56:09 DEBUG 22:56:10 INFO 22:56:10 INFO [loop_until]: kubectl --namespace=xlou top node 22:56:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:56:10 INFO [loop_until]: OK (rc = 0) 22:56:10 DEBUG --- stdout --- 22:56:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1304Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5381Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 3699Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5556Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 162m 1% 4758Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 151m 0% 2142Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 165m 1% 4830Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 149m 0% 14194Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 77m 0% 14254Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1091Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 89m 0% 14139Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 95m 0% 2011Mi 3% 22:56:10 DEBUG --- stderr --- 22:56:10 DEBUG 22:57:09 INFO 22:57:09 INFO [loop_until]: kubectl --namespace=xlou top pods 22:57:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:57:09 INFO [loop_until]: OK (rc = 0) 22:57:09 DEBUG --- stdout --- 22:57:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 9m 2588Mi am-55f77847b7-nhzv4 12m 4469Mi am-55f77847b7-rpq9w 12m 4390Mi ds-cts-0 7m 394Mi ds-cts-1 5m 377Mi ds-cts-2 8m 408Mi ds-idrepo-0 112m 13591Mi ds-idrepo-1 43m 13565Mi ds-idrepo-2 29m 13681Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 99m 3593Mi idm-65858d8c4c-v78nh 98m 3474Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 31m 490Mi 22:57:09 DEBUG --- stderr --- 22:57:09 DEBUG 22:57:10 INFO 22:57:10 INFO [loop_until]: kubectl --namespace=xlou top node 22:57:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:57:10 INFO [loop_until]: OK (rc = 0) 22:57:10 DEBUG --- stdout --- 22:57:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1305Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5383Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 70m 0% 3701Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5555Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 176m 1% 4773Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 151m 0% 2146Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 159m 1% 4833Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 160m 1% 14193Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 77m 0% 14255Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1093Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-x4wx 96m 0% 14139Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 94m 0% 2007Mi 3% 22:57:10 DEBUG --- stderr --- 22:57:10 DEBUG 22:58:09 INFO 22:58:09 INFO [loop_until]: kubectl --namespace=xlou top pods 22:58:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:58:09 INFO [loop_until]: OK (rc = 0) 22:58:09 DEBUG --- stdout --- 22:58:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 9m 2587Mi am-55f77847b7-nhzv4 11m 4469Mi am-55f77847b7-rpq9w 11m 4390Mi ds-cts-0 6m 394Mi ds-cts-1 5m 378Mi ds-cts-2 7m 408Mi ds-idrepo-0 104m 13594Mi ds-idrepo-1 41m 13565Mi ds-idrepo-2 34m 13682Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 100m 3595Mi idm-65858d8c4c-v78nh 135m 3477Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 29m 490Mi 22:58:09 DEBUG --- stderr --- 22:58:09 DEBUG 22:58:10 INFO 22:58:10 INFO [loop_until]: kubectl --namespace=xlou top node 22:58:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:58:10 INFO [loop_until]: OK (rc = 0) 22:58:10 DEBUG --- stdout --- 22:58:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1308Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5383Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 3700Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5555Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 194m 1% 4775Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 152m 0% 2147Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 158m 0% 4835Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 158m 0% 14197Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 86m 0% 14255Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1095Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 92m 0% 14142Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 97m 0% 2009Mi 3% 22:58:10 DEBUG --- stderr --- 22:58:10 DEBUG 22:59:10 INFO 22:59:10 INFO [loop_until]: kubectl --namespace=xlou top pods 22:59:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:59:10 INFO [loop_until]: OK (rc = 0) 22:59:10 DEBUG --- stdout --- 22:59:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 10m 2587Mi am-55f77847b7-nhzv4 11m 4471Mi am-55f77847b7-rpq9w 12m 4390Mi ds-cts-0 7m 394Mi ds-cts-1 5m 378Mi ds-cts-2 6m 408Mi ds-idrepo-0 105m 13594Mi ds-idrepo-1 38m 13564Mi ds-idrepo-2 28m 13685Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 97m 3627Mi idm-65858d8c4c-v78nh 101m 3484Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 30m 491Mi 22:59:10 DEBUG --- stderr --- 22:59:10 DEBUG 22:59:10 INFO 22:59:10 INFO [loop_until]: kubectl --namespace=xlou top node 22:59:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 22:59:10 INFO [loop_until]: OK (rc = 0) 22:59:10 DEBUG --- stdout --- 22:59:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1305Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5384Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 3701Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5563Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 169m 1% 4783Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 145m 0% 2147Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 156m 0% 4869Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 157m 0% 
14195Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 74m 0% 14262Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1095Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 87m 0% 14145Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 95m 0% 2011Mi 3% 22:59:10 DEBUG --- stderr --- 22:59:10 DEBUG 23:00:10 INFO 23:00:10 INFO [loop_until]: kubectl --namespace=xlou top pods 23:00:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:00:10 INFO [loop_until]: OK (rc = 0) 23:00:10 DEBUG --- stdout --- 23:00:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 9m 2588Mi am-55f77847b7-nhzv4 10m 4472Mi am-55f77847b7-rpq9w 14m 4390Mi ds-cts-0 10m 394Mi ds-cts-1 8m 382Mi ds-cts-2 8m 408Mi ds-idrepo-0 100m 13594Mi ds-idrepo-1 37m 13565Mi ds-idrepo-2 28m 13685Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 103m 3629Mi idm-65858d8c4c-v78nh 94m 3487Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 30m 491Mi 23:00:10 DEBUG --- stderr --- 23:00:10 DEBUG 23:00:10 INFO 23:00:10 INFO [loop_until]: kubectl --namespace=xlou top node 23:00:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:00:10 INFO [loop_until]: OK (rc = 0) 23:00:10 DEBUG --- stdout --- 23:00:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1309Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 5385Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 3701Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 5558Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 146m 0% 4782Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 155m 0% 2138Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 171m 1% 4869Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 149m 0% 14199Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 76m 0% 14261Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 86m 0% 14145Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 95m 0% 2010Mi 3% 23:00:10 DEBUG --- stderr --- 23:00:10 DEBUG 23:01:10 INFO 23:01:10 INFO [loop_until]: kubectl --namespace=xlou top pods 23:01:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:01:10 INFO [loop_until]: OK (rc = 0) 23:01:10 DEBUG --- stdout --- 23:01:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 10m 2588Mi am-55f77847b7-nhzv4 10m 4472Mi am-55f77847b7-rpq9w 11m 4391Mi ds-cts-0 7m 394Mi ds-cts-1 5m 382Mi ds-cts-2 7m 408Mi ds-idrepo-0 98m 13594Mi ds-idrepo-1 34m 13564Mi ds-idrepo-2 28m 13685Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 94m 3633Mi idm-65858d8c4c-v78nh 122m 3499Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 30m 491Mi 23:01:10 DEBUG --- stderr --- 23:01:10 DEBUG 23:01:10 INFO 23:01:10 INFO [loop_until]: kubectl --namespace=xlou top node 23:01:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:01:11 INFO [loop_until]: OK (rc = 0) 23:01:11 DEBUG --- stdout --- 23:01:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1309Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5384Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 3698Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 5556Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 195m 1% 4798Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 151m 0% 2145Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 159m 1% 4873Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 
60m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 156m 0% 14201Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 77m 0% 14264Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1097Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 87m 0% 14149Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 96m 0% 2010Mi 3% 23:01:11 DEBUG --- stderr --- 23:01:11 DEBUG 23:02:10 INFO 23:02:10 INFO [loop_until]: kubectl --namespace=xlou top pods 23:02:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:02:10 INFO [loop_until]: OK (rc = 0) 23:02:10 DEBUG --- stdout --- 23:02:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 9m 2591Mi am-55f77847b7-nhzv4 11m 4472Mi am-55f77847b7-rpq9w 13m 4391Mi ds-cts-0 6m 394Mi ds-cts-1 5m 382Mi ds-cts-2 7m 408Mi ds-idrepo-0 108m 13594Mi ds-idrepo-1 35m 13564Mi ds-idrepo-2 31m 13685Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 106m 3625Mi idm-65858d8c4c-v78nh 101m 3511Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 30m 491Mi 23:02:10 DEBUG --- stderr --- 23:02:10 DEBUG 23:02:11 INFO 23:02:11 INFO [loop_until]: kubectl --namespace=xlou top node 23:02:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:02:11 INFO [loop_until]: OK (rc = 0) 23:02:11 DEBUG --- stdout --- 23:02:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1309Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 78m 0% 5382Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 3701Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 5557Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 164m 1% 4811Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 154m 0% 2137Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 171m 1% 4867Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 165m 1% 14202Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 77m 0% 14266Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1096Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 87m 0% 14152Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 97m 0% 2010Mi 3% 23:02:11 DEBUG --- stderr --- 23:02:11 DEBUG 23:03:10 INFO 23:03:10 INFO [loop_until]: kubectl --namespace=xlou top pods 23:03:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:03:10 INFO [loop_until]: OK (rc = 0) 23:03:10 DEBUG --- stdout --- 23:03:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 9m 2593Mi am-55f77847b7-nhzv4 10m 4472Mi am-55f77847b7-rpq9w 10m 4391Mi ds-cts-0 8m 395Mi ds-cts-1 6m 382Mi ds-cts-2 6m 408Mi ds-idrepo-0 101m 13594Mi ds-idrepo-1 30m 13565Mi ds-idrepo-2 22m 13685Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 66m 3633Mi idm-65858d8c4c-v78nh 72m 3512Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 33m 491Mi 23:03:10 DEBUG --- stderr --- 23:03:10 DEBUG 23:03:11 INFO 23:03:11 INFO [loop_until]: kubectl --namespace=xlou top node 23:03:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:03:11 INFO [loop_until]: OK (rc = 0) 23:03:11 DEBUG --- stdout --- 23:03:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1309Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 78m 0% 5382Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 3703Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 5560Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 134m 0% 4812Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 137m 
0% 2138Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 133m 0% 4875Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 130m 0% 14203Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 73m 0% 14268Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 75m 0% 14152Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 101m 0% 2006Mi 3% 23:03:11 DEBUG --- stderr --- 23:03:11 DEBUG 23:04:10 INFO 23:04:10 INFO [loop_until]: kubectl --namespace=xlou top pods 23:04:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:04:10 INFO [loop_until]: OK (rc = 0) 23:04:10 DEBUG --- stdout --- 23:04:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 7m 2593Mi am-55f77847b7-nhzv4 8m 4472Mi am-55f77847b7-rpq9w 8m 4391Mi ds-cts-0 7m 395Mi ds-cts-1 6m 382Mi ds-cts-2 8m 408Mi ds-idrepo-0 12m 13594Mi ds-idrepo-1 11m 13564Mi ds-idrepo-2 9m 13685Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 8m 3633Mi idm-65858d8c4c-v78nh 6m 3511Mi lodemon-7655dd7665-d26cm 6m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 1m 100Mi 23:04:10 DEBUG --- stderr --- 23:04:10 DEBUG 23:04:11 INFO 23:04:11 INFO [loop_until]: kubectl --namespace=xlou top node 23:04:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:04:11 INFO [loop_until]: OK (rc = 0) 23:04:11 DEBUG --- stdout --- 23:04:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1307Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 78m 0% 5384Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 3702Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5562Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 4812Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 119m 0% 2139Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 4878Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 62m 0% 14202Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 57m 0% 14267Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1096Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 60m 0% 14151Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 67m 0% 1622Mi 2% 23:04:11 DEBUG --- stderr --- 23:04:11 DEBUG 127.0.0.1 - - [11/Aug/2023 23:04:54] "GET /monitoring/average?start_time=23-08-11_21:34:23&stop_time=23-08-11_22:02:54 HTTP/1.1" 200 - 23:05:10 INFO 23:05:10 INFO [loop_until]: kubectl --namespace=xlou top pods 23:05:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:05:10 INFO [loop_until]: OK (rc = 0) 23:05:10 DEBUG --- stdout --- 23:05:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 7m 2596Mi am-55f77847b7-nhzv4 9m 4472Mi am-55f77847b7-rpq9w 8m 4391Mi ds-cts-0 6m 395Mi ds-cts-1 5m 383Mi ds-cts-2 7m 409Mi ds-idrepo-0 11m 13594Mi ds-idrepo-1 11m 13564Mi ds-idrepo-2 14m 13680Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 7m 3633Mi idm-65858d8c4c-v78nh 4m 3511Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 1m 100Mi 23:05:10 DEBUG --- stderr --- 23:05:10 DEBUG 23:05:11 INFO 23:05:11 INFO [loop_until]: kubectl --namespace=xlou top node 23:05:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:05:11 INFO [loop_until]: OK (rc = 0) 23:05:11 DEBUG --- stdout --- 23:05:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1309Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 75m 0% 5385Mi 9% 
gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 3705Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 61m 0% 5554Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 69m 0% 4814Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 120m 0% 2137Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 65m 0% 4873Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 63m 0% 14201Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 63m 0% 14261Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 59m 0% 14150Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 69m 0% 1621Mi 2% 23:05:11 DEBUG --- stderr --- 23:05:11 DEBUG 23:06:10 INFO 23:06:10 INFO [loop_until]: kubectl --namespace=xlou top pods 23:06:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:06:10 INFO [loop_until]: OK (rc = 0) 23:06:10 DEBUG --- stdout --- 23:06:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 9m 2597Mi am-55f77847b7-nhzv4 63m 4519Mi am-55f77847b7-rpq9w 10m 4391Mi ds-cts-0 8m 395Mi ds-cts-1 5m 383Mi ds-cts-2 8m 409Mi ds-idrepo-0 83m 13595Mi ds-idrepo-1 42m 13564Mi ds-idrepo-2 27m 13680Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 100m 3639Mi idm-65858d8c4c-v78nh 100m 3508Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 95m 468Mi 23:06:10 DEBUG --- stderr --- 23:06:10 DEBUG 23:06:11 INFO 23:06:11 INFO [loop_until]: kubectl --namespace=xlou top node 23:06:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:06:11 INFO [loop_until]: OK (rc = 0) 23:06:11 DEBUG --- stdout --- 23:06:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1303Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 75m 0% 5387Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 3707Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 112m 0% 5603Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 184m 1% 4819Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 136m 0% 2141Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 178m 1% 4880Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 140m 0% 14206Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 79m 0% 14265Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1093Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 98m 0% 14152Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 172m 1% 1988Mi 3% 23:06:11 DEBUG --- stderr --- 23:06:11 DEBUG 23:07:10 INFO 23:07:10 INFO [loop_until]: kubectl --namespace=xlou top pods 23:07:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:07:10 INFO [loop_until]: OK (rc = 0) 23:07:10 DEBUG --- stdout --- 23:07:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 10m 2597Mi am-55f77847b7-nhzv4 11m 4520Mi am-55f77847b7-rpq9w 12m 4391Mi ds-cts-0 9m 395Mi ds-cts-1 5m 383Mi ds-cts-2 7m 409Mi ds-idrepo-0 97m 13595Mi ds-idrepo-1 37m 13565Mi ds-idrepo-2 27m 13693Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 95m 3643Mi idm-65858d8c4c-v78nh 113m 3506Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 85m 477Mi 23:07:10 DEBUG --- stderr --- 23:07:10 DEBUG 23:07:11 INFO 23:07:11 INFO [loop_until]: kubectl --namespace=xlou top node 23:07:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:07:11 INFO [loop_until]: OK (rc = 0) 23:07:11 DEBUG --- stdout --- 23:07:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% 
gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1308Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 78m 0% 5388Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 3706Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 5606Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 165m 1% 4807Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 149m 0% 2146Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 166m 1% 4880Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 153m 0% 14207Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 78m 0% 14293Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 90m 0% 14155Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 149m 0% 1995Mi 3% 23:07:11 DEBUG --- stderr --- 23:07:11 DEBUG 23:08:10 INFO 23:08:10 INFO [loop_until]: kubectl --namespace=xlou top pods 23:08:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:08:10 INFO [loop_until]: OK (rc = 0) 23:08:10 DEBUG --- stdout --- 23:08:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 10m 2608Mi am-55f77847b7-nhzv4 12m 4520Mi am-55f77847b7-rpq9w 10m 4391Mi ds-cts-0 6m 395Mi ds-cts-1 6m 383Mi ds-cts-2 7m 409Mi ds-idrepo-0 88m 13595Mi ds-idrepo-1 34m 13565Mi ds-idrepo-2 25m 13693Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 87m 3642Mi idm-65858d8c4c-v78nh 92m 3509Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 60m 475Mi 23:08:10 DEBUG --- stderr --- 23:08:10 DEBUG 23:08:11 INFO 23:08:11 INFO [loop_until]: kubectl --namespace=xlou top node 23:08:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:08:11 INFO [loop_until]: OK (rc = 0) 23:08:11 DEBUG --- stdout --- 23:08:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1306Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 78m 0% 5388Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 3713Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 5605Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 159m 1% 4808Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 148m 0% 2145Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 153m 0% 4883Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 139m 0% 14211Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 75m 0% 14281Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 86m 0% 14157Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 127m 0% 1992Mi 3% 23:08:11 DEBUG --- stderr --- 23:08:11 DEBUG 23:09:11 INFO 23:09:11 INFO [loop_until]: kubectl --namespace=xlou top pods 23:09:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:09:11 INFO [loop_until]: OK (rc = 0) 23:09:11 DEBUG --- stdout --- 23:09:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 10m 2637Mi am-55f77847b7-nhzv4 11m 4520Mi am-55f77847b7-rpq9w 12m 4392Mi ds-cts-0 8m 395Mi ds-cts-1 5m 384Mi ds-cts-2 9m 409Mi ds-idrepo-0 100m 13595Mi ds-idrepo-1 36m 13564Mi ds-idrepo-2 24m 13694Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 92m 3645Mi idm-65858d8c4c-v78nh 87m 3512Mi lodemon-7655dd7665-d26cm 1m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 64m 485Mi 23:09:11 DEBUG --- stderr --- 23:09:11 DEBUG 23:09:12 INFO 23:09:12 INFO [loop_until]: kubectl --namespace=xlou top node 23:09:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:09:12 INFO [loop_until]: 
OK (rc = 0) 23:09:12 DEBUG --- stdout --- 23:09:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 71m 0% 1309Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 78m 0% 5387Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 72m 0% 3746Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 5609Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 161m 1% 4812Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 149m 0% 2142Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 164m 1% 4889Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 151m 0% 14210Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 74m 0% 14283Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1095Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 84m 0% 14160Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 134m 0% 2005Mi 3% 23:09:12 DEBUG --- stderr --- 23:09:12 DEBUG 23:10:11 INFO 23:10:11 INFO [loop_until]: kubectl --namespace=xlou top pods 23:10:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:10:11 INFO [loop_until]: OK (rc = 0) 23:10:11 DEBUG --- stdout --- 23:10:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 9m 2669Mi am-55f77847b7-nhzv4 12m 4520Mi am-55f77847b7-rpq9w 12m 4397Mi ds-cts-0 6m 395Mi ds-cts-1 5m 383Mi ds-cts-2 6m 409Mi ds-idrepo-0 100m 13594Mi ds-idrepo-1 35m 13564Mi ds-idrepo-2 26m 13694Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 86m 3640Mi idm-65858d8c4c-v78nh 109m 3514Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 48m 486Mi 23:10:11 DEBUG --- stderr --- 23:10:11 DEBUG 23:10:12 INFO 23:10:12 INFO [loop_until]: kubectl --namespace=xlou top node 23:10:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:10:12 INFO [loop_until]: OK (rc = 0) 23:10:12 DEBUG --- stdout --- 23:10:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1309Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 77m 0% 5394Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 3776Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 5608Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 172m 1% 4810Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 158m 0% 2140Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 148m 0% 4883Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 153m 0% 14212Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 76m 0% 14282Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 85m 0% 14162Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 115m 0% 2002Mi 3% 23:10:12 DEBUG --- stderr --- 23:10:12 DEBUG 23:11:11 INFO 23:11:11 INFO [loop_until]: kubectl --namespace=xlou top pods 23:11:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:11:11 INFO [loop_until]: OK (rc = 0) 23:11:11 DEBUG --- stdout --- 23:11:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 10m 2699Mi am-55f77847b7-nhzv4 13m 4520Mi am-55f77847b7-rpq9w 12m 4397Mi ds-cts-0 6m 395Mi ds-cts-1 6m 383Mi ds-cts-2 7m 409Mi ds-idrepo-0 94m 13595Mi ds-idrepo-1 43m 13565Mi ds-idrepo-2 25m 13694Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 90m 3642Mi idm-65858d8c4c-v78nh 97m 3519Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 46m 487Mi 23:11:11 DEBUG --- stderr --- 23:11:11 DEBUG 23:11:12 INFO 23:11:12 INFO [loop_until]: kubectl --namespace=xlou top 
node 23:11:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:11:12 INFO [loop_until]: OK (rc = 0) 23:11:12 DEBUG --- stdout --- 23:11:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1309Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 77m 0% 5392Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 3811Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5606Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 163m 1% 4818Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 142m 0% 2143Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 152m 0% 4880Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 147m 0% 14216Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 72m 0% 14286Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 89m 0% 14164Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 113m 0% 2005Mi 3% 23:11:12 DEBUG --- stderr --- 23:11:12 DEBUG 23:12:11 INFO 23:12:11 INFO [loop_until]: kubectl --namespace=xlou top pods 23:12:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:12:11 INFO [loop_until]: OK (rc = 0) 23:12:11 DEBUG --- stdout --- 23:12:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 10m 2733Mi am-55f77847b7-nhzv4 11m 4520Mi am-55f77847b7-rpq9w 12m 4397Mi ds-cts-0 7m 395Mi ds-cts-1 5m 383Mi ds-cts-2 6m 410Mi ds-idrepo-0 98m 13595Mi ds-idrepo-1 35m 13565Mi ds-idrepo-2 30m 13702Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 106m 3647Mi idm-65858d8c4c-v78nh 92m 3516Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 52m 490Mi 23:12:11 DEBUG --- stderr --- 23:12:11 DEBUG 23:12:12 INFO 23:12:12 INFO [loop_until]: kubectl --namespace=xlou top node 23:12:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:12:12 INFO [loop_until]: OK (rc = 0) 23:12:12 DEBUG --- stdout --- 23:12:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1305Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5392Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 3845Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5602Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 162m 1% 4815Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 160m 1% 2142Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 166m 1% 4887Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 150m 0% 14216Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 81m 0% 14296Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 82m 0% 14162Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 115m 0% 2006Mi 3% 23:12:12 DEBUG --- stderr --- 23:12:12 DEBUG 23:13:11 INFO 23:13:11 INFO [loop_until]: kubectl --namespace=xlou top pods 23:13:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:13:11 INFO [loop_until]: OK (rc = 0) 23:13:11 DEBUG --- stdout --- 23:13:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 9m 2763Mi am-55f77847b7-nhzv4 11m 4520Mi am-55f77847b7-rpq9w 11m 4397Mi ds-cts-0 6m 395Mi ds-cts-1 7m 383Mi ds-cts-2 5m 409Mi ds-idrepo-0 97m 13595Mi ds-idrepo-1 30m 13565Mi ds-idrepo-2 25m 13702Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 88m 3649Mi idm-65858d8c4c-v78nh 97m 3521Mi lodemon-7655dd7665-d26cm 1m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 38m 490Mi 23:13:11 DEBUG 
--- stderr --- 23:13:11 DEBUG 23:13:12 INFO 23:13:12 INFO [loop_until]: kubectl --namespace=xlou top node 23:13:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:13:12 INFO [loop_until]: OK (rc = 0) 23:13:12 DEBUG --- stdout --- 23:13:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1312Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5393Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 73m 0% 3874Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5606Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 157m 0% 4822Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 150m 0% 2141Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 155m 0% 4892Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 148m 0% 14219Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 75m 0% 14294Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 80m 0% 14163Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 109m 0% 2007Mi 3% 23:13:12 DEBUG --- stderr --- 23:13:12 DEBUG 23:14:11 INFO 23:14:11 INFO [loop_until]: kubectl --namespace=xlou top pods 23:14:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:14:11 INFO [loop_until]: OK (rc = 0) 23:14:11 DEBUG --- stdout --- 23:14:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 10m 2801Mi am-55f77847b7-nhzv4 13m 4520Mi am-55f77847b7-rpq9w 11m 4397Mi ds-cts-0 6m 395Mi ds-cts-1 5m 383Mi ds-cts-2 8m 409Mi ds-idrepo-0 86m 13595Mi ds-idrepo-1 30m 13565Mi ds-idrepo-2 26m 13701Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 90m 3651Mi idm-65858d8c4c-v78nh 90m 3521Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 36m 491Mi 23:14:11 DEBUG --- stderr --- 23:14:11 DEBUG 23:14:12 INFO 23:14:12 INFO [loop_until]: kubectl --namespace=xlou top node 23:14:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:14:12 INFO [loop_until]: OK (rc = 0) 23:14:12 DEBUG --- stdout --- 23:14:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1310Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 5391Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 73m 0% 3912Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 5608Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 158m 0% 4819Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 151m 0% 2141Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 146m 0% 4892Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 141m 0% 14217Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 75m 0% 14297Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 79m 0% 14166Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 103m 0% 2009Mi 3% 23:14:12 DEBUG --- stderr --- 23:14:12 DEBUG 23:15:11 INFO 23:15:11 INFO [loop_until]: kubectl --namespace=xlou top pods 23:15:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:15:11 INFO [loop_until]: OK (rc = 0) 23:15:11 DEBUG --- stdout --- 23:15:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 11m 2827Mi am-55f77847b7-nhzv4 15m 4531Mi am-55f77847b7-rpq9w 11m 4397Mi ds-cts-0 12m 389Mi ds-cts-1 5m 384Mi ds-cts-2 8m 410Mi ds-idrepo-0 94m 13595Mi ds-idrepo-1 33m 13565Mi ds-idrepo-2 24m 13702Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 94m 3653Mi idm-65858d8c4c-v78nh 91m 3523Mi 
lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 61m 491Mi 23:15:11 DEBUG --- stderr --- 23:15:11 DEBUG 23:15:12 INFO 23:15:12 INFO [loop_until]: kubectl --namespace=xlou top node 23:15:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:15:12 INFO [loop_until]: OK (rc = 0) 23:15:12 DEBUG --- stdout --- 23:15:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1308Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5392Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 3940Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 68m 0% 5624Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 158m 0% 4825Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 146m 0% 2145Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 168m 1% 4896Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1077Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 149m 0% 14219Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 74m 0% 14301Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 82m 0% 14167Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 131m 0% 2011Mi 3% 23:15:12 DEBUG --- stderr --- 23:15:12 DEBUG 23:16:11 INFO 23:16:11 INFO [loop_until]: kubectl --namespace=xlou top pods 23:16:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:16:11 INFO [loop_until]: OK (rc = 0) 23:16:11 DEBUG --- stdout --- 23:16:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 9m 2862Mi am-55f77847b7-nhzv4 13m 4565Mi am-55f77847b7-rpq9w 16m 4397Mi ds-cts-0 25m 398Mi ds-cts-1 7m 384Mi ds-cts-2 5m 410Mi ds-idrepo-0 93m 13595Mi ds-idrepo-1 147m 13568Mi ds-idrepo-2 26m 13702Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 79m 3653Mi idm-65858d8c4c-v78nh 94m 3526Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 32m 488Mi 23:16:11 DEBUG --- stderr --- 23:16:11 DEBUG 23:16:12 INFO 23:16:12 INFO [loop_until]: kubectl --namespace=xlou top node 23:16:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:16:13 INFO [loop_until]: OK (rc = 0) 23:16:13 DEBUG --- stdout --- 23:16:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1307Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5393Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 3970Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5656Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 156m 0% 4825Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 148m 0% 2143Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 158m 0% 4897Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 64m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 139m 0% 14223Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 75m 0% 14303Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 211m 1% 14172Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 110m 0% 2018Mi 3% 23:16:13 DEBUG --- stderr --- 23:16:13 DEBUG 23:17:11 INFO 23:17:11 INFO [loop_until]: kubectl --namespace=xlou top pods 23:17:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:17:11 INFO [loop_until]: OK (rc = 0) 23:17:11 DEBUG --- stdout --- 23:17:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 10m 2895Mi am-55f77847b7-nhzv4 9m 4688Mi am-55f77847b7-rpq9w 11m 4397Mi ds-cts-0 6m 391Mi ds-cts-1 7m 384Mi ds-cts-2 6m 410Mi ds-idrepo-0 408m 13684Mi ds-idrepo-1 320m 13745Mi ds-idrepo-2 250m 
13734Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 78m 3656Mi idm-65858d8c4c-v78nh 93m 3528Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 48m 488Mi 23:17:11 DEBUG --- stderr --- 23:17:11 DEBUG 23:17:13 INFO 23:17:13 INFO [loop_until]: kubectl --namespace=xlou top node 23:17:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:17:13 INFO [loop_until]: OK (rc = 0) 23:17:13 DEBUG --- stdout --- 23:17:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1306Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5392Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 70m 0% 4006Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 5771Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 166m 1% 4830Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 149m 0% 2147Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 157m 0% 4898Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 462m 2% 14312Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 294m 1% 14337Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 239m 1% 14344Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 120m 0% 2007Mi 3% 23:17:13 DEBUG --- stderr --- 23:17:13 DEBUG 23:18:11 INFO 23:18:11 INFO [loop_until]: kubectl --namespace=xlou top pods 23:18:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:18:11 INFO [loop_until]: OK (rc = 0) 23:18:11 DEBUG --- stdout --- 23:18:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 11m 2924Mi am-55f77847b7-nhzv4 9m 4688Mi am-55f77847b7-rpq9w 11m 4397Mi ds-cts-0 6m 391Mi ds-cts-1 11m 384Mi ds-cts-2 6m 410Mi ds-idrepo-0 124m 13748Mi ds-idrepo-1 32m 13745Mi ds-idrepo-2 40m 13744Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 89m 3659Mi idm-65858d8c4c-v78nh 98m 3526Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 33m 489Mi 23:18:11 DEBUG --- stderr --- 23:18:11 DEBUG 23:18:13 INFO 23:18:13 INFO [loop_until]: kubectl --namespace=xlou top node 23:18:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:18:13 INFO [loop_until]: OK (rc = 0) 23:18:13 DEBUG --- stdout --- 23:18:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1309Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5393Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 4039Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5770Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 166m 1% 4827Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 151m 0% 2143Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 159m 1% 4901Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 181m 1% 14375Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 88m 0% 14341Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 66m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 84m 0% 14347Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 106m 0% 2005Mi 3% 23:18:13 DEBUG --- stderr --- 23:18:13 DEBUG 23:19:12 INFO 23:19:12 INFO [loop_until]: kubectl --namespace=xlou top pods 23:19:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:19:12 INFO [loop_until]: OK (rc = 0) 23:19:12 DEBUG --- stdout --- 23:19:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 10m 2953Mi am-55f77847b7-nhzv4 8m 4688Mi am-55f77847b7-rpq9w 12m 4397Mi ds-cts-0 6m 
391Mi ds-cts-1 9m 384Mi ds-cts-2 6m 410Mi ds-idrepo-0 94m 13741Mi ds-idrepo-1 32m 13746Mi ds-idrepo-2 30m 13744Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 96m 3655Mi idm-65858d8c4c-v78nh 84m 3528Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 40m 493Mi 23:19:12 DEBUG --- stderr --- 23:19:12 DEBUG 23:19:13 INFO 23:19:13 INFO [loop_until]: kubectl --namespace=xlou top node 23:19:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:19:13 INFO [loop_until]: OK (rc = 0) 23:19:13 DEBUG --- stdout --- 23:19:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1315Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5394Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 4070Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5775Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 157m 0% 4827Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 147m 0% 2141Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 164m 1% 4895Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 150m 0% 14370Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 81m 0% 14342Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 81m 0% 14351Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 112m 0% 2009Mi 3% 23:19:13 DEBUG --- stderr --- 23:19:13 DEBUG 23:20:12 INFO 23:20:12 INFO [loop_until]: kubectl --namespace=xlou top pods 23:20:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:20:12 INFO [loop_until]: OK (rc = 0) 23:20:12 DEBUG --- stdout --- 23:20:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 9m 2986Mi am-55f77847b7-nhzv4 8m 4688Mi am-55f77847b7-rpq9w 12m 4397Mi ds-cts-0 5m 391Mi ds-cts-1 8m 384Mi ds-cts-2 6m 410Mi ds-idrepo-0 102m 13734Mi ds-idrepo-1 35m 13746Mi ds-idrepo-2 25m 13744Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 96m 3661Mi idm-65858d8c4c-v78nh 103m 3533Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 35m 492Mi 23:20:12 DEBUG --- stderr --- 23:20:12 DEBUG 23:20:13 INFO 23:20:13 INFO [loop_until]: kubectl --namespace=xlou top node 23:20:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:20:13 INFO [loop_until]: OK (rc = 0) 23:20:13 DEBUG --- stdout --- 23:20:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1309Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5393Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 4100Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 59m 0% 5774Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 167m 1% 4835Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 149m 0% 2141Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 164m 1% 4901Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 160m 1% 14366Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 76m 0% 14345Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 86m 0% 14355Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 105m 0% 2010Mi 3% 23:20:13 DEBUG --- stderr --- 23:20:13 DEBUG 23:21:12 INFO 23:21:12 INFO [loop_until]: kubectl --namespace=xlou top pods 23:21:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:21:12 INFO [loop_until]: OK (rc = 0) 23:21:12 DEBUG --- stdout --- 23:21:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi 
am-55f77847b7-7kvs5 17m 3020Mi am-55f77847b7-nhzv4 11m 4687Mi am-55f77847b7-rpq9w 13m 4397Mi ds-cts-0 6m 391Mi ds-cts-1 5m 384Mi ds-cts-2 6m 410Mi ds-idrepo-0 349m 13688Mi ds-idrepo-1 205m 13668Mi ds-idrepo-2 224m 13672Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 91m 3663Mi idm-65858d8c4c-v78nh 102m 3535Mi lodemon-7655dd7665-d26cm 1m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 34m 493Mi 23:21:12 DEBUG --- stderr --- 23:21:12 DEBUG 23:21:13 INFO 23:21:13 INFO [loop_until]: kubectl --namespace=xlou top node 23:21:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:21:13 INFO [loop_until]: OK (rc = 0) 23:21:13 DEBUG --- stdout --- 23:21:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1311Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5393Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 4129Mi 7% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5773Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 172m 1% 4835Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 153m 0% 2143Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 164m 1% 4900Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 266m 1% 14311Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 91m 0% 14270Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 274m 1% 14271Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 95m 0% 2009Mi 3% 23:21:13 DEBUG --- stderr --- 23:21:13 DEBUG 23:22:12 INFO 23:22:12 INFO [loop_until]: kubectl --namespace=xlou top pods 23:22:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:22:12 INFO [loop_until]: OK (rc = 0) 23:22:12 DEBUG --- stdout --- 23:22:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 12m 3051Mi am-55f77847b7-nhzv4 9m 4687Mi am-55f77847b7-rpq9w 12m 4398Mi ds-cts-0 8m 391Mi ds-cts-1 6m 384Mi ds-cts-2 7m 410Mi ds-idrepo-0 102m 13689Mi ds-idrepo-1 35m 13668Mi ds-idrepo-2 27m 13673Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 102m 3667Mi idm-65858d8c4c-v78nh 101m 3537Mi lodemon-7655dd7665-d26cm 4m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 33m 493Mi 23:22:12 DEBUG --- stderr --- 23:22:12 DEBUG 23:22:13 INFO 23:22:13 INFO [loop_until]: kubectl --namespace=xlou top node 23:22:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:22:13 INFO [loop_until]: OK (rc = 0) 23:22:13 DEBUG --- stdout --- 23:22:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1306Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5393Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 70m 0% 4160Mi 7% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 5773Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 175m 1% 4837Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 153m 0% 2142Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 174m 1% 4905Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 155m 0% 14311Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 72m 0% 14270Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 88m 0% 14270Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 102m 0% 2013Mi 3% 23:22:13 DEBUG --- stderr --- 23:22:13 DEBUG 23:23:12 INFO 23:23:12 INFO [loop_until]: kubectl --namespace=xlou top pods 23:23:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:23:12 INFO [loop_until]: OK (rc = 0) 
23:23:12 DEBUG --- stdout --- 23:23:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 12m 3087Mi am-55f77847b7-nhzv4 14m 4691Mi am-55f77847b7-rpq9w 12m 4398Mi ds-cts-0 6m 391Mi ds-cts-1 5m 384Mi ds-cts-2 6m 410Mi ds-idrepo-0 110m 13689Mi ds-idrepo-1 42m 13669Mi ds-idrepo-2 32m 13673Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 88m 3668Mi idm-65858d8c4c-v78nh 93m 3539Mi lodemon-7655dd7665-d26cm 1m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 32m 493Mi 23:23:12 DEBUG --- stderr --- 23:23:12 DEBUG 23:23:13 INFO 23:23:13 INFO [loop_until]: kubectl --namespace=xlou top node 23:23:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:23:13 INFO [loop_until]: OK (rc = 0) 23:23:13 DEBUG --- stdout --- 23:23:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1308Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5393Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 4198Mi 7% gke-xlou-cdm-default-pool-f05840a3-9p4b 69m 0% 5772Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 155m 0% 4839Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 151m 0% 2139Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 157m 0% 4906Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 147m 0% 14317Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 84m 0% 14269Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 98m 0% 14273Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 98m 0% 2014Mi 3% 23:23:13 DEBUG --- stderr --- 23:23:13 DEBUG 23:24:12 INFO 23:24:12 INFO [loop_until]: kubectl --namespace=xlou top pods 23:24:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:24:12 INFO [loop_until]: OK (rc = 0) 23:24:12 DEBUG --- stdout --- 23:24:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 9m 3118Mi am-55f77847b7-nhzv4 8m 4691Mi am-55f77847b7-rpq9w 11m 4398Mi ds-cts-0 7m 391Mi ds-cts-1 5m 384Mi ds-cts-2 9m 411Mi ds-idrepo-0 104m 13731Mi ds-idrepo-1 33m 13670Mi ds-idrepo-2 24m 13674Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 90m 3670Mi idm-65858d8c4c-v78nh 102m 3535Mi lodemon-7655dd7665-d26cm 6m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 33m 493Mi 23:24:12 DEBUG --- stderr --- 23:24:12 DEBUG 23:24:13 INFO 23:24:13 INFO [loop_until]: kubectl --namespace=xlou top node 23:24:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:24:13 INFO [loop_until]: OK (rc = 0) 23:24:13 DEBUG --- stdout --- 23:24:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1309Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5394Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 4231Mi 7% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5770Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 172m 1% 4838Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 151m 0% 2139Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 165m 1% 4910Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 52m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 160m 1% 14361Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 77m 0% 14275Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 85m 0% 14275Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 101m 0% 2014Mi 3% 23:24:13 DEBUG --- stderr --- 23:24:13 DEBUG 23:25:12 INFO 23:25:12 INFO [loop_until]: kubectl --namespace=xlou top pods 23:25:12 
INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:25:12 INFO [loop_until]: OK (rc = 0) 23:25:12 DEBUG --- stdout --- 23:25:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 9m 3154Mi am-55f77847b7-nhzv4 9m 4691Mi am-55f77847b7-rpq9w 11m 4398Mi ds-cts-0 6m 391Mi ds-cts-1 6m 385Mi ds-cts-2 6m 410Mi ds-idrepo-0 99m 13731Mi ds-idrepo-1 32m 13670Mi ds-idrepo-2 24m 13675Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 85m 3663Mi idm-65858d8c4c-v78nh 94m 3537Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 32m 493Mi 23:25:12 DEBUG --- stderr --- 23:25:12 DEBUG 23:25:14 INFO 23:25:14 INFO [loop_until]: kubectl --namespace=xlou top node 23:25:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:25:14 INFO [loop_until]: OK (rc = 0) 23:25:14 DEBUG --- stdout --- 23:25:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1318Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5393Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 4264Mi 7% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5773Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 168m 1% 4836Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 148m 0% 2139Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 156m 0% 4905Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 150m 0% 14363Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 76m 0% 14275Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 85m 0% 14276Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 94m 0% 2015Mi 3% 23:25:14 DEBUG --- stderr --- 23:25:14 DEBUG 23:26:12 INFO 23:26:12 INFO [loop_until]: kubectl --namespace=xlou top pods 23:26:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:26:12 INFO [loop_until]: OK (rc = 0) 23:26:12 DEBUG --- stdout --- 23:26:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 10m 3189Mi am-55f77847b7-nhzv4 8m 4691Mi am-55f77847b7-rpq9w 13m 4398Mi ds-cts-0 6m 391Mi ds-cts-1 5m 384Mi ds-cts-2 6m 410Mi ds-idrepo-0 88m 13732Mi ds-idrepo-1 31m 13671Mi ds-idrepo-2 24m 13675Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 85m 3674Mi idm-65858d8c4c-v78nh 92m 3539Mi lodemon-7655dd7665-d26cm 6m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 31m 492Mi 23:26:12 DEBUG --- stderr --- 23:26:12 DEBUG 23:26:14 INFO 23:26:14 INFO [loop_until]: kubectl --namespace=xlou top node 23:26:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:26:14 INFO [loop_until]: OK (rc = 0) 23:26:14 DEBUG --- stdout --- 23:26:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1308Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5390Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 70m 0% 4293Mi 7% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5776Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 163m 1% 4840Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 150m 0% 2142Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 148m 0% 4925Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 154m 0% 14363Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 73m 0% 14279Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 91m 0% 14278Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 98m 0% 2013Mi 3% 23:26:14 DEBUG --- stderr --- 
23:26:14 DEBUG 23:27:12 INFO 23:27:12 INFO [loop_until]: kubectl --namespace=xlou top pods 23:27:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:27:12 INFO [loop_until]: OK (rc = 0) 23:27:12 DEBUG --- stdout --- 23:27:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 10m 3215Mi am-55f77847b7-nhzv4 10m 4691Mi am-55f77847b7-rpq9w 11m 4398Mi ds-cts-0 6m 391Mi ds-cts-1 5m 384Mi ds-cts-2 6m 410Mi ds-idrepo-0 104m 13754Mi ds-idrepo-1 37m 13672Mi ds-idrepo-2 26m 13676Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 98m 3675Mi idm-65858d8c4c-v78nh 98m 3546Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 34m 493Mi 23:27:12 DEBUG --- stderr --- 23:27:12 DEBUG 23:27:14 INFO 23:27:14 INFO [loop_until]: kubectl --namespace=xlou top node 23:27:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:27:14 INFO [loop_until]: OK (rc = 0) 23:27:14 DEBUG --- stdout --- 23:27:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1309Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5393Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 4326Mi 7% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 5773Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 159m 1% 4848Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 150m 0% 2140Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 168m 1% 4917Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 162m 1% 14385Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 77m 0% 14284Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 92m 0% 14292Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 102m 0% 2013Mi 3% 23:27:14 DEBUG --- stderr --- 23:27:14 DEBUG 23:28:12 INFO 23:28:12 INFO [loop_until]: kubectl --namespace=xlou top pods 23:28:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:28:12 INFO [loop_until]: OK (rc = 0) 23:28:12 DEBUG --- stdout --- 23:28:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 12m 3253Mi am-55f77847b7-nhzv4 9m 4691Mi am-55f77847b7-rpq9w 11m 4398Mi ds-cts-0 6m 391Mi ds-cts-1 5m 384Mi ds-cts-2 7m 410Mi ds-idrepo-0 105m 13754Mi ds-idrepo-1 41m 13716Mi ds-idrepo-2 26m 13678Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 100m 3677Mi idm-65858d8c4c-v78nh 100m 3548Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 34m 493Mi 23:28:12 DEBUG --- stderr --- 23:28:12 DEBUG 23:28:14 INFO 23:28:14 INFO [loop_until]: kubectl --namespace=xlou top node 23:28:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:28:14 INFO [loop_until]: OK (rc = 0) 23:28:14 DEBUG --- stdout --- 23:28:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1308Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5394Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 4361Mi 7% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5777Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 169m 1% 4849Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 156m 0% 2141Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 169m 1% 4921Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 155m 0% 14389Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 78m 0% 14285Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 
89m 0% 14323Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 102m 0% 2014Mi 3% 23:28:14 DEBUG --- stderr --- 23:28:14 DEBUG 23:29:13 INFO 23:29:13 INFO [loop_until]: kubectl --namespace=xlou top pods 23:29:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:29:13 INFO [loop_until]: OK (rc = 0) 23:29:13 DEBUG --- stdout --- 23:29:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 11m 3288Mi am-55f77847b7-nhzv4 9m 4691Mi am-55f77847b7-rpq9w 12m 4426Mi ds-cts-0 6m 391Mi ds-cts-1 5m 384Mi ds-cts-2 7m 410Mi ds-idrepo-0 102m 13754Mi ds-idrepo-1 35m 13717Mi ds-idrepo-2 24m 13677Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 91m 3675Mi idm-65858d8c4c-v78nh 98m 3556Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 31m 493Mi 23:29:13 DEBUG --- stderr --- 23:29:13 DEBUG 23:29:14 INFO 23:29:14 INFO [loop_until]: kubectl --namespace=xlou top node 23:29:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:29:14 INFO [loop_until]: OK (rc = 0) 23:29:14 DEBUG --- stdout --- 23:29:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1309Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5420Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 4395Mi 7% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5778Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 176m 1% 4857Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 150m 0% 2138Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 165m 1% 4918Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 157m 0% 14388Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 74m 0% 14285Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 85m 0% 14329Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 95m 0% 2013Mi 3% 23:29:14 DEBUG --- stderr --- 23:29:14 DEBUG 23:30:13 INFO 23:30:13 INFO [loop_until]: kubectl --namespace=xlou top pods 23:30:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:30:13 INFO [loop_until]: OK (rc = 0) 23:30:13 DEBUG --- stdout --- 23:30:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 10m 3316Mi am-55f77847b7-nhzv4 10m 4691Mi am-55f77847b7-rpq9w 8m 4551Mi ds-cts-0 12m 390Mi ds-cts-1 11m 387Mi ds-cts-2 6m 410Mi ds-idrepo-0 98m 13765Mi ds-idrepo-1 35m 13718Mi ds-idrepo-2 27m 13678Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 83m 3680Mi idm-65858d8c4c-v78nh 101m 3546Mi lodemon-7655dd7665-d26cm 6m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 29m 494Mi 23:30:13 DEBUG --- stderr --- 23:30:13 DEBUG 23:30:14 INFO 23:30:14 INFO [loop_until]: kubectl --namespace=xlou top node 23:30:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:30:14 INFO [loop_until]: OK (rc = 0) 23:30:14 DEBUG --- stdout --- 23:30:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1307Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 62m 0% 5543Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 4425Mi 7% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 5778Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 169m 1% 4846Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 147m 0% 2140Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 152m 0% 4920Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 67m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 150m 0% 14402Mi 24% 
gke-xlou-cdm-ds-32e4dcb1-b374 72m 0% 14286Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 87m 0% 14332Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 96m 0% 2015Mi 3% 23:30:14 DEBUG --- stderr --- 23:30:14 DEBUG 23:31:13 INFO 23:31:13 INFO [loop_until]: kubectl --namespace=xlou top pods 23:31:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:31:13 INFO [loop_until]: OK (rc = 0) 23:31:13 DEBUG --- stdout --- 23:31:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 11m 3353Mi am-55f77847b7-nhzv4 12m 4691Mi am-55f77847b7-rpq9w 8m 4551Mi ds-cts-0 6m 390Mi ds-cts-1 5m 387Mi ds-cts-2 7m 411Mi ds-idrepo-0 94m 13765Mi ds-idrepo-1 32m 13714Mi ds-idrepo-2 29m 13679Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 87m 3685Mi idm-65858d8c4c-v78nh 104m 3556Mi lodemon-7655dd7665-d26cm 1m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 30m 493Mi 23:31:13 DEBUG --- stderr --- 23:31:13 DEBUG 23:31:14 INFO 23:31:14 INFO [loop_until]: kubectl --namespace=xlou top node 23:31:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:31:14 INFO [loop_until]: OK (rc = 0) 23:31:14 DEBUG --- stdout --- 23:31:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1312Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 61m 0% 5546Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 70m 0% 4454Mi 7% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5778Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 172m 1% 4852Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 148m 0% 2142Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 156m 0% 4926Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 151m 0% 14406Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 82m 0% 14290Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 86m 0% 14331Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 95m 0% 2016Mi 3% 23:31:14 DEBUG --- stderr --- 23:31:14 DEBUG 23:32:13 INFO 23:32:13 INFO [loop_until]: kubectl --namespace=xlou top pods 23:32:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:32:13 INFO [loop_until]: OK (rc = 0) 23:32:13 DEBUG --- stdout --- 23:32:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 14m 3377Mi am-55f77847b7-nhzv4 8m 4691Mi am-55f77847b7-rpq9w 10m 4551Mi ds-cts-0 6m 390Mi ds-cts-1 6m 387Mi ds-cts-2 6m 410Mi ds-idrepo-0 87m 13765Mi ds-idrepo-1 33m 13707Mi ds-idrepo-2 27m 13680Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 88m 3674Mi idm-65858d8c4c-v78nh 78m 3548Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 29m 494Mi 23:32:13 DEBUG --- stderr --- 23:32:13 DEBUG 23:32:14 INFO 23:32:14 INFO [loop_until]: kubectl --namespace=xlou top node 23:32:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:32:14 INFO [loop_until]: OK (rc = 0) 23:32:14 DEBUG --- stdout --- 23:32:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1313Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5547Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 72m 0% 4486Mi 7% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5778Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 149m 0% 4846Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 144m 0% 2141Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 160m 1% 4914Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1105Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 139m 0% 14409Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 75m 0% 14292Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 82m 0% 14326Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 91m 0% 2016Mi 3% 23:32:14 DEBUG --- stderr --- 23:32:14 DEBUG 23:33:13 INFO 23:33:13 INFO [loop_until]: kubectl --namespace=xlou top pods 23:33:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:33:13 INFO [loop_until]: OK (rc = 0) 23:33:13 DEBUG --- stdout --- 23:33:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 13m 3411Mi am-55f77847b7-nhzv4 8m 4691Mi am-55f77847b7-rpq9w 9m 4551Mi ds-cts-0 6m 390Mi ds-cts-1 5m 388Mi ds-cts-2 6m 410Mi ds-idrepo-0 215m 13774Mi ds-idrepo-1 32m 13702Mi ds-idrepo-2 27m 13680Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 80m 3681Mi idm-65858d8c4c-v78nh 92m 3555Mi lodemon-7655dd7665-d26cm 3m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 29m 494Mi 23:33:13 DEBUG --- stderr --- 23:33:13 DEBUG 23:33:14 INFO 23:33:14 INFO [loop_until]: kubectl --namespace=xlou top node 23:33:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:33:14 INFO [loop_until]: OK (rc = 0) 23:33:14 DEBUG --- stdout --- 23:33:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1312Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5548Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 72m 0% 4519Mi 7% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5775Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 160m 1% 4855Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 148m 0% 2144Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 146m 0% 4922Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 245m 1% 14420Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 78m 0% 14297Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 86m 0% 14322Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 98m 0% 2017Mi 3% 23:33:14 DEBUG --- stderr --- 23:33:14 DEBUG 23:34:13 INFO 23:34:13 INFO [loop_until]: kubectl --namespace=xlou top pods 23:34:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:34:13 INFO [loop_until]: OK (rc = 0) 23:34:13 DEBUG --- stdout --- 23:34:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 15m 3446Mi am-55f77847b7-nhzv4 8m 4691Mi am-55f77847b7-rpq9w 13m 4553Mi ds-cts-0 6m 390Mi ds-cts-1 5m 388Mi ds-cts-2 6m 410Mi ds-idrepo-0 100m 13774Mi ds-idrepo-1 41m 13693Mi ds-idrepo-2 30m 13681Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 104m 3683Mi idm-65858d8c4c-v78nh 97m 3552Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 32m 494Mi 23:34:13 DEBUG --- stderr --- 23:34:13 DEBUG 23:34:15 INFO 23:34:15 INFO [loop_until]: kubectl --namespace=xlou top node 23:34:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:34:15 INFO [loop_until]: OK (rc = 0) 23:34:15 DEBUG --- stdout --- 23:34:15 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1309Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 70m 0% 5547Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 72m 0% 4555Mi 7% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 5774Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 170m 1% 4850Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 157m 0% 2146Mi 3% 
gke-xlou-cdm-default-pool-f05840a3-tnc9 172m 1% 4925Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 158m 0% 14419Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 79m 0% 14298Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 92m 0% 14314Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 101m 0% 2014Mi 3% 23:34:15 DEBUG --- stderr --- 23:34:15 DEBUG 23:35:13 INFO 23:35:13 INFO [loop_until]: kubectl --namespace=xlou top pods 23:35:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:35:13 INFO [loop_until]: OK (rc = 0) 23:35:13 DEBUG --- stdout --- 23:35:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 11m 3485Mi am-55f77847b7-nhzv4 9m 4691Mi am-55f77847b7-rpq9w 9m 4553Mi ds-cts-0 6m 390Mi ds-cts-1 5m 388Mi ds-cts-2 6m 414Mi ds-idrepo-0 97m 13774Mi ds-idrepo-1 32m 13686Mi ds-idrepo-2 27m 13682Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 87m 3685Mi idm-65858d8c4c-v78nh 103m 3557Mi lodemon-7655dd7665-d26cm 6m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 33m 494Mi 23:35:13 DEBUG --- stderr --- 23:35:13 DEBUG 23:35:15 INFO 23:35:15 INFO [loop_until]: kubectl --namespace=xlou top node 23:35:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:35:15 INFO [loop_until]: OK (rc = 0) 23:35:15 DEBUG --- stdout --- 23:35:15 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1308Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 5546Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 71m 0% 4590Mi 7% gke-xlou-cdm-default-pool-f05840a3-9p4b 59m 0% 5777Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 171m 1% 4857Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 154m 0% 2146Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 155m 0% 4927Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 152m 0% 14423Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 77m 0% 14300Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 84m 0% 14310Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 95m 0% 2015Mi 3% 23:35:15 DEBUG --- stderr --- 23:35:15 DEBUG 23:36:13 INFO 23:36:13 INFO [loop_until]: kubectl --namespace=xlou top pods 23:36:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:36:13 INFO [loop_until]: OK (rc = 0) 23:36:13 DEBUG --- stdout --- 23:36:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 8m 3500Mi am-55f77847b7-nhzv4 9m 4692Mi am-55f77847b7-rpq9w 7m 4553Mi ds-cts-0 7m 390Mi ds-cts-1 6m 388Mi ds-cts-2 6m 415Mi ds-idrepo-0 12m 13780Mi ds-idrepo-1 12m 13681Mi ds-idrepo-2 9m 13684Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 7m 3688Mi idm-65858d8c4c-v78nh 5m 3564Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 37m 102Mi 23:36:13 DEBUG --- stderr --- 23:36:13 DEBUG 23:36:15 INFO 23:36:15 INFO [loop_until]: kubectl --namespace=xlou top node 23:36:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:36:15 INFO [loop_until]: OK (rc = 0) 23:36:15 DEBUG --- stdout --- 23:36:15 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1308Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 60m 0% 5548Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 4609Mi 7% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5776Mi 9% 
gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 4864Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2141Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 73m 0% 4932Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1091Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 14431Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 57m 0% 14300Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 60m 0% 14306Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 102m 0% 1628Mi 2% 23:36:15 DEBUG --- stderr --- 23:36:15 DEBUG 23:37:13 INFO 23:37:13 INFO [loop_until]: kubectl --namespace=xlou top pods 23:37:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:37:13 INFO [loop_until]: OK (rc = 0) 23:37:13 DEBUG --- stdout --- 23:37:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 9m 3510Mi am-55f77847b7-nhzv4 7m 4691Mi am-55f77847b7-rpq9w 5m 4553Mi ds-cts-0 6m 390Mi ds-cts-1 5m 388Mi ds-cts-2 6m 415Mi ds-idrepo-0 11m 13780Mi ds-idrepo-1 19m 13675Mi ds-idrepo-2 10m 13683Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 9m 3688Mi idm-65858d8c4c-v78nh 5m 3564Mi lodemon-7655dd7665-d26cm 5m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 1m 102Mi 23:37:13 DEBUG --- stderr --- 23:37:13 DEBUG 23:37:15 INFO 23:37:15 INFO [loop_until]: kubectl --namespace=xlou top node 23:37:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:37:15 INFO [loop_until]: OK (rc = 0) 23:37:15 DEBUG --- stdout --- 23:37:15 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1305Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 61m 0% 5546Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 4621Mi 7% gke-xlou-cdm-default-pool-f05840a3-9p4b 58m 0% 5777Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 4866Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 119m 0% 2151Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 76m 0% 4929Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 62m 0% 14431Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 58m 0% 14303Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 73m 0% 14300Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 68m 0% 1630Mi 2% 23:37:15 DEBUG --- stderr --- 23:37:15 DEBUG 127.0.0.1 - - [11/Aug/2023 23:37:26] "GET /monitoring/average?start_time=23-08-11_22:06:54&stop_time=23-08-11_22:35:25 HTTP/1.1" 200 - 23:38:14 INFO 23:38:14 INFO [loop_until]: kubectl --namespace=xlou top pods 23:38:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:38:14 INFO [loop_until]: OK (rc = 0) 23:38:14 DEBUG --- stdout --- 23:38:14 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 8m 3522Mi am-55f77847b7-nhzv4 6m 4691Mi am-55f77847b7-rpq9w 6m 4553Mi ds-cts-0 6m 390Mi ds-cts-1 5m 388Mi ds-cts-2 6m 415Mi ds-idrepo-0 11m 13781Mi ds-idrepo-1 12m 13675Mi ds-idrepo-2 9m 13686Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 15m 3689Mi idm-65858d8c4c-v78nh 5m 3564Mi lodemon-7655dd7665-d26cm 7m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 1949m 336Mi 23:38:14 DEBUG --- stderr --- 23:38:14 DEBUG 23:38:15 INFO 23:38:15 INFO [loop_until]: kubectl --namespace=xlou top node 23:38:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:38:15 INFO [loop_until]: OK (rc = 0) 23:38:15 DEBUG --- stdout --- 23:38:15 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% 
gke-xlou-cdm-default-pool-f05840a3-2nsn 86m 0% 1310Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 59m 0% 5546Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 4631Mi 7% gke-xlou-cdm-default-pool-f05840a3-9p4b 59m 0% 5778Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 70m 0% 4860Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 128m 0% 2139Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 74m 0% 4930Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 65m 0% 14430Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 59m 0% 14305Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 62m 0% 14301Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1899m 11% 1903Mi 3% 23:38:15 DEBUG --- stderr --- 23:38:15 DEBUG 23:39:14 INFO 23:39:14 INFO [loop_until]: kubectl --namespace=xlou top pods 23:39:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:39:14 INFO [loop_until]: OK (rc = 0) 23:39:14 DEBUG --- stdout --- 23:39:14 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 17m 3564Mi am-55f77847b7-nhzv4 10m 4689Mi am-55f77847b7-rpq9w 23m 4555Mi ds-cts-0 6m 391Mi ds-cts-1 12m 386Mi ds-cts-2 6m 415Mi ds-idrepo-0 144m 13781Mi ds-idrepo-1 86m 13675Mi ds-idrepo-2 82m 13710Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 104m 3685Mi idm-65858d8c4c-v78nh 101m 3559Mi lodemon-7655dd7665-d26cm 4m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 116m 453Mi 23:39:14 DEBUG --- stderr --- 23:39:14 DEBUG 23:39:15 INFO 23:39:15 INFO [loop_until]: kubectl --namespace=xlou top node 23:39:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:39:15 INFO [loop_until]: OK (rc = 0) 23:39:15 DEBUG --- stdout --- 23:39:15 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1307Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 82m 0% 5551Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 77m 0% 4670Mi 7% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5777Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 180m 1% 4868Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 149m 0% 2138Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 162m 1% 4929Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 194m 1% 14437Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 126m 0% 14331Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 134m 0% 14302Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 175m 1% 1976Mi 3% 23:39:15 DEBUG --- stderr --- 23:39:15 DEBUG 23:40:14 INFO 23:40:14 INFO [loop_until]: kubectl --namespace=xlou top pods 23:40:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:40:14 INFO [loop_until]: OK (rc = 0) 23:40:14 DEBUG --- stdout --- 23:40:14 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 14m 3608Mi am-55f77847b7-nhzv4 16m 4690Mi am-55f77847b7-rpq9w 16m 4555Mi ds-cts-0 6m 391Mi ds-cts-1 5m 385Mi ds-cts-2 5m 416Mi ds-idrepo-0 155m 13780Mi ds-idrepo-1 99m 13704Mi ds-idrepo-2 96m 13729Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 93m 3686Mi idm-65858d8c4c-v78nh 89m 3568Mi lodemon-7655dd7665-d26cm 1m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 85m 458Mi 23:40:14 DEBUG --- stderr --- 23:40:14 DEBUG 23:40:15 INFO 23:40:15 INFO [loop_until]: kubectl --namespace=xlou top node 23:40:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:40:15 INFO 
[loop_until]: OK (rc = 0) 23:40:15 DEBUG --- stdout --- 23:40:15 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1305Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 70m 0% 5552Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 73m 0% 4727Mi 8% gke-xlou-cdm-default-pool-f05840a3-9p4b 70m 0% 5777Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 162m 1% 4867Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 149m 0% 2146Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 158m 0% 4926Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 190m 1% 14441Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 142m 0% 14353Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 157m 0% 14347Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 146m 0% 1974Mi 3% 23:40:15 DEBUG --- stderr --- 23:40:15 DEBUG 23:41:14 INFO 23:41:14 INFO [loop_until]: kubectl --namespace=xlou top pods 23:41:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:41:14 INFO [loop_until]: OK (rc = 0) 23:41:14 DEBUG --- stdout --- 23:41:14 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 13m 3658Mi am-55f77847b7-nhzv4 10m 4690Mi am-55f77847b7-rpq9w 11m 4555Mi ds-cts-0 6m 391Mi ds-cts-1 5m 385Mi ds-cts-2 6m 415Mi ds-idrepo-0 123m 13812Mi ds-idrepo-1 64m 13704Mi ds-idrepo-2 56m 13740Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 77m 3690Mi idm-65858d8c4c-v78nh 74m 3564Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 57m 465Mi 23:41:14 DEBUG --- stderr --- 23:41:14 DEBUG 23:41:15 INFO 23:41:15 INFO [loop_until]: kubectl --namespace=xlou top node 23:41:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:41:15 INFO [loop_until]: OK (rc = 0) 23:41:15 DEBUG --- stdout --- 23:41:15 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1310Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5550Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 4775Mi 8% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5774Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 140m 0% 4865Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 149m 0% 2146Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 138m 0% 4927Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 175m 1% 14474Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 104m 0% 14377Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 114m 0% 14346Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 125m 0% 1986Mi 3% 23:41:15 DEBUG --- stderr --- 23:41:15 DEBUG 23:42:14 INFO 23:42:14 INFO [loop_until]: kubectl --namespace=xlou top pods 23:42:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:42:14 INFO [loop_until]: OK (rc = 0) 23:42:14 DEBUG --- stdout --- 23:42:14 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 12m 3714Mi am-55f77847b7-nhzv4 10m 4690Mi am-55f77847b7-rpq9w 10m 4555Mi ds-cts-0 6m 391Mi ds-cts-1 6m 385Mi ds-cts-2 6m 415Mi ds-idrepo-0 133m 13812Mi ds-idrepo-1 88m 13704Mi ds-idrepo-2 75m 13745Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 85m 3698Mi idm-65858d8c4c-v78nh 96m 3565Mi lodemon-7655dd7665-d26cm 4m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 46m 464Mi 23:42:14 DEBUG --- stderr --- 23:42:14 DEBUG 23:42:15 INFO 23:42:15 INFO [loop_until]: kubectl 
--namespace=xlou top node 23:42:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:42:16 INFO [loop_until]: OK (rc = 0) 23:42:16 DEBUG --- stdout --- 23:42:16 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1309Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5551Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 71m 0% 4824Mi 8% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 5774Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 160m 1% 4868Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 156m 0% 2148Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 153m 0% 4937Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 188m 1% 14478Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 128m 0% 14388Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 140m 0% 14346Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 111m 0% 1986Mi 3% 23:42:16 DEBUG --- stderr --- 23:42:16 DEBUG 23:43:14 INFO 23:43:14 INFO [loop_until]: kubectl --namespace=xlou top pods 23:43:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:43:14 INFO [loop_until]: OK (rc = 0) 23:43:14 DEBUG --- stdout --- 23:43:14 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 14m 3769Mi am-55f77847b7-nhzv4 10m 4690Mi am-55f77847b7-rpq9w 17m 4555Mi ds-cts-0 6m 391Mi ds-cts-1 5m 385Mi ds-cts-2 5m 415Mi ds-idrepo-0 141m 13813Mi ds-idrepo-1 69m 13723Mi ds-idrepo-2 77m 13745Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 87m 3700Mi idm-65858d8c4c-v78nh 87m 3572Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 79m 466Mi 23:43:14 DEBUG --- stderr --- 23:43:14 DEBUG 23:43:16 INFO 23:43:16 INFO [loop_until]: kubectl --namespace=xlou top node 23:43:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:43:16 INFO [loop_until]: OK (rc = 0) 23:43:16 DEBUG --- stdout --- 23:43:16 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1304Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5549Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 74m 0% 4881Mi 8% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5777Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 155m 0% 4872Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 148m 0% 2148Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 156m 0% 4940Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 189m 1% 14480Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 127m 0% 14389Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 122m 0% 14367Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 140m 0% 1989Mi 3% 23:43:16 DEBUG --- stderr --- 23:43:16 DEBUG 23:44:14 INFO 23:44:14 INFO [loop_until]: kubectl --namespace=xlou top pods 23:44:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:44:14 INFO [loop_until]: OK (rc = 0) 23:44:14 DEBUG --- stdout --- 23:44:14 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 13m 3820Mi am-55f77847b7-nhzv4 11m 4690Mi am-55f77847b7-rpq9w 10m 4555Mi ds-cts-0 5m 391Mi ds-cts-1 5m 385Mi ds-cts-2 6m 415Mi ds-idrepo-0 111m 13813Mi ds-idrepo-1 66m 13723Mi ds-idrepo-2 72m 13745Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 88m 3700Mi idm-65858d8c4c-v78nh 86m 3568Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 
49m 466Mi 23:44:14 DEBUG --- stderr --- 23:44:14 DEBUG 23:44:16 INFO 23:44:16 INFO [loop_until]: kubectl --namespace=xlou top node 23:44:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:44:16 INFO [loop_until]: OK (rc = 0) 23:44:16 DEBUG --- stdout --- 23:44:16 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1312Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5551Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 74m 0% 4932Mi 8% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5777Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 154m 0% 4869Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 146m 0% 2141Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 151m 0% 4944Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 167m 1% 14482Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 120m 0% 14391Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 112m 0% 14364Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 116m 0% 1990Mi 3% 23:44:16 DEBUG --- stderr --- 23:44:16 DEBUG 23:45:14 INFO 23:45:14 INFO [loop_until]: kubectl --namespace=xlou top pods 23:45:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:45:14 INFO [loop_until]: OK (rc = 0) 23:45:14 DEBUG --- stdout --- 23:45:14 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 12m 3873Mi am-55f77847b7-nhzv4 11m 4691Mi am-55f77847b7-rpq9w 11m 4555Mi ds-cts-0 5m 391Mi ds-cts-1 5m 386Mi ds-cts-2 6m 415Mi ds-idrepo-0 223m 13814Mi ds-idrepo-1 85m 13736Mi ds-idrepo-2 67m 13745Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 77m 3702Mi idm-65858d8c4c-v78nh 78m 3570Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 38m 466Mi 23:45:14 DEBUG --- stderr --- 23:45:14 DEBUG 23:45:16 INFO 23:45:16 INFO [loop_until]: kubectl --namespace=xlou top node 23:45:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:45:16 INFO [loop_until]: OK (rc = 0) 23:45:16 DEBUG --- stdout --- 23:45:16 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1312Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5548Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 76m 0% 4977Mi 8% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5778Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 145m 0% 4872Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 148m 0% 2142Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 149m 0% 4948Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 277m 1% 14485Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 114m 0% 14392Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 142m 0% 14380Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 102m 0% 1991Mi 3% 23:45:16 DEBUG --- stderr --- 23:45:16 DEBUG 23:46:14 INFO 23:46:14 INFO [loop_until]: kubectl --namespace=xlou top pods 23:46:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:46:14 INFO [loop_until]: OK (rc = 0) 23:46:14 DEBUG --- stdout --- 23:46:14 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 13m 3917Mi am-55f77847b7-nhzv4 11m 4690Mi am-55f77847b7-rpq9w 10m 4555Mi ds-cts-0 9m 391Mi ds-cts-1 5m 385Mi ds-cts-2 6m 415Mi ds-idrepo-0 109m 13813Mi ds-idrepo-1 69m 13737Mi ds-idrepo-2 64m 13745Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 77m 3703Mi 
idm-65858d8c4c-v78nh 78m 3584Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 35m 466Mi 23:46:14 DEBUG --- stderr --- 23:46:14 DEBUG 23:46:16 INFO 23:46:16 INFO [loop_until]: kubectl --namespace=xlou top node 23:46:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:46:16 INFO [loop_until]: OK (rc = 0) 23:46:16 DEBUG --- stdout --- 23:46:16 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1308Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5550Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 72m 0% 5027Mi 8% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5773Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 150m 0% 4885Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 150m 0% 2142Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 139m 0% 4941Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 162m 1% 14486Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 120m 0% 14389Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 119m 0% 14382Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 104m 0% 1989Mi 3% 23:46:16 DEBUG --- stderr --- 23:46:16 DEBUG 23:47:15 INFO 23:47:15 INFO [loop_until]: kubectl --namespace=xlou top pods 23:47:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:47:15 INFO [loop_until]: OK (rc = 0) 23:47:15 DEBUG --- stdout --- 23:47:15 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 13m 3964Mi am-55f77847b7-nhzv4 11m 4691Mi am-55f77847b7-rpq9w 11m 4555Mi ds-cts-0 11m 393Mi ds-cts-1 6m 386Mi ds-cts-2 7m 415Mi ds-idrepo-0 223m 13814Mi ds-idrepo-1 114m 13738Mi ds-idrepo-2 62m 13746Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 85m 3705Mi idm-65858d8c4c-v78nh 83m 3573Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 52m 466Mi 23:47:15 DEBUG --- stderr --- 23:47:15 DEBUG 23:47:16 INFO 23:47:16 INFO [loop_until]: kubectl --namespace=xlou top node 23:47:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:47:16 INFO [loop_until]: OK (rc = 0) 23:47:16 DEBUG --- stdout --- 23:47:16 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1305Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5548Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 70m 0% 5079Mi 8% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5771Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 165m 1% 4868Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 150m 0% 2143Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 147m 0% 4947Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 181m 1% 14486Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 114m 0% 14393Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 137m 0% 14385Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 120m 0% 1990Mi 3% 23:47:16 DEBUG --- stderr --- 23:47:16 DEBUG 23:48:15 INFO 23:48:15 INFO [loop_until]: kubectl --namespace=xlou top pods 23:48:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:48:15 INFO [loop_until]: OK (rc = 0) 23:48:15 DEBUG --- stdout --- 23:48:15 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 14m 4017Mi am-55f77847b7-nhzv4 10m 4691Mi am-55f77847b7-rpq9w 11m 4555Mi ds-cts-0 6m 393Mi ds-cts-1 5m 385Mi ds-cts-2 7m 414Mi ds-idrepo-0 104m 13814Mi 
ds-idrepo-1 69m 13739Mi ds-idrepo-2 57m 13746Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 72m 3705Mi idm-65858d8c4c-v78nh 80m 3584Mi lodemon-7655dd7665-d26cm 5m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 48m 471Mi 23:48:15 DEBUG --- stderr --- 23:48:15 DEBUG 23:48:16 INFO 23:48:16 INFO [loop_until]: kubectl --namespace=xlou top node 23:48:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:48:16 INFO [loop_until]: OK (rc = 0) 23:48:16 DEBUG --- stdout --- 23:48:16 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1303Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5546Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 5125Mi 8% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5776Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 144m 0% 4884Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 150m 0% 2145Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 139m 0% 4943Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 158m 0% 14486Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 106m 0% 14394Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 121m 0% 14388Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 104m 0% 1989Mi 3% 23:48:16 DEBUG --- stderr --- 23:48:16 DEBUG 23:49:15 INFO 23:49:15 INFO [loop_until]: kubectl --namespace=xlou top pods 23:49:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:49:15 INFO [loop_until]: OK (rc = 0) 23:49:15 DEBUG --- stdout --- 23:49:15 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 13m 4062Mi am-55f77847b7-nhzv4 11m 4691Mi am-55f77847b7-rpq9w 11m 4555Mi ds-cts-0 8m 393Mi ds-cts-1 5m 386Mi ds-cts-2 5m 414Mi ds-idrepo-0 125m 13814Mi ds-idrepo-1 74m 13739Mi ds-idrepo-2 54m 13745Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 78m 3707Mi idm-65858d8c4c-v78nh 71m 3577Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 35m 471Mi 23:49:15 DEBUG --- stderr --- 23:49:15 DEBUG 23:49:16 INFO 23:49:16 INFO [loop_until]: kubectl --namespace=xlou top node 23:49:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:49:16 INFO [loop_until]: OK (rc = 0) 23:49:16 DEBUG --- stdout --- 23:49:16 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1307Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 5548Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 72m 0% 5179Mi 8% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 5776Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 148m 0% 4879Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 148m 0% 2141Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 150m 0% 4950Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 175m 1% 14488Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 114m 0% 14397Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 121m 0% 14392Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 104m 0% 1993Mi 3% 23:49:16 DEBUG --- stderr --- 23:49:16 DEBUG 23:50:15 INFO 23:50:15 INFO [loop_until]: kubectl --namespace=xlou top pods 23:50:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:50:15 INFO [loop_until]: OK (rc = 0) 23:50:15 DEBUG --- stdout --- 23:50:15 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 12m 4112Mi am-55f77847b7-nhzv4 10m 4691Mi 
am-55f77847b7-rpq9w 11m 4555Mi ds-cts-0 7m 393Mi ds-cts-1 5m 386Mi ds-cts-2 8m 415Mi ds-idrepo-0 103m 13814Mi ds-idrepo-1 66m 13739Mi ds-idrepo-2 54m 13746Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 72m 3709Mi idm-65858d8c4c-v78nh 69m 3582Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 31m 471Mi 23:50:15 DEBUG --- stderr --- 23:50:15 DEBUG 23:50:16 INFO 23:50:16 INFO [loop_until]: kubectl --namespace=xlou top node 23:50:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:50:16 INFO [loop_until]: OK (rc = 0) 23:50:16 DEBUG --- stdout --- 23:50:16 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1308Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5549Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 5228Mi 8% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 5772Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 146m 0% 4887Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 149m 0% 2140Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 144m 0% 4949Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 159m 1% 14493Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 106m 0% 14397Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 120m 0% 14390Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 105m 0% 2002Mi 3% 23:50:16 DEBUG --- stderr --- 23:50:16 DEBUG 23:51:15 INFO 23:51:15 INFO [loop_until]: kubectl --namespace=xlou top pods 23:51:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:51:15 INFO [loop_until]: OK (rc = 0) 23:51:15 DEBUG --- stdout --- 23:51:15 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 12m 4159Mi am-55f77847b7-nhzv4 11m 4691Mi am-55f77847b7-rpq9w 11m 4555Mi ds-cts-0 10m 398Mi ds-cts-1 5m 385Mi ds-cts-2 11m 414Mi ds-idrepo-0 117m 13814Mi ds-idrepo-1 71m 13739Mi ds-idrepo-2 54m 13746Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 77m 3709Mi idm-65858d8c4c-v78nh 78m 3585Mi lodemon-7655dd7665-d26cm 4m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 34m 471Mi 23:51:15 DEBUG --- stderr --- 23:51:15 DEBUG 23:51:17 INFO 23:51:17 INFO [loop_until]: kubectl --namespace=xlou top node 23:51:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:51:17 INFO [loop_until]: OK (rc = 0) 23:51:17 DEBUG --- stdout --- 23:51:17 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1310Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5549Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 5270Mi 8% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5776Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 139m 0% 4886Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 144m 0% 2136Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 132m 0% 4950Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1090Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 163m 1% 14496Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 101m 0% 14403Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 118m 0% 14398Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 120m 0% 1999Mi 3% 23:51:17 DEBUG --- stderr --- 23:51:17 DEBUG 23:52:15 INFO 23:52:15 INFO [loop_until]: kubectl --namespace=xlou top pods 23:52:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:52:15 INFO [loop_until]: OK (rc = 0) 23:52:15 DEBUG --- stdout --- 23:52:15 DEBUG NAME CPU(cores) 
MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 15m 4214Mi am-55f77847b7-nhzv4 13m 4691Mi am-55f77847b7-rpq9w 11m 4555Mi ds-cts-0 5m 391Mi ds-cts-1 5m 386Mi ds-cts-2 7m 414Mi ds-idrepo-0 107m 13814Mi ds-idrepo-1 173m 13745Mi ds-idrepo-2 62m 13745Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 89m 3711Mi idm-65858d8c4c-v78nh 79m 3587Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 35m 478Mi 23:52:15 DEBUG --- stderr --- 23:52:15 DEBUG 23:52:17 INFO 23:52:17 INFO [loop_until]: kubectl --namespace=xlou top node 23:52:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:52:17 INFO [loop_until]: OK (rc = 0) 23:52:17 DEBUG --- stdout --- 23:52:17 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1307Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 5552Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 5322Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 68m 0% 5777Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 144m 0% 4885Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 154m 0% 2138Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 158m 0% 4951Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 165m 1% 14496Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 112m 0% 14405Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 203m 1% 14404Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 100m 0% 1997Mi 3% 23:52:17 DEBUG --- stderr --- 23:52:17 DEBUG 23:53:15 INFO 23:53:15 INFO [loop_until]: kubectl --namespace=xlou top pods 23:53:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:53:15 INFO [loop_until]: OK (rc = 0) 23:53:15 DEBUG --- stdout --- 23:53:15 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 13m 4254Mi am-55f77847b7-nhzv4 11m 4691Mi am-55f77847b7-rpq9w 12m 4555Mi ds-cts-0 6m 391Mi ds-cts-1 5m 386Mi ds-cts-2 7m 414Mi ds-idrepo-0 256m 13815Mi ds-idrepo-1 73m 13746Mi ds-idrepo-2 61m 13746Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 74m 3713Mi idm-65858d8c4c-v78nh 76m 3589Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 41m 482Mi 23:53:15 DEBUG --- stderr --- 23:53:15 DEBUG 23:53:17 INFO 23:53:17 INFO [loop_until]: kubectl --namespace=xlou top node 23:53:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:53:17 INFO [loop_until]: OK (rc = 0) 23:53:17 DEBUG --- stdout --- 23:53:17 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1310Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5549Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 5370Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 5776Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 142m 0% 4891Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 148m 0% 2142Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 141m 0% 4956Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 288m 1% 14498Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 111m 0% 14405Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 120m 0% 14407Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 108m 0% 2002Mi 3% 23:53:17 DEBUG --- stderr --- 23:53:17 DEBUG 23:54:15 INFO 23:54:15 INFO [loop_until]: kubectl --namespace=xlou top pods 23:54:15 INFO [loop_until]: (max_time=180, interval=5, 
expected_rc=[0] 23:54:15 INFO [loop_until]: OK (rc = 0) 23:54:15 DEBUG --- stdout --- 23:54:15 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 20m 4304Mi am-55f77847b7-nhzv4 10m 4691Mi am-55f77847b7-rpq9w 14m 4555Mi ds-cts-0 6m 391Mi ds-cts-1 5m 385Mi ds-cts-2 6m 414Mi ds-idrepo-0 110m 13815Mi ds-idrepo-1 87m 13761Mi ds-idrepo-2 59m 13746Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 79m 3717Mi idm-65858d8c4c-v78nh 77m 3589Mi lodemon-7655dd7665-d26cm 4m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 35m 482Mi 23:54:15 DEBUG --- stderr --- 23:54:15 DEBUG 23:54:17 INFO 23:54:17 INFO [loop_until]: kubectl --namespace=xlou top node 23:54:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:54:17 INFO [loop_until]: OK (rc = 0) 23:54:17 DEBUG --- stdout --- 23:54:17 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1314Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 70m 0% 5550Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 70m 0% 5416Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5779Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 145m 0% 4885Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 149m 0% 2142Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 142m 0% 4957Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 158m 0% 14498Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 107m 0% 14407Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 137m 0% 14423Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 101m 0% 2005Mi 3% 23:54:17 DEBUG --- stderr --- 23:54:17 DEBUG 23:55:15 INFO 23:55:15 INFO [loop_until]: kubectl --namespace=xlou top pods 23:55:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:55:15 INFO [loop_until]: OK (rc = 0) 23:55:15 DEBUG --- stdout --- 23:55:15 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 14m 4356Mi am-55f77847b7-nhzv4 11m 4691Mi am-55f77847b7-rpq9w 12m 4555Mi ds-cts-0 6m 391Mi ds-cts-1 5m 386Mi ds-cts-2 6m 414Mi ds-idrepo-0 128m 13812Mi ds-idrepo-1 74m 13792Mi ds-idrepo-2 61m 13746Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 82m 3717Mi idm-65858d8c4c-v78nh 112m 3592Mi lodemon-7655dd7665-d26cm 7m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 35m 483Mi 23:55:15 DEBUG --- stderr --- 23:55:15 DEBUG 23:55:17 INFO 23:55:17 INFO [loop_until]: kubectl --namespace=xlou top node 23:55:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:55:17 INFO [loop_until]: OK (rc = 0) 23:55:17 DEBUG --- stdout --- 23:55:17 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1310Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5550Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 5467Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5777Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 179m 1% 4890Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 152m 0% 2143Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 146m 0% 4959Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 182m 1% 14498Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 110m 0% 14410Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 122m 0% 14451Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 101m 0% 2004Mi 3% 23:55:17 DEBUG --- stderr --- 23:55:17 DEBUG 23:56:15 INFO 23:56:15 
INFO [loop_until]: kubectl --namespace=xlou top pods 23:56:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:56:15 INFO [loop_until]: OK (rc = 0) 23:56:15 DEBUG --- stdout --- 23:56:15 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 13m 4409Mi am-55f77847b7-nhzv4 14m 4691Mi am-55f77847b7-rpq9w 11m 4555Mi ds-cts-0 6m 391Mi ds-cts-1 5m 386Mi ds-cts-2 7m 414Mi ds-idrepo-0 109m 13749Mi ds-idrepo-1 68m 13792Mi ds-idrepo-2 59m 13745Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 73m 3719Mi idm-65858d8c4c-v78nh 101m 3594Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 30m 482Mi 23:56:15 DEBUG --- stderr --- 23:56:15 DEBUG 23:56:17 INFO 23:56:17 INFO [loop_until]: kubectl --namespace=xlou top node 23:56:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:56:17 INFO [loop_until]: OK (rc = 0) 23:56:17 DEBUG --- stdout --- 23:56:17 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1322Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5551Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 5514Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 68m 0% 5777Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 167m 1% 4894Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 142m 0% 2140Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 141m 0% 4962Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 163m 1% 14438Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 105m 0% 14410Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 120m 0% 14457Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 100m 0% 2006Mi 3% 23:56:17 DEBUG --- stderr --- 23:56:17 DEBUG 23:57:15 INFO 23:57:15 INFO [loop_until]: kubectl --namespace=xlou top pods 23:57:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:57:15 INFO [loop_until]: OK (rc = 0) 23:57:15 DEBUG --- stdout --- 23:57:15 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 14m 4449Mi am-55f77847b7-nhzv4 11m 4691Mi am-55f77847b7-rpq9w 11m 4555Mi ds-cts-0 6m 391Mi ds-cts-1 5m 385Mi ds-cts-2 7m 414Mi ds-idrepo-0 117m 13750Mi ds-idrepo-1 91m 13793Mi ds-idrepo-2 58m 13746Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 95m 3722Mi idm-65858d8c4c-v78nh 74m 3596Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 30m 482Mi 23:57:15 DEBUG --- stderr --- 23:57:15 DEBUG 23:57:17 INFO 23:57:17 INFO [loop_until]: kubectl --namespace=xlou top node 23:57:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:57:17 INFO [loop_until]: OK (rc = 0) 23:57:17 DEBUG --- stdout --- 23:57:17 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1306Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5550Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 70m 0% 5567Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 5777Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 141m 0% 4894Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 152m 0% 2143Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 167m 1% 4963Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 169m 1% 14439Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 109m 0% 14413Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 126m 0% 14460Mi 24% 
gke-xlou-cdm-frontend-a8771548-k40m 96m 0% 2008Mi 3% 23:57:17 DEBUG --- stderr --- 23:57:17 DEBUG 23:58:16 INFO 23:58:16 INFO [loop_until]: kubectl --namespace=xlou top pods 23:58:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:58:16 INFO [loop_until]: OK (rc = 0) 23:58:16 DEBUG --- stdout --- 23:58:16 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 13m 4507Mi am-55f77847b7-nhzv4 11m 4691Mi am-55f77847b7-rpq9w 12m 4555Mi ds-cts-0 6m 391Mi ds-cts-1 5m 385Mi ds-cts-2 6m 414Mi ds-idrepo-0 109m 13750Mi ds-idrepo-1 78m 13793Mi ds-idrepo-2 63m 13746Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 86m 3724Mi idm-65858d8c4c-v78nh 80m 3604Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 32m 483Mi 23:58:16 DEBUG --- stderr --- 23:58:16 DEBUG 23:58:17 INFO 23:58:17 INFO [loop_until]: kubectl --namespace=xlou top node 23:58:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:58:17 INFO [loop_until]: OK (rc = 0) 23:58:17 DEBUG --- stdout --- 23:58:17 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1310Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5551Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 5620Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 5773Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 149m 0% 4909Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 154m 0% 2144Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 148m 0% 4961Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 166m 1% 14443Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 114m 0% 14414Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 132m 0% 14461Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 99m 0% 2004Mi 3% 23:58:17 DEBUG --- stderr --- 23:58:17 DEBUG 23:59:16 INFO 23:59:16 INFO [loop_until]: kubectl --namespace=xlou top pods 23:59:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:59:16 INFO [loop_until]: OK (rc = 0) 23:59:16 DEBUG --- stdout --- 23:59:16 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 21m 4561Mi am-55f77847b7-nhzv4 11m 4691Mi am-55f77847b7-rpq9w 15m 4555Mi ds-cts-0 6m 392Mi ds-cts-1 5m 385Mi ds-cts-2 6m 414Mi ds-idrepo-0 110m 13750Mi ds-idrepo-1 88m 13794Mi ds-idrepo-2 61m 13746Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 90m 3724Mi idm-65858d8c4c-v78nh 85m 3594Mi lodemon-7655dd7665-d26cm 6m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 31m 483Mi 23:59:16 DEBUG --- stderr --- 23:59:16 DEBUG 23:59:17 INFO 23:59:17 INFO [loop_until]: kubectl --namespace=xlou top node 23:59:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 23:59:17 INFO [loop_until]: OK (rc = 0) 23:59:17 DEBUG --- stdout --- 23:59:17 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1307Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5551Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 75m 0% 5668Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 5777Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 156m 0% 4896Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 151m 0% 2134Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 154m 0% 4963Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 159m 1% 14445Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 112m 0% 
14415Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 132m 0% 14464Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 95m 0% 2002Mi 3% 23:59:17 DEBUG --- stderr --- 23:59:18 DEBUG 00:00:16 INFO 00:00:16 INFO [loop_until]: kubectl --namespace=xlou top pods 00:00:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:00:16 INFO [loop_until]: OK (rc = 0) 00:00:16 DEBUG --- stdout --- 00:00:16 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 12m 4603Mi am-55f77847b7-nhzv4 11m 4692Mi am-55f77847b7-rpq9w 12m 4555Mi ds-cts-0 6m 391Mi ds-cts-1 5m 385Mi ds-cts-2 6m 414Mi ds-idrepo-0 245m 13752Mi ds-idrepo-1 77m 13794Mi ds-idrepo-2 58m 13746Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 104m 3731Mi idm-65858d8c4c-v78nh 75m 3600Mi lodemon-7655dd7665-d26cm 4m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 29m 483Mi 00:00:16 DEBUG --- stderr --- 00:00:16 DEBUG 00:00:18 INFO 00:00:18 INFO [loop_until]: kubectl --namespace=xlou top node 00:00:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:00:18 INFO [loop_until]: OK (rc = 0) 00:00:18 DEBUG --- stdout --- 00:00:18 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1310Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 70m 0% 5552Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 71m 0% 5718Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 5778Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 144m 0% 4898Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 156m 0% 2140Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 169m 1% 4967Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 295m 1% 14445Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 111m 0% 14417Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 130m 0% 14467Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 96m 0% 2006Mi 3% 00:00:18 DEBUG --- stderr --- 00:00:18 DEBUG 00:01:16 INFO 00:01:16 INFO [loop_until]: kubectl --namespace=xlou top pods 00:01:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:01:16 INFO [loop_until]: OK (rc = 0) 00:01:16 DEBUG --- stdout --- 00:01:16 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 13m 4662Mi am-55f77847b7-nhzv4 11m 4692Mi am-55f77847b7-rpq9w 11m 4555Mi ds-cts-0 6m 392Mi ds-cts-1 5m 385Mi ds-cts-2 6m 414Mi ds-idrepo-0 105m 13752Mi ds-idrepo-1 75m 13800Mi ds-idrepo-2 60m 13746Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 76m 3733Mi idm-65858d8c4c-v78nh 69m 3609Mi lodemon-7655dd7665-d26cm 6m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 30m 482Mi 00:01:16 DEBUG --- stderr --- 00:01:16 DEBUG 00:01:18 INFO 00:01:18 INFO [loop_until]: kubectl --namespace=xlou top node 00:01:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:01:18 INFO [loop_until]: OK (rc = 0) 00:01:18 DEBUG --- stdout --- 00:01:18 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1312Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5548Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 5771Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5777Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 137m 0% 4908Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 154m 0% 2137Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 146m 0% 4974Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 
56m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 160m 1% 14449Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 110m 0% 14420Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 129m 0% 14475Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 96m 0% 2001Mi 3% 00:01:18 DEBUG --- stderr --- 00:01:18 DEBUG 00:02:16 INFO 00:02:16 INFO [loop_until]: kubectl --namespace=xlou top pods 00:02:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:02:16 INFO [loop_until]: OK (rc = 0) 00:02:16 DEBUG --- stdout --- 00:02:16 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 14m 4702Mi am-55f77847b7-nhzv4 11m 4692Mi am-55f77847b7-rpq9w 11m 4555Mi ds-cts-0 7m 391Mi ds-cts-1 5m 385Mi ds-cts-2 6m 414Mi ds-idrepo-0 125m 13794Mi ds-idrepo-1 70m 13800Mi ds-idrepo-2 56m 13746Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 77m 3730Mi idm-65858d8c4c-v78nh 80m 3602Mi lodemon-7655dd7665-d26cm 7m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 32m 483Mi 00:02:16 DEBUG --- stderr --- 00:02:16 DEBUG 00:02:18 INFO 00:02:18 INFO [loop_until]: kubectl --namespace=xlou top node 00:02:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:02:18 INFO [loop_until]: OK (rc = 0) 00:02:18 DEBUG --- stdout --- 00:02:18 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1311Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5550Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 5812Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5777Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 155m 0% 4902Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 154m 0% 2141Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 148m 0% 4975Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 179m 1% 14491Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 111m 0% 14419Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 123m 0% 14472Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 99m 0% 2005Mi 3% 00:02:18 DEBUG --- stderr --- 00:02:18 DEBUG 00:03:16 INFO 00:03:16 INFO [loop_until]: kubectl --namespace=xlou top pods 00:03:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:03:16 INFO [loop_until]: OK (rc = 0) 00:03:16 DEBUG --- stdout --- 00:03:16 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 18m 4764Mi am-55f77847b7-nhzv4 16m 4692Mi am-55f77847b7-rpq9w 16m 4555Mi ds-cts-0 6m 391Mi ds-cts-1 5m 385Mi ds-cts-2 6m 414Mi ds-idrepo-0 111m 13794Mi ds-idrepo-1 97m 13812Mi ds-idrepo-2 59m 13746Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 89m 3734Mi idm-65858d8c4c-v78nh 68m 3604Mi lodemon-7655dd7665-d26cm 7m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 34m 483Mi 00:03:16 DEBUG --- stderr --- 00:03:16 DEBUG 00:03:18 INFO 00:03:18 INFO [loop_until]: kubectl --namespace=xlou top node 00:03:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:03:18 INFO [loop_until]: OK (rc = 0) 00:03:18 DEBUG --- stdout --- 00:03:18 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1308Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 72m 0% 5550Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 71m 0% 5859Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 71m 0% 5772Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 137m 0% 4904Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 146m 0% 2140Mi 3% 
gke-xlou-cdm-default-pool-f05840a3-tnc9 149m 0% 4969Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 168m 1% 14502Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 118m 0% 14420Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 155m 0% 14490Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 100m 0% 2003Mi 3% 00:03:18 DEBUG --- stderr --- 00:03:18 DEBUG 00:04:16 INFO 00:04:16 INFO [loop_until]: kubectl --namespace=xlou top pods 00:04:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:04:16 INFO [loop_until]: OK (rc = 0) 00:04:16 DEBUG --- stdout --- 00:04:16 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 13m 4805Mi am-55f77847b7-nhzv4 12m 4692Mi am-55f77847b7-rpq9w 11m 4555Mi ds-cts-0 6m 391Mi ds-cts-1 5m 385Mi ds-cts-2 5m 414Mi ds-idrepo-0 119m 13795Mi ds-idrepo-1 75m 13741Mi ds-idrepo-2 59m 13746Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 74m 3741Mi idm-65858d8c4c-v78nh 78m 3615Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 30m 483Mi 00:04:16 DEBUG --- stderr --- 00:04:16 DEBUG 00:04:18 INFO 00:04:18 INFO [loop_until]: kubectl --namespace=xlou top node 00:04:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:04:18 INFO [loop_until]: OK (rc = 0) 00:04:18 DEBUG --- stdout --- 00:04:18 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1309Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5552Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 5909Mi 10% gke-xlou-cdm-default-pool-f05840a3-9p4b 68m 0% 5775Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 151m 0% 4917Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 155m 0% 2142Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 149m 0% 4983Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 164m 1% 14494Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 111m 0% 14423Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 130m 0% 14417Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 98m 0% 2005Mi 3% 00:04:18 DEBUG --- stderr --- 00:04:18 DEBUG 00:05:16 INFO 00:05:16 INFO [loop_until]: kubectl --namespace=xlou top pods 00:05:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:05:16 INFO [loop_until]: OK (rc = 0) 00:05:16 DEBUG --- stdout --- 00:05:16 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 12m 4858Mi am-55f77847b7-nhzv4 11m 4692Mi am-55f77847b7-rpq9w 11m 4555Mi ds-cts-0 6m 391Mi ds-cts-1 5m 387Mi ds-cts-2 6m 414Mi ds-idrepo-0 113m 13795Mi ds-idrepo-1 78m 13742Mi ds-idrepo-2 56m 13746Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 66m 3743Mi idm-65858d8c4c-v78nh 75m 3609Mi lodemon-7655dd7665-d26cm 6m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 30m 482Mi 00:05:16 DEBUG --- stderr --- 00:05:16 DEBUG 00:05:18 INFO 00:05:18 INFO [loop_until]: kubectl --namespace=xlou top node 00:05:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:05:18 INFO [loop_until]: OK (rc = 0) 00:05:18 DEBUG --- stdout --- 00:05:18 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1306Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5549Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 5955Mi 10% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5775Mi 9% 
gke-xlou-cdm-default-pool-f05840a3-bf2g 145m 0% 4908Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 152m 0% 2158Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 126m 0% 4986Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 166m 1% 14497Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 110m 0% 14425Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 138m 0% 14432Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 96m 0% 2006Mi 3% 00:05:18 DEBUG --- stderr --- 00:05:18 DEBUG 00:06:16 INFO 00:06:16 INFO [loop_until]: kubectl --namespace=xlou top pods 00:06:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:06:16 INFO [loop_until]: OK (rc = 0) 00:06:16 DEBUG --- stdout --- 00:06:16 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 13m 4899Mi am-55f77847b7-nhzv4 11m 4692Mi am-55f77847b7-rpq9w 11m 4555Mi ds-cts-0 8m 392Mi ds-cts-1 6m 386Mi ds-cts-2 6m 414Mi ds-idrepo-0 111m 13795Mi ds-idrepo-1 81m 13742Mi ds-idrepo-2 59m 13746Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 80m 3738Mi idm-65858d8c4c-v78nh 75m 3610Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 35m 483Mi 00:06:16 DEBUG --- stderr --- 00:06:16 DEBUG 00:06:18 INFO 00:06:18 INFO [loop_until]: kubectl --namespace=xlou top node 00:06:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:06:18 INFO [loop_until]: OK (rc = 0) 00:06:18 DEBUG --- stdout --- 00:06:18 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1308Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5551Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 6004Mi 10% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 5779Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 141m 0% 4912Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 148m 0% 2141Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 147m 0% 4981Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 166m 1% 14499Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 113m 0% 14427Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 130m 0% 14422Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 102m 0% 2005Mi 3% 00:06:18 DEBUG --- stderr --- 00:06:18 DEBUG 00:07:16 INFO 00:07:16 INFO [loop_until]: kubectl --namespace=xlou top pods 00:07:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:07:16 INFO [loop_until]: OK (rc = 0) 00:07:16 DEBUG --- stdout --- 00:07:16 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 12m 4955Mi am-55f77847b7-nhzv4 13m 4692Mi am-55f77847b7-rpq9w 11m 4555Mi ds-cts-0 6m 392Mi ds-cts-1 5m 386Mi ds-cts-2 7m 414Mi ds-idrepo-0 110m 13796Mi ds-idrepo-1 83m 13742Mi ds-idrepo-2 59m 13746Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 68m 3740Mi idm-65858d8c4c-v78nh 77m 3616Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 32m 483Mi 00:07:16 DEBUG --- stderr --- 00:07:16 DEBUG 00:07:18 INFO 00:07:18 INFO [loop_until]: kubectl --namespace=xlou top node 00:07:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:07:19 INFO [loop_until]: OK (rc = 0) 00:07:19 DEBUG --- stdout --- 00:07:19 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1308Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5549Mi 9% 
gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 6056Mi 10% gke-xlou-cdm-default-pool-f05840a3-9p4b 69m 0% 5778Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 148m 0% 4917Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 150m 0% 2147Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 137m 0% 4983Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 279m 1% 14501Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 108m 0% 14427Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 133m 0% 14426Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 96m 0% 2003Mi 3% 00:07:19 DEBUG --- stderr --- 00:07:19 DEBUG 00:08:17 INFO 00:08:17 INFO [loop_until]: kubectl --namespace=xlou top pods 00:08:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:08:17 INFO [loop_until]: OK (rc = 0) 00:08:17 DEBUG --- stdout --- 00:08:17 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 12m 4999Mi am-55f77847b7-nhzv4 10m 4692Mi am-55f77847b7-rpq9w 11m 4555Mi ds-cts-0 6m 391Mi ds-cts-1 5m 385Mi ds-cts-2 6m 415Mi ds-idrepo-0 106m 13807Mi ds-idrepo-1 72m 13743Mi ds-idrepo-2 59m 13746Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 80m 3749Mi idm-65858d8c4c-v78nh 78m 3618Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 35m 482Mi 00:08:17 DEBUG --- stderr --- 00:08:17 DEBUG 00:08:19 INFO 00:08:19 INFO [loop_until]: kubectl --namespace=xlou top node 00:08:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:08:19 INFO [loop_until]: OK (rc = 0) 00:08:19 DEBUG --- stdout --- 00:08:19 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1310Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5563Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 70m 0% 6103Mi 10% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5777Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 147m 0% 4923Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 144m 0% 2142Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 130m 0% 4989Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 169m 1% 14514Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 106m 0% 14431Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 117m 0% 14424Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 105m 0% 2003Mi 3% 00:08:19 DEBUG --- stderr --- 00:08:19 DEBUG 00:09:17 INFO 00:09:17 INFO [loop_until]: kubectl --namespace=xlou top pods 00:09:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:09:17 INFO [loop_until]: OK (rc = 0) 00:09:17 DEBUG --- stdout --- 00:09:17 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 8m 5006Mi am-55f77847b7-nhzv4 7m 4692Mi am-55f77847b7-rpq9w 7m 4555Mi ds-cts-0 7m 392Mi ds-cts-1 5m 387Mi ds-cts-2 8m 415Mi ds-idrepo-0 11m 13807Mi ds-idrepo-1 13m 13742Mi ds-idrepo-2 9m 13746Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 6m 3749Mi idm-65858d8c4c-v78nh 5m 3618Mi lodemon-7655dd7665-d26cm 5m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 1m 102Mi 00:09:17 DEBUG --- stderr --- 00:09:17 DEBUG 00:09:19 INFO 00:09:19 INFO [loop_until]: kubectl --namespace=xlou top node 00:09:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:09:19 INFO [loop_until]: OK (rc = 0) 00:09:19 DEBUG --- stdout --- 00:09:19 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% 
gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1306Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 5552Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 6112Mi 10% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 5786Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 4919Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 121m 0% 2132Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 66m 0% 4991Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 64m 0% 14513Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 55m 0% 14431Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 65m 0% 14426Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 68m 0% 1632Mi 2% 00:09:19 DEBUG --- stderr --- 00:09:19 DEBUG 127.0.0.1 - - [12/Aug/2023 00:09:57] "GET /monitoring/average?start_time=23-08-11_22:39:26&stop_time=23-08-11_23:07:56 HTTP/1.1" 200 - 00:10:17 INFO 00:10:17 INFO [loop_until]: kubectl --namespace=xlou top pods 00:10:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:10:17 INFO [loop_until]: OK (rc = 0) 00:10:17 DEBUG --- stdout --- 00:10:17 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 9m 5019Mi am-55f77847b7-nhzv4 8m 4692Mi am-55f77847b7-rpq9w 7m 4555Mi ds-cts-0 8m 392Mi ds-cts-1 5m 386Mi ds-cts-2 6m 414Mi ds-idrepo-0 14m 13806Mi ds-idrepo-1 13m 13742Mi ds-idrepo-2 9m 13746Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 5m 3748Mi idm-65858d8c4c-v78nh 5m 3618Mi lodemon-7655dd7665-d26cm 5m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 1m 102Mi 00:10:17 DEBUG --- stderr --- 00:10:17 DEBUG 00:10:19 INFO 00:10:19 INFO [loop_until]: kubectl --namespace=xlou top node 00:10:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:10:19 INFO [loop_until]: OK (rc = 0) 00:10:19 DEBUG --- stdout --- 00:10:19 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1305Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 62m 0% 5550Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 6125Mi 10% gke-xlou-cdm-default-pool-f05840a3-9p4b 60m 0% 5775Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 70m 0% 4919Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 116m 0% 2140Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 67m 0% 4989Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 65m 0% 14514Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 57m 0% 14429Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 65m 0% 14427Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 66m 0% 1630Mi 2% 00:10:19 DEBUG --- stderr --- 00:10:19 DEBUG 00:11:17 INFO 00:11:17 INFO [loop_until]: kubectl --namespace=xlou top pods 00:11:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:11:17 INFO [loop_until]: OK (rc = 0) 00:11:17 DEBUG --- stdout --- 00:11:17 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 53m 5039Mi am-55f77847b7-nhzv4 28m 4698Mi am-55f77847b7-rpq9w 18m 4555Mi ds-cts-0 8m 393Mi ds-cts-1 6m 387Mi ds-cts-2 9m 415Mi ds-idrepo-0 78m 13808Mi ds-idrepo-1 54m 13742Mi ds-idrepo-2 158m 13748Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 48m 3751Mi idm-65858d8c4c-v78nh 73m 3621Mi lodemon-7655dd7665-d26cm 6m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 154m 458Mi 00:11:17 DEBUG --- stderr --- 00:11:17 DEBUG 00:11:19 INFO 00:11:19 INFO [loop_until]: kubectl 
--namespace=xlou top node 00:11:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:11:19 INFO [loop_until]: OK (rc = 0) 00:11:19 DEBUG --- stdout --- 00:11:19 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1306Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 90m 0% 5553Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 96m 0% 6147Mi 10% gke-xlou-cdm-default-pool-f05840a3-9p4b 84m 0% 5785Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 129m 0% 4925Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 139m 0% 2139Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 121m 0% 4994Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 136m 0% 14514Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 222m 1% 14435Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 103m 0% 14431Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 154m 0% 1980Mi 3% 00:11:19 DEBUG --- stderr --- 00:11:19 DEBUG 00:12:17 INFO 00:12:17 INFO [loop_until]: kubectl --namespace=xlou top pods 00:12:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:12:17 INFO [loop_until]: OK (rc = 0) 00:12:17 DEBUG --- stdout --- 00:12:17 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 41m 5095Mi am-55f77847b7-nhzv4 43m 4692Mi am-55f77847b7-rpq9w 25m 4556Mi ds-cts-0 7m 392Mi ds-cts-1 5m 386Mi ds-cts-2 6m 415Mi ds-idrepo-0 115m 13807Mi ds-idrepo-1 60m 13748Mi ds-idrepo-2 58m 13747Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 69m 3752Mi idm-65858d8c4c-v78nh 71m 3624Mi lodemon-7655dd7665-d26cm 7m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 87m 463Mi 00:12:17 DEBUG --- stderr --- 00:12:17 DEBUG 00:12:19 INFO 00:12:19 INFO [loop_until]: kubectl --namespace=xlou top node 00:12:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:12:19 INFO [loop_until]: OK (rc = 0) 00:12:19 DEBUG --- stdout --- 00:12:19 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1309Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 80m 0% 5553Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 99m 0% 6201Mi 10% gke-xlou-cdm-default-pool-f05840a3-9p4b 96m 0% 5779Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 150m 0% 4939Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 155m 0% 2142Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 135m 0% 4995Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 172m 1% 14516Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 101m 0% 14437Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 114m 0% 14435Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 146m 0% 1984Mi 3% 00:12:19 DEBUG --- stderr --- 00:12:19 DEBUG 00:13:17 INFO 00:13:17 INFO [loop_until]: kubectl --namespace=xlou top pods 00:13:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:13:17 INFO [loop_until]: OK (rc = 0) 00:13:17 DEBUG --- stdout --- 00:13:17 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 30m 5155Mi am-55f77847b7-nhzv4 18m 4692Mi am-55f77847b7-rpq9w 30m 4558Mi ds-cts-0 7m 392Mi ds-cts-1 5m 386Mi ds-cts-2 6m 414Mi ds-idrepo-0 104m 13809Mi ds-idrepo-1 57m 13748Mi ds-idrepo-2 49m 13754Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 63m 3756Mi idm-65858d8c4c-v78nh 60m 3626Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi 
overseer-0-56868bb8f7-f7jz9 55m 468Mi 00:13:17 DEBUG --- stderr --- 00:13:17 DEBUG 00:13:19 INFO 00:13:19 INFO [loop_until]: kubectl --namespace=xlou top node 00:13:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:13:19 INFO [loop_until]: OK (rc = 0) 00:13:19 DEBUG --- stdout --- 00:13:19 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 83m 0% 1310Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 78m 0% 5554Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 78m 0% 6249Mi 10% gke-xlou-cdm-default-pool-f05840a3-9p4b 73m 0% 5780Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 128m 0% 4925Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 148m 0% 2140Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 131m 0% 4998Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 164m 1% 14520Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 98m 0% 14444Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 112m 0% 14435Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 121m 0% 1989Mi 3% 00:13:19 DEBUG --- stderr --- 00:13:19 DEBUG 00:14:17 INFO 00:14:17 INFO [loop_until]: kubectl --namespace=xlou top pods 00:14:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:14:17 INFO [loop_until]: OK (rc = 0) 00:14:17 DEBUG --- stdout --- 00:14:17 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 33m 5198Mi am-55f77847b7-nhzv4 28m 4696Mi am-55f77847b7-rpq9w 27m 4554Mi ds-cts-0 7m 392Mi ds-cts-1 5m 386Mi ds-cts-2 7m 414Mi ds-idrepo-0 123m 13808Mi ds-idrepo-1 87m 13758Mi ds-idrepo-2 56m 13761Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 64m 3756Mi idm-65858d8c4c-v78nh 66m 3627Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 66m 479Mi 00:14:17 DEBUG --- stderr --- 00:14:17 DEBUG 00:14:19 INFO 00:14:19 INFO [loop_until]: kubectl --namespace=xlou top node 00:14:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:14:19 INFO [loop_until]: OK (rc = 0) 00:14:19 DEBUG --- stdout --- 00:14:19 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 84m 0% 1309Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 84m 0% 5550Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 90m 0% 6306Mi 10% gke-xlou-cdm-default-pool-f05840a3-9p4b 83m 0% 5782Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 138m 0% 4929Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 149m 0% 2145Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 131m 0% 4998Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 184m 1% 14522Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 108m 0% 14449Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 139m 0% 14449Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 132m 0% 2002Mi 3% 00:14:19 DEBUG --- stderr --- 00:14:19 DEBUG 00:15:17 INFO 00:15:17 INFO [loop_until]: kubectl --namespace=xlou top pods 00:15:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:15:17 INFO [loop_until]: OK (rc = 0) 00:15:17 DEBUG --- stdout --- 00:15:17 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 22m 5255Mi am-55f77847b7-nhzv4 20m 4690Mi am-55f77847b7-rpq9w 22m 4554Mi ds-cts-0 7m 392Mi ds-cts-1 5m 386Mi ds-cts-2 6m 415Mi ds-idrepo-0 112m 13809Mi ds-idrepo-1 73m 13782Mi ds-idrepo-2 53m 13819Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 
67m 3759Mi idm-65858d8c4c-v78nh 71m 3625Mi lodemon-7655dd7665-d26cm 6m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 46m 480Mi 00:15:17 DEBUG --- stderr --- 00:15:17 DEBUG 00:15:19 INFO 00:15:19 INFO [loop_until]: kubectl --namespace=xlou top node 00:15:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:15:19 INFO [loop_until]: OK (rc = 0) 00:15:19 DEBUG --- stdout --- 00:15:19 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 83m 0% 1308Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 77m 0% 5553Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 78m 0% 6363Mi 10% gke-xlou-cdm-default-pool-f05840a3-9p4b 76m 0% 5775Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 142m 0% 4926Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 154m 0% 2129Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 135m 0% 5004Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 66m 0% 1095Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 174m 1% 14524Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 102m 0% 14510Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 124m 0% 14475Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 113m 0% 2004Mi 3% 00:15:19 DEBUG --- stderr --- 00:15:19 DEBUG 00:16:17 INFO 00:16:17 INFO [loop_until]: kubectl --namespace=xlou top pods 00:16:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:16:17 INFO [loop_until]: OK (rc = 0) 00:16:17 DEBUG --- stdout --- 00:16:17 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 19m 5307Mi am-55f77847b7-nhzv4 17m 4693Mi am-55f77847b7-rpq9w 19m 4554Mi ds-cts-0 7m 393Mi ds-cts-1 5m 386Mi ds-cts-2 5m 415Mi ds-idrepo-0 107m 13809Mi ds-idrepo-1 56m 13783Mi ds-idrepo-2 39m 13803Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 58m 3759Mi idm-65858d8c4c-v78nh 65m 3626Mi lodemon-7655dd7665-d26cm 5m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 41m 478Mi 00:16:17 DEBUG --- stderr --- 00:16:17 DEBUG 00:16:20 INFO 00:16:20 INFO [loop_until]: kubectl --namespace=xlou top node 00:16:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:16:20 INFO [loop_until]: OK (rc = 0) 00:16:20 DEBUG --- stdout --- 00:16:20 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 83m 0% 1306Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 75m 0% 5551Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 75m 0% 6413Mi 10% gke-xlou-cdm-default-pool-f05840a3-9p4b 69m 0% 5779Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 131m 0% 4925Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 152m 0% 2142Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 117m 0% 5000Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 155m 0% 14525Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 89m 0% 14499Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 111m 0% 14475Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 107m 0% 2002Mi 3% 00:16:20 DEBUG --- stderr --- 00:16:20 DEBUG 00:17:18 INFO 00:17:18 INFO [loop_until]: kubectl --namespace=xlou top pods 00:17:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:17:18 INFO [loop_until]: OK (rc = 0) 00:17:18 DEBUG --- stdout --- 00:17:18 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 19m 5356Mi am-55f77847b7-nhzv4 17m 4693Mi am-55f77847b7-rpq9w 18m 4554Mi ds-cts-0 14m 400Mi ds-cts-1 5m 386Mi ds-cts-2 6m 415Mi ds-idrepo-0 119m 
13813Mi ds-idrepo-1 55m 13783Mi ds-idrepo-2 37m 13789Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 63m 3761Mi idm-65858d8c4c-v78nh 63m 3626Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 44m 479Mi 00:17:18 DEBUG --- stderr --- 00:17:18 DEBUG 00:17:20 INFO 00:17:20 INFO [loop_until]: kubectl --namespace=xlou top node 00:17:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:17:20 INFO [loop_until]: OK (rc = 0) 00:17:20 DEBUG --- stdout --- 00:17:20 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 83m 0% 1308Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 75m 0% 5551Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 72m 0% 6465Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 72m 0% 5778Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 135m 0% 4926Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 149m 0% 2132Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 130m 0% 5002Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 68m 0% 1094Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 171m 1% 14532Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 85m 0% 14485Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 104m 0% 14476Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 114m 0% 2002Mi 3% 00:17:20 DEBUG --- stderr --- 00:17:20 DEBUG 00:18:18 INFO 00:18:18 INFO [loop_until]: kubectl --namespace=xlou top pods 00:18:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:18:18 INFO [loop_until]: OK (rc = 0) 00:18:18 DEBUG --- stdout --- 00:18:18 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 18m 5408Mi am-55f77847b7-nhzv4 16m 4693Mi am-55f77847b7-rpq9w 18m 4554Mi ds-cts-0 6m 393Mi ds-cts-1 5m 386Mi ds-cts-2 6m 415Mi ds-idrepo-0 102m 13814Mi ds-idrepo-1 56m 13784Mi ds-idrepo-2 41m 13784Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 55m 3761Mi idm-65858d8c4c-v78nh 60m 3628Mi lodemon-7655dd7665-d26cm 6m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 40m 479Mi 00:18:18 DEBUG --- stderr --- 00:18:18 DEBUG 00:18:20 INFO 00:18:20 INFO [loop_until]: kubectl --namespace=xlou top node 00:18:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:18:20 INFO [loop_until]: OK (rc = 0) 00:18:20 DEBUG --- stdout --- 00:18:20 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1309Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 75m 0% 5554Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 76m 0% 6512Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 72m 0% 5777Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 134m 0% 4930Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 157m 0% 2127Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 124m 0% 5003Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 163m 1% 14534Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 93m 0% 14481Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 108m 0% 14483Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 108m 0% 2001Mi 3% 00:18:20 DEBUG --- stderr --- 00:18:20 DEBUG 00:19:18 INFO 00:19:18 INFO [loop_until]: kubectl --namespace=xlou top pods 00:19:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:19:18 INFO [loop_until]: OK (rc = 0) 00:19:18 DEBUG --- stdout --- 00:19:18 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 19m 5453Mi am-55f77847b7-nhzv4 16m 
4693Mi am-55f77847b7-rpq9w 17m 4554Mi ds-cts-0 7m 393Mi ds-cts-1 5m 386Mi ds-cts-2 13m 417Mi ds-idrepo-0 105m 13814Mi ds-idrepo-1 51m 13784Mi ds-idrepo-2 32m 13784Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 56m 3763Mi idm-65858d8c4c-v78nh 56m 3628Mi lodemon-7655dd7665-d26cm 7m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 34m 479Mi 00:19:18 DEBUG --- stderr --- 00:19:18 DEBUG 00:19:20 INFO 00:19:20 INFO [loop_until]: kubectl --namespace=xlou top node 00:19:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:19:20 INFO [loop_until]: OK (rc = 0) 00:19:20 DEBUG --- stdout --- 00:19:20 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1313Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 72m 0% 5548Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 80m 0% 6563Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 72m 0% 5779Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 132m 0% 4931Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 142m 0% 2135Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 124m 0% 5003Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 153m 0% 14535Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 81m 0% 14481Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 106m 0% 14484Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 99m 0% 2005Mi 3% 00:19:20 DEBUG --- stderr --- 00:19:20 DEBUG 00:20:18 INFO 00:20:18 INFO [loop_until]: kubectl --namespace=xlou top pods 00:20:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:20:18 INFO [loop_until]: OK (rc = 0) 00:20:18 DEBUG --- stdout --- 00:20:18 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 19m 5503Mi am-55f77847b7-nhzv4 17m 4693Mi am-55f77847b7-rpq9w 18m 4554Mi ds-cts-0 6m 393Mi ds-cts-1 5m 386Mi ds-cts-2 6m 417Mi ds-idrepo-0 112m 13814Mi ds-idrepo-1 53m 13784Mi ds-idrepo-2 36m 13784Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 55m 3765Mi idm-65858d8c4c-v78nh 61m 3630Mi lodemon-7655dd7665-d26cm 6m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 64m 481Mi 00:20:18 DEBUG --- stderr --- 00:20:18 DEBUG 00:20:20 INFO 00:20:20 INFO [loop_until]: kubectl --namespace=xlou top node 00:20:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:20:20 INFO [loop_until]: OK (rc = 0) 00:20:20 DEBUG --- stdout --- 00:20:20 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1310Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 74m 0% 5551Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 73m 0% 6612Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 70m 0% 5776Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 122m 0% 4932Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 149m 0% 2138Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 123m 0% 5007Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 160m 1% 14537Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 85m 0% 14486Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 106m 0% 14488Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 129m 0% 2013Mi 3% 00:20:20 DEBUG --- stderr --- 00:20:20 DEBUG 00:21:18 INFO 00:21:18 INFO [loop_until]: kubectl --namespace=xlou top pods 00:21:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:21:18 INFO [loop_until]: OK (rc = 0) 00:21:18 DEBUG --- stdout --- 00:21:18 DEBUG NAME 
CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 20m 5556Mi am-55f77847b7-nhzv4 21m 4693Mi am-55f77847b7-rpq9w 18m 4554Mi ds-cts-0 6m 393Mi ds-cts-1 8m 387Mi ds-cts-2 6m 417Mi ds-idrepo-0 113m 13814Mi ds-idrepo-1 56m 13784Mi ds-idrepo-2 39m 13789Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 59m 3765Mi idm-65858d8c4c-v78nh 61m 3636Mi lodemon-7655dd7665-d26cm 5m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 34m 486Mi 00:21:18 DEBUG --- stderr --- 00:21:18 DEBUG 00:21:20 INFO 00:21:20 INFO [loop_until]: kubectl --namespace=xlou top node 00:21:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:21:20 INFO [loop_until]: OK (rc = 0) 00:21:20 DEBUG --- stdout --- 00:21:20 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1313Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 73m 0% 5549Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 79m 0% 6673Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 73m 0% 5782Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 127m 0% 4935Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 149m 0% 2142Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 126m 0% 5006Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 164m 1% 14540Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 86m 0% 14492Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 103m 0% 14491Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 98m 0% 2010Mi 3% 00:21:20 DEBUG --- stderr --- 00:21:20 DEBUG 00:22:18 INFO 00:22:18 INFO [loop_until]: kubectl --namespace=xlou top pods 00:22:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:22:18 INFO [loop_until]: OK (rc = 0) 00:22:18 DEBUG --- stdout --- 00:22:18 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 19m 5610Mi am-55f77847b7-nhzv4 17m 4693Mi am-55f77847b7-rpq9w 18m 4554Mi ds-cts-0 6m 392Mi ds-cts-1 5m 387Mi ds-cts-2 6m 417Mi ds-idrepo-0 121m 13814Mi ds-idrepo-1 244m 13784Mi ds-idrepo-2 37m 13789Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 59m 3766Mi idm-65858d8c4c-v78nh 61m 3632Mi lodemon-7655dd7665-d26cm 7m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 70m 487Mi 00:22:18 DEBUG --- stderr --- 00:22:18 DEBUG 00:22:20 INFO 00:22:20 INFO [loop_until]: kubectl --namespace=xlou top node 00:22:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:22:20 INFO [loop_until]: OK (rc = 0) 00:22:20 DEBUG --- stdout --- 00:22:20 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1310Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 75m 0% 5550Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 75m 0% 6719Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 71m 0% 5781Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 129m 0% 4933Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 149m 0% 2142Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 126m 0% 5004Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 168m 1% 14543Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 88m 0% 14490Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 325m 2% 14487Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 132m 0% 2009Mi 3% 00:22:20 DEBUG --- stderr --- 00:22:20 DEBUG 00:23:18 INFO 00:23:18 INFO [loop_until]: kubectl --namespace=xlou top pods 00:23:18 INFO [loop_until]: (max_time=180, interval=5, 
expected_rc=[0] 00:23:18 INFO [loop_until]: OK (rc = 0) 00:23:18 DEBUG --- stdout --- 00:23:18 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 20m 5661Mi am-55f77847b7-nhzv4 18m 4693Mi am-55f77847b7-rpq9w 18m 4554Mi ds-cts-0 6m 393Mi ds-cts-1 5m 387Mi ds-cts-2 6m 418Mi ds-idrepo-0 583m 13821Mi ds-idrepo-1 73m 13818Mi ds-idrepo-2 617m 13813Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 66m 3768Mi idm-65858d8c4c-v78nh 61m 3632Mi lodemon-7655dd7665-d26cm 6m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 33m 487Mi 00:23:18 DEBUG --- stderr --- 00:23:18 DEBUG 00:23:20 INFO 00:23:20 INFO [loop_until]: kubectl --namespace=xlou top node 00:23:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:23:20 INFO [loop_until]: OK (rc = 0) 00:23:20 DEBUG --- stdout --- 00:23:20 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1310Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 73m 0% 5552Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 77m 0% 6770Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 73m 0% 5780Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 124m 0% 4936Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 149m 0% 2139Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 127m 0% 5010Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 543m 3% 14551Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 601m 3% 14515Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 115m 0% 14524Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 102m 0% 2010Mi 3% 00:23:20 DEBUG --- stderr --- 00:23:20 DEBUG 00:24:18 INFO 00:24:18 INFO [loop_until]: kubectl --namespace=xlou top pods 00:24:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:24:18 INFO [loop_until]: OK (rc = 0) 00:24:18 DEBUG --- stdout --- 00:24:18 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 14m 5714Mi am-55f77847b7-nhzv4 18m 4693Mi am-55f77847b7-rpq9w 16m 4554Mi ds-cts-0 6m 393Mi ds-cts-1 5m 387Mi ds-cts-2 6m 417Mi ds-idrepo-0 101m 13821Mi ds-idrepo-1 68m 13819Mi ds-idrepo-2 37m 13814Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 48m 3760Mi idm-65858d8c4c-v78nh 57m 3633Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 44m 493Mi 00:24:18 DEBUG --- stderr --- 00:24:18 DEBUG 00:24:20 INFO 00:24:20 INFO [loop_until]: kubectl --namespace=xlou top node 00:24:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:24:21 INFO [loop_until]: OK (rc = 0) 00:24:21 DEBUG --- stdout --- 00:24:21 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1318Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 72m 0% 5552Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 6824Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 75m 0% 5782Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 126m 0% 4932Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 144m 0% 2139Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 115m 0% 5003Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 146m 0% 14551Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 90m 0% 14532Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 112m 0% 14527Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 94m 0% 2015Mi 3% 00:24:21 DEBUG --- stderr --- 00:24:21 DEBUG 00:25:18 INFO 00:25:18 
INFO [loop_until]: kubectl --namespace=xlou top pods 00:25:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:25:18 INFO [loop_until]: OK (rc = 0) 00:25:18 DEBUG --- stdout --- 00:25:18 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 15m 5714Mi am-55f77847b7-nhzv4 17m 4693Mi am-55f77847b7-rpq9w 18m 4554Mi ds-cts-0 10m 395Mi ds-cts-1 4m 387Mi ds-cts-2 9m 417Mi ds-idrepo-0 111m 13822Mi ds-idrepo-1 59m 13819Mi ds-idrepo-2 41m 13815Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 56m 3760Mi idm-65858d8c4c-v78nh 57m 3641Mi lodemon-7655dd7665-d26cm 5m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 27m 493Mi 00:25:18 DEBUG --- stderr --- 00:25:18 DEBUG 00:25:21 INFO 00:25:21 INFO [loop_until]: kubectl --namespace=xlou top node 00:25:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:25:21 INFO [loop_until]: OK (rc = 0) 00:25:21 DEBUG --- stdout --- 00:25:21 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1310Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 73m 0% 5553Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 70m 0% 6820Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 72m 0% 5780Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 128m 0% 4942Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 152m 0% 2125Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 120m 0% 5000Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 164m 1% 14551Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 83m 0% 14521Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 106m 0% 14528Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 102m 0% 2026Mi 3% 00:25:21 DEBUG --- stderr --- 00:25:21 DEBUG 00:26:18 INFO 00:26:18 INFO [loop_until]: kubectl --namespace=xlou top pods 00:26:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:26:18 INFO [loop_until]: OK (rc = 0) 00:26:18 DEBUG --- stdout --- 00:26:18 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 17m 5714Mi am-55f77847b7-nhzv4 17m 4693Mi am-55f77847b7-rpq9w 18m 4554Mi ds-cts-0 6m 395Mi ds-cts-1 5m 387Mi ds-cts-2 6m 417Mi ds-idrepo-0 330m 13822Mi ds-idrepo-1 62m 13820Mi ds-idrepo-2 39m 13815Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 57m 3771Mi idm-65858d8c4c-v78nh 57m 3635Mi lodemon-7655dd7665-d26cm 6m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 29m 493Mi 00:26:18 DEBUG --- stderr --- 00:26:18 DEBUG 00:26:21 INFO 00:26:21 INFO [loop_until]: kubectl --namespace=xlou top node 00:26:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:26:21 INFO [loop_until]: OK (rc = 0) 00:26:21 DEBUG --- stdout --- 00:26:21 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1309Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 71m 0% 5552Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 73m 0% 6824Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 72m 0% 5779Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 122m 0% 4936Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 154m 0% 2141Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 124m 0% 5012Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 310m 1% 14556Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 89m 0% 14523Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 111m 0% 14533Mi 24% 
gke-xlou-cdm-frontend-a8771548-k40m 98m 0% 2016Mi 3% 00:26:21 DEBUG --- stderr --- 00:26:21 DEBUG 00:27:19 INFO 00:27:19 INFO [loop_until]: kubectl --namespace=xlou top pods 00:27:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:27:19 INFO [loop_until]: OK (rc = 0) 00:27:19 DEBUG --- stdout --- 00:27:19 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 16m 5714Mi am-55f77847b7-nhzv4 17m 4693Mi am-55f77847b7-rpq9w 18m 4554Mi ds-cts-0 6m 395Mi ds-cts-1 5m 387Mi ds-cts-2 6m 417Mi ds-idrepo-0 116m 13822Mi ds-idrepo-1 62m 13821Mi ds-idrepo-2 38m 13815Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 52m 3762Mi idm-65858d8c4c-v78nh 66m 3635Mi lodemon-7655dd7665-d26cm 6m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 31m 493Mi 00:27:19 DEBUG --- stderr --- 00:27:19 DEBUG 00:27:21 INFO 00:27:21 INFO [loop_until]: kubectl --namespace=xlou top node 00:27:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:27:21 INFO [loop_until]: OK (rc = 0) 00:27:21 DEBUG --- stdout --- 00:27:21 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1318Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 73m 0% 5552Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 72m 0% 6822Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 72m 0% 5778Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 137m 0% 4936Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 163m 1% 2142Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 118m 0% 5004Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 168m 1% 14556Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 86m 0% 14524Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 112m 0% 14534Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 95m 0% 2017Mi 3% 00:27:21 DEBUG --- stderr --- 00:27:21 DEBUG 00:28:19 INFO 00:28:19 INFO [loop_until]: kubectl --namespace=xlou top pods 00:28:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:28:19 INFO [loop_until]: OK (rc = 0) 00:28:19 DEBUG --- stdout --- 00:28:19 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 15m 5714Mi am-55f77847b7-nhzv4 18m 4693Mi am-55f77847b7-rpq9w 19m 4554Mi ds-cts-0 6m 395Mi ds-cts-1 5m 387Mi ds-cts-2 6m 417Mi ds-idrepo-0 250m 13823Mi ds-idrepo-1 65m 13816Mi ds-idrepo-2 122m 13815Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 57m 3762Mi idm-65858d8c4c-v78nh 65m 3637Mi lodemon-7655dd7665-d26cm 7m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 27m 493Mi 00:28:19 DEBUG --- stderr --- 00:28:19 DEBUG 00:28:21 INFO 00:28:21 INFO [loop_until]: kubectl --namespace=xlou top node 00:28:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:28:21 INFO [loop_until]: OK (rc = 0) 00:28:21 DEBUG --- stdout --- 00:28:21 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1310Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 74m 0% 5548Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 6821Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 71m 0% 5779Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 127m 0% 4939Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 153m 0% 2142Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 124m 0% 5004Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 295m 1% 14559Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 224m 1% 
14527Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 113m 0% 14533Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 96m 0% 2018Mi 3% 00:28:21 DEBUG --- stderr --- 00:28:21 DEBUG 00:29:19 INFO 00:29:19 INFO [loop_until]: kubectl --namespace=xlou top pods 00:29:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:29:19 INFO [loop_until]: OK (rc = 0) 00:29:19 DEBUG --- stdout --- 00:29:19 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 17m 5714Mi am-55f77847b7-nhzv4 16m 4693Mi am-55f77847b7-rpq9w 18m 4554Mi ds-cts-0 6m 395Mi ds-cts-1 5m 388Mi ds-cts-2 6m 417Mi ds-idrepo-0 104m 13822Mi ds-idrepo-1 52m 13818Mi ds-idrepo-2 37m 13815Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 61m 3771Mi idm-65858d8c4c-v78nh 68m 3637Mi lodemon-7655dd7665-d26cm 7m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 27m 493Mi 00:29:19 DEBUG --- stderr --- 00:29:19 DEBUG 00:29:21 INFO 00:29:21 INFO [loop_until]: kubectl --namespace=xlou top node 00:29:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:29:21 INFO [loop_until]: OK (rc = 0) 00:29:21 DEBUG --- stdout --- 00:29:21 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1310Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 72m 0% 5549Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 76m 0% 6820Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 73m 0% 5778Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 137m 0% 4941Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 156m 0% 2140Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 129m 0% 5011Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 160m 1% 14562Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 86m 0% 14530Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 113m 0% 14535Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 94m 0% 2016Mi 3% 00:29:21 DEBUG --- stderr --- 00:29:21 DEBUG 00:30:19 INFO 00:30:19 INFO [loop_until]: kubectl --namespace=xlou top pods 00:30:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:30:19 INFO [loop_until]: OK (rc = 0) 00:30:19 DEBUG --- stdout --- 00:30:19 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 15m 5714Mi am-55f77847b7-nhzv4 16m 4693Mi am-55f77847b7-rpq9w 18m 4554Mi ds-cts-0 6m 394Mi ds-cts-1 5m 387Mi ds-cts-2 6m 414Mi ds-idrepo-0 108m 13822Mi ds-idrepo-1 220m 13822Mi ds-idrepo-2 40m 13815Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 60m 3771Mi idm-65858d8c4c-v78nh 59m 3639Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 28m 493Mi 00:30:19 DEBUG --- stderr --- 00:30:19 DEBUG 00:30:21 INFO 00:30:21 INFO [loop_until]: kubectl --namespace=xlou top node 00:30:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:30:21 INFO [loop_until]: OK (rc = 0) 00:30:21 DEBUG --- stdout --- 00:30:21 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1309Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 70m 0% 5550Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 72m 0% 6821Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 71m 0% 5778Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 130m 0% 4940Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 153m 0% 2139Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 122m 0% 5011Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 
59m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 160m 1% 14561Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 85m 0% 14531Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 257m 1% 14540Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 92m 0% 2021Mi 3% 00:30:21 DEBUG --- stderr --- 00:30:21 DEBUG 00:31:19 INFO 00:31:19 INFO [loop_until]: kubectl --namespace=xlou top pods 00:31:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:31:19 INFO [loop_until]: OK (rc = 0) 00:31:19 DEBUG --- stdout --- 00:31:19 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 15m 5714Mi am-55f77847b7-nhzv4 17m 4693Mi am-55f77847b7-rpq9w 17m 4554Mi ds-cts-0 6m 394Mi ds-cts-1 5m 387Mi ds-cts-2 6m 414Mi ds-idrepo-0 200m 13823Mi ds-idrepo-1 51m 13822Mi ds-idrepo-2 39m 13815Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 51m 3772Mi idm-65858d8c4c-v78nh 58m 3639Mi lodemon-7655dd7665-d26cm 5m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 27m 493Mi 00:31:19 DEBUG --- stderr --- 00:31:19 DEBUG 00:31:21 INFO 00:31:21 INFO [loop_until]: kubectl --namespace=xlou top node 00:31:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:31:21 INFO [loop_until]: OK (rc = 0) 00:31:21 DEBUG --- stdout --- 00:31:21 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1311Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 74m 0% 5550Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 70m 0% 6818Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 70m 0% 5779Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 128m 0% 4937Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 141m 0% 2140Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 125m 0% 5012Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 158m 0% 14564Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 86m 0% 14533Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 102m 0% 14544Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 94m 0% 2018Mi 3% 00:31:21 DEBUG --- stderr --- 00:31:21 DEBUG 00:32:19 INFO 00:32:19 INFO [loop_until]: kubectl --namespace=xlou top pods 00:32:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:32:19 INFO [loop_until]: OK (rc = 0) 00:32:19 DEBUG --- stdout --- 00:32:19 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 14m 5714Mi am-55f77847b7-nhzv4 17m 4693Mi am-55f77847b7-rpq9w 17m 4554Mi ds-cts-0 6m 394Mi ds-cts-1 5m 387Mi ds-cts-2 6m 414Mi ds-idrepo-0 98m 13822Mi ds-idrepo-1 58m 13821Mi ds-idrepo-2 36m 13815Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 54m 3773Mi idm-65858d8c4c-v78nh 57m 3641Mi lodemon-7655dd7665-d26cm 6m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 27m 493Mi 00:32:19 DEBUG --- stderr --- 00:32:19 DEBUG 00:32:21 INFO 00:32:21 INFO [loop_until]: kubectl --namespace=xlou top node 00:32:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:32:21 INFO [loop_until]: OK (rc = 0) 00:32:21 DEBUG --- stdout --- 00:32:21 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1309Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 73m 0% 5551Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 72m 0% 6822Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 72m 0% 5778Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 126m 0% 4940Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 153m 0% 2139Mi 3% 
gke-xlou-cdm-default-pool-f05840a3-tnc9 128m 0% 5019Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 159m 1% 14568Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 84m 0% 14536Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 104m 0% 14545Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 94m 0% 2017Mi 3% 00:32:21 DEBUG --- stderr --- 00:32:21 DEBUG 00:33:19 INFO 00:33:19 INFO [loop_until]: kubectl --namespace=xlou top pods 00:33:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:33:19 INFO [loop_until]: OK (rc = 0) 00:33:19 DEBUG --- stdout --- 00:33:19 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 16m 5714Mi am-55f77847b7-nhzv4 18m 4693Mi am-55f77847b7-rpq9w 19m 4554Mi ds-cts-0 6m 394Mi ds-cts-1 5m 387Mi ds-cts-2 6m 414Mi ds-idrepo-0 253m 13823Mi ds-idrepo-1 76m 13818Mi ds-idrepo-2 45m 13816Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 74m 3780Mi idm-65858d8c4c-v78nh 62m 3648Mi lodemon-7655dd7665-d26cm 5m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 33m 493Mi 00:33:19 DEBUG --- stderr --- 00:33:19 DEBUG 00:33:21 INFO 00:33:21 INFO [loop_until]: kubectl --namespace=xlou top node 00:33:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:33:21 INFO [loop_until]: OK (rc = 0) 00:33:21 DEBUG --- stdout --- 00:33:21 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1312Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 74m 0% 5553Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 73m 0% 6822Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 73m 0% 5778Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 123m 0% 4949Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 149m 0% 2143Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 132m 0% 5022Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 316m 1% 14569Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 95m 0% 14536Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 133m 0% 14543Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 99m 0% 2018Mi 3% 00:33:21 DEBUG --- stderr --- 00:33:21 DEBUG 00:34:19 INFO 00:34:19 INFO [loop_until]: kubectl --namespace=xlou top pods 00:34:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:34:19 INFO [loop_until]: OK (rc = 0) 00:34:19 DEBUG --- stdout --- 00:34:19 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 17m 5714Mi am-55f77847b7-nhzv4 18m 4693Mi am-55f77847b7-rpq9w 18m 4554Mi ds-cts-0 6m 394Mi ds-cts-1 5m 387Mi ds-cts-2 5m 414Mi ds-idrepo-0 113m 13823Mi ds-idrepo-1 53m 13821Mi ds-idrepo-2 41m 13816Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 52m 3781Mi idm-65858d8c4c-v78nh 66m 3649Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 30m 493Mi 00:34:19 DEBUG --- stderr --- 00:34:19 DEBUG 00:34:22 INFO 00:34:22 INFO [loop_until]: kubectl --namespace=xlou top node 00:34:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:34:22 INFO [loop_until]: OK (rc = 0) 00:34:22 DEBUG --- stdout --- 00:34:22 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1306Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 75m 0% 5553Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 72m 0% 6820Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 74m 0% 5780Mi 9% 
gke-xlou-cdm-default-pool-f05840a3-bf2g 136m 0% 4951Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 146m 0% 2140Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 118m 0% 5021Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 167m 1% 14566Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 87m 0% 14537Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 103m 0% 14546Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 93m 0% 2016Mi 3% 00:34:22 DEBUG --- stderr --- 00:34:22 DEBUG 00:35:19 INFO 00:35:19 INFO [loop_until]: kubectl --namespace=xlou top pods 00:35:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:35:19 INFO [loop_until]: OK (rc = 0) 00:35:19 DEBUG --- stdout --- 00:35:19 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 15m 5714Mi am-55f77847b7-nhzv4 17m 4696Mi am-55f77847b7-rpq9w 18m 4554Mi ds-cts-0 6m 394Mi ds-cts-1 5m 387Mi ds-cts-2 6m 414Mi ds-idrepo-0 107m 13823Mi ds-idrepo-1 52m 13823Mi ds-idrepo-2 37m 13816Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 61m 3782Mi idm-65858d8c4c-v78nh 44m 3649Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 30m 493Mi 00:35:19 DEBUG --- stderr --- 00:35:19 DEBUG 00:35:22 INFO 00:35:22 INFO [loop_until]: kubectl --namespace=xlou top node 00:35:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:35:22 INFO [loop_until]: OK (rc = 0) 00:35:22 DEBUG --- stdout --- 00:35:22 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1307Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 71m 0% 5551Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 6816Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 68m 0% 5777Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 125m 0% 4957Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 146m 0% 2139Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 131m 0% 5021Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 163m 1% 14569Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 85m 0% 14540Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 258m 1% 14547Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 96m 0% 2014Mi 3% 00:35:22 DEBUG --- stderr --- 00:35:22 DEBUG 00:36:19 INFO 00:36:19 INFO [loop_until]: kubectl --namespace=xlou top pods 00:36:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:36:20 INFO [loop_until]: OK (rc = 0) 00:36:20 DEBUG --- stdout --- 00:36:20 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 16m 5714Mi am-55f77847b7-nhzv4 19m 4711Mi am-55f77847b7-rpq9w 18m 4554Mi ds-cts-0 7m 395Mi ds-cts-1 5m 387Mi ds-cts-2 6m 414Mi ds-idrepo-0 107m 13824Mi ds-idrepo-1 53m 13823Mi ds-idrepo-2 44m 13817Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 55m 3783Mi idm-65858d8c4c-v78nh 61m 3650Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 29m 492Mi 00:36:20 DEBUG --- stderr --- 00:36:20 DEBUG 00:36:22 INFO 00:36:22 INFO [loop_until]: kubectl --namespace=xlou top node 00:36:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:36:22 INFO [loop_until]: OK (rc = 0) 00:36:22 DEBUG --- stdout --- 00:36:22 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1308Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 75m 0% 5550Mi 9% 
gke-xlou-cdm-default-pool-f05840a3-976h 72m 0% 6821Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 72m 0% 5800Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 135m 0% 4949Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 145m 0% 2142Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 121m 0% 5025Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 156m 0% 14573Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 90m 0% 14544Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 107m 0% 14552Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 96m 0% 2014Mi 3% 00:36:22 DEBUG --- stderr --- 00:36:22 DEBUG 00:37:20 INFO 00:37:20 INFO [loop_until]: kubectl --namespace=xlou top pods 00:37:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:37:20 INFO [loop_until]: OK (rc = 0) 00:37:20 DEBUG --- stdout --- 00:37:20 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 16m 5715Mi am-55f77847b7-nhzv4 18m 4769Mi am-55f77847b7-rpq9w 18m 4554Mi ds-cts-0 6m 395Mi ds-cts-1 5m 388Mi ds-cts-2 6m 414Mi ds-idrepo-0 112m 13824Mi ds-idrepo-1 59m 13823Mi ds-idrepo-2 197m 13817Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 56m 3788Mi idm-65858d8c4c-v78nh 63m 3651Mi lodemon-7655dd7665-d26cm 6m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 28m 493Mi 00:37:20 DEBUG --- stderr --- 00:37:20 DEBUG 00:37:22 INFO 00:37:22 INFO [loop_until]: kubectl --namespace=xlou top node 00:37:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:37:22 INFO [loop_until]: OK (rc = 0) 00:37:22 DEBUG --- stdout --- 00:37:22 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1307Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 5547Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 70m 0% 6821Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 71m 0% 5847Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 136m 0% 4950Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 150m 0% 2144Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 119m 0% 5030Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 165m 1% 14577Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 227m 1% 14544Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 109m 0% 14552Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 94m 0% 2019Mi 3% 00:37:22 DEBUG --- stderr --- 00:37:22 DEBUG 00:38:20 INFO 00:38:20 INFO [loop_until]: kubectl --namespace=xlou top pods 00:38:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:38:20 INFO [loop_until]: OK (rc = 0) 00:38:20 DEBUG --- stdout --- 00:38:20 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 15m 5714Mi am-55f77847b7-nhzv4 19m 4818Mi am-55f77847b7-rpq9w 18m 4554Mi ds-cts-0 6m 395Mi ds-cts-1 5m 388Mi ds-cts-2 6m 414Mi ds-idrepo-0 244m 13824Mi ds-idrepo-1 52m 13824Mi ds-idrepo-2 35m 13817Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 56m 3790Mi idm-65858d8c4c-v78nh 52m 3657Mi lodemon-7655dd7665-d26cm 2m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 28m 494Mi 00:38:20 DEBUG --- stderr --- 00:38:20 DEBUG 00:38:22 INFO 00:38:22 INFO [loop_until]: kubectl --namespace=xlou top node 00:38:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:38:22 INFO [loop_until]: OK (rc = 0) 00:38:22 DEBUG --- stdout --- 00:38:22 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% 
gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1309Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 73m 0% 5547Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 70m 0% 6821Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 74m 0% 5899Mi 10% gke-xlou-cdm-default-pool-f05840a3-bf2g 124m 0% 4955Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 144m 0% 2144Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 129m 0% 5040Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 307m 1% 14576Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 88m 0% 14547Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 106m 0% 14556Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 98m 0% 2018Mi 3% 00:38:22 DEBUG --- stderr --- 00:38:22 DEBUG 00:39:20 INFO 00:39:20 INFO [loop_until]: kubectl --namespace=xlou top pods 00:39:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:39:20 INFO [loop_until]: OK (rc = 0) 00:39:20 DEBUG --- stdout --- 00:39:20 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 15m 5714Mi am-55f77847b7-nhzv4 19m 4853Mi am-55f77847b7-rpq9w 17m 4554Mi ds-cts-0 6m 395Mi ds-cts-1 5m 388Mi ds-cts-2 6m 414Mi ds-idrepo-0 112m 13824Mi ds-idrepo-1 54m 13824Mi ds-idrepo-2 47m 13817Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 59m 3790Mi idm-65858d8c4c-v78nh 52m 3659Mi lodemon-7655dd7665-d26cm 5m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 28m 493Mi 00:39:20 DEBUG --- stderr --- 00:39:20 DEBUG 00:39:22 INFO 00:39:22 INFO [loop_until]: kubectl --namespace=xlou top node 00:39:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:39:22 INFO [loop_until]: OK (rc = 0) 00:39:22 DEBUG --- stdout --- 00:39:22 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1310Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 73m 0% 5548Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 72m 0% 6820Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 75m 0% 5949Mi 10% gke-xlou-cdm-default-pool-f05840a3-bf2g 125m 0% 4958Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 146m 0% 2147Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 122m 0% 5030Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 167m 1% 14580Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 101m 0% 14551Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 108m 0% 14553Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 96m 0% 2019Mi 3% 00:39:22 DEBUG --- stderr --- 00:39:22 DEBUG 00:40:20 INFO 00:40:20 INFO [loop_until]: kubectl --namespace=xlou top pods 00:40:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:40:20 INFO [loop_until]: OK (rc = 0) 00:40:20 DEBUG --- stdout --- 00:40:20 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 15m 5714Mi am-55f77847b7-nhzv4 18m 4908Mi am-55f77847b7-rpq9w 17m 4554Mi ds-cts-0 5m 395Mi ds-cts-1 5m 388Mi ds-cts-2 6m 414Mi ds-idrepo-0 104m 13824Mi ds-idrepo-1 551m 13698Mi ds-idrepo-2 47m 13817Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 59m 3791Mi idm-65858d8c4c-v78nh 54m 3654Mi lodemon-7655dd7665-d26cm 7m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 28m 493Mi 00:40:20 DEBUG --- stderr --- 00:40:20 DEBUG 00:40:22 INFO 00:40:22 INFO [loop_until]: kubectl --namespace=xlou top node 00:40:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:40:22 INFO 
[loop_until]: OK (rc = 0) 00:40:22 DEBUG --- stdout --- 00:40:22 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1310Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 73m 0% 5550Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 72m 0% 6820Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 72m 0% 5994Mi 10% gke-xlou-cdm-default-pool-f05840a3-bf2g 123m 0% 4955Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 148m 0% 2142Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 125m 0% 5031Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 158m 0% 14580Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 100m 0% 14547Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 587m 3% 14431Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 95m 0% 2020Mi 3% 00:40:22 DEBUG --- stderr --- 00:40:22 DEBUG 00:41:20 INFO 00:41:20 INFO [loop_until]: kubectl --namespace=xlou top pods 00:41:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:41:20 INFO [loop_until]: OK (rc = 0) 00:41:20 DEBUG --- stdout --- 00:41:20 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 7m 5714Mi am-55f77847b7-nhzv4 9m 4925Mi am-55f77847b7-rpq9w 8m 4554Mi ds-cts-0 7m 395Mi ds-cts-1 5m 388Mi ds-cts-2 7m 414Mi ds-idrepo-0 149m 13681Mi ds-idrepo-1 13m 13685Mi ds-idrepo-2 115m 13681Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 6m 3792Mi idm-65858d8c4c-v78nh 7m 3655Mi lodemon-7655dd7665-d26cm 1m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 33m 495Mi 00:41:20 DEBUG --- stderr --- 00:41:20 DEBUG 00:41:22 INFO 00:41:22 INFO [loop_until]: kubectl --namespace=xlou top node 00:41:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:41:23 INFO [loop_until]: OK (rc = 0) 00:41:23 DEBUG --- stdout --- 00:41:23 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1307Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 5551Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 62m 0% 6817Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 6011Mi 10% gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 4956Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 122m 0% 2146Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 67m 0% 5030Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1090Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 162m 1% 14442Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 159m 1% 14417Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 168m 1% 14420Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 96m 0% 2021Mi 3% 00:41:23 DEBUG --- stderr --- 00:41:23 DEBUG 00:42:20 INFO 00:42:20 INFO [loop_until]: kubectl --namespace=xlou top pods 00:42:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:42:20 INFO [loop_until]: OK (rc = 0) 00:42:20 DEBUG --- stdout --- 00:42:20 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 8m 5714Mi am-55f77847b7-nhzv4 10m 4936Mi am-55f77847b7-rpq9w 8m 4554Mi ds-cts-0 6m 395Mi ds-cts-1 6m 388Mi ds-cts-2 6m 414Mi ds-idrepo-0 21m 13681Mi ds-idrepo-1 14m 13685Mi ds-idrepo-2 8m 13681Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 6m 3791Mi idm-65858d8c4c-v78nh 8m 3654Mi lodemon-7655dd7665-d26cm 6m 66Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 1m 103Mi 00:42:20 DEBUG --- stderr --- 00:42:20 DEBUG 00:42:23 INFO 00:42:23 INFO [loop_until]: kubectl --namespace=xlou 
top node 00:42:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:42:23 INFO [loop_until]: OK (rc = 0) 00:42:23 DEBUG --- stdout --- 00:42:23 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1309Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 61m 0% 5552Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 6816Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 6017Mi 10% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 4951Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 146m 0% 2147Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 68m 0% 5034Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1090Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 65m 0% 14441Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 59m 0% 14417Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 71m 0% 14432Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 68m 0% 1636Mi 2% 00:42:23 DEBUG --- stderr --- 00:42:23 DEBUG 127.0.0.1 - - [12/Aug/2023 00:42:28] "GET /monitoring/average?start_time=23-08-11_23:11:57&stop_time=23-08-11_23:40:28 HTTP/1.1" 200 - 00:43:20 INFO 00:43:20 INFO [loop_until]: kubectl --namespace=xlou top pods 00:43:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:43:20 INFO [loop_until]: OK (rc = 0) 00:43:20 DEBUG --- stdout --- 00:43:20 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 4Mi am-55f77847b7-7kvs5 6m 5714Mi am-55f77847b7-nhzv4 9m 4946Mi am-55f77847b7-rpq9w 8m 4554Mi ds-cts-0 7m 395Mi ds-cts-1 5m 388Mi ds-cts-2 5m 414Mi ds-idrepo-0 12m 13682Mi ds-idrepo-1 13m 13685Mi ds-idrepo-2 8m 13683Mi end-user-ui-6845bc78c7-kj9rz 1m 4Mi idm-65858d8c4c-h7xxp 5m 3791Mi idm-65858d8c4c-v78nh 7m 3654Mi lodemon-7655dd7665-d26cm 8m 67Mi login-ui-74d6fb46c-ncp99 1m 3Mi overseer-0-56868bb8f7-f7jz9 2m 103Mi 00:43:20 DEBUG --- stderr --- 00:43:20 DEBUG 00:43:23 INFO 00:43:23 INFO [loop_until]: kubectl --namespace=xlou top node 00:43:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:43:23 INFO [loop_until]: OK (rc = 0) 00:43:23 DEBUG --- stdout --- 00:43:23 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 86m 0% 1309Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 5553Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 71m 0% 6818Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 71m 0% 6030Mi 10% gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 4956Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 131m 0% 2143Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 68m 0% 5033Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 69m 0% 14453Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 63m 0% 14418Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 67m 0% 14422Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 77m 0% 1633Mi 2% 00:43:23 DEBUG --- stderr --- 00:43:23 DEBUG 00:44:20 INFO 00:44:20 INFO [loop_until]: kubectl --namespace=xlou top pods 00:44:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 00:44:20 INFO [loop_until]: OK (rc = 0) 00:44:20 DEBUG --- stdout --- 00:44:20 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-sdd42 1m 6Mi am-55f77847b7-7kvs5 8m 5715Mi am-55f77847b7-nhzv4 9m 4957Mi am-55f77847b7-rpq9w 8m 4554Mi ds-cts-0 7m 395Mi ds-cts-1 4m 388Mi ds-cts-2 6m 414Mi ds-idrepo-0 274m 13683Mi ds-idrepo-1 126m 13686Mi ds-idrepo-2 235m 13682Mi end-user-ui-6845bc78c7-kj9rz 1m 5Mi idm-65858d8c4c-h7xxp 5m 3791Mi idm-65858d8c4c-v78nh 6m 
3654Mi
lodemon-7655dd7665-d26cm 5m 67Mi
login-ui-74d6fb46c-ncp99 1m 4Mi
overseer-0-56868bb8f7-f7jz9 544m 110Mi
00:44:20 DEBUG --- stderr ---
00:44:20 DEBUG
00:44:23 INFO
00:44:23 INFO [loop_until]: kubectl --namespace=xlou top node
00:44:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
00:44:23 INFO [loop_until]: OK (rc = 0)
00:44:23 DEBUG --- stdout ---
00:44:23 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 83m 0% 1312Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 62m 0% 5552Mi 9%
gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 6818Mi 11%
gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 6044Mi 10%
gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 4956Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 125m 0% 2140Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 68m 0% 5030Mi 8%
gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1110Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1084Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 263m 1% 14444Mi 24%
gke-xlou-cdm-ds-32e4dcb1-b374 162m 1% 14420Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 1106Mi 1%
gke-xlou-cdm-ds-32e4dcb1-x4wx 191m 1% 14425Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 530m 3% 1642Mi 2%
00:44:23 DEBUG --- stderr ---
00:44:23 DEBUG
00:44:52 INFO Finished: True
00:44:52 INFO Waiting for threads to register finish flag
00:45:23 INFO Done. Have a nice day! :)
127.0.0.1 - - [12/Aug/2023 00:45:23] "GET /monitoring/stop HTTP/1.1" 200 -
00:45:26 INFO File /tmp/lodemon_data-23-08-11_22:08:26/Cpu_cores_used_per_pod.json does not exist. Skipping...
00:45:26 INFO File /tmp/lodemon_data-23-08-11_22:08:26/Memory_usage_per_pod.json does not exist. Skipping...
00:45:26 INFO File /tmp/lodemon_data-23-08-11_22:08:26/Disk_tps_read_per_pod.json does not exist. Skipping...
00:45:26 INFO File /tmp/lodemon_data-23-08-11_22:08:26/Disk_tps_writes_per_pod.json does not exist. Skipping...
00:45:26 INFO File /tmp/lodemon_data-23-08-11_22:08:26/Cpu_cores_used_per_node.json does not exist. Skipping...
00:45:26 INFO File /tmp/lodemon_data-23-08-11_22:08:26/Memory_usage_used_per_node.json does not exist. Skipping...
00:45:26 INFO File /tmp/lodemon_data-23-08-11_22:08:26/Cpu_iowait_per_node.json does not exist. Skipping...
00:45:26 INFO File /tmp/lodemon_data-23-08-11_22:08:26/Network_receive_per_node.json does not exist. Skipping...
00:45:26 INFO File /tmp/lodemon_data-23-08-11_22:08:26/Network_transmit_per_node.json does not exist. Skipping...
00:45:26 INFO File /tmp/lodemon_data-23-08-11_22:08:26/am_cts_task_count_token_session.json does not exist. Skipping...
00:45:26 INFO File /tmp/lodemon_data-23-08-11_22:08:26/am_authentication_rate.json does not exist. Skipping...
00:45:26 INFO File /tmp/lodemon_data-23-08-11_22:08:26/ds_db_cache_misses_internal_nodes(backend=amCts).json does not exist. Skipping...
00:45:26 INFO File /tmp/lodemon_data-23-08-11_22:08:26/ds_db_cache_misses_internal_nodes(backend=amIdentityStore).json does not exist. Skipping...
00:45:26 INFO File /tmp/lodemon_data-23-08-11_22:08:26/ds_db_cache_misses_internal_nodes(backend=cfgStore).json does not exist. Skipping...
00:45:26 INFO File /tmp/lodemon_data-23-08-11_22:08:26/ds_db_cache_misses_internal_nodes(backend=idmRepo).json does not exist. Skipping...
00:45:26 INFO File /tmp/lodemon_data-23-08-11_22:08:26/am_authentication_count_per_pod.json does not exist. Skipping...
00:45:26 INFO File /tmp/lodemon_data-23-08-11_22:08:26/Cts_reaper_Deletion_count.json does not exist. Skipping...
00:45:26 INFO File /tmp/lodemon_data-23-08-11_22:08:26/AM_oauth2_authorization_codes.json does not exist. Skipping...
00:45:26 INFO File /tmp/lodemon_data-23-08-11_22:08:26/ds_backend_entries_deleted_amCts.json does not exist. Skipping...
00:45:26 INFO File /tmp/lodemon_data-23-08-11_22:08:26/ds_pods_replication_delay.json does not exist. Skipping...
00:45:26 INFO File /tmp/lodemon_data-23-08-11_22:08:26/am_cts_reaper_cache_size.json does not exist. Skipping...
00:45:26 INFO File /tmp/lodemon_data-23-08-11_22:08:26/am_cts_reaper_search_seconds_total.json does not exist. Skipping...
00:45:26 INFO File /tmp/lodemon_data-23-08-11_22:08:26/ds_replication_replica_replayed_updates_conflicts_resolved.json does not exist. Skipping...
00:45:26 INFO File /tmp/lodemon_data-23-08-11_22:08:26/node_disk_read_bytes_total.json does not exist. Skipping...
00:45:26 INFO File /tmp/lodemon_data-23-08-11_22:08:26/node_disk_written_bytes_total.json does not exist. Skipping...
00:45:26 INFO File /tmp/lodemon_data-23-08-11_22:08:26/ds_backend_entry_count.json does not exist. Skipping...
00:45:26 INFO File /tmp/lodemon_data-23-08-11_22:08:26/node_disk_io_time_seconds_total.json does not exist. Skipping...
127.0.0.1 - - [12/Aug/2023 00:45:28] "GET /monitoring/process HTTP/1.1" 200 -
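Note: the per-pod and per-node figures in this log come from polling kubectl top pods and kubectl top node on a fixed interval (roughly one sample per minute) until the run reports Finished. The snippet below is a minimal sketch of that polling pattern only; it is not the lodemon source, and the names poll_top_pods, NAMESPACE and INTERVAL_SECONDS are assumptions introduced for illustration.

#!/usr/bin/env python3
# Minimal sketch (not the lodemon source) of the fixed-interval polling
# pattern visible in the log above. NAMESPACE, INTERVAL_SECONDS and
# poll_top_pods are illustrative assumptions.
import subprocess
import time

NAMESPACE = "xlou"        # namespace used throughout this log
INTERVAL_SECONDS = 60     # the log shows roughly one sample per minute


def poll_top_pods(namespace: str) -> dict:
    """Return {pod_name: (cpu_millicores, memory_mib)} from `kubectl top pods`."""
    out = subprocess.run(
        ["kubectl", "--namespace", namespace, "top", "pods", "--no-headers"],
        capture_output=True, text=True, check=True,
    ).stdout
    usage = {}
    for line in out.splitlines():
        # Each row looks like: "am-55f77847b7-7kvs5   20m   5556Mi"
        name, cpu, mem = line.split()[:3]
        usage[name] = (int(cpu.rstrip("m")), int(mem.rstrip("Mi")))
    return usage


if __name__ == "__main__":
    for _ in range(3):    # take a few samples for illustration
        for pod, (cpu, mem) in poll_top_pods(NAMESPACE).items():
            print(f"{pod}: {cpu}m CPU, {mem}Mi memory")
        time.sleep(INTERVAL_SECONDS)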