====================================================================================================
========================================= Pod describe =========================================
====================================================================================================
Name:             lodemon-65c77dbb64-7jwvp
Namespace:        xlou
Priority:         0
Node:             gke-xlou-cdm-default-pool-f05840a3-2nsn/10.142.0.46
Start Time:       Sun, 13 Aug 2023 04:29:04 +0000
Labels:           app=lodemon
                  app.kubernetes.io/name=lodemon
                  pod-template-hash=65c77dbb64
                  skaffold.dev/run-id=bdd62a18-a508-48ed-b7f3-c8f0bd40130f
Annotations:
Status:           Running
IP:               10.106.45.104
IPs:
  IP:  10.106.45.104
Controlled By:    ReplicaSet/lodemon-65c77dbb64
Containers:
  lodemon:
    Container ID:   containerd://3bdd51e5855f67a460962a8d569c6aa9e60ffb0a1c5717c074f6143d03077ec2
    Image:          gcr.io/engineeringpit/lodestar-images/lodestarbox:6c23848450de3f8e82f0a619a86abcd91fc890c6
    Image ID:       gcr.io/engineeringpit/lodestar-images/lodestarbox@sha256:f419b98ce988c016f788d178b318b601ed56b4ebb6e1a8df68b3ff2a986af79d
    Port:           8080/TCP
    Host Port:      0/TCP
    Command:
      python3
    Args:
      /lodestar/scripts/lodemon_run.py
      -W
      default
    State:          Running
      Started:      Sun, 13 Aug 2023 04:29:05 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  2Gi
    Requests:
      cpu:     1
      memory:  1Gi
    Liveness:   exec [cat /tmp/lodemon_alive] delay=20s timeout=1s period=10s #success=1 #failure=3
    Readiness:  exec [cat /tmp/lodemon_alive] delay=20s timeout=1s period=10s #success=1 #failure=3
    Environment:
      SKAFFOLD_PROFILE:  medium
    Mounts:
      /lodestar/config/config.yaml from config (rw,path="config.yaml")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zjd6p (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      lodemon-config
    Optional:  false
  kube-api-access-zjd6p:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
====================================================================================================
=========================================== Pod logs ===========================================
====================================================================================================
05:29:05 INFO 05:29:05 INFO --------------------- Get expected number of pods --------------------- 05:29:05 INFO 05:29:05 INFO [loop_until]: kubectl --namespace=xlou get deployments --selector app=am --output jsonpath={.items[*].spec.replicas} 05:29:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:06 INFO [loop_until]: OK (rc = 0) 05:29:06 DEBUG --- stdout --- 05:29:06 DEBUG 3 05:29:06 DEBUG --- stderr --- 05:29:06 DEBUG 05:29:06 INFO 05:29:06 INFO ---------------------------- Get pod list ---------------------------- 05:29:06 INFO 05:29:06 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=am --output jsonpath={.items[*].metadata.name} 05:29:06 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0] 05:29:06 INFO [loop_until]: OK (rc = 0) 05:29:06 DEBUG --- stdout --- 05:29:06 DEBUG am-55f77847b7-bb6x8 am-55f77847b7-ch6mt am-55f77847b7-gbbjq 05:29:06 DEBUG --- stderr --- 05:29:06 DEBUG 05:29:06 INFO 05:29:06 INFO -------------- Check pod
am-55f77847b7-bb6x8 is running -------------- 05:29:06 INFO 05:29:06 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-bb6x8 -o=jsonpath={.status.phase} | grep "Running" 05:29:06 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:29:06 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:29:06 INFO [loop_until]: OK (rc = 0) 05:29:06 DEBUG --- stdout --- 05:29:06 DEBUG Running 05:29:06 DEBUG --- stderr --- 05:29:06 DEBUG 05:29:06 INFO 05:29:06 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-bb6x8 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 05:29:06 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:29:06 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:29:06 INFO [loop_until]: OK (rc = 0) 05:29:06 DEBUG --- stdout --- 05:29:06 DEBUG true 05:29:06 DEBUG --- stderr --- 05:29:06 DEBUG 05:29:06 INFO 05:29:06 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-bb6x8 --output jsonpath={.status.startTime} 05:29:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:06 INFO [loop_until]: OK (rc = 0) 05:29:06 DEBUG --- stdout --- 05:29:06 DEBUG 2023-08-13T04:19:26Z 05:29:06 DEBUG --- stderr --- 05:29:06 DEBUG 05:29:06 INFO 05:29:06 INFO ------- Check pod am-55f77847b7-bb6x8 filesystem is accessible ------- 05:29:06 INFO 05:29:06 INFO [loop_until]: kubectl --namespace=xlou exec am-55f77847b7-bb6x8 --container openam -- ls / | grep "bin" 05:29:06 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:29:06 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:29:06 INFO [loop_until]: OK (rc = 0) 05:29:06 DEBUG --- stdout --- 05:29:06 DEBUG bin boot dev entrypoint.sh etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var 05:29:06 DEBUG --- stderr --- 05:29:06 DEBUG 05:29:06 INFO 05:29:06 INFO ------------- Check pod am-55f77847b7-bb6x8 restart count ------------- 05:29:06 INFO 05:29:06 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-bb6x8 --output jsonpath={.status.containerStatuses[*].restartCount} 05:29:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:06 INFO [loop_until]: OK (rc = 0) 05:29:06 DEBUG --- stdout --- 05:29:06 DEBUG 0 05:29:06 DEBUG --- stderr --- 05:29:06 DEBUG 05:29:06 INFO Pod am-55f77847b7-bb6x8 has been restarted 0 times. 
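The [loop_until] entries above record a simple retry wrapper: a shell command is re-run every `interval` seconds until it exits with one of the expected return codes (and, for the grep-style checks, until the expected pattern is found), or until `max_time` elapses. Below is a minimal Python sketch of that pattern; the name loop_until matches the log prefix, but the signature and internals are assumptions, not the actual lodemon implementation.

    import subprocess
    import time

    def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,), pattern=None):
        """Re-run `cmd` until it returns an expected rc (and, optionally, until
        `pattern` appears in stdout), or until `max_time` seconds have passed.
        Returns the final CompletedProcess. Sketch only; assumed behaviour."""
        deadline = time.monotonic() + max_time
        while True:
            # shell=True because the logged commands contain pipes (e.g. "... | grep Running")
            result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
            ok = result.returncode in expected_rc
            if ok and pattern is not None:
                ok = pattern in result.stdout
            if ok or time.monotonic() >= deadline:
                return result
            time.sleep(interval)

    # Example mirroring the first per-pod check in the log:
    rc = loop_until(
        'kubectl --namespace=xlou get pods am-55f77847b7-bb6x8 '
        '-o=jsonpath={.status.phase} | grep "Running"',
        max_time=360, interval=5, expected_rc=(0,), pattern="Running",
    )
    print(rc.returncode, rc.stdout.strip())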
05:29:06 INFO 05:29:06 INFO -------------- Check pod am-55f77847b7-ch6mt is running -------------- 05:29:06 INFO 05:29:06 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-ch6mt -o=jsonpath={.status.phase} | grep "Running" 05:29:06 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:29:06 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:29:06 INFO [loop_until]: OK (rc = 0) 05:29:06 DEBUG --- stdout --- 05:29:06 DEBUG Running 05:29:06 DEBUG --- stderr --- 05:29:06 DEBUG 05:29:06 INFO 05:29:06 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-ch6mt -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 05:29:06 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:29:06 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:29:06 INFO [loop_until]: OK (rc = 0) 05:29:06 DEBUG --- stdout --- 05:29:06 DEBUG true 05:29:06 DEBUG --- stderr --- 05:29:06 DEBUG 05:29:06 INFO 05:29:06 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-ch6mt --output jsonpath={.status.startTime} 05:29:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:06 INFO [loop_until]: OK (rc = 0) 05:29:06 DEBUG --- stdout --- 05:29:06 DEBUG 2023-08-13T04:19:26Z 05:29:06 DEBUG --- stderr --- 05:29:06 DEBUG 05:29:06 INFO 05:29:06 INFO ------- Check pod am-55f77847b7-ch6mt filesystem is accessible ------- 05:29:06 INFO 05:29:06 INFO [loop_until]: kubectl --namespace=xlou exec am-55f77847b7-ch6mt --container openam -- ls / | grep "bin" 05:29:06 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:29:06 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:29:06 INFO [loop_until]: OK (rc = 0) 05:29:06 DEBUG --- stdout --- 05:29:06 DEBUG bin boot dev entrypoint.sh etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var 05:29:06 DEBUG --- stderr --- 05:29:06 DEBUG 05:29:06 INFO 05:29:06 INFO ------------- Check pod am-55f77847b7-ch6mt restart count ------------- 05:29:06 INFO 05:29:06 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-ch6mt --output jsonpath={.status.containerStatuses[*].restartCount} 05:29:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:06 INFO [loop_until]: OK (rc = 0) 05:29:06 DEBUG --- stdout --- 05:29:06 DEBUG 0 05:29:06 DEBUG --- stderr --- 05:29:06 DEBUG 05:29:06 INFO Pod am-55f77847b7-ch6mt has been restarted 0 times. 
05:29:06 INFO 05:29:06 INFO -------------- Check pod am-55f77847b7-gbbjq is running -------------- 05:29:06 INFO 05:29:06 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-gbbjq -o=jsonpath={.status.phase} | grep "Running" 05:29:06 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:29:06 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:29:06 INFO [loop_until]: OK (rc = 0) 05:29:06 DEBUG --- stdout --- 05:29:06 DEBUG Running 05:29:06 DEBUG --- stderr --- 05:29:06 DEBUG 05:29:06 INFO 05:29:06 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-gbbjq -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 05:29:06 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:29:06 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:29:06 INFO [loop_until]: OK (rc = 0) 05:29:06 DEBUG --- stdout --- 05:29:06 DEBUG true 05:29:06 DEBUG --- stderr --- 05:29:06 DEBUG 05:29:06 INFO 05:29:06 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-gbbjq --output jsonpath={.status.startTime} 05:29:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:06 INFO [loop_until]: OK (rc = 0) 05:29:06 DEBUG --- stdout --- 05:29:06 DEBUG 2023-08-13T04:19:26Z 05:29:06 DEBUG --- stderr --- 05:29:06 DEBUG 05:29:06 INFO 05:29:06 INFO ------- Check pod am-55f77847b7-gbbjq filesystem is accessible ------- 05:29:06 INFO 05:29:06 INFO [loop_until]: kubectl --namespace=xlou exec am-55f77847b7-gbbjq --container openam -- ls / | grep "bin" 05:29:06 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:29:07 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:29:07 INFO [loop_until]: OK (rc = 0) 05:29:07 DEBUG --- stdout --- 05:29:07 DEBUG bin boot dev entrypoint.sh etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var 05:29:07 DEBUG --- stderr --- 05:29:07 DEBUG 05:29:07 INFO 05:29:07 INFO ------------- Check pod am-55f77847b7-gbbjq restart count ------------- 05:29:07 INFO 05:29:07 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-gbbjq --output jsonpath={.status.containerStatuses[*].restartCount} 05:29:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:07 INFO [loop_until]: OK (rc = 0) 05:29:07 DEBUG --- stdout --- 05:29:07 DEBUG 0 05:29:07 DEBUG --- stderr --- 05:29:07 DEBUG 05:29:07 INFO Pod am-55f77847b7-gbbjq has been restarted 0 times. 
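Each per-pod block above runs the same sequence of kubectl jsonpath queries: pod phase, container readiness, start time, and restart count. The sketch below collects the same fields; only the kubectl invocations are taken from the log, while the helper names and the dict layout are illustrative assumptions.

    import subprocess

    NAMESPACE = "xlou"

    def kubectl_jsonpath(pod, path):
        """Evaluate the jsonpath expression `path` against `pod` (sketch)."""
        out = subprocess.run(
            ["kubectl", f"--namespace={NAMESPACE}", "get", "pod", pod,
             "--output", f"jsonpath={path}"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip()

    def check_pod(pod):
        """Collect the fields the log inspects for every pod."""
        return {
            "phase": kubectl_jsonpath(pod, "{.status.phase}"),                       # expect "Running"
            "ready": kubectl_jsonpath(pod, "{.status.containerStatuses[*].ready}"),  # expect "true"
            "startTime": kubectl_jsonpath(pod, "{.status.startTime}"),
            "restarts": kubectl_jsonpath(pod, "{.status.containerStatuses[*].restartCount}"),
        }

    if __name__ == "__main__":
        for pod in ("am-55f77847b7-bb6x8", "am-55f77847b7-ch6mt", "am-55f77847b7-gbbjq"):
            print(pod, check_pod(pod))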
05:29:07 INFO 05:29:07 INFO --------------------- Get expected number of pods --------------------- 05:29:07 INFO 05:29:07 INFO [loop_until]: kubectl --namespace=xlou get deployment --selector app=idm --output jsonpath={.items[*].spec.replicas} 05:29:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:07 INFO [loop_until]: OK (rc = 0) 05:29:07 DEBUG --- stdout --- 05:29:07 DEBUG 2 05:29:07 DEBUG --- stderr --- 05:29:07 DEBUG 05:29:07 INFO 05:29:07 INFO ---------------------------- Get pod list ---------------------------- 05:29:07 INFO 05:29:07 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=idm --output jsonpath={.items[*].metadata.name} 05:29:07 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0] 05:29:07 INFO [loop_until]: OK (rc = 0) 05:29:07 DEBUG --- stdout --- 05:29:07 DEBUG idm-65858d8c4c-9pfjc idm-65858d8c4c-h9wbp 05:29:07 DEBUG --- stderr --- 05:29:07 DEBUG 05:29:07 INFO 05:29:07 INFO -------------- Check pod idm-65858d8c4c-9pfjc is running -------------- 05:29:07 INFO 05:29:07 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-9pfjc -o=jsonpath={.status.phase} | grep "Running" 05:29:07 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:29:07 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:29:07 INFO [loop_until]: OK (rc = 0) 05:29:07 DEBUG --- stdout --- 05:29:07 DEBUG Running 05:29:07 DEBUG --- stderr --- 05:29:07 DEBUG 05:29:07 INFO 05:29:07 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-9pfjc -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 05:29:07 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:29:07 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:29:07 INFO [loop_until]: OK (rc = 0) 05:29:07 DEBUG --- stdout --- 05:29:07 DEBUG true 05:29:07 DEBUG --- stderr --- 05:29:07 DEBUG 05:29:07 INFO 05:29:07 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-9pfjc --output jsonpath={.status.startTime} 05:29:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:07 INFO [loop_until]: OK (rc = 0) 05:29:07 DEBUG --- stdout --- 05:29:07 DEBUG 2023-08-13T04:19:26Z 05:29:07 DEBUG --- stderr --- 05:29:07 DEBUG 05:29:07 INFO 05:29:07 INFO ------- Check pod idm-65858d8c4c-9pfjc filesystem is accessible ------- 05:29:07 INFO 05:29:07 INFO [loop_until]: kubectl --namespace=xlou exec idm-65858d8c4c-9pfjc --container openidm -- ls / | grep "bin" 05:29:07 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:29:07 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:29:07 INFO [loop_until]: OK (rc = 0) 05:29:07 DEBUG --- stdout --- 05:29:07 DEBUG Dockerfile.java-11 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 05:29:07 DEBUG --- stderr --- 05:29:07 DEBUG 05:29:07 INFO 05:29:07 INFO ------------ Check pod idm-65858d8c4c-9pfjc restart count ------------ 05:29:07 INFO 05:29:07 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-9pfjc --output jsonpath={.status.containerStatuses[*].restartCount} 05:29:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:07 INFO [loop_until]: OK (rc = 0) 05:29:07 DEBUG --- stdout --- 05:29:07 DEBUG 0 05:29:07 DEBUG --- stderr --- 05:29:07 DEBUG 05:29:07 INFO Pod idm-65858d8c4c-9pfjc has been restarted 0 times. 
05:29:07 INFO 05:29:07 INFO -------------- Check pod idm-65858d8c4c-h9wbp is running -------------- 05:29:07 INFO 05:29:07 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-h9wbp -o=jsonpath={.status.phase} | grep "Running" 05:29:07 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:29:07 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:29:07 INFO [loop_until]: OK (rc = 0) 05:29:07 DEBUG --- stdout --- 05:29:07 DEBUG Running 05:29:07 DEBUG --- stderr --- 05:29:07 DEBUG 05:29:07 INFO 05:29:07 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-h9wbp -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 05:29:07 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:29:07 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:29:07 INFO [loop_until]: OK (rc = 0) 05:29:07 DEBUG --- stdout --- 05:29:07 DEBUG true 05:29:07 DEBUG --- stderr --- 05:29:07 DEBUG 05:29:07 INFO 05:29:07 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-h9wbp --output jsonpath={.status.startTime} 05:29:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:07 INFO [loop_until]: OK (rc = 0) 05:29:07 DEBUG --- stdout --- 05:29:07 DEBUG 2023-08-13T04:19:26Z 05:29:07 DEBUG --- stderr --- 05:29:07 DEBUG 05:29:07 INFO 05:29:07 INFO ------- Check pod idm-65858d8c4c-h9wbp filesystem is accessible ------- 05:29:07 INFO 05:29:07 INFO [loop_until]: kubectl --namespace=xlou exec idm-65858d8c4c-h9wbp --container openidm -- ls / | grep "bin" 05:29:07 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:29:07 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:29:07 INFO [loop_until]: OK (rc = 0) 05:29:07 DEBUG --- stdout --- 05:29:07 DEBUG Dockerfile.java-11 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 05:29:07 DEBUG --- stderr --- 05:29:07 DEBUG 05:29:07 INFO 05:29:07 INFO ------------ Check pod idm-65858d8c4c-h9wbp restart count ------------ 05:29:07 INFO 05:29:07 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-h9wbp --output jsonpath={.status.containerStatuses[*].restartCount} 05:29:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:07 INFO [loop_until]: OK (rc = 0) 05:29:07 DEBUG --- stdout --- 05:29:07 DEBUG 0 05:29:07 DEBUG --- stderr --- 05:29:07 DEBUG 05:29:07 INFO Pod idm-65858d8c4c-h9wbp has been restarted 0 times. 
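The "Get expected number of pods" / "Get pod list" steps compare the replica count declared on the Deployment (or StatefulSet, for the ds-* pods further down) with the pods actually returned by the same label selector. A hedged sketch of that comparison follows; the helper named expected_vs_actual is not part of the original tool.

    import subprocess

    def _kubectl(*args):
        return subprocess.run(("kubectl", "--namespace=xlou") + args,
                              capture_output=True, text=True, check=True).stdout.strip()

    def expected_vs_actual(kind, selector):
        """Compare declared replicas with pods matching the same selector (sketch).
        `kind` is "deployments" or "statefulsets", as used in the log."""
        replicas = _kubectl("get", kind, "--selector", selector,
                            "--output", "jsonpath={.items[*].spec.replicas}")
        expected = sum(int(n) for n in replicas.split())
        pods = _kubectl("get", "pods", "--selector", selector,
                        "--output", "jsonpath={.items[*].metadata.name}").split()
        return expected, pods

    expected, pods = expected_vs_actual("deployments", "app=idm")
    assert expected == len(pods), f"expected {expected} idm pods, found {len(pods)}"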
05:29:07 INFO 05:29:07 INFO --------------------- Get expected number of pods --------------------- 05:29:07 INFO 05:29:07 INFO [loop_until]: kubectl --namespace=xlou get statefulsets --selector app=ds-idrepo --output jsonpath={.items[*].spec.replicas} 05:29:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:08 INFO [loop_until]: OK (rc = 0) 05:29:08 DEBUG --- stdout --- 05:29:08 DEBUG 3 05:29:08 DEBUG --- stderr --- 05:29:08 DEBUG 05:29:08 INFO 05:29:08 INFO ---------------------------- Get pod list ---------------------------- 05:29:08 INFO 05:29:08 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=ds-idrepo --output jsonpath={.items[*].metadata.name} 05:29:08 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0] 05:29:08 INFO [loop_until]: OK (rc = 0) 05:29:08 DEBUG --- stdout --- 05:29:08 DEBUG ds-idrepo-0 ds-idrepo-1 ds-idrepo-2 05:29:08 DEBUG --- stderr --- 05:29:08 DEBUG 05:29:08 INFO 05:29:08 INFO ------------------ Check pod ds-idrepo-0 is running ------------------ 05:29:08 INFO 05:29:08 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.phase} | grep "Running" 05:29:08 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:29:08 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:29:08 INFO [loop_until]: OK (rc = 0) 05:29:08 DEBUG --- stdout --- 05:29:08 DEBUG Running 05:29:08 DEBUG --- stderr --- 05:29:08 DEBUG 05:29:08 INFO 05:29:08 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 05:29:08 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:29:08 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:29:08 INFO [loop_until]: OK (rc = 0) 05:29:08 DEBUG --- stdout --- 05:29:08 DEBUG true 05:29:08 DEBUG --- stderr --- 05:29:08 DEBUG 05:29:08 INFO 05:29:08 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 --output jsonpath={.status.startTime} 05:29:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:08 INFO [loop_until]: OK (rc = 0) 05:29:08 DEBUG --- stdout --- 05:29:08 DEBUG 2023-08-13T03:46:12Z 05:29:08 DEBUG --- stderr --- 05:29:08 DEBUG 05:29:08 INFO 05:29:08 INFO ----------- Check pod ds-idrepo-0 filesystem is accessible ----------- 05:29:08 INFO 05:29:08 INFO [loop_until]: kubectl --namespace=xlou exec ds-idrepo-0 --container ds -- ls / | grep "bin" 05:29:08 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:29:08 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:29:08 INFO [loop_until]: OK (rc = 0) 05:29:08 DEBUG --- stdout --- 05:29:08 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 05:29:08 DEBUG --- stderr --- 05:29:08 DEBUG 05:29:08 INFO 05:29:08 INFO ----------------- Check pod ds-idrepo-0 restart count ----------------- 05:29:08 INFO 05:29:08 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 --output jsonpath={.status.containerStatuses[*].restartCount} 05:29:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:08 INFO [loop_until]: OK (rc = 0) 05:29:08 DEBUG --- stdout --- 05:29:08 DEBUG 0 05:29:08 DEBUG --- stderr --- 05:29:08 DEBUG 05:29:08 INFO Pod ds-idrepo-0 has been restarted 0 times. 
05:29:08 INFO 05:29:08 INFO ------------------ Check pod ds-idrepo-1 is running ------------------ 05:29:08 INFO 05:29:08 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.phase} | grep "Running" 05:29:08 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:29:08 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:29:08 INFO [loop_until]: OK (rc = 0) 05:29:08 DEBUG --- stdout --- 05:29:08 DEBUG Running 05:29:08 DEBUG --- stderr --- 05:29:08 DEBUG 05:29:08 INFO 05:29:08 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 05:29:08 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:29:08 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:29:08 INFO [loop_until]: OK (rc = 0) 05:29:08 DEBUG --- stdout --- 05:29:08 DEBUG true 05:29:08 DEBUG --- stderr --- 05:29:08 DEBUG 05:29:08 INFO 05:29:08 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 --output jsonpath={.status.startTime} 05:29:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:08 INFO [loop_until]: OK (rc = 0) 05:29:08 DEBUG --- stdout --- 05:29:08 DEBUG 2023-08-13T03:57:17Z 05:29:08 DEBUG --- stderr --- 05:29:08 DEBUG 05:29:08 INFO 05:29:08 INFO ----------- Check pod ds-idrepo-1 filesystem is accessible ----------- 05:29:08 INFO 05:29:08 INFO [loop_until]: kubectl --namespace=xlou exec ds-idrepo-1 --container ds -- ls / | grep "bin" 05:29:08 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:29:08 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:29:08 INFO [loop_until]: OK (rc = 0) 05:29:08 DEBUG --- stdout --- 05:29:08 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 05:29:08 DEBUG --- stderr --- 05:29:08 DEBUG 05:29:08 INFO 05:29:08 INFO ----------------- Check pod ds-idrepo-1 restart count ----------------- 05:29:08 INFO 05:29:08 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 --output jsonpath={.status.containerStatuses[*].restartCount} 05:29:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:08 INFO [loop_until]: OK (rc = 0) 05:29:08 DEBUG --- stdout --- 05:29:08 DEBUG 0 05:29:08 DEBUG --- stderr --- 05:29:08 DEBUG 05:29:08 INFO Pod ds-idrepo-1 has been restarted 0 times. 
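The "filesystem is accessible" steps run `ls /` inside a named container via kubectl exec and only require that "bin" shows up in the listing. A minimal sketch, assuming the container names seen in the log (openam, openidm, ds):

    import subprocess

    def filesystem_accessible(pod, container, namespace="xlou"):
        """Return True if `ls /` inside the container lists a 'bin' entry (sketch)."""
        result = subprocess.run(
            ["kubectl", f"--namespace={namespace}", "exec", pod,
             "--container", container, "--", "ls", "/"],
            capture_output=True, text=True,
        )
        return result.returncode == 0 and "bin" in result.stdout.split()

    print(filesystem_accessible("ds-idrepo-1", "ds"))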
05:29:08 INFO 05:29:08 INFO ------------------ Check pod ds-idrepo-2 is running ------------------ 05:29:08 INFO 05:29:08 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.phase} | grep "Running" 05:29:08 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:29:08 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:29:08 INFO [loop_until]: OK (rc = 0) 05:29:08 DEBUG --- stdout --- 05:29:08 DEBUG Running 05:29:08 DEBUG --- stderr --- 05:29:08 DEBUG 05:29:08 INFO 05:29:08 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 05:29:08 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:29:08 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:29:08 INFO [loop_until]: OK (rc = 0) 05:29:08 DEBUG --- stdout --- 05:29:08 DEBUG true 05:29:08 DEBUG --- stderr --- 05:29:08 DEBUG 05:29:08 INFO 05:29:08 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 --output jsonpath={.status.startTime} 05:29:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:08 INFO [loop_until]: OK (rc = 0) 05:29:08 DEBUG --- stdout --- 05:29:08 DEBUG 2023-08-13T04:08:12Z 05:29:08 DEBUG --- stderr --- 05:29:08 DEBUG 05:29:08 INFO 05:29:08 INFO ----------- Check pod ds-idrepo-2 filesystem is accessible ----------- 05:29:08 INFO 05:29:08 INFO [loop_until]: kubectl --namespace=xlou exec ds-idrepo-2 --container ds -- ls / | grep "bin" 05:29:08 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:29:08 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:29:08 INFO [loop_until]: OK (rc = 0) 05:29:08 DEBUG --- stdout --- 05:29:08 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 05:29:08 DEBUG --- stderr --- 05:29:08 DEBUG 05:29:08 INFO 05:29:08 INFO ----------------- Check pod ds-idrepo-2 restart count ----------------- 05:29:08 INFO 05:29:08 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 --output jsonpath={.status.containerStatuses[*].restartCount} 05:29:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:09 INFO [loop_until]: OK (rc = 0) 05:29:09 DEBUG --- stdout --- 05:29:09 DEBUG 0 05:29:09 DEBUG --- stderr --- 05:29:09 DEBUG 05:29:09 INFO Pod ds-idrepo-2 has been restarted 0 times. 
05:29:09 INFO 05:29:09 INFO --------------------- Get expected number of pods --------------------- 05:29:09 INFO 05:29:09 INFO [loop_until]: kubectl --namespace=xlou get statefulsets --selector app=ds-cts --output jsonpath={.items[*].spec.replicas} 05:29:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:09 INFO [loop_until]: OK (rc = 0) 05:29:09 DEBUG --- stdout --- 05:29:09 DEBUG 3 05:29:09 DEBUG --- stderr --- 05:29:09 DEBUG 05:29:09 INFO 05:29:09 INFO ---------------------------- Get pod list ---------------------------- 05:29:09 INFO 05:29:09 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=ds-cts --output jsonpath={.items[*].metadata.name} 05:29:09 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0] 05:29:09 INFO [loop_until]: OK (rc = 0) 05:29:09 DEBUG --- stdout --- 05:29:09 DEBUG ds-cts-0 ds-cts-1 ds-cts-2 05:29:09 DEBUG --- stderr --- 05:29:09 DEBUG 05:29:09 INFO 05:29:09 INFO -------------------- Check pod ds-cts-0 is running -------------------- 05:29:09 INFO 05:29:09 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.phase} | grep "Running" 05:29:09 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:29:09 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:29:09 INFO [loop_until]: OK (rc = 0) 05:29:09 DEBUG --- stdout --- 05:29:09 DEBUG Running 05:29:09 DEBUG --- stderr --- 05:29:09 DEBUG 05:29:09 INFO 05:29:09 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 05:29:09 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:29:09 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:29:09 INFO [loop_until]: OK (rc = 0) 05:29:09 DEBUG --- stdout --- 05:29:09 DEBUG true 05:29:09 DEBUG --- stderr --- 05:29:09 DEBUG 05:29:09 INFO 05:29:09 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-0 --output jsonpath={.status.startTime} 05:29:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:09 INFO [loop_until]: OK (rc = 0) 05:29:09 DEBUG --- stdout --- 05:29:09 DEBUG 2023-08-13T03:46:12Z 05:29:09 DEBUG --- stderr --- 05:29:09 DEBUG 05:29:09 INFO 05:29:09 INFO ------------- Check pod ds-cts-0 filesystem is accessible ------------- 05:29:09 INFO 05:29:09 INFO [loop_until]: kubectl --namespace=xlou exec ds-cts-0 --container ds -- ls / | grep "bin" 05:29:09 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:29:09 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:29:09 INFO [loop_until]: OK (rc = 0) 05:29:09 DEBUG --- stdout --- 05:29:09 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 05:29:09 DEBUG --- stderr --- 05:29:09 DEBUG 05:29:09 INFO 05:29:09 INFO ------------------ Check pod ds-cts-0 restart count ------------------ 05:29:09 INFO 05:29:09 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-0 --output jsonpath={.status.containerStatuses[*].restartCount} 05:29:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:09 INFO [loop_until]: OK (rc = 0) 05:29:09 DEBUG --- stdout --- 05:29:09 DEBUG 0 05:29:09 DEBUG --- stderr --- 05:29:09 DEBUG 05:29:09 INFO Pod ds-cts-0 has been restarted 0 times. 
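Every block ends by reading .status.containerStatuses[*].restartCount and reporting "has been restarted N times". The short sketch below turns that readout into a pass/fail signal; treating any nonzero count as a failure is an assumption, since the log only shows the count being reported.

    import subprocess

    def restart_counts(pod, namespace="xlou"):
        """Return the per-container restart counts for `pod` as a list of ints (sketch)."""
        out = subprocess.run(
            ["kubectl", f"--namespace={namespace}", "get", "pod", pod,
             "--output", "jsonpath={.status.containerStatuses[*].restartCount}"],
            capture_output=True, text=True, check=True,
        )
        return [int(n) for n in out.stdout.split()]

    counts = restart_counts("ds-cts-0")
    print(f"Pod ds-cts-0 has been restarted {sum(counts)} times.")
    if sum(counts) > 0:
        print("WARNING: restarts detected")  # assumed alerting step, not in the log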
05:29:09 INFO 05:29:09 INFO -------------------- Check pod ds-cts-1 is running -------------------- 05:29:09 INFO 05:29:09 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.phase} | grep "Running" 05:29:09 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:29:09 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:29:09 INFO [loop_until]: OK (rc = 0) 05:29:09 DEBUG --- stdout --- 05:29:09 DEBUG Running 05:29:09 DEBUG --- stderr --- 05:29:09 DEBUG 05:29:09 INFO 05:29:09 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 05:29:09 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:29:09 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:29:09 INFO [loop_until]: OK (rc = 0) 05:29:09 DEBUG --- stdout --- 05:29:09 DEBUG true 05:29:09 DEBUG --- stderr --- 05:29:09 DEBUG 05:29:09 INFO 05:29:09 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-1 --output jsonpath={.status.startTime} 05:29:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:09 INFO [loop_until]: OK (rc = 0) 05:29:09 DEBUG --- stdout --- 05:29:09 DEBUG 2023-08-13T03:46:42Z 05:29:09 DEBUG --- stderr --- 05:29:09 DEBUG 05:29:09 INFO 05:29:09 INFO ------------- Check pod ds-cts-1 filesystem is accessible ------------- 05:29:09 INFO 05:29:09 INFO [loop_until]: kubectl --namespace=xlou exec ds-cts-1 --container ds -- ls / | grep "bin" 05:29:09 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:29:09 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:29:09 INFO [loop_until]: OK (rc = 0) 05:29:09 DEBUG --- stdout --- 05:29:09 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 05:29:09 DEBUG --- stderr --- 05:29:09 DEBUG 05:29:09 INFO 05:29:09 INFO ------------------ Check pod ds-cts-1 restart count ------------------ 05:29:09 INFO 05:29:09 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-1 --output jsonpath={.status.containerStatuses[*].restartCount} 05:29:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:09 INFO [loop_until]: OK (rc = 0) 05:29:09 DEBUG --- stdout --- 05:29:09 DEBUG 0 05:29:09 DEBUG --- stderr --- 05:29:09 DEBUG 05:29:09 INFO Pod ds-cts-1 has been restarted 0 times. 
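Taken together, the log sweeps four label selectors in a fixed order: app=am and app=idm (Deployments), then app=ds-idrepo and app=ds-cts (StatefulSets), running the same per-pod checks for each. The sketch below shows that outer driver loop; the table of selectors is taken from the log, while the loop structure and the reduced helper body (it only lists matching pods) are assumptions rather than the original script.

    import subprocess

    # (selector, workload kind) pairs, in the order the log walks them
    TARGETS = [
        ("app=am",        "deployments"),
        ("app=idm",       "deployments"),
        ("app=ds-idrepo", "statefulsets"),
        ("app=ds-cts",    "statefulsets"),
    ]

    def pods_for(selector, namespace="xlou"):
        """List pod names matching `selector` (stand-in for the full per-pod checks)."""
        out = subprocess.run(
            ["kubectl", f"--namespace={namespace}", "get", "pods",
             "--selector", selector, "--output", "jsonpath={.items[*].metadata.name}"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.split()

    for selector, kind in TARGETS:
        for pod in pods_for(selector):
            print(f"[{kind}/{selector}] would check pod {pod}")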
05:29:09 INFO 05:29:09 INFO -------------------- Check pod ds-cts-2 is running -------------------- 05:29:09 INFO 05:29:09 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.phase} | grep "Running" 05:29:09 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:29:09 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:29:09 INFO [loop_until]: OK (rc = 0) 05:29:09 DEBUG --- stdout --- 05:29:09 DEBUG Running 05:29:09 DEBUG --- stderr --- 05:29:09 DEBUG 05:29:09 INFO 05:29:09 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 05:29:09 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:29:09 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:29:09 INFO [loop_until]: OK (rc = 0) 05:29:09 DEBUG --- stdout --- 05:29:09 DEBUG true 05:29:09 DEBUG --- stderr --- 05:29:09 DEBUG 05:29:09 INFO 05:29:09 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-2 --output jsonpath={.status.startTime} 05:29:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:09 INFO [loop_until]: OK (rc = 0) 05:29:09 DEBUG --- stdout --- 05:29:09 DEBUG 2023-08-13T03:47:09Z 05:29:09 DEBUG --- stderr --- 05:29:09 DEBUG 05:29:09 INFO 05:29:09 INFO ------------- Check pod ds-cts-2 filesystem is accessible ------------- 05:29:09 INFO 05:29:09 INFO [loop_until]: kubectl --namespace=xlou exec ds-cts-2 --container ds -- ls / | grep "bin" 05:29:09 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:29:09 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:29:09 INFO [loop_until]: OK (rc = 0) 05:29:09 DEBUG --- stdout --- 05:29:09 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 05:29:09 DEBUG --- stderr --- 05:29:09 DEBUG 05:29:09 INFO 05:29:09 INFO ------------------ Check pod ds-cts-2 restart count ------------------ 05:29:09 INFO 05:29:09 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-2 --output jsonpath={.status.containerStatuses[*].restartCount} 05:29:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:10 INFO [loop_until]: OK (rc = 0) 05:29:10 DEBUG --- stdout --- 05:29:10 DEBUG 0 05:29:10 DEBUG --- stderr --- 05:29:10 DEBUG 05:29:10 INFO Pod ds-cts-2 has been restarted 0 times. * Serving Flask app 'lodemon_run' * Debug mode: off WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead. 
* Running on all addresses (0.0.0.0) * Running on http://127.0.0.1:8080 * Running on http://10.106.45.104:8080 Press CTRL+C to quit 05:29:40 INFO 05:29:40 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:29:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:40 INFO [loop_until]: OK (rc = 0) 05:29:40 DEBUG --- stdout --- 05:29:40 DEBUG {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:29:40 DEBUG --- stderr --- 05:29:40 DEBUG 05:29:41 INFO 05:29:41 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:29:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:41 INFO [loop_until]: OK (rc = 0) 05:29:41 DEBUG --- stdout --- 05:29:41 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:29:41 DEBUG --- stderr --- 05:29:41 DEBUG 05:29:41 INFO 05:29:41 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:29:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:41 INFO [loop_until]: OK (rc = 0) 05:29:41 DEBUG --- stdout --- 05:29:41 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:29:41 DEBUG --- stderr --- 05:29:41 DEBUG 05:29:41 INFO 05:29:41 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:29:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:41 INFO [loop_until]: OK (rc = 0) 05:29:41 DEBUG --- stdout --- 05:29:41 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:29:41 DEBUG --- stderr --- 05:29:41 DEBUG 05:29:41 INFO 05:29:41 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:29:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:41 INFO [loop_until]: OK (rc = 0) 05:29:41 DEBUG --- stdout --- 05:29:41 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:29:41 DEBUG --- stderr --- 05:29:41 DEBUG 05:29:41 INFO 05:29:41 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:29:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:41 INFO [loop_until]: OK (rc = 0) 05:29:41 DEBUG --- stdout --- 05:29:41 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:29:41 DEBUG --- stderr --- 05:29:41 DEBUG 05:29:41 INFO 05:29:41 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:29:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:41 INFO [loop_until]: OK (rc = 0) 05:29:41 DEBUG --- stdout --- 05:29:41 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:29:41 DEBUG --- stderr --- 05:29:41 DEBUG 05:29:41 INFO 05:29:41 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:29:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:41 INFO [loop_until]: OK (rc = 0) 05:29:41 DEBUG --- stdout --- 05:29:41 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:29:41 DEBUG --- stderr --- 05:29:41 DEBUG 05:29:41 INFO 05:29:41 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:29:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:42 INFO [loop_until]: OK (rc = 0) 05:29:42 DEBUG --- stdout --- 05:29:42 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:29:42 DEBUG --- stderr --- 05:29:42 DEBUG 05:29:42 INFO 05:29:42 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:29:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:42 INFO [loop_until]: OK (rc = 0) 05:29:42 DEBUG --- stdout --- 05:29:42 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:29:42 DEBUG --- stderr --- 05:29:42 DEBUG 05:29:42 INFO 05:29:42 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:29:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:42 INFO [loop_until]: OK (rc = 0) 05:29:42 DEBUG --- stdout --- 05:29:42 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:29:42 DEBUG --- stderr --- 05:29:42 DEBUG 05:29:42 INFO 05:29:42 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:29:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:42 INFO [loop_until]: OK (rc = 0) 05:29:42 DEBUG --- stdout --- 05:29:42 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:29:42 DEBUG --- stderr --- 05:29:42 DEBUG 05:29:42 INFO 05:29:42 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:29:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:42 INFO [loop_until]: OK (rc = 0) 05:29:42 DEBUG --- stdout --- 05:29:42 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:29:42 DEBUG --- stderr --- 05:29:42 DEBUG 05:29:42 INFO 05:29:42 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:29:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:42 INFO [loop_until]: OK (rc = 0) 05:29:42 DEBUG --- stdout --- 05:29:42 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:29:42 DEBUG --- stderr --- 05:29:42 DEBUG 05:29:42 INFO 05:29:42 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:29:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:42 INFO [loop_until]: OK (rc = 0) 05:29:42 DEBUG --- stdout --- 05:29:42 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:29:42 DEBUG --- stderr --- 05:29:42 DEBUG 05:29:42 INFO 05:29:42 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:29:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:42 INFO [loop_until]: OK (rc = 0) 05:29:42 DEBUG --- stdout --- 05:29:42 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:29:42 DEBUG --- stderr --- 05:29:42 DEBUG 05:29:42 INFO 05:29:42 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:29:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:43 INFO [loop_until]: OK (rc = 0) 05:29:43 DEBUG --- stdout --- 05:29:43 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:29:43 DEBUG --- stderr --- 05:29:43 DEBUG 05:29:43 INFO 05:29:43 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:29:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:43 INFO [loop_until]: OK (rc = 0) 05:29:43 DEBUG --- stdout --- 05:29:43 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:29:43 DEBUG --- stderr --- 05:29:43 DEBUG 05:29:43 INFO 05:29:43 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:29:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:43 INFO [loop_until]: OK (rc = 0) 05:29:43 DEBUG --- stdout --- 05:29:43 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:29:43 DEBUG --- stderr --- 05:29:43 DEBUG 05:29:43 INFO 05:29:43 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:29:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:43 INFO [loop_until]: OK (rc = 0) 05:29:43 DEBUG --- stdout --- 05:29:43 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:29:43 DEBUG --- stderr --- 05:29:43 DEBUG 05:29:43 INFO 05:29:43 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:29:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:43 INFO [loop_until]: OK (rc = 0) 05:29:43 DEBUG --- stdout --- 05:29:43 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:29:43 DEBUG --- stderr --- 05:29:43 DEBUG 05:29:43 INFO 05:29:43 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:29:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:43 INFO [loop_until]: OK (rc = 0) 05:29:43 DEBUG --- stdout --- 05:29:43 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:29:43 DEBUG --- stderr --- 05:29:43 DEBUG 05:29:43 INFO 05:29:43 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:29:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:43 INFO [loop_until]: OK (rc = 0) 05:29:43 DEBUG --- stdout --- 05:29:43 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:29:43 DEBUG --- stderr --- 05:29:43 DEBUG 05:29:43 INFO 05:29:43 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:29:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:43 INFO [loop_until]: OK (rc = 0) 05:29:43 DEBUG --- stdout --- 05:29:43 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:29:43 DEBUG --- stderr --- 05:29:43 DEBUG 05:29:44 INFO 05:29:44 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:29:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:44 INFO [loop_until]: OK (rc = 0) 05:29:44 DEBUG --- stdout --- 05:29:44 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:29:44 DEBUG --- stderr --- 05:29:44 DEBUG 05:29:44 INFO 05:29:44 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:29:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:44 INFO [loop_until]: OK (rc = 0) 05:29:44 DEBUG --- stdout --- 05:29:44 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:29:44 DEBUG --- stderr --- 05:29:44 DEBUG 05:29:44 INFO 05:29:44 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:29:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:44 INFO [loop_until]: OK (rc = 0) 05:29:44 DEBUG --- stdout --- 05:29:44 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:29:44 DEBUG --- stderr --- 05:29:44 DEBUG 05:29:44 INFO Initializing monitoring instance threads 05:29:44 DEBUG Monitoring instance thread list: [, , , , , , , , , , , , , , , , , , , , , , , , , , , , ] 05:29:44 INFO Starting instance threads 05:29:44 INFO 05:29:44 INFO Thread started 05:29:44 INFO [loop_until]: kubectl --namespace=xlou top node 05:29:44 INFO 05:29:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:44 INFO Thread started 05:29:44 INFO [loop_until]: kubectl --namespace=xlou top pods 05:29:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:44 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984" 05:29:44 INFO Thread started 05:29:44 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984" 05:29:44 INFO Thread started 05:29:44 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984" 05:29:44 INFO Thread started 05:29:44 INFO [http_cmd]: curl --insecure -L --request GET 
"http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984" 05:29:44 INFO Thread started 05:29:44 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691900984" 05:29:44 INFO Thread started 05:29:44 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691900984" 05:29:44 INFO Thread started 05:29:44 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691900984" 05:29:44 INFO Thread started 05:29:44 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691900984" 05:29:44 INFO Thread started 05:29:44 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691900984" 05:29:44 INFO Thread started 05:29:44 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984" 05:29:44 INFO Thread started 05:29:44 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984" 05:29:44 INFO Thread started 05:29:44 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984" 05:29:44 INFO Thread started 05:29:44 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984" 05:29:44 INFO Thread started 05:29:44 INFO [http_cmd]: curl --insecure -L --request GET 
"http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984" 05:29:44 INFO Thread started 05:29:44 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984" 05:29:44 INFO Thread started 05:29:44 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984" 05:29:44 INFO Thread started 05:29:44 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984" 05:29:44 INFO Thread started 05:29:44 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984" 05:29:44 INFO Thread started 05:29:44 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984" 05:29:44 INFO Thread started 05:29:44 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984" 05:29:44 INFO Thread started Exception in thread Thread-23: 05:29:44 INFO Thread started Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner Exception in thread Thread-24: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner 05:29:44 INFO Thread started self.run() 05:29:44 INFO Thread started 05:29:44 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691900984" Exception in thread Thread-25: 05:29:44 INFO Thread started self.run() 05:29:44 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691900984" Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 910, in run 05:29:44 INFO Thread started File 
"/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner File "/usr/local/lib/python3.9/threading.py", line 910, in run 05:29:44 INFO Thread started Exception in thread Thread-28: 05:29:44 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691900984" 05:29:44 INFO Thread started self.run() 05:29:44 INFO All threads has been started Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self._target(*self._args, **self._kwargs) 127.0.0.1 - - [13/Aug/2023 05:29:44] "GET /monitoring/start HTTP/1.1" 200 - self._target(*self._args, **self._kwargs) File "/usr/local/lib/python3.9/threading.py", line 910, in run File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop self.run() self._target(*self._args, **self._kwargs) File "/usr/local/lib/python3.9/threading.py", line 910, in run instance.run() 05:29:44 INFO [loop_until]: OK (rc = 0) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop 05:29:44 DEBUG --- stdout --- 05:29:44 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 21m 2584Mi am-55f77847b7-ch6mt 12m 3141Mi am-55f77847b7-gbbjq 16m 4437Mi ds-cts-0 7m 369Mi ds-cts-1 7m 373Mi ds-cts-2 7m 452Mi ds-idrepo-0 22m 10276Mi ds-idrepo-1 18m 10289Mi ds-idrepo-2 43m 10267Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 11m 3403Mi idm-65858d8c4c-h9wbp 7m 1200Mi lodemon-65c77dbb64-7jwvp 433m 60Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1m 15Mi File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run 05:29:44 DEBUG --- stderr --- 05:29:44 DEBUG self._target(*self._args, **self._kwargs) instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop if self.prom_data['functions']: File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop KeyError: 'functions' instance.run() instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run if self.prom_data['functions']: if self.prom_data['functions']: KeyError: 'functions' KeyError: 'functions' if self.prom_data['functions']: KeyError: 'functions' 05:29:44 INFO [loop_until]: OK (rc = 0) 05:29:44 DEBUG --- stdout --- 05:29:44 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 331m 2% 1379Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 76m 0% 4191Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 85m 0% 5587Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 78m 0% 3760Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 2540Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 130m 0% 2120Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 75m 0% 4682Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 97m 0% 10936Mi 18% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1190Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 75m 0% 10929Mi 
18% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 66m 0% 10937Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 67m 0% 1632Mi 2% 05:29:44 DEBUG --- stderr --- 05:29:44 DEBUG 05:29:45 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:29:45 WARNING Response is NONE 05:29:45 DEBUG Exception is preset. Setting retry_loop to true 05:29:45 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:29:47 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:29:47 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:29:47 WARNING Response is NONE 05:29:47 WARNING Response is NONE 05:29:47 DEBUG Exception is preset. Setting retry_loop to true 05:29:47 DEBUG Exception is preset. Setting retry_loop to true 05:29:47 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:29:47 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:29:51 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:29:51 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 05:29:51 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:29:51 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:29:51 WARNING Response is NONE 05:29:51 WARNING Response is NONE 05:29:51 WARNING Response is NONE 05:29:51 WARNING Response is NONE 05:29:51 DEBUG Exception is preset. Setting retry_loop to true 05:29:51 DEBUG Exception is preset. Setting retry_loop to true 05:29:51 DEBUG Exception is preset. Setting retry_loop to true 05:29:51 DEBUG Exception is preset. Setting retry_loop to true 05:29:51 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:29:51 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:29:51 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:29:51 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:29:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:29:56 WARNING Response is NONE 05:29:56 DEBUG Exception is preset. Setting retry_loop to true 05:29:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:29:58 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:29:58 WARNING Response is NONE 05:29:58 DEBUG Exception is preset. Setting retry_loop to true 05:29:58 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:29:58 WARNING We received known exception. 
Trying to recover, sleeping for 10 secs before retry... 05:29:58 WARNING Response is NONE 05:29:58 DEBUG Exception is preset. Setting retry_loop to true 05:29:58 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:30:00 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:30:00 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:30:00 WARNING Response is NONE 05:30:00 WARNING Response is NONE 05:30:00 DEBUG Exception is preset. Setting retry_loop to true 05:30:00 DEBUG Exception is preset. Setting retry_loop to true 05:30:00 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:30:00 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:30:02 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:30:02 WARNING Response is NONE 05:30:02 DEBUG Exception is preset. Setting retry_loop to true 05:30:02 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:30:04 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:30:04 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:30:04 WARNING Response is NONE 05:30:04 WARNING Response is NONE 05:30:04 DEBUG Exception is preset. Setting retry_loop to true 05:30:04 DEBUG Exception is preset. Setting retry_loop to true 05:30:04 WARNING We received known exception. 
Trying to recover, sleeping for 10 secs before retry... 05:30:04 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:30:07 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:30:07 WARNING Response is NONE 05:30:07 DEBUG Exception is preset. Setting retry_loop to true 05:30:07 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:30:09 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:30:09 WARNING Response is NONE 05:30:09 DEBUG Exception is preset. Setting retry_loop to true 05:30:09 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:30:11 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:30:11 WARNING Response is NONE 05:30:11 DEBUG Exception is preset. Setting retry_loop to true 05:30:11 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:30:11 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:30:11 WARNING Response is NONE 05:30:11 DEBUG Exception is preset. Setting retry_loop to true 05:30:11 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:30:13 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:30:13 WARNING Response is NONE 05:30:13 DEBUG Exception is preset. 
Setting retry_loop to true 05:30:13 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:30:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:30:15 WARNING Response is NONE 05:30:15 DEBUG Exception is preset. Setting retry_loop to true 05:30:15 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:30:16 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:30:16 WARNING Response is NONE 05:30:16 DEBUG Exception is preset. Setting retry_loop to true 05:30:16 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:30:18 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:30:18 WARNING Response is NONE 05:30:18 DEBUG Exception is preset. Setting retry_loop to true 05:30:18 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:30:20 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:30:20 WARNING Response is NONE 05:30:20 DEBUG Exception is preset. Setting retry_loop to true 05:30:20 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:30:22 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:30:22 WARNING Response is NONE 05:30:22 DEBUG Exception is preset. Setting retry_loop to true 05:30:22 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
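The URL in each of these warnings carries a URL-encoded PromQL expression. Decoding one (here the container_fs_writes_total query from the 05:30:18 entry) shows which metric lodemon is asking Prometheus for; note that every request in this section pins the same evaluation timestamp, time=1691900984. A short standard-library decoding sketch, for illustration only:

    from urllib.parse import parse_qs, urlparse

    # URL-encoded PromQL query copied from one of the warnings above.
    raw = ("/api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27"
           "%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29"
           "by%28pod%29&time=1691900984")

    params = parse_qs(urlparse(raw).query)
    print(params["query"][0])
    # -> sum(rate(container_fs_writes_total{namespace='xlou',job='kubelet',metrics_path='/metrics/cadvisor'}[60s]))by(pod)
    print(params["time"][0])
    # -> 1691900984, i.e. 2023-08-13T04:29:44Z; the same timestamp is pinned on every query in this section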
05:30:24 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:30:24 WARNING Response is NONE 05:30:24 DEBUG Exception is preset. Setting retry_loop to true 05:30:24 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:30:25 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:30:25 WARNING Response is NONE 05:30:25 DEBUG Exception is preset. Setting retry_loop to true 05:30:25 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:30:26 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:30:26 WARNING Response is NONE 05:30:26 DEBUG Exception is preset. Setting retry_loop to true 05:30:26 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:30:27 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:30:27 WARNING Response is NONE 05:30:27 DEBUG Exception is preset. Setting retry_loop to true 05:30:27 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:30:29 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:30:29 WARNING Response is NONE 05:30:29 DEBUG Exception is preset. Setting retry_loop to true 05:30:29 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
05:30:31 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:30:31 WARNING Response is NONE 05:30:31 DEBUG Exception is preset. Setting retry_loop to true 05:30:31 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:30:33 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:30:33 WARNING Response is NONE 05:30:33 DEBUG Exception is preset. Setting retry_loop to true 05:30:33 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:30:36 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:30:36 WARNING Response is NONE 05:30:36 DEBUG Exception is preset. Setting retry_loop to true 05:30:36 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:30:37 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:30:37 WARNING Response is NONE 05:30:37 DEBUG Exception is preset. Setting retry_loop to true 05:30:37 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:30:39 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:30:39 WARNING Response is NONE 05:30:39 DEBUG Exception is preset. Setting retry_loop to true 05:30:39 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
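The pattern above repeats for each metric: a Prometheus /api/v1/query request fails with connection refused, the error is treated as transient, the worker sleeps 10 seconds, and (as the entries below show) after the fifth attempt it proceeds with an empty response. A minimal sketch of that retry loop, assuming the requests library and illustrative names (query_prometheus, PROM_URL) rather than the actual lodestar HttpCmd code:

    import time
    import requests

    PROM_URL = ("http://prometheus-operator-kube-p-prometheus"
                ".monitoring.svc.cluster.local:9090/api/v1/query")

    def query_prometheus(promql: str, ts: int, retries: int = 5, pause: int = 10):
        """Query Prometheus, retrying transient connection errors like the log above."""
        response = None
        for attempt in range(1, retries + 1):
            try:
                response = requests.get(PROM_URL, params={"query": promql, "time": ts}, timeout=30)
                response.raise_for_status()
                return response.json()
            except requests.exceptions.ConnectionError as exc:
                print(f"Got connection reset error: {exc}. Checking if error is transient one")
                if attempt == retries:
                    print(f"Hit retry pattern for a {retries} time. Proceeding to check response anyway.")
                    break
                print(f"We received known exception. Trying to recover, sleeping for {pause} secs before retry...")
                time.sleep(pause)
        # Mirrors the "Response is NONE" branch: nothing came back, so the caller fails.
        if response is None:
            raise RuntimeError("Failed to obtain response from server...")

    # Example with one of the queries seen in the log:
    # query_prometheus("sum(rate(node_cpu_seconds_total{mode='iowait'}[60s]))by(instance)", 1691900984)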
05:30:40 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
05:30:40 WARNING Response is NONE
05:30:40 DEBUG Exception is preset. Setting retry_loop to true
05:30:40 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-6:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
05:30:42 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
05:30:42 WARNING Response is NONE
05:30:42 DEBUG Exception is preset. Setting retry_loop to true
05:30:42 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-7:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
05:30:44 INFO
05:30:44 INFO [loop_until]: kubectl --namespace=xlou top pods
05:30:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:30:44 INFO
05:30:44 INFO [loop_until]: kubectl --namespace=xlou top node
05:30:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:30:44 INFO [loop_until]: OK (rc = 0)
05:30:44 DEBUG --- stdout ---
05:30:44 DEBUG NAME                           CPU(cores)   MEMORY(bytes)
               admin-ui-587fc66dd5-lmth7      1m           4Mi
               am-55f77847b7-bb6x8            22m          2588Mi
               am-55f77847b7-ch6mt            14m          3141Mi
               am-55f77847b7-gbbjq            16m          4438Mi
               ds-cts-0                       187m         372Mi
               ds-cts-1                       145m         374Mi
               ds-cts-2                       156m         456Mi
               ds-idrepo-0                    852m         10287Mi
               ds-idrepo-1                    131m         10290Mi
               ds-idrepo-2                    232m         10283Mi
               end-user-ui-6845bc78c7-dxwrr   1m           4Mi
               idm-65858d8c4c-9pfjc           10m          3404Mi
               idm-65858d8c4c-h9wbp           9m           1202Mi
               lodemon-65c77dbb64-7jwvp       3m           65Mi
               login-ui-74d6fb46c-j9xdm       1m           3Mi
               overseer-0-556966658d-mh4rk    138m         150Mi
05:30:44 DEBUG --- stderr ---
05:30:44 DEBUG
05:30:44 INFO [loop_until]: OK (rc = 0)
05:30:44 DEBUG --- stdout ---
05:30:44 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
               gke-xlou-cdm-default-pool-f05840a3-2nsn   79m          0%     1386Mi          2%
               gke-xlou-cdm-default-pool-f05840a3-5pbc   74m          0%     4194Mi          7%
               gke-xlou-cdm-default-pool-f05840a3-976h   73m          0%     5586Mi          9%
               gke-xlou-cdm-default-pool-f05840a3-9p4b   78m          0%     3768Mi          6%
               gke-xlou-cdm-default-pool-f05840a3-bf2g   78m          0%     2554Mi          4%
               gke-xlou-cdm-default-pool-f05840a3-h81k   129m         0%     2119Mi          3%
               gke-xlou-cdm-default-pool-f05840a3-tnc9   75m          0%     4687Mi          7%
               gke-xlou-cdm-ds-32e4dcb1-1l6p             133m         0%     1118Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-4z9d             292m         1%     10954Mi         18%
               gke-xlou-cdm-ds-32e4dcb1-8bsn             251m         1%     1195Mi          2%
               gke-xlou-cdm-ds-32e4dcb1-b374             1366m        8%     10943Mi         18%
               gke-xlou-cdm-ds-32e4dcb1-n920             246m         1%     1119Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-x4wx             265m         1%     10939Mi         18%
               gke-xlou-cdm-frontend-a8771548-k40m       251m         1%     1737Mi          2%
05:30:44 DEBUG --- stderr ---
05:30:44 DEBUG
05:30:44 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
05:30:44 WARNING Response is NONE
05:30:44 DEBUG Exception is preset. Setting retry_loop to true
05:30:44 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-3:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
05:30:46 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
05:30:46 WARNING Response is NONE
05:30:46 DEBUG Exception is preset. Setting retry_loop to true
05:30:46 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
05:30:47 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
05:30:47 WARNING Response is NONE
05:30:47 DEBUG Exception is preset. Setting retry_loop to true
05:30:47 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
05:30:48 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
05:30:48 WARNING Response is NONE
05:30:48 DEBUG Exception is preset. Setting retry_loop to true
05:30:48 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
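Interleaved with the failing Prometheus queries, the [loop_until] lines above show lodemon shelling out to kubectl and retrying until the command returns an expected exit code within max_time at the given interval. A hedged sketch of such a helper (illustrative names and behaviour, not the actual lodestar loop_until implementation):

    import subprocess
    import time

    def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,)):
        """Run a shell command repeatedly until its return code is acceptable or max_time elapses."""
        deadline = time.monotonic() + max_time
        while True:
            proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
            if proc.returncode in expected_rc:
                print(f"[loop_until]: OK (rc = {proc.returncode})")
                return proc.stdout, proc.stderr
            if time.monotonic() >= deadline:
                raise TimeoutError(f"{cmd!r} did not return rc in {expected_rc} within {max_time}s")
            time.sleep(interval)

    # Example matching the log: stdout, _ = loop_until("kubectl --namespace=xlou top pods")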
05:30:50 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
05:30:50 WARNING Response is NONE
05:30:50 DEBUG Exception is preset. Setting retry_loop to true
05:30:50 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
05:30:50 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
05:30:50 WARNING Response is NONE
05:30:50 DEBUG Exception is preset. Setting retry_loop to true
05:30:50 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-5:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
05:30:52 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
05:30:52 WARNING Response is NONE
05:30:52 DEBUG Exception is preset. Setting retry_loop to true
05:30:52 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
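Every traceback above ends the same way: after http_cmd.get() exhausts its retries and raises FailException, the handler at monitoring.py line 315 calls self.logger(...) as if it were a function, and the LodestarLogger instance is not callable, so the error handler itself crashes the thread. The sketch below reproduces the failure under the assumption that LodestarLogger is a thin wrapper around the standard logging module without a __call__ method; the likely fix is to call a logging method such as warning() instead (or to add __call__ to the wrapper):

    import logging

    class LodestarLogger:                      # stand-in for the shared.lib logger wrapper (assumed API)
        def __init__(self, name):
            self._log = logging.getLogger(name)

        def warning(self, msg):
            self._log.warning(msg)

    logger = LodestarLogger("lodemon")
    query = "sum(rate(container_memory_usage_bytes{namespace='xlou'}[60s]))by(pod)"
    err = "Failed to obtain response from server..."

    try:
        logger(f"Query: {query} failed with: {err}")       # what monitoring.py line 315 does
    except TypeError as exc:
        print(exc)                                          # 'LodestarLogger' object is not callable

    logger.warning(f"Query: {query} failed with: {err}")    # the likely intended call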
05:30:52 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:30:52 WARNING Response is NONE 05:30:52 DEBUG Exception is preset. Setting retry_loop to true 05:30:52 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:30:57 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:30:57 WARNING Response is NONE 05:30:57 DEBUG Exception is preset. Setting retry_loop to true 05:30:57 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:30:58 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:30:58 WARNING Response is NONE 05:30:58 DEBUG Exception is preset. Setting retry_loop to true 05:30:58 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:30:59 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:30:59 WARNING Response is NONE 05:30:59 DEBUG Exception is preset. Setting retry_loop to true 05:30:59 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:31:01 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:31:01 WARNING Response is NONE 05:31:01 DEBUG Exception is preset. Setting retry_loop to true 05:31:01 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
Exception in thread Thread-9: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 05:31:03 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:31:03 WARNING Response is NONE 05:31:03 DEBUG Exception is preset. Setting retry_loop to true 05:31:03 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:31:05 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:31:05 WARNING Response is NONE 05:31:05 DEBUG Exception is preset. Setting retry_loop to true 05:31:05 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-4: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 05:31:08 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:31:08 WARNING Response is NONE 05:31:08 DEBUG Exception is preset. Setting retry_loop to true 05:31:08 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:31:09 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:31:09 WARNING Response is NONE 05:31:09 DEBUG Exception is preset. Setting retry_loop to true 05:31:09 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-16: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 05:31:10 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:31:10 WARNING Response is NONE 05:31:10 DEBUG Exception is preset. Setting retry_loop to true 05:31:10 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:31:14 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:31:14 WARNING Response is NONE 05:31:14 DEBUG Exception is preset. Setting retry_loop to true 05:31:14 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:31:19 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:31:19 WARNING Response is NONE 05:31:19 DEBUG Exception is preset. Setting retry_loop to true 05:31:19 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-8: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 05:31:21 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:31:21 WARNING Response is NONE 05:31:21 DEBUG Exception is preset. Setting retry_loop to true 05:31:21 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:31:25 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:31:25 WARNING Response is NONE 05:31:25 DEBUG Exception is preset. Setting retry_loop to true 05:31:25 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-22: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 05:31:32 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:31:32 WARNING Response is NONE 05:31:32 DEBUG Exception is preset. Setting retry_loop to true 05:31:32 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:31:44 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:31:44 WARNING Response is NONE 05:31:44 DEBUG Exception is preset. Setting retry_loop to true 05:31:44 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-11: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
05:31:44 INFO
05:31:44 INFO [loop_until]: kubectl --namespace=xlou top pods
05:31:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:31:44 INFO [loop_until]: OK (rc = 0)
05:31:44 DEBUG --- stdout ---
05:31:44 DEBUG NAME                           CPU(cores)   MEMORY(bytes)
               admin-ui-587fc66dd5-lmth7      1m           4Mi
               am-55f77847b7-bb6x8            16m          2580Mi
               am-55f77847b7-ch6mt            14m          3142Mi
               am-55f77847b7-gbbjq            15m          4438Mi
               ds-cts-0                       7m           374Mi
               ds-cts-1                       7m           374Mi
               ds-cts-2                       7m           456Mi
               ds-idrepo-0                    17m          10287Mi
               ds-idrepo-1                    18m          10293Mi
               ds-idrepo-2                    25m          10284Mi
               end-user-ui-6845bc78c7-dxwrr   1m           4Mi
               idm-65858d8c4c-9pfjc           11m          3406Mi
               idm-65858d8c4c-h9wbp           9m           1202Mi
               lodemon-65c77dbb64-7jwvp       3m           65Mi
               login-ui-74d6fb46c-j9xdm       1m           3Mi
               overseer-0-556966658d-mh4rk    1m           48Mi
05:31:44 DEBUG --- stderr ---
05:31:44 DEBUG
05:31:44 INFO
05:31:44 INFO [loop_until]: kubectl --namespace=xlou top node
05:31:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:31:44 INFO [loop_until]: OK (rc = 0)
05:31:44 DEBUG --- stdout ---
05:31:44 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
               gke-xlou-cdm-default-pool-f05840a3-2nsn   80m          0%     1388Mi          2%
               gke-xlou-cdm-default-pool-f05840a3-5pbc   73m          0%     4194Mi          7%
               gke-xlou-cdm-default-pool-f05840a3-976h   75m          0%     5586Mi          9%
               gke-xlou-cdm-default-pool-f05840a3-9p4b   72m          0%     3759Mi          6%
               gke-xlou-cdm-default-pool-f05840a3-bf2g   73m          0%     2544Mi          4%
               gke-xlou-cdm-default-pool-f05840a3-h81k   120m         0%     2126Mi          3%
               gke-xlou-cdm-default-pool-f05840a3-tnc9   72m          0%     4680Mi          7%
               gke-xlou-cdm-ds-32e4dcb1-1l6p             56m          0%     1119Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-4z9d             73m          0%     10954Mi         18%
               gke-xlou-cdm-ds-32e4dcb1-8bsn             56m          0%     1196Mi          2%
               gke-xlou-cdm-ds-32e4dcb1-b374             70m          0%     10942Mi         18%
               gke-xlou-cdm-ds-32e4dcb1-n920             53m          0%     1121Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-x4wx             65m          0%     10944Mi         18%
               gke-xlou-cdm-frontend-a8771548-k40m       69m          0%     1630Mi          2%
05:31:44 DEBUG --- stderr ---
05:31:44 DEBUG
05:31:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one
05:31:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')).
Checking if error is transient one 05:31:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 05:31:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 05:31:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 05:31:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 05:31:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 05:31:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). 
Checking if error is transient one 05:31:56 WARNING Response is NONE 05:31:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 05:31:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 05:31:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 05:31:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 05:31:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 05:31:56 WARNING Response is NONE 05:31:56 WARNING Response is NONE 05:31:56 WARNING Response is NONE 05:31:56 WARNING Response is NONE 05:31:56 WARNING Response is NONE 05:31:56 WARNING Response is NONE 05:31:56 WARNING Response is NONE 05:31:56 DEBUG Exception is preset. Setting retry_loop to true 05:31:56 WARNING Response is NONE 05:31:56 WARNING Response is NONE 05:31:56 WARNING Response is NONE 05:31:56 WARNING Response is NONE 05:31:56 WARNING Response is NONE 05:31:56 DEBUG Exception is preset. Setting retry_loop to true 05:31:56 DEBUG Exception is preset. Setting retry_loop to true 05:31:56 DEBUG Exception is preset. Setting retry_loop to true 05:31:56 DEBUG Exception is preset. Setting retry_loop to true 05:31:56 DEBUG Exception is preset. Setting retry_loop to true 05:31:56 DEBUG Exception is preset. Setting retry_loop to true 05:31:56 DEBUG Exception is preset. Setting retry_loop to true 05:31:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
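Because each monitoring query runs in its own thread (Thread-3 through Thread-22 in the tracebacks above), the unhandled TypeError kills only that thread: the lodemon process and the kubectl top polling keep going, while collection for the affected queries quietly stops. A small sketch of that isolation behaviour, with a hypothetical target function standing in for the monitoring instance:

    import threading

    def monitoring_instance():
        # Stands in for Monitoring.run() blowing up after the logger call fails.
        raise TypeError("'LodestarLogger' object is not callable")

    t = threading.Thread(target=monitoring_instance, name="Thread-6")
    t.start()
    t.join()    # the interpreter prints "Exception in thread Thread-6: ..." to stderr

    # The parent process is unaffected; only this thread's work has stopped.
    print("main thread continues running")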
05:31:56 DEBUG Exception is preset. Setting retry_loop to true 05:31:56 DEBUG Exception is preset. Setting retry_loop to true 05:31:56 DEBUG Exception is preset. Setting retry_loop to true 05:31:56 DEBUG Exception is preset. Setting retry_loop to true 05:31:56 DEBUG Exception is preset. Setting retry_loop to true 05:31:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:31:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:31:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:31:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:31:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:31:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:31:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:31:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:31:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:31:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:31:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:31:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:32:07 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:32:07 WARNING Response is NONE 05:32:07 DEBUG Exception is preset. Setting retry_loop to true 05:32:07 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:32:09 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:32:09 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:32:09 WARNING Response is NONE 05:32:09 WARNING Response is NONE 05:32:09 DEBUG Exception is preset. Setting retry_loop to true 05:32:09 DEBUG Exception is preset. Setting retry_loop to true 05:32:09 WARNING We received known exception. 
Trying to recover, sleeping for 10 secs before retry... 05:32:09 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:32:13 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:32:13 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:32:13 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:32:13 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:32:13 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:32:13 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:32:13 WARNING Response is NONE 05:32:13 WARNING Response is NONE 05:32:13 WARNING Response is NONE 05:32:13 WARNING Response is NONE 05:32:13 WARNING Response is NONE 05:32:13 WARNING Response is NONE 05:32:13 DEBUG Exception is preset. Setting retry_loop to true 05:32:13 DEBUG Exception is preset. Setting retry_loop to true 05:32:13 DEBUG Exception is preset. Setting retry_loop to true 05:32:13 DEBUG Exception is preset. 
Setting retry_loop to true 05:32:13 DEBUG Exception is preset. Setting retry_loop to true 05:32:13 DEBUG Exception is preset. Setting retry_loop to true 05:32:13 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:32:13 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:32:13 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:32:13 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:32:13 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:32:13 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:32:18 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:32:18 WARNING Response is NONE 05:32:18 DEBUG Exception is preset. Setting retry_loop to true 05:32:18 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:32:20 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:32:20 WARNING Response is NONE 05:32:20 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:32:20 DEBUG Exception is preset. Setting retry_loop to true 05:32:20 WARNING Response is NONE 05:32:20 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:32:20 DEBUG Exception is preset. Setting retry_loop to true 05:32:20 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:32:21 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 05:32:21 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:32:21 WARNING Response is NONE 05:32:21 WARNING Response is NONE 05:32:21 DEBUG Exception is preset. Setting retry_loop to true 05:32:21 DEBUG Exception is preset. Setting retry_loop to true 05:32:21 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:32:21 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:32:24 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:32:24 WARNING Response is NONE 05:32:24 DEBUG Exception is preset. Setting retry_loop to true 05:32:24 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:32:26 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:32:26 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:32:26 WARNING Response is NONE 05:32:26 WARNING Response is NONE 05:32:26 DEBUG Exception is preset. Setting retry_loop to true 05:32:26 DEBUG Exception is preset. Setting retry_loop to true 05:32:26 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:32:26 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:32:29 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 05:32:29 WARNING Response is NONE 05:32:29 DEBUG Exception is preset. Setting retry_loop to true 05:32:29 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:32:31 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:32:31 WARNING Response is NONE 05:32:31 DEBUG Exception is preset. Setting retry_loop to true 05:32:31 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:32:32 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:32:32 WARNING Response is NONE 05:32:32 DEBUG Exception is preset. Setting retry_loop to true 05:32:32 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:32:33 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:32:33 WARNING Response is NONE 05:32:33 DEBUG Exception is preset. Setting retry_loop to true 05:32:33 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:32:35 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:32:35 WARNING Response is NONE 05:32:35 DEBUG Exception is preset. Setting retry_loop to true 05:32:35 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:32:37 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 05:32:37 WARNING Response is NONE 05:32:37 DEBUG Exception is preset. Setting retry_loop to true 05:32:37 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:32:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:32:38 WARNING Response is NONE 05:32:38 DEBUG Exception is preset. Setting retry_loop to true 05:32:38 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:32:40 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:32:40 WARNING Response is NONE 05:32:40 DEBUG Exception is preset. Setting retry_loop to true 05:32:40 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:32:42 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:32:42 WARNING Response is NONE 05:32:42 DEBUG Exception is preset. Setting retry_loop to true 05:32:42 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:32:44 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:32:44 WARNING Response is NONE 05:32:44 DEBUG Exception is preset. Setting retry_loop to true 05:32:44 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
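[Editor's note] The repeating "Exception is preset ... retry_loop", "sleeping for 10 secs before retry" and "Hit retry pattern for a 5 time" messages suggest a bounded retry loop that treats connection failures as transient, sleeps 10 seconds between attempts, and inspects whatever response it has after five failures. The sketch below is a hypothetical reconstruction of that pattern, not the actual lodestar implementation; the function name, parameters, and messages are assumptions made for illustration.

    import time
    import requests

    def query_with_retries(url, params, max_retries=5, backoff_secs=10):
        # Hypothetical sketch of the transient-error retry loop suggested by the log.
        response = None
        for attempt in range(1, max_retries + 1):
            try:
                response = requests.get(url, params=params, timeout=10)
                break  # got an HTTP response, stop retrying
            except requests.exceptions.ConnectionError as exc:
                print(f"WARNING Got connection reset error: {exc}. Checking if error is transient one")
                if attempt == max_retries:
                    print(f"WARNING Hit retry pattern for a {attempt} time. Proceeding to check response anyway.")
                else:
                    print(f"WARNING We received known exception. Trying to recover, sleeping for {backoff_secs} secs before retry...")
                    time.sleep(backoff_secs)
        if response is None:
            # stand-in for the FailException raised by HttpCmd.request_cmd in the log
            raise RuntimeError("Failed to obtain response from server...")
        return response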
05:32:44 INFO
05:32:44 INFO [loop_until]: kubectl --namespace=xlou top pods
05:32:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:32:44 INFO
05:32:44 INFO [loop_until]: kubectl --namespace=xlou top node
05:32:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:32:44 INFO [loop_until]: OK (rc = 0)
05:32:44 DEBUG --- stdout ---
05:32:44 DEBUG NAME                            CPU(cores)   MEMORY(bytes)
05:32:44 DEBUG admin-ui-587fc66dd5-lmth7       1m           4Mi
05:32:44 DEBUG am-55f77847b7-bb6x8             19m          2581Mi
05:32:44 DEBUG am-55f77847b7-ch6mt             14m          3142Mi
05:32:44 DEBUG am-55f77847b7-gbbjq             12m          4438Mi
05:32:44 DEBUG ds-cts-0                        12m          374Mi
05:32:44 DEBUG ds-cts-1                        9m           374Mi
05:32:44 DEBUG ds-cts-2                        8m           456Mi
05:32:44 DEBUG ds-idrepo-0                     20m          10290Mi
05:32:44 DEBUG ds-idrepo-1                     26m          10292Mi
05:32:44 DEBUG ds-idrepo-2                     47m          10282Mi
05:32:44 DEBUG end-user-ui-6845bc78c7-dxwrr    1m           4Mi
05:32:44 DEBUG idm-65858d8c4c-9pfjc            10m          3406Mi
05:32:44 DEBUG idm-65858d8c4c-h9wbp            7m           1202Mi
05:32:44 DEBUG lodemon-65c77dbb64-7jwvp        3m           65Mi
05:32:44 DEBUG login-ui-74d6fb46c-j9xdm        1m           3Mi
05:32:44 DEBUG overseer-0-556966658d-mh4rk     116m         171Mi
05:32:44 DEBUG --- stderr ---
05:32:44 DEBUG
05:32:44 INFO [loop_until]: OK (rc = 0)
05:32:44 DEBUG --- stdout ---
05:32:44 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
05:32:44 DEBUG gke-xlou-cdm-default-pool-f05840a3-2nsn   76m          0%     1384Mi          2%
05:32:44 DEBUG gke-xlou-cdm-default-pool-f05840a3-5pbc   74m          0%     4196Mi          7%
05:32:44 DEBUG gke-xlou-cdm-default-pool-f05840a3-976h   74m          0%     5583Mi          9%
05:32:44 DEBUG gke-xlou-cdm-default-pool-f05840a3-9p4b   77m          0%     3759Mi          6%
05:32:44 DEBUG gke-xlou-cdm-default-pool-f05840a3-bf2g   72m          0%     2544Mi          4%
05:32:44 DEBUG gke-xlou-cdm-default-pool-f05840a3-h81k   122m         0%     2127Mi          3%
05:32:44 DEBUG gke-xlou-cdm-default-pool-f05840a3-tnc9   72m          0%     4682Mi          7%
05:32:44 DEBUG gke-xlou-cdm-ds-32e4dcb1-1l6p             59m          0%     1120Mi          1%
05:32:44 DEBUG gke-xlou-cdm-ds-32e4dcb1-4z9d             92m          0%     10946Mi         18%
05:32:44 DEBUG gke-xlou-cdm-ds-32e4dcb1-8bsn             59m          0%     1198Mi          2%
05:32:44 DEBUG gke-xlou-cdm-ds-32e4dcb1-b374             72m          0%     10947Mi         18%
05:32:44 DEBUG gke-xlou-cdm-ds-32e4dcb1-n920             56m          0%     1117Mi          1%
05:32:44 DEBUG gke-xlou-cdm-ds-32e4dcb1-x4wx             78m          0%     10942Mi         18%
05:32:44 DEBUG gke-xlou-cdm-frontend-a8771548-k40m       119m         0%     1736Mi          2%
05:32:44 DEBUG --- stderr ---
05:32:44 DEBUG
05:32:46 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
05:32:46 WARNING Response is NONE
05:32:46 DEBUG Exception is preset. Setting retry_loop to true
05:32:46 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
05:32:47 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
05:32:47 WARNING Response is NONE
05:32:47 DEBUG Exception is preset. Setting retry_loop to true
05:32:47 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
05:32:48 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
05:32:48 WARNING Response is NONE
05:32:48 DEBUG Exception is preset. Setting retry_loop to true
05:32:48 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
05:32:49 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
05:32:49 WARNING Response is NONE
05:32:49 DEBUG Exception is preset. Setting retry_loop to true
05:32:49 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
05:32:51 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
05:32:51 WARNING Response is NONE
05:32:51 DEBUG Exception is preset. Setting retry_loop to true
05:32:51 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-17:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
05:32:53 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
05:32:53 WARNING Response is NONE
05:32:53 DEBUG Exception is preset. Setting retry_loop to true
05:32:53 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-26:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
05:32:55 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
05:32:55 WARNING Response is NONE
05:32:55 DEBUG Exception is preset. Setting retry_loop to true
05:32:55 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-13: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 05:32:58 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:32:58 WARNING Response is NONE 05:32:58 DEBUG Exception is preset. Setting retry_loop to true 05:32:58 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:32:59 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:32:59 WARNING Response is NONE 05:32:59 DEBUG Exception is preset. Setting retry_loop to true 05:32:59 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-15: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 05:33:00 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:33:00 WARNING Response is NONE 05:33:00 DEBUG Exception is preset. Setting retry_loop to true 05:33:00 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:33:01 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:33:01 WARNING Response is NONE 05:33:01 DEBUG Exception is preset. Setting retry_loop to true 05:33:01 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-10: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 05:33:08 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:33:08 WARNING Response is NONE 05:33:08 DEBUG Exception is preset. Setting retry_loop to true 05:33:08 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:33:09 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:33:09 WARNING Response is NONE 05:33:09 DEBUG Exception is preset. Setting retry_loop to true 05:33:09 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:33:10 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:33:10 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:33:10 WARNING Response is NONE 05:33:10 WARNING Response is NONE 05:33:10 DEBUG Exception is preset. Setting retry_loop to true 05:33:10 DEBUG Exception is preset. Setting retry_loop to true 05:33:10 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:33:10 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
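[Editor's note] The tracebacks above all end the same way: after the retry budget is exhausted, the except block at monitoring.py line 315 calls the logger object itself, `self.logger(f'Query: {query} failed with: {e}')`, which raises `TypeError: 'LodestarLogger' object is not callable` and kills the monitoring thread instead of logging the failed query. The snippet below is a minimal reproduction using the standard library logger as a stand-in for LodestarLogger; the suggested one-line change assumes LodestarLogger exposes logging-style methods such as warning(), which is an assumption, not something confirmed by this log.

    # Minimal reproduction of the secondary failure, with hypothetical names.
    import logging

    logger = logging.getLogger("lodemon")          # stand-in for LodestarLogger
    query, e = "sum(rate(...))", "connection refused"

    try:
        logger(f"Query: {query} failed with: {e}")  # a logger object is not callable
    except TypeError as err:
        print(err)                                  # 'Logger' object is not callable

    # If LodestarLogger wraps the standard logging API (assumption), the except
    # block in monitoring.py would need a method call instead, e.g.:
    logger.warning(f"Query: {query} failed with: {e}")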
05:33:11 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:33:11 WARNING Response is NONE 05:33:11 DEBUG Exception is preset. Setting retry_loop to true 05:33:11 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-29: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 05:33:14 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:33:14 WARNING Response is NONE 05:33:14 DEBUG Exception is preset. Setting retry_loop to true 05:33:14 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:33:19 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:33:19 WARNING Response is NONE 05:33:19 DEBUG Exception is preset. Setting retry_loop to true 05:33:19 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
05:33:20 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:33:20 WARNING Response is NONE 05:33:20 DEBUG Exception is preset. Setting retry_loop to true 05:33:20 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-27: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 05:33:21 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:33:21 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:33:21 WARNING Response is NONE 05:33:21 WARNING Response is NONE 05:33:21 DEBUG Exception is preset. Setting retry_loop to true 05:33:21 DEBUG Exception is preset. Setting retry_loop to true 05:33:21 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:33:21 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
05:33:25 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:33:25 WARNING Response is NONE 05:33:25 DEBUG Exception is preset. Setting retry_loop to true 05:33:25 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:33:27 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:33:27 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:33:27 WARNING Response is NONE 05:33:27 WARNING Response is NONE 05:33:27 DEBUG Exception is preset. Setting retry_loop to true 05:33:27 DEBUG Exception is preset. Setting retry_loop to true 05:33:27 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:33:27 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:33:30 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:33:30 WARNING Response is NONE 05:33:30 DEBUG Exception is preset. Setting retry_loop to true 05:33:30 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-12: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 05:33:32 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:33:32 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:33:32 WARNING Response is NONE 05:33:32 WARNING Response is NONE 05:33:32 DEBUG Exception is preset. Setting retry_loop to true 05:33:32 DEBUG Exception is preset. Setting retry_loop to true 05:33:32 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:33:32 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:33:36 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:33:36 WARNING Response is NONE 05:33:36 DEBUG Exception is preset. Setting retry_loop to true 05:33:36 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-21: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 05:33:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:33:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:33:38 WARNING Response is NONE 05:33:38 WARNING Response is NONE 05:33:38 DEBUG Exception is preset. Setting retry_loop to true 05:33:38 DEBUG Exception is preset. Setting retry_loop to true 05:33:38 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:33:38 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:33:43 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:33:43 WARNING Response is NONE 05:33:43 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:33:43 DEBUG Exception is preset. Setting retry_loop to true 05:33:43 WARNING Response is NONE 05:33:43 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:33:43 DEBUG Exception is preset. Setting retry_loop to true 05:33:43 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
05:33:44 INFO
05:33:44 INFO [loop_until]: kubectl --namespace=xlou top pods
05:33:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:33:44 INFO
05:33:44 INFO [loop_until]: kubectl --namespace=xlou top node
05:33:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:33:44 INFO [loop_until]: OK (rc = 0)
05:33:44 DEBUG --- stdout ---
05:33:44 DEBUG NAME                            CPU(cores)   MEMORY(bytes)
05:33:44 DEBUG admin-ui-587fc66dd5-lmth7       1m           4Mi
05:33:44 DEBUG am-55f77847b7-bb6x8             10m          2581Mi
05:33:44 DEBUG am-55f77847b7-ch6mt             12m          3143Mi
05:33:44 DEBUG am-55f77847b7-gbbjq             10m          4439Mi
05:33:44 DEBUG ds-cts-0                        12m          374Mi
05:33:44 DEBUG ds-cts-1                        8m           374Mi
05:33:44 DEBUG ds-cts-2                        7m           457Mi
05:33:44 DEBUG ds-idrepo-0                     28m          10289Mi
05:33:44 DEBUG ds-idrepo-1                     27m          10294Mi
05:33:44 DEBUG ds-idrepo-2                     27m          10285Mi
05:33:44 DEBUG end-user-ui-6845bc78c7-dxwrr    1m           4Mi
05:33:44 DEBUG idm-65858d8c4c-9pfjc            10m          3407Mi
05:33:44 DEBUG idm-65858d8c4c-h9wbp            6m           1202Mi
05:33:44 DEBUG lodemon-65c77dbb64-7jwvp        3m           65Mi
05:33:44 DEBUG login-ui-74d6fb46c-j9xdm        1m           3Mi
05:33:44 DEBUG overseer-0-556966658d-mh4rk     1m           98Mi
05:33:44 DEBUG --- stderr ---
05:33:44 DEBUG
05:33:44 INFO [loop_until]: OK (rc = 0)
05:33:44 DEBUG --- stdout ---
05:33:44 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
05:33:44 DEBUG gke-xlou-cdm-default-pool-f05840a3-2nsn   78m          0%     1386Mi          2%
05:33:44 DEBUG gke-xlou-cdm-default-pool-f05840a3-5pbc   72m          0%     4193Mi          7%
05:33:44 DEBUG gke-xlou-cdm-default-pool-f05840a3-976h   71m          0%     5588Mi          9%
05:33:44 DEBUG gke-xlou-cdm-default-pool-f05840a3-9p4b   67m          0%     3763Mi          6%
05:33:44 DEBUG gke-xlou-cdm-default-pool-f05840a3-bf2g   72m          0%     2544Mi          4%
05:33:44 DEBUG gke-xlou-cdm-default-pool-f05840a3-h81k   129m         0%     2118Mi          3%
05:33:44 DEBUG gke-xlou-cdm-default-pool-f05840a3-tnc9   72m          0%     4683Mi          7%
05:33:44 DEBUG gke-xlou-cdm-ds-32e4dcb1-1l6p             61m          0%     1120Mi          1%
05:33:44 DEBUG gke-xlou-cdm-ds-32e4dcb1-4z9d             74m          0%     10948Mi         18%
05:33:44 DEBUG gke-xlou-cdm-ds-32e4dcb1-8bsn             55m          0%     1198Mi          2%
05:33:44 DEBUG gke-xlou-cdm-ds-32e4dcb1-b374             77m          0%     10948Mi         18%
05:33:44 DEBUG gke-xlou-cdm-ds-32e4dcb1-n920             55m          0%     1118Mi          1%
05:33:44 DEBUG gke-xlou-cdm-ds-32e4dcb1-x4wx             74m          0%     10946Mi         18%
05:33:44 DEBUG gke-xlou-cdm-frontend-a8771548-k40m       66m          0%     1632Mi          2%
05:33:44 DEBUG --- stderr ---
05:33:44 DEBUG
05:33:49 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
05:33:49 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
05:33:49 WARNING Response is NONE
05:33:49 WARNING Response is NONE
05:33:49 DEBUG Exception is preset. Setting retry_loop to true
05:33:49 DEBUG Exception is preset. Setting retry_loop to true
05:33:49 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
05:33:49 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
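[Editor's note] The [loop_until] entries around the kubectl output describe a poller: run a command, expect a return code of 0, retry every `interval` seconds until `max_time` elapses. The sketch below is a hypothetical reconstruction of such a helper for illustration only; the function name, signature, and behaviour on timeout are assumptions, not the lodestar implementation.

    import subprocess
    import time

    def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,)):
        # Hypothetical sketch of a loop_until-style poller: rerun the command
        # until its return code is accepted or the time budget runs out.
        deadline = time.time() + max_time
        while True:
            proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
            if proc.returncode in expected_rc or time.time() >= deadline:
                return proc
            time.sleep(interval)

    proc = loop_until("kubectl --namespace=xlou top pods")
    print(proc.stdout)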
05:33:54 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
05:33:54 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
05:33:54 WARNING Response is NONE
05:33:54 WARNING Response is NONE
05:33:54 DEBUG Exception is preset. Setting retry_loop to true
05:33:54 DEBUG Exception is preset. Setting retry_loop to true
05:33:54 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
05:33:54 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-19:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable

Exception in thread Thread-20:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable

05:34:00 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
05:34:00 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691900984 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
05:34:00 WARNING Response is NONE
05:34:00 WARNING Response is NONE
05:34:00 DEBUG Exception is preset. Setting retry_loop to true
05:34:00 DEBUG Exception is preset. Setting retry_loop to true
05:34:00 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
05:34:00 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-18:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable

Exception in thread Thread-14:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable

05:34:44 INFO 05:34:44 INFO [loop_until]: kubectl --namespace=xlou top pods 05:34:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:34:44 INFO 05:34:44 INFO [loop_until]: kubectl --namespace=xlou top node 05:34:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:34:44 INFO [loop_until]: OK (rc = 0) 05:34:44 DEBUG --- stdout --- 05:34:44 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 12m 2582Mi am-55f77847b7-ch6mt 11m 3143Mi am-55f77847b7-gbbjq 16m 4440Mi ds-cts-0 9m 374Mi ds-cts-1 9m 375Mi ds-cts-2 10m 456Mi ds-idrepo-0 21m 10292Mi ds-idrepo-1 16m 10295Mi ds-idrepo-2 25m 10287Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10m 3407Mi idm-65858d8c4c-h9wbp 7m 1202Mi lodemon-65c77dbb64-7jwvp 3m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1m 98Mi 05:34:44 DEBUG --- stderr --- 05:34:44 DEBUG 05:34:45 INFO [loop_until]: OK (rc = 0) 05:34:45 DEBUG --- stdout --- 05:34:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 73m 0% 4192Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 79m 0% 5590Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 3761Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 2544Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 129m 0% 2129Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 76m 0% 4687Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 79m 0% 10954Mi 18% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1195Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 73m 0% 10949Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 63m 0% 10947Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 69m 0% 1632Mi 2% 05:34:45 DEBUG --- stderr --- 05:34:45 DEBUG 05:35:45 INFO 05:35:45 INFO [loop_until]: kubectl --namespace=xlou top pods 05:35:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:35:45 INFO 05:35:45 INFO [loop_until]: kubectl --namespace=xlou top node 05:35:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:35:45 INFO [loop_until]: OK (rc = 0) 05:35:45 DEBUG ---
stdout --- 05:35:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 11m 2601Mi am-55f77847b7-ch6mt 17m 3184Mi am-55f77847b7-gbbjq 17m 4459Mi ds-cts-0 538m 376Mi ds-cts-1 150m 372Mi ds-cts-2 153m 459Mi ds-idrepo-0 3110m 13293Mi ds-idrepo-1 311m 10298Mi ds-idrepo-2 287m 10279Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 11m 3434Mi idm-65858d8c4c-h9wbp 9m 1209Mi lodemon-65c77dbb64-7jwvp 2m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1016m 349Mi 05:35:45 DEBUG --- stderr --- 05:35:45 DEBUG 05:35:45 INFO [loop_until]: OK (rc = 0) 05:35:45 DEBUG --- stdout --- 05:35:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1382Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 73m 0% 4231Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 77m 0% 5607Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 3781Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 77m 0% 2546Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 133m 0% 2125Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 75m 0% 4714Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 461m 2% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 359m 2% 10950Mi 18% gke-xlou-cdm-ds-32e4dcb1-8bsn 269m 1% 1197Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 3284m 20% 13849Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 280m 1% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 344m 2% 10951Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1105m 6% 1879Mi 3% 05:35:45 DEBUG --- stderr --- 05:35:45 DEBUG 05:36:45 INFO 05:36:45 INFO [loop_until]: kubectl --namespace=xlou top pods 05:36:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:36:45 INFO 05:36:45 INFO [loop_until]: kubectl --namespace=xlou top node 05:36:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:36:45 INFO [loop_until]: OK (rc = 0) 05:36:45 DEBUG --- stdout --- 05:36:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 12m 2601Mi am-55f77847b7-ch6mt 11m 3184Mi am-55f77847b7-gbbjq 14m 4459Mi ds-cts-0 6m 373Mi ds-cts-1 8m 372Mi ds-cts-2 7m 460Mi ds-idrepo-0 2742m 13401Mi ds-idrepo-1 26m 10299Mi ds-idrepo-2 19m 10279Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 11m 3434Mi idm-65858d8c4c-h9wbp 7m 1210Mi lodemon-65c77dbb64-7jwvp 2m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1093m 349Mi 05:36:45 DEBUG --- stderr --- 05:36:45 DEBUG 05:36:45 INFO [loop_until]: OK (rc = 0) 05:36:45 DEBUG --- stdout --- 05:36:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1385Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 73m 0% 4234Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 76m 0% 5605Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 68m 0% 3782Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 2550Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 127m 0% 2140Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 74m 0% 4712Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 73m 0% 10949Mi 18% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1198Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 2763m 17% 13933Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 67m 0% 10949Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1202m 7% 1883Mi 3% 05:36:45 DEBUG --- stderr --- 05:36:45 DEBUG 05:37:45 INFO 05:37:45 INFO [loop_until]: kubectl --namespace=xlou top pods 05:37:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:37:45 INFO 05:37:45 INFO [loop_until]: kubectl --namespace=xlou top node 05:37:45 
INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:37:45 INFO [loop_until]: OK (rc = 0) 05:37:45 DEBUG --- stdout --- 05:37:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 33m 2605Mi am-55f77847b7-ch6mt 14m 3184Mi am-55f77847b7-gbbjq 8m 4459Mi ds-cts-0 9m 374Mi ds-cts-1 8m 373Mi ds-cts-2 7m 459Mi ds-idrepo-0 2853m 13419Mi ds-idrepo-1 19m 10309Mi ds-idrepo-2 26m 10282Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 12m 3435Mi idm-65858d8c4c-h9wbp 8m 1210Mi lodemon-65c77dbb64-7jwvp 7m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1154m 349Mi 05:37:45 DEBUG --- stderr --- 05:37:45 DEBUG 05:37:45 INFO [loop_until]: OK (rc = 0) 05:37:45 DEBUG --- stdout --- 05:37:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 74m 0% 4234Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 5604Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 3785Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 2553Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2133Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 82m 0% 4725Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 75m 0% 10950Mi 18% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1202Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 2945m 18% 14051Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 66m 0% 10960Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1247m 7% 1896Mi 3% 05:37:45 DEBUG --- stderr --- 05:37:45 DEBUG 05:38:45 INFO 05:38:45 INFO [loop_until]: kubectl --namespace=xlou top pods 05:38:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:38:45 INFO 05:38:45 INFO [loop_until]: kubectl --namespace=xlou top node 05:38:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:38:45 INFO [loop_until]: OK (rc = 0) 05:38:45 DEBUG --- stdout --- 05:38:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 11m 2606Mi am-55f77847b7-ch6mt 8m 3185Mi am-55f77847b7-gbbjq 10m 4459Mi ds-cts-0 7m 374Mi ds-cts-1 8m 373Mi ds-cts-2 8m 459Mi ds-idrepo-0 2804m 13484Mi ds-idrepo-1 19m 10310Mi ds-idrepo-2 22m 10281Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 12m 3435Mi idm-65858d8c4c-h9wbp 6m 1210Mi lodemon-65c77dbb64-7jwvp 2m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1239m 349Mi 05:38:45 DEBUG --- stderr --- 05:38:45 DEBUG 05:38:45 INFO [loop_until]: OK (rc = 0) 05:38:45 DEBUG --- stdout --- 05:38:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1382Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 4233Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 70m 0% 5605Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 3785Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 2549Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 127m 0% 2128Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 76m 0% 4713Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 73m 0% 10950Mi 18% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1199Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 2901m 18% 14058Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 68m 0% 10958Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1343m 8% 1885Mi 3% 05:38:45 DEBUG --- stderr --- 05:38:45 DEBUG 05:39:45 INFO 05:39:45 INFO [loop_until]: kubectl --namespace=xlou top pods 05:39:45 INFO [loop_until]: (max_time=180, 
interval=5, expected_rc=[0] 05:39:45 INFO 05:39:45 INFO [loop_until]: kubectl --namespace=xlou top node 05:39:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:39:45 INFO [loop_until]: OK (rc = 0) 05:39:45 DEBUG --- stdout --- 05:39:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 10m 2606Mi am-55f77847b7-ch6mt 10m 3186Mi am-55f77847b7-gbbjq 9m 4459Mi ds-cts-0 9m 374Mi ds-cts-1 9m 373Mi ds-cts-2 10m 461Mi ds-idrepo-0 2969m 13663Mi ds-idrepo-1 12m 10310Mi ds-idrepo-2 20m 10282Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 17m 3435Mi idm-65858d8c4c-h9wbp 9m 1210Mi lodemon-65c77dbb64-7jwvp 2m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1330m 350Mi 05:39:45 DEBUG --- stderr --- 05:39:45 DEBUG 05:39:45 INFO [loop_until]: OK (rc = 0) 05:39:45 DEBUG --- stdout --- 05:39:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 73m 0% 4236Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 73m 0% 5603Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 3784Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 2549Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2130Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 80m 0% 4709Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 72m 0% 10947Mi 18% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1199Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 3038m 19% 14223Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 60m 0% 10963Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1402m 8% 1882Mi 3% 05:39:45 DEBUG --- stderr --- 05:39:45 DEBUG 05:40:45 INFO 05:40:45 INFO [loop_until]: kubectl --namespace=xlou top pods 05:40:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:40:45 INFO [loop_until]: OK (rc = 0) 05:40:45 DEBUG --- stdout --- 05:40:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 10m 2606Mi am-55f77847b7-ch6mt 11m 3186Mi am-55f77847b7-gbbjq 9m 4459Mi ds-cts-0 8m 374Mi ds-cts-1 9m 373Mi ds-cts-2 7m 461Mi ds-idrepo-0 13m 13663Mi ds-idrepo-1 15m 10310Mi ds-idrepo-2 20m 10282Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 17m 3438Mi idm-65858d8c4c-h9wbp 6m 1210Mi lodemon-65c77dbb64-7jwvp 1m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1m 98Mi 05:40:45 DEBUG --- stderr --- 05:40:45 DEBUG 05:40:45 INFO 05:40:45 INFO [loop_until]: kubectl --namespace=xlou top node 05:40:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:40:45 INFO [loop_until]: OK (rc = 0) 05:40:45 DEBUG --- stdout --- 05:40:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1385Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 73m 0% 4235Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 70m 0% 5607Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 3784Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 2552Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 125m 0% 2126Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 80m 0% 4715Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 70m 0% 10953Mi 18% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1200Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 64m 0% 14225Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 61m 0% 10964Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 67m 0% 1633Mi 2% 05:40:45 DEBUG --- stderr --- 05:40:45 DEBUG 05:41:45 INFO 05:41:45 
INFO [loop_until]: kubectl --namespace=xlou top pods 05:41:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:41:45 INFO [loop_until]: OK (rc = 0) 05:41:45 DEBUG --- stdout --- 05:41:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 10m 2607Mi am-55f77847b7-ch6mt 10m 3187Mi am-55f77847b7-gbbjq 21m 4456Mi ds-cts-0 8m 374Mi ds-cts-1 7m 373Mi ds-cts-2 7m 461Mi ds-idrepo-0 20m 13663Mi ds-idrepo-1 2670m 12358Mi ds-idrepo-2 21m 10285Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 12m 3434Mi idm-65858d8c4c-h9wbp 8m 1209Mi lodemon-65c77dbb64-7jwvp 2m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 931m 373Mi 05:41:45 DEBUG --- stderr --- 05:41:45 DEBUG 05:41:45 INFO 05:41:45 INFO [loop_until]: kubectl --namespace=xlou top node 05:41:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:41:45 INFO [loop_until]: OK (rc = 0) 05:41:45 DEBUG --- stdout --- 05:41:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 4236Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 81m 0% 5603Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 3785Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 2550Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2133Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 76m 0% 4711Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 72m 0% 10956Mi 18% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1199Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 69m 0% 14226Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2793m 17% 13177Mi 22% gke-xlou-cdm-frontend-a8771548-k40m 1019m 6% 1905Mi 3% 05:41:45 DEBUG --- stderr --- 05:41:45 DEBUG 05:42:45 INFO 05:42:45 INFO [loop_until]: kubectl --namespace=xlou top pods 05:42:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:42:45 INFO 05:42:45 INFO [loop_until]: kubectl --namespace=xlou top node 05:42:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:42:45 INFO [loop_until]: OK (rc = 0) 05:42:45 DEBUG --- stdout --- 05:42:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 15m 2608Mi am-55f77847b7-ch6mt 11m 3187Mi am-55f77847b7-gbbjq 13m 4456Mi ds-cts-0 6m 374Mi ds-cts-1 8m 374Mi ds-cts-2 8m 461Mi ds-idrepo-0 13m 13663Mi ds-idrepo-1 2837m 13371Mi ds-idrepo-2 17m 10287Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 11m 3434Mi idm-65858d8c4c-h9wbp 6m 1209Mi lodemon-65c77dbb64-7jwvp 8m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1089m 373Mi 05:42:45 DEBUG --- stderr --- 05:42:45 DEBUG 05:42:45 INFO [loop_until]: OK (rc = 0) 05:42:45 DEBUG --- stdout --- 05:42:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1385Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 72m 0% 4238Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 74m 0% 5606Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 70m 0% 3787Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 2549Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 127m 0% 2133Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 72m 0% 4712Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 72m 0% 10963Mi 18% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1201Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 62m 0% 14226Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2831m 17% 13898Mi 23% 
gke-xlou-cdm-frontend-a8771548-k40m 1134m 7% 1904Mi 3% 05:42:45 DEBUG --- stderr --- 05:42:45 DEBUG 05:43:45 INFO 05:43:45 INFO [loop_until]: kubectl --namespace=xlou top pods 05:43:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:43:45 INFO 05:43:45 INFO [loop_until]: kubectl --namespace=xlou top node 05:43:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:43:45 INFO [loop_until]: OK (rc = 0) 05:43:45 DEBUG --- stdout --- 05:43:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 20m 2607Mi am-55f77847b7-ch6mt 10m 3187Mi am-55f77847b7-gbbjq 13m 4456Mi ds-cts-0 7m 374Mi ds-cts-1 7m 373Mi ds-cts-2 10m 461Mi ds-idrepo-0 12m 13663Mi ds-idrepo-1 2833m 13419Mi ds-idrepo-2 20m 10289Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 11m 3429Mi idm-65858d8c4c-h9wbp 7m 1210Mi lodemon-65c77dbb64-7jwvp 2m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1127m 373Mi 05:43:45 DEBUG --- stderr --- 05:43:45 DEBUG 05:43:45 INFO [loop_until]: OK (rc = 0) 05:43:45 DEBUG --- stdout --- 05:43:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1378Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 73m 0% 4238Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 72m 0% 5603Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 78m 0% 3785Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 2547Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 115m 0% 2130Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 74m 0% 4707Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 69m 0% 10961Mi 18% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1197Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 65m 0% 14226Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2954m 18% 13989Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1204m 7% 1907Mi 3% 05:43:45 DEBUG --- stderr --- 05:43:45 DEBUG 05:44:45 INFO 05:44:45 INFO [loop_until]: kubectl --namespace=xlou top pods 05:44:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:44:45 INFO [loop_until]: OK (rc = 0) 05:44:45 DEBUG --- stdout --- 05:44:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 7m 2607Mi am-55f77847b7-ch6mt 11m 3188Mi am-55f77847b7-gbbjq 9m 4457Mi ds-cts-0 12m 377Mi ds-cts-1 7m 373Mi ds-cts-2 6m 459Mi ds-idrepo-0 14m 13663Mi ds-idrepo-1 2915m 13420Mi ds-idrepo-2 20m 10291Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 14m 3429Mi idm-65858d8c4c-h9wbp 9m 1210Mi lodemon-65c77dbb64-7jwvp 6m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1187m 373Mi 05:44:45 DEBUG --- stderr --- 05:44:45 DEBUG 05:44:46 INFO 05:44:46 INFO [loop_until]: kubectl --namespace=xlou top node 05:44:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:44:46 INFO [loop_until]: OK (rc = 0) 05:44:46 DEBUG --- stdout --- 05:44:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1381Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 75m 0% 4239Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 5608Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 61m 0% 3788Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 76m 0% 2550Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 125m 0% 2134Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 81m 0% 4709Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 76m 0% 10961Mi 18% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1198Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 66m 0% 
14220Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2981m 18% 13982Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1291m 8% 1904Mi 3% 05:44:46 DEBUG --- stderr --- 05:44:46 DEBUG 05:45:45 INFO 05:45:45 INFO [loop_until]: kubectl --namespace=xlou top pods 05:45:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:45:46 INFO [loop_until]: OK (rc = 0) 05:45:46 DEBUG --- stdout --- 05:45:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 8m 2608Mi am-55f77847b7-ch6mt 12m 3188Mi am-55f77847b7-gbbjq 11m 4458Mi ds-cts-0 6m 377Mi ds-cts-1 7m 376Mi ds-cts-2 7m 458Mi ds-idrepo-0 13m 13663Mi ds-idrepo-1 3045m 13620Mi ds-idrepo-2 22m 10288Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 11m 3434Mi idm-65858d8c4c-h9wbp 9m 1211Mi lodemon-65c77dbb64-7jwvp 2m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1217m 373Mi 05:45:46 DEBUG --- stderr --- 05:45:46 DEBUG 05:45:46 INFO 05:45:46 INFO [loop_until]: kubectl --namespace=xlou top node 05:45:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:45:46 INFO [loop_until]: OK (rc = 0) 05:45:46 DEBUG --- stdout --- 05:45:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 83m 0% 1394Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 71m 0% 4237Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 71m 0% 5607Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 3789Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 2552Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 128m 0% 2136Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 72m 0% 4716Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 65m 0% 10960Mi 18% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1196Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 64m 0% 14223Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3065m 19% 14175Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1356m 8% 1907Mi 3% 05:45:46 DEBUG --- stderr --- 05:45:46 DEBUG 05:46:46 INFO 05:46:46 INFO [loop_until]: kubectl --namespace=xlou top pods 05:46:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:46:46 INFO [loop_until]: OK (rc = 0) 05:46:46 DEBUG --- stdout --- 05:46:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 8m 2611Mi am-55f77847b7-ch6mt 19m 3186Mi am-55f77847b7-gbbjq 10m 4458Mi ds-cts-0 7m 378Mi ds-cts-1 9m 375Mi ds-cts-2 7m 458Mi ds-idrepo-0 12m 13663Mi ds-idrepo-1 13m 13647Mi ds-idrepo-2 16m 10289Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10m 3434Mi idm-65858d8c4c-h9wbp 6m 1211Mi lodemon-65c77dbb64-7jwvp 7m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1m 98Mi 05:46:46 DEBUG --- stderr --- 05:46:46 DEBUG 05:46:46 INFO 05:46:46 INFO [loop_until]: kubectl --namespace=xlou top node 05:46:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:46:46 INFO [loop_until]: OK (rc = 0) 05:46:46 DEBUG --- stdout --- 05:46:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 73m 0% 4237Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 70m 0% 5605Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 73m 0% 3802Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 2551Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 130m 0% 2133Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 73m 0% 4715Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 64m 
0% 10961Mi 18% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1196Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 63m 0% 14227Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 62m 0% 14205Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 67m 0% 1634Mi 2% 05:46:46 DEBUG --- stderr --- 05:46:46 DEBUG 05:47:46 INFO 05:47:46 INFO [loop_until]: kubectl --namespace=xlou top pods 05:47:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:47:46 INFO [loop_until]: OK (rc = 0) 05:47:46 DEBUG --- stdout --- 05:47:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 11m 2611Mi am-55f77847b7-ch6mt 9m 3187Mi am-55f77847b7-gbbjq 9m 4459Mi ds-cts-0 7m 377Mi ds-cts-1 6m 376Mi ds-cts-2 7m 458Mi ds-idrepo-0 12m 13663Mi ds-idrepo-1 13m 13647Mi ds-idrepo-2 1859m 12146Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10m 3434Mi idm-65858d8c4c-h9wbp 6m 1211Mi lodemon-65c77dbb64-7jwvp 2m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 870m 385Mi 05:47:46 DEBUG --- stderr --- 05:47:46 DEBUG 05:47:46 INFO 05:47:46 INFO [loop_until]: kubectl --namespace=xlou top node 05:47:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:47:46 INFO [loop_until]: OK (rc = 0) 05:47:46 DEBUG --- stdout --- 05:47:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 4234Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 5606Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 70m 0% 3789Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 2554Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 127m 0% 2136Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 73m 0% 4715Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2539m 15% 12856Mi 21% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1197Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 64m 0% 14222Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 65m 0% 14208Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1352m 8% 1920Mi 3% 05:47:46 DEBUG --- stderr --- 05:47:46 DEBUG 05:48:46 INFO 05:48:46 INFO [loop_until]: kubectl --namespace=xlou top pods 05:48:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:48:46 INFO [loop_until]: OK (rc = 0) 05:48:46 DEBUG --- stdout --- 05:48:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 8m 2611Mi am-55f77847b7-ch6mt 9m 3187Mi am-55f77847b7-gbbjq 12m 4468Mi ds-cts-0 7m 378Mi ds-cts-1 7m 376Mi ds-cts-2 6m 458Mi ds-idrepo-0 12m 13663Mi ds-idrepo-1 23m 13646Mi ds-idrepo-2 2809m 13397Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10m 3435Mi idm-65858d8c4c-h9wbp 6m 1211Mi lodemon-65c77dbb64-7jwvp 2m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1093m 385Mi 05:48:46 DEBUG --- stderr --- 05:48:46 DEBUG 05:48:46 INFO 05:48:46 INFO [loop_until]: kubectl --namespace=xlou top node 05:48:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:48:46 INFO [loop_until]: OK (rc = 0) 05:48:46 DEBUG --- stdout --- 05:48:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 70m 0% 4236Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 73m 0% 5626Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 3787Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 2554Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 127m 0% 2137Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 
71m 0% 4711Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2846m 17% 13981Mi 23% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1196Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 68m 0% 14226Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 75m 0% 14200Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1169m 7% 1918Mi 3% 05:48:46 DEBUG --- stderr --- 05:48:46 DEBUG 05:49:46 INFO 05:49:46 INFO [loop_until]: kubectl --namespace=xlou top pods 05:49:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:49:46 INFO [loop_until]: OK (rc = 0) 05:49:46 DEBUG --- stdout --- 05:49:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 9m 2611Mi am-55f77847b7-ch6mt 9m 3187Mi am-55f77847b7-gbbjq 8m 4468Mi ds-cts-0 8m 377Mi ds-cts-1 7m 376Mi ds-cts-2 6m 458Mi ds-idrepo-0 12m 13663Mi ds-idrepo-1 10m 13646Mi ds-idrepo-2 2705m 13318Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10m 3445Mi idm-65858d8c4c-h9wbp 12m 1211Mi lodemon-65c77dbb64-7jwvp 7m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1124m 386Mi 05:49:46 DEBUG --- stderr --- 05:49:46 DEBUG 05:49:46 INFO 05:49:46 INFO [loop_until]: kubectl --namespace=xlou top node 05:49:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:49:46 INFO [loop_until]: OK (rc = 0) 05:49:46 DEBUG --- stdout --- 05:49:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 72m 0% 4237Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 70m 0% 5611Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 3790Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 78m 0% 2551Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 145m 0% 2140Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 75m 0% 4718Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2936m 18% 13974Mi 23% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1196Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 63m 0% 14228Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 59m 0% 14205Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1172m 7% 1917Mi 3% 05:49:46 DEBUG --- stderr --- 05:49:46 DEBUG 05:50:46 INFO 05:50:46 INFO [loop_until]: kubectl --namespace=xlou top pods 05:50:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:50:46 INFO [loop_until]: OK (rc = 0) 05:50:46 DEBUG --- stdout --- 05:50:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 9m 2616Mi am-55f77847b7-ch6mt 28m 3197Mi am-55f77847b7-gbbjq 30m 4471Mi ds-cts-0 8m 377Mi ds-cts-1 8m 376Mi ds-cts-2 7m 458Mi ds-idrepo-0 12m 13664Mi ds-idrepo-1 16m 13649Mi ds-idrepo-2 2712m 13391Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 13m 3445Mi idm-65858d8c4c-h9wbp 9m 1212Mi lodemon-65c77dbb64-7jwvp 1m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1186m 386Mi 05:50:46 DEBUG --- stderr --- 05:50:46 DEBUG 05:50:46 INFO 05:50:46 INFO [loop_until]: kubectl --namespace=xlou top node 05:50:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:50:46 INFO [loop_until]: OK (rc = 0) 05:50:46 DEBUG --- stdout --- 05:50:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1386Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 91m 0% 4248Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 90m 0% 5620Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 3792Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 
2553Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 119m 0% 2140Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 4723Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2884m 18% 13975Mi 23% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1195Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 61m 0% 14228Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 61m 0% 14207Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1273m 8% 1919Mi 3% 05:50:46 DEBUG --- stderr --- 05:50:46 DEBUG 05:51:46 INFO 05:51:46 INFO [loop_until]: kubectl --namespace=xlou top pods 05:51:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:51:46 INFO [loop_until]: OK (rc = 0) 05:51:46 DEBUG --- stdout --- 05:51:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 8m 2623Mi am-55f77847b7-ch6mt 12m 3197Mi am-55f77847b7-gbbjq 10m 4472Mi ds-cts-0 7m 378Mi ds-cts-1 7m 376Mi ds-cts-2 7m 458Mi ds-idrepo-0 13m 13663Mi ds-idrepo-1 10m 13648Mi ds-idrepo-2 2804m 13564Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 5m 3445Mi idm-65858d8c4c-h9wbp 16m 1216Mi lodemon-65c77dbb64-7jwvp 2m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1233m 386Mi 05:51:46 DEBUG --- stderr --- 05:51:46 DEBUG 05:51:46 INFO 05:51:46 INFO [loop_until]: kubectl --namespace=xlou top node 05:51:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:51:46 INFO [loop_until]: OK (rc = 0) 05:51:46 DEBUG --- stdout --- 05:51:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 74m 0% 4249Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 70m 0% 5619Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 3801Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 79m 0% 2557Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2138Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 66m 0% 4722Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2930m 18% 14139Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1199Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 62m 0% 14227Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 61m 0% 14208Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1345m 8% 1917Mi 3% 05:51:46 DEBUG --- stderr --- 05:51:46 DEBUG 05:52:46 INFO 05:52:46 INFO [loop_until]: kubectl --namespace=xlou top pods 05:52:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:52:46 INFO [loop_until]: OK (rc = 0) 05:52:46 DEBUG --- stdout --- 05:52:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 8m 2633Mi am-55f77847b7-ch6mt 9m 3197Mi am-55f77847b7-gbbjq 10m 4472Mi ds-cts-0 11m 378Mi ds-cts-1 6m 377Mi ds-cts-2 6m 458Mi ds-idrepo-0 13m 13663Mi ds-idrepo-1 11m 13648Mi ds-idrepo-2 1038m 13580Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 6m 3445Mi idm-65858d8c4c-h9wbp 9m 1219Mi lodemon-65c77dbb64-7jwvp 1m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 176m 99Mi 05:52:46 DEBUG --- stderr --- 05:52:46 DEBUG 05:52:46 INFO 05:52:46 INFO [loop_until]: kubectl --namespace=xlou top node 05:52:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:52:46 INFO [loop_until]: OK (rc = 0) 05:52:46 DEBUG --- stdout --- 05:52:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1385Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 4246Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 5620Mi 9% 
gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 3814Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 70m 0% 2566Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 127m 0% 2137Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 68m 0% 4720Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1246m 7% 14154Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1198Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 63m 0% 14222Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 62m 0% 14212Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 585m 3% 1631Mi 2% 05:52:46 DEBUG --- stderr --- 05:52:46 DEBUG 05:53:46 INFO 05:53:46 INFO [loop_until]: kubectl --namespace=xlou top pods 05:53:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:53:46 INFO [loop_until]: OK (rc = 0) 05:53:46 DEBUG --- stdout --- 05:53:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 7m 2644Mi am-55f77847b7-ch6mt 9m 3197Mi am-55f77847b7-gbbjq 9m 4472Mi ds-cts-0 8m 378Mi ds-cts-1 6m 377Mi ds-cts-2 6m 458Mi ds-idrepo-0 12m 13663Mi ds-idrepo-1 17m 13643Mi ds-idrepo-2 15m 13580Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 6m 3446Mi idm-65858d8c4c-h9wbp 7m 1231Mi lodemon-65c77dbb64-7jwvp 2m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 328m 306Mi 05:53:46 DEBUG --- stderr --- 05:53:46 DEBUG 05:53:47 INFO 05:53:47 INFO [loop_until]: kubectl --namespace=xlou top node 05:53:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:53:47 INFO [loop_until]: OK (rc = 0) 05:53:47 DEBUG --- stdout --- 05:53:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 71m 0% 4246Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 70m 0% 5620Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 3825Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 2575Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 127m 0% 2140Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 4724Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 14157Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1209Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 61m 0% 14228Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 62m 0% 14202Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1520m 9% 2026Mi 3% 05:53:47 DEBUG --- stderr --- 05:53:47 DEBUG 05:54:46 INFO 05:54:46 INFO [loop_until]: kubectl --namespace=xlou top pods 05:54:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:54:47 INFO [loop_until]: OK (rc = 0) 05:54:47 DEBUG --- stdout --- 05:54:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 125m 4026Mi am-55f77847b7-ch6mt 127m 4060Mi am-55f77847b7-gbbjq 124m 4668Mi ds-cts-0 8m 380Mi ds-cts-1 8m 381Mi ds-cts-2 7m 459Mi ds-idrepo-0 8123m 13667Mi ds-idrepo-1 1862m 13660Mi ds-idrepo-2 2231m 13749Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 11925m 4102Mi idm-65858d8c4c-h9wbp 9779m 4228Mi lodemon-65c77dbb64-7jwvp 7m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1224m 529Mi 05:54:47 DEBUG --- stderr --- 05:54:47 DEBUG 05:54:47 INFO 05:54:47 INFO [loop_until]: kubectl --namespace=xlou top node 05:54:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:54:47 INFO [loop_until]: OK (rc = 0) 05:54:47 DEBUG --- stdout --- 05:54:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1382Mi 2% 
gke-xlou-cdm-default-pool-f05840a3-5pbc 193m 1% 5315Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 184m 1% 5817Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 184m 1% 5063Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 10188m 64% 5555Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2390m 15% 2176Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 11421m 71% 5378Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2201m 13% 14239Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1198Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 8752m 55% 14356Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2151m 13% 14211Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1433m 9% 2052Mi 3% 05:54:47 DEBUG --- stderr --- 05:54:47 DEBUG 05:55:47 INFO 05:55:47 INFO [loop_until]: kubectl --namespace=xlou top pods 05:55:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:55:47 INFO [loop_until]: OK (rc = 0) 05:55:47 DEBUG --- stdout --- 05:55:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 114m 5671Mi am-55f77847b7-ch6mt 146m 5720Mi am-55f77847b7-gbbjq 98m 4663Mi ds-cts-0 7m 380Mi ds-cts-1 8m 379Mi ds-cts-2 7m 460Mi ds-idrepo-0 8922m 13823Mi ds-idrepo-1 2567m 13783Mi ds-idrepo-2 2365m 13816Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10086m 4151Mi idm-65858d8c4c-h9wbp 8723m 4248Mi lodemon-65c77dbb64-7jwvp 2m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1194m 528Mi 05:55:47 DEBUG --- stderr --- 05:55:47 DEBUG 05:55:47 INFO 05:55:47 INFO [loop_until]: kubectl --namespace=xlou top node 05:55:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:55:47 INFO [loop_until]: OK (rc = 0) 05:55:47 DEBUG --- stdout --- 05:55:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1380Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 187m 1% 6764Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 160m 1% 5821Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 179m 1% 6896Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9011m 56% 5573Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2430m 15% 2172Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10507m 66% 5421Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2688m 16% 14374Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1200Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 9192m 57% 14363Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2524m 15% 14319Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1273m 8% 2057Mi 3% 05:55:47 DEBUG --- stderr --- 05:55:47 DEBUG 05:56:47 INFO 05:56:47 INFO [loop_until]: kubectl --namespace=xlou top pods 05:56:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:56:47 INFO [loop_until]: OK (rc = 0) 05:56:47 DEBUG --- stdout --- 05:56:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 89m 5730Mi am-55f77847b7-ch6mt 98m 5721Mi am-55f77847b7-gbbjq 106m 4816Mi ds-cts-0 7m 380Mi ds-cts-1 10m 379Mi ds-cts-2 6m 459Mi ds-idrepo-0 9719m 13822Mi ds-idrepo-1 2691m 13823Mi ds-idrepo-2 2433m 13823Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10046m 4170Mi idm-65858d8c4c-h9wbp 9171m 4259Mi lodemon-65c77dbb64-7jwvp 2m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1187m 529Mi 05:56:47 DEBUG --- stderr --- 05:56:47 DEBUG 05:56:47 INFO 05:56:47 INFO [loop_until]: kubectl --namespace=xlou top node 05:56:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:56:47 INFO 
[loop_until]: OK (rc = 0) 05:56:47 DEBUG --- stdout --- 05:56:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 166m 1% 6783Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 168m 1% 6058Mi 10% gke-xlou-cdm-default-pool-f05840a3-9p4b 153m 0% 6902Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9219m 58% 5587Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2455m 15% 2176Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10686m 67% 5444Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2786m 17% 14374Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1200Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10143m 63% 14360Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2987m 18% 14353Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1272m 8% 2057Mi 3% 05:56:47 DEBUG --- stderr --- 05:56:47 DEBUG 05:57:47 INFO 05:57:47 INFO [loop_until]: kubectl --namespace=xlou top pods 05:57:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:57:47 INFO [loop_until]: OK (rc = 0) 05:57:47 DEBUG --- stdout --- 05:57:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 106m 5734Mi am-55f77847b7-ch6mt 125m 5729Mi am-55f77847b7-gbbjq 119m 5774Mi ds-cts-0 6m 380Mi ds-cts-1 10m 379Mi ds-cts-2 6m 459Mi ds-idrepo-0 9516m 13823Mi ds-idrepo-1 2616m 13818Mi ds-idrepo-2 2467m 13813Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10200m 4214Mi idm-65858d8c4c-h9wbp 8715m 4269Mi lodemon-65c77dbb64-7jwvp 2m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1163m 531Mi 05:57:47 DEBUG --- stderr --- 05:57:47 DEBUG 05:57:47 INFO 05:57:47 INFO [loop_until]: kubectl --namespace=xlou top node 05:57:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:57:47 INFO [loop_until]: OK (rc = 0) 05:57:47 DEBUG --- stdout --- 05:57:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 183m 1% 6771Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 165m 1% 6919Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 168m 1% 6906Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9063m 57% 5595Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2430m 15% 2175Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10699m 67% 5481Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2700m 16% 14376Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1203Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 9801m 61% 14356Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2770m 17% 14353Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1262m 7% 2059Mi 3% 05:57:47 DEBUG --- stderr --- 05:57:47 DEBUG 05:58:47 INFO 05:58:47 INFO [loop_until]: kubectl --namespace=xlou top pods 05:58:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:58:47 INFO [loop_until]: OK (rc = 0) 05:58:47 DEBUG --- stdout --- 05:58:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 91m 5765Mi am-55f77847b7-ch6mt 97m 5765Mi am-55f77847b7-gbbjq 95m 5774Mi ds-cts-0 9m 380Mi ds-cts-1 8m 380Mi ds-cts-2 6m 459Mi ds-idrepo-0 10772m 13812Mi ds-idrepo-1 3319m 13798Mi ds-idrepo-2 3136m 13819Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 9892m 4254Mi idm-65858d8c4c-h9wbp 8808m 4285Mi lodemon-65c77dbb64-7jwvp 6m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1202m 532Mi 05:58:47 DEBUG --- stderr --- 
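By this point in the run the load has ramped up: the idm and ds-idrepo pods in the samples above are drawing roughly 9-11 CPU cores each, while lodemon keeps sampling kubectl top pods and kubectl top node once a minute through loop_until. As a rough illustration of what one of those samples contains, here is a small sketch; it assumes Python with kubectl on PATH, metrics-server available, and the m / Mi units shown in the samples above. It is not lodemon's implementation.

    # Illustrative sketch, not lodemon's code: take one "kubectl top pods" sample for
    # the xlou namespace and normalise CPU to millicores and memory to MiB.
    import subprocess

    def top_pods_sample(namespace="xlou"):
        out = subprocess.run(
            ["kubectl", f"--namespace={namespace}", "top", "pods", "--no-headers"],
            check=True, capture_output=True, text=True,
        ).stdout
        sample = {}
        for row in out.splitlines():
            name, cpu, mem = row.split()[:3]  # e.g. "idm-65858d8c4c-9pfjc 10046m 4170Mi"
            sample[name] = (int(cpu.rstrip("m")), int(mem.rstrip("Mi")))  # (millicores, MiB)
        return sample

    # A sample taken around 05:56 would look roughly like
    # {'idm-65858d8c4c-9pfjc': (10046, 4170), 'ds-idrepo-0': (9719, 13822), ...}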
05:58:47 DEBUG 05:58:47 INFO 05:58:47 INFO [loop_until]: kubectl --namespace=xlou top node 05:58:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:58:47 INFO [loop_until]: OK (rc = 0) 05:58:47 DEBUG --- stdout --- 05:58:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1385Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 164m 1% 6809Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 154m 0% 6920Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 151m 0% 6937Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 8754m 55% 5611Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2405m 15% 2172Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10438m 65% 5532Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2955m 18% 14343Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1199Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10507m 66% 14357Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3551m 22% 14332Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1273m 8% 2059Mi 3% 05:58:47 DEBUG --- stderr --- 05:58:47 DEBUG 05:59:47 INFO 05:59:47 INFO [loop_until]: kubectl --namespace=xlou top pods 05:59:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:59:47 INFO [loop_until]: OK (rc = 0) 05:59:47 DEBUG --- stdout --- 05:59:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 95m 5766Mi am-55f77847b7-ch6mt 100m 5766Mi am-55f77847b7-gbbjq 99m 5783Mi ds-cts-0 8m 380Mi ds-cts-1 12m 379Mi ds-cts-2 8m 459Mi ds-idrepo-0 10283m 13806Mi ds-idrepo-1 2895m 13805Mi ds-idrepo-2 3083m 13822Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10025m 4310Mi idm-65858d8c4c-h9wbp 8485m 4298Mi lodemon-65c77dbb64-7jwvp 1m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1193m 536Mi 05:59:47 DEBUG --- stderr --- 05:59:47 DEBUG 05:59:47 INFO 05:59:47 INFO [loop_until]: kubectl --namespace=xlou top node 05:59:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:59:47 INFO [loop_until]: OK (rc = 0) 05:59:47 DEBUG --- stdout --- 05:59:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1385Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 158m 0% 6812Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 159m 1% 6930Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 152m 0% 6940Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 8885m 55% 5627Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2354m 14% 2170Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10673m 67% 5581Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 3059m 19% 14364Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1198Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10737m 67% 14335Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 66m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3095m 19% 14334Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1257m 7% 2063Mi 3% 05:59:47 DEBUG --- stderr --- 05:59:47 DEBUG 06:00:47 INFO 06:00:47 INFO [loop_until]: kubectl --namespace=xlou top pods 06:00:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:00:47 INFO 06:00:47 INFO [loop_until]: kubectl --namespace=xlou top node 06:00:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:00:47 INFO [loop_until]: OK (rc = 0) 06:00:47 DEBUG --- stdout --- 06:00:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 96m 5770Mi am-55f77847b7-ch6mt 105m 5770Mi am-55f77847b7-gbbjq 100m 5785Mi ds-cts-0 7m 380Mi ds-cts-1 11m 380Mi ds-cts-2 7m 
460Mi ds-idrepo-0 10219m 13809Mi ds-idrepo-1 2907m 13846Mi ds-idrepo-2 2974m 13829Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10385m 4357Mi idm-65858d8c4c-h9wbp 9431m 4314Mi lodemon-65c77dbb64-7jwvp 2m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1217m 537Mi 06:00:47 DEBUG --- stderr --- 06:00:47 DEBUG 06:00:47 INFO [loop_until]: OK (rc = 0) 06:00:47 DEBUG --- stdout --- 06:00:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 164m 1% 6816Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 159m 1% 6928Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 151m 0% 6943Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9219m 58% 5649Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2443m 15% 2170Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10677m 67% 5631Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 3057m 19% 14370Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1203Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10341m 65% 14375Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3079m 19% 14356Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1239m 7% 2063Mi 3% 06:00:47 DEBUG --- stderr --- 06:00:47 DEBUG 06:01:47 INFO 06:01:47 INFO [loop_until]: kubectl --namespace=xlou top pods 06:01:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:01:47 INFO 06:01:47 INFO [loop_until]: kubectl --namespace=xlou top node 06:01:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:01:47 INFO [loop_until]: OK (rc = 0) 06:01:47 DEBUG --- stdout --- 06:01:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 91m 5770Mi am-55f77847b7-ch6mt 97m 5770Mi am-55f77847b7-gbbjq 95m 5785Mi ds-cts-0 7m 380Mi ds-cts-1 8m 380Mi ds-cts-2 6m 460Mi ds-idrepo-0 10310m 13815Mi ds-idrepo-1 3497m 13851Mi ds-idrepo-2 3239m 13801Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 9975m 4410Mi idm-65858d8c4c-h9wbp 8640m 4337Mi lodemon-65c77dbb64-7jwvp 2m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1165m 538Mi 06:01:47 DEBUG --- stderr --- 06:01:47 DEBUG 06:01:48 INFO [loop_until]: OK (rc = 0) 06:01:48 DEBUG --- stdout --- 06:01:48 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 85m 0% 1385Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 158m 0% 6815Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 155m 0% 6929Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 147m 0% 6941Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9341m 58% 5670Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2363m 14% 2174Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10064m 63% 5679Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 3247m 20% 14380Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1202Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10629m 66% 14349Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3278m 20% 14374Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1246m 7% 2060Mi 3% 06:01:48 DEBUG --- stderr --- 06:01:48 DEBUG 06:02:47 INFO 06:02:47 INFO [loop_until]: kubectl --namespace=xlou top pods 06:02:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:02:48 INFO 06:02:48 INFO [loop_until]: kubectl --namespace=xlou top node 06:02:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:02:48 INFO [loop_until]: OK (rc = 0) 06:02:48 DEBUG --- stdout --- 06:02:48 DEBUG NAME CPU(cores) MEMORY(bytes) 
admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 94m 5770Mi am-55f77847b7-ch6mt 99m 5770Mi am-55f77847b7-gbbjq 102m 5795Mi ds-cts-0 6m 380Mi ds-cts-1 8m 380Mi ds-cts-2 6m 460Mi ds-idrepo-0 10585m 13824Mi ds-idrepo-1 3177m 13849Mi ds-idrepo-2 3159m 13787Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10247m 4456Mi idm-65858d8c4c-h9wbp 9346m 4363Mi lodemon-65c77dbb64-7jwvp 6m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1148m 538Mi 06:02:48 DEBUG --- stderr --- 06:02:48 DEBUG 06:02:48 INFO [loop_until]: OK (rc = 0) 06:02:48 DEBUG --- stdout --- 06:02:48 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 162m 1% 6814Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 161m 1% 6935Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 153m 0% 6944Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9595m 60% 5689Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2426m 15% 2170Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10566m 66% 5728Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 3365m 21% 14381Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1203Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10259m 64% 14339Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3177m 19% 14332Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1247m 7% 2064Mi 3% 06:02:48 DEBUG --- stderr --- 06:02:48 DEBUG 06:03:48 INFO 06:03:48 INFO [loop_until]: kubectl --namespace=xlou top pods 06:03:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:03:48 INFO [loop_until]: OK (rc = 0) 06:03:48 DEBUG --- stdout --- 06:03:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 90m 5771Mi am-55f77847b7-ch6mt 119m 5771Mi am-55f77847b7-gbbjq 89m 5789Mi ds-cts-0 8m 381Mi ds-cts-1 11m 380Mi ds-cts-2 6m 460Mi ds-idrepo-0 10298m 13802Mi ds-idrepo-1 2873m 13811Mi ds-idrepo-2 2871m 13800Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10072m 4509Mi idm-65858d8c4c-h9wbp 8923m 4380Mi lodemon-65c77dbb64-7jwvp 2m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1175m 539Mi 06:03:48 DEBUG --- stderr --- 06:03:48 DEBUG 06:03:48 INFO 06:03:48 INFO [loop_until]: kubectl --namespace=xlou top node 06:03:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:03:48 INFO [loop_until]: OK (rc = 0) 06:03:48 DEBUG --- stdout --- 06:03:48 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1385Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 171m 1% 6816Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 152m 0% 6936Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 151m 0% 6942Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9378m 59% 5713Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2369m 14% 2174Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10673m 67% 5775Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2983m 18% 14343Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1199Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10713m 67% 14331Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2918m 18% 14316Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1251m 7% 2061Mi 3% 06:03:48 DEBUG --- stderr --- 06:03:48 DEBUG 06:04:48 INFO 06:04:48 INFO [loop_until]: kubectl --namespace=xlou top pods 06:04:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:04:48 INFO [loop_until]: OK (rc = 0) 06:04:48 DEBUG --- stdout --- 06:04:48 DEBUG 
NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 96m 5774Mi am-55f77847b7-ch6mt 104m 5771Mi am-55f77847b7-gbbjq 95m 5789Mi ds-cts-0 7m 380Mi ds-cts-1 9m 381Mi ds-cts-2 6m 460Mi ds-idrepo-0 10007m 13821Mi ds-idrepo-1 2434m 13826Mi ds-idrepo-2 2326m 13837Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10379m 4554Mi idm-65858d8c4c-h9wbp 8851m 4399Mi lodemon-65c77dbb64-7jwvp 6m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1181m 539Mi 06:04:48 DEBUG --- stderr --- 06:04:48 DEBUG 06:04:48 INFO 06:04:48 INFO [loop_until]: kubectl --namespace=xlou top node 06:04:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:04:48 INFO [loop_until]: OK (rc = 0) 06:04:48 DEBUG --- stdout --- 06:04:48 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1387Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 168m 1% 6815Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 159m 1% 6932Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 149m 0% 6949Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9320m 58% 5739Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2445m 15% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10646m 66% 5830Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2526m 15% 14372Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1200Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10407m 65% 14360Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2549m 16% 14341Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1254m 7% 2062Mi 3% 06:04:48 DEBUG --- stderr --- 06:04:48 DEBUG 06:05:48 INFO 06:05:48 INFO [loop_until]: kubectl --namespace=xlou top pods 06:05:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:05:48 INFO [loop_until]: OK (rc = 0) 06:05:48 DEBUG --- stdout --- 06:05:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 98m 5782Mi am-55f77847b7-ch6mt 102m 5777Mi am-55f77847b7-gbbjq 88m 5789Mi ds-cts-0 7m 380Mi ds-cts-1 9m 380Mi ds-cts-2 6m 460Mi ds-idrepo-0 10755m 13807Mi ds-idrepo-1 3841m 13860Mi ds-idrepo-2 3533m 13822Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10018m 4605Mi idm-65858d8c4c-h9wbp 8620m 4419Mi lodemon-65c77dbb64-7jwvp 2m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1159m 540Mi 06:05:48 DEBUG --- stderr --- 06:05:48 DEBUG 06:05:48 INFO 06:05:48 INFO [loop_until]: kubectl --namespace=xlou top node 06:05:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:05:48 INFO [loop_until]: OK (rc = 0) 06:05:48 DEBUG --- stdout --- 06:05:48 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1385Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 162m 1% 6823Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 151m 0% 6932Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 153m 0% 6953Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9116m 57% 5749Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2391m 15% 2172Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10544m 66% 5874Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 3647m 22% 14318Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1201Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10858m 68% 14354Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3328m 20% 14300Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1248m 7% 2061Mi 3% 06:05:48 DEBUG --- stderr --- 06:05:48 DEBUG 06:06:48 INFO 06:06:48 INFO [loop_until]: kubectl 
--namespace=xlou top pods 06:06:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:06:48 INFO [loop_until]: OK (rc = 0) 06:06:48 DEBUG --- stdout --- 06:06:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 90m 5781Mi am-55f77847b7-ch6mt 98m 5777Mi am-55f77847b7-gbbjq 94m 5789Mi ds-cts-0 6m 380Mi ds-cts-1 8m 381Mi ds-cts-2 6m 460Mi ds-idrepo-0 10107m 13807Mi ds-idrepo-1 3097m 13857Mi ds-idrepo-2 3021m 13828Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10240m 4643Mi idm-65858d8c4c-h9wbp 8984m 4441Mi lodemon-65c77dbb64-7jwvp 2m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1203m 540Mi 06:06:48 DEBUG --- stderr --- 06:06:48 DEBUG 06:06:48 INFO 06:06:48 INFO [loop_until]: kubectl --namespace=xlou top node 06:06:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:06:48 INFO [loop_until]: OK (rc = 0) 06:06:48 DEBUG --- stdout --- 06:06:48 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1388Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 160m 1% 6822Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 158m 0% 6933Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 150m 0% 6955Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9266m 58% 5771Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2418m 15% 2173Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10623m 66% 5921Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1140Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 3488m 21% 14372Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1201Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10256m 64% 14323Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3183m 20% 14340Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1285m 8% 2061Mi 3% 06:06:48 DEBUG --- stderr --- 06:06:48 DEBUG 06:07:48 INFO 06:07:48 INFO [loop_until]: kubectl --namespace=xlou top pods 06:07:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:07:48 INFO [loop_until]: OK (rc = 0) 06:07:48 DEBUG --- stdout --- 06:07:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 99m 5782Mi am-55f77847b7-ch6mt 99m 5778Mi am-55f77847b7-gbbjq 93m 5789Mi ds-cts-0 6m 380Mi ds-cts-1 9m 381Mi ds-cts-2 6m 462Mi ds-idrepo-0 9956m 13816Mi ds-idrepo-1 2931m 13846Mi ds-idrepo-2 2669m 13825Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 9750m 4696Mi idm-65858d8c4c-h9wbp 8712m 4464Mi lodemon-65c77dbb64-7jwvp 6m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1176m 542Mi 06:07:48 DEBUG --- stderr --- 06:07:48 DEBUG 06:07:48 INFO 06:07:48 INFO [loop_until]: kubectl --namespace=xlou top node 06:07:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:07:48 INFO [loop_until]: OK (rc = 0) 06:07:48 DEBUG --- stdout --- 06:07:48 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1387Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 161m 1% 6823Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 156m 0% 6936Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 158m 0% 6957Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9096m 57% 5793Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2393m 15% 2172Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10426m 65% 5964Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2862m 18% 14389Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1201Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10300m 64% 14337Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1121Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-x4wx 2855m 17% 14367Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1201m 7% 2063Mi 3% 06:07:48 DEBUG --- stderr --- 06:07:48 DEBUG 06:08:48 INFO 06:08:48 INFO [loop_until]: kubectl --namespace=xlou top pods 06:08:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:08:48 INFO [loop_until]: OK (rc = 0) 06:08:48 DEBUG --- stdout --- 06:08:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 91m 5783Mi am-55f77847b7-ch6mt 101m 5779Mi am-55f77847b7-gbbjq 93m 5790Mi ds-cts-0 8m 381Mi ds-cts-1 8m 381Mi ds-cts-2 7m 460Mi ds-idrepo-0 10556m 13820Mi ds-idrepo-1 2774m 13819Mi ds-idrepo-2 3093m 13822Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10139m 4734Mi idm-65858d8c4c-h9wbp 8760m 4484Mi lodemon-65c77dbb64-7jwvp 5m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1155m 542Mi 06:08:48 DEBUG --- stderr --- 06:08:48 DEBUG 06:08:48 INFO 06:08:48 INFO [loop_until]: kubectl --namespace=xlou top node 06:08:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:08:48 INFO [loop_until]: OK (rc = 0) 06:08:48 DEBUG --- stdout --- 06:08:48 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1387Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 164m 1% 6822Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 153m 0% 6934Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 149m 0% 6959Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 8914m 56% 5813Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2428m 15% 2170Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10220m 64% 6013Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 3141m 19% 14329Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1201Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10303m 64% 14384Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3020m 19% 14291Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1251m 7% 2065Mi 3% 06:08:48 DEBUG --- stderr --- 06:08:48 DEBUG 06:09:48 INFO 06:09:48 INFO [loop_until]: kubectl --namespace=xlou top pods 06:09:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:09:48 INFO [loop_until]: OK (rc = 0) 06:09:48 DEBUG --- stdout --- 06:09:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 102m 5789Mi am-55f77847b7-ch6mt 102m 5779Mi am-55f77847b7-gbbjq 91m 5790Mi ds-cts-0 6m 380Mi ds-cts-1 10m 381Mi ds-cts-2 6m 460Mi ds-idrepo-0 10241m 13813Mi ds-idrepo-1 3346m 13855Mi ds-idrepo-2 3123m 13828Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10013m 4793Mi idm-65858d8c4c-h9wbp 8789m 4530Mi lodemon-65c77dbb64-7jwvp 7m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1146m 542Mi 06:09:48 DEBUG --- stderr --- 06:09:48 DEBUG 06:09:48 INFO 06:09:48 INFO [loop_until]: kubectl --namespace=xlou top node 06:09:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:09:49 INFO [loop_until]: OK (rc = 0) 06:09:49 DEBUG --- stdout --- 06:09:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 166m 1% 6823Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 152m 0% 6935Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 160m 1% 6963Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9248m 58% 5856Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2436m 15% 2172Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10553m 66% 6058Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1129Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-4z9d 3276m 20% 14368Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1200Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10541m 66% 14347Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3180m 20% 14343Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1256m 7% 2065Mi 3% 06:09:49 DEBUG --- stderr --- 06:09:49 DEBUG 06:10:48 INFO 06:10:48 INFO [loop_until]: kubectl --namespace=xlou top pods 06:10:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:10:48 INFO [loop_until]: OK (rc = 0) 06:10:48 DEBUG --- stdout --- 06:10:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 98m 5790Mi am-55f77847b7-ch6mt 101m 5779Mi am-55f77847b7-gbbjq 92m 5790Mi ds-cts-0 6m 381Mi ds-cts-1 8m 381Mi ds-cts-2 6m 460Mi ds-idrepo-0 10406m 13824Mi ds-idrepo-1 2517m 13842Mi ds-idrepo-2 2607m 13824Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10275m 4839Mi idm-65858d8c4c-h9wbp 8655m 4568Mi lodemon-65c77dbb64-7jwvp 6m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1172m 543Mi 06:10:48 DEBUG --- stderr --- 06:10:48 DEBUG 06:10:49 INFO 06:10:49 INFO [loop_until]: kubectl --namespace=xlou top node 06:10:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:10:49 INFO [loop_until]: OK (rc = 0) 06:10:49 DEBUG --- stdout --- 06:10:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1382Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 162m 1% 6825Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 153m 0% 6936Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 152m 0% 6963Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9116m 57% 5897Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2423m 15% 2172Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10665m 67% 6106Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2712m 17% 14392Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1200Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10301m 64% 14354Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2509m 15% 14363Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1256m 7% 2064Mi 3% 06:10:49 DEBUG --- stderr --- 06:10:49 DEBUG 06:11:48 INFO 06:11:48 INFO [loop_until]: kubectl --namespace=xlou top pods 06:11:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:11:48 INFO [loop_until]: OK (rc = 0) 06:11:48 DEBUG --- stdout --- 06:11:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 93m 5785Mi am-55f77847b7-ch6mt 100m 5779Mi am-55f77847b7-gbbjq 97m 5790Mi ds-cts-0 7m 382Mi ds-cts-1 8m 379Mi ds-cts-2 6m 460Mi ds-idrepo-0 10250m 13823Mi ds-idrepo-1 2822m 13848Mi ds-idrepo-2 2926m 13858Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10216m 4892Mi idm-65858d8c4c-h9wbp 8808m 4613Mi lodemon-65c77dbb64-7jwvp 2m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1193m 544Mi 06:11:48 DEBUG --- stderr --- 06:11:48 DEBUG 06:11:49 INFO 06:11:49 INFO [loop_until]: kubectl --namespace=xlou top node 06:11:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:11:49 INFO [loop_until]: OK (rc = 0) 06:11:49 DEBUG --- stdout --- 06:11:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1386Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 165m 1% 6826Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 155m 0% 6935Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 153m 0% 6957Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9377m 59% 
5941Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2467m 15% 2173Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10610m 66% 6156Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 3029m 19% 14400Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1200Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10349m 65% 14358Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3061m 19% 14365Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1269m 7% 2067Mi 3% 06:11:49 DEBUG --- stderr --- 06:11:49 DEBUG 06:12:48 INFO 06:12:48 INFO [loop_until]: kubectl --namespace=xlou top pods 06:12:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:12:49 INFO [loop_until]: OK (rc = 0) 06:12:49 DEBUG --- stdout --- 06:12:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 99m 5785Mi am-55f77847b7-ch6mt 102m 5779Mi am-55f77847b7-gbbjq 95m 5790Mi ds-cts-0 7m 380Mi ds-cts-1 8m 379Mi ds-cts-2 8m 461Mi ds-idrepo-0 10224m 13817Mi ds-idrepo-1 2859m 13854Mi ds-idrepo-2 2657m 13824Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10152m 4943Mi idm-65858d8c4c-h9wbp 8845m 4651Mi lodemon-65c77dbb64-7jwvp 7m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1192m 544Mi 06:12:49 DEBUG --- stderr --- 06:12:49 DEBUG 06:12:49 INFO 06:12:49 INFO [loop_until]: kubectl --namespace=xlou top node 06:12:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:12:49 INFO [loop_until]: OK (rc = 0) 06:12:49 DEBUG --- stdout --- 06:12:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1387Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 164m 1% 6825Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 155m 0% 6933Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 156m 0% 6959Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9179m 57% 5977Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2438m 15% 2176Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10587m 66% 6212Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2691m 16% 14374Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1202Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10541m 66% 14348Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2439m 15% 14378Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1262m 7% 2078Mi 3% 06:12:49 DEBUG --- stderr --- 06:12:49 DEBUG 06:13:49 INFO 06:13:49 INFO [loop_until]: kubectl --namespace=xlou top pods 06:13:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:13:49 INFO [loop_until]: OK (rc = 0) 06:13:49 DEBUG --- stdout --- 06:13:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 100m 5785Mi am-55f77847b7-ch6mt 105m 5779Mi am-55f77847b7-gbbjq 94m 5790Mi ds-cts-0 9m 381Mi ds-cts-1 8m 379Mi ds-cts-2 7m 460Mi ds-idrepo-0 9748m 13823Mi ds-idrepo-1 2390m 13847Mi ds-idrepo-2 2535m 13855Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 9897m 4984Mi idm-65858d8c4c-h9wbp 8775m 4688Mi lodemon-65c77dbb64-7jwvp 2m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1161m 545Mi 06:13:49 DEBUG --- stderr --- 06:13:49 DEBUG 06:13:49 INFO 06:13:49 INFO [loop_until]: kubectl --namespace=xlou top node 06:13:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:13:49 INFO [loop_until]: OK (rc = 0) 06:13:49 DEBUG --- stdout --- 06:13:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 70m 0% 1386Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 168m 1% 
6828Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 153m 0% 6929Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 156m 0% 6957Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9125m 57% 6017Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2413m 15% 2174Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10399m 65% 6267Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2586m 16% 14395Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1203Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 9893m 62% 14359Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2537m 15% 14370Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1242m 7% 2067Mi 3% 06:13:49 DEBUG --- stderr --- 06:13:49 DEBUG 06:14:49 INFO 06:14:49 INFO [loop_until]: kubectl --namespace=xlou top pods 06:14:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:14:49 INFO [loop_until]: OK (rc = 0) 06:14:49 DEBUG --- stdout --- 06:14:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 90m 5785Mi am-55f77847b7-ch6mt 97m 5779Mi am-55f77847b7-gbbjq 86m 5790Mi ds-cts-0 9m 381Mi ds-cts-1 7m 381Mi ds-cts-2 7m 460Mi ds-idrepo-0 10128m 13850Mi ds-idrepo-1 3087m 13853Mi ds-idrepo-2 3261m 13856Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10117m 5032Mi idm-65858d8c4c-h9wbp 8816m 4728Mi lodemon-65c77dbb64-7jwvp 5m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1187m 545Mi 06:14:49 DEBUG --- stderr --- 06:14:49 DEBUG 06:14:49 INFO 06:14:49 INFO [loop_until]: kubectl --namespace=xlou top node 06:14:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:14:49 INFO [loop_until]: OK (rc = 0) 06:14:49 DEBUG --- stdout --- 06:14:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1387Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 167m 1% 6825Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 155m 0% 6934Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 149m 0% 6957Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9130m 57% 6058Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2344m 14% 2172Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10569m 66% 6302Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 3459m 21% 14393Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1200Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10584m 66% 14377Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3008m 18% 14369Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1247m 7% 2069Mi 3% 06:14:49 DEBUG --- stderr --- 06:14:49 DEBUG 06:15:49 INFO 06:15:49 INFO [loop_until]: kubectl --namespace=xlou top pods 06:15:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:15:49 INFO [loop_until]: OK (rc = 0) 06:15:49 DEBUG --- stdout --- 06:15:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 94m 5785Mi am-55f77847b7-ch6mt 100m 5779Mi am-55f77847b7-gbbjq 92m 5790Mi ds-cts-0 6m 381Mi ds-cts-1 8m 381Mi ds-cts-2 10m 461Mi ds-idrepo-0 10001m 13838Mi ds-idrepo-1 2385m 13858Mi ds-idrepo-2 2444m 13847Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 9971m 5085Mi idm-65858d8c4c-h9wbp 8695m 4768Mi lodemon-65c77dbb64-7jwvp 5m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1165m 551Mi 06:15:49 DEBUG --- stderr --- 06:15:49 DEBUG 06:15:49 INFO 06:15:49 INFO [loop_until]: kubectl --namespace=xlou top node 06:15:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:15:49 INFO [loop_until]: OK (rc = 0) 06:15:49 DEBUG 
--- stdout --- 06:15:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1385Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 161m 1% 6825Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 152m 0% 6935Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 146m 0% 6957Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9149m 57% 6096Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2421m 15% 2168Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10526m 66% 6352Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2505m 15% 14392Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 62m 0% 1202Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10116m 63% 14360Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2424m 15% 14377Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1265m 7% 2066Mi 3% 06:15:49 DEBUG --- stderr --- 06:15:49 DEBUG 06:16:49 INFO 06:16:49 INFO [loop_until]: kubectl --namespace=xlou top pods 06:16:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:16:49 INFO [loop_until]: OK (rc = 0) 06:16:49 DEBUG --- stdout --- 06:16:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 96m 5786Mi am-55f77847b7-ch6mt 99m 5779Mi am-55f77847b7-gbbjq 94m 5790Mi ds-cts-0 8m 382Mi ds-cts-1 7m 382Mi ds-cts-2 8m 460Mi ds-idrepo-0 9448m 13852Mi ds-idrepo-1 2464m 13858Mi ds-idrepo-2 2302m 13856Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 9997m 5123Mi idm-65858d8c4c-h9wbp 8714m 4809Mi lodemon-65c77dbb64-7jwvp 2m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1145m 544Mi 06:16:49 DEBUG --- stderr --- 06:16:49 DEBUG 06:16:49 INFO 06:16:49 INFO [loop_until]: kubectl --namespace=xlou top node 06:16:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:16:49 INFO [loop_until]: OK (rc = 0) 06:16:49 DEBUG --- stdout --- 06:16:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1397Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 162m 1% 6829Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 150m 0% 6932Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 154m 0% 6958Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9166m 57% 6133Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2347m 14% 2171Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10566m 66% 6400Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2415m 15% 14400Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1201Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 9701m 61% 14378Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2433m 15% 14378Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1243m 7% 2065Mi 3% 06:16:49 DEBUG --- stderr --- 06:16:49 DEBUG 06:17:49 INFO 06:17:49 INFO [loop_until]: kubectl --namespace=xlou top pods 06:17:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:17:49 INFO [loop_until]: OK (rc = 0) 06:17:49 DEBUG --- stdout --- 06:17:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 89m 5785Mi am-55f77847b7-ch6mt 98m 5779Mi am-55f77847b7-gbbjq 95m 5791Mi ds-cts-0 6m 381Mi ds-cts-1 8m 381Mi ds-cts-2 9m 460Mi ds-idrepo-0 10878m 13836Mi ds-idrepo-1 3093m 13835Mi ds-idrepo-2 3523m 13830Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10262m 5180Mi idm-65858d8c4c-h9wbp 8819m 4847Mi lodemon-65c77dbb64-7jwvp 6m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1188m 544Mi 06:17:49 DEBUG --- stderr --- 06:17:49 DEBUG 06:17:49 INFO 06:17:49 INFO 
[loop_until]: kubectl --namespace=xlou top node 06:17:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:17:50 INFO [loop_until]: OK (rc = 0) 06:17:50 DEBUG --- stdout --- 06:17:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 154m 0% 6827Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 156m 0% 6931Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 147m 0% 6960Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9247m 58% 6176Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2434m 15% 2170Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10558m 66% 6446Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 3271m 20% 14387Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1202Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10736m 67% 14359Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3387m 21% 14347Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1212m 7% 2065Mi 3% 06:17:50 DEBUG --- stderr --- 06:17:50 DEBUG 06:18:49 INFO 06:18:49 INFO [loop_until]: kubectl --namespace=xlou top pods 06:18:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:18:49 INFO [loop_until]: OK (rc = 0) 06:18:49 DEBUG --- stdout --- 06:18:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 89m 5786Mi am-55f77847b7-ch6mt 92m 5779Mi am-55f77847b7-gbbjq 92m 5791Mi ds-cts-0 6m 382Mi ds-cts-1 8m 381Mi ds-cts-2 10m 461Mi ds-idrepo-0 10403m 13864Mi ds-idrepo-1 2460m 13861Mi ds-idrepo-2 2551m 13844Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10105m 5224Mi idm-65858d8c4c-h9wbp 8896m 4889Mi lodemon-65c77dbb64-7jwvp 2m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1156m 545Mi 06:18:49 DEBUG --- stderr --- 06:18:49 DEBUG 06:18:50 INFO 06:18:50 INFO [loop_until]: kubectl --namespace=xlou top node 06:18:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:18:50 INFO [loop_until]: OK (rc = 0) 06:18:50 DEBUG --- stdout --- 06:18:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1386Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 156m 0% 6824Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 153m 0% 6933Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 150m 0% 6957Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 8926m 56% 6217Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2351m 14% 2175Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10606m 66% 6499Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2713m 17% 14393Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1204Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10332m 65% 14359Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2655m 16% 14381Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1254m 7% 2066Mi 3% 06:18:50 DEBUG --- stderr --- 06:18:50 DEBUG 06:19:49 INFO 06:19:49 INFO [loop_until]: kubectl --namespace=xlou top pods 06:19:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:19:49 INFO [loop_until]: OK (rc = 0) 06:19:49 DEBUG --- stdout --- 06:19:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 97m 5788Mi am-55f77847b7-ch6mt 102m 5779Mi am-55f77847b7-gbbjq 107m 5791Mi ds-cts-0 7m 381Mi ds-cts-1 8m 381Mi ds-cts-2 7m 461Mi ds-idrepo-0 9710m 13830Mi ds-idrepo-1 2559m 13857Mi ds-idrepo-2 2566m 13861Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10169m 5284Mi idm-65858d8c4c-h9wbp 9027m 
4931Mi lodemon-65c77dbb64-7jwvp 2m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1211m 546Mi 06:19:49 DEBUG --- stderr --- 06:19:49 DEBUG 06:19:50 INFO 06:19:50 INFO [loop_until]: kubectl --namespace=xlou top node 06:19:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:19:50 INFO [loop_until]: OK (rc = 0) 06:19:50 DEBUG --- stdout --- 06:19:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1386Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 168m 1% 6826Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 159m 1% 6936Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 155m 0% 6958Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9280m 58% 6258Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2449m 15% 2179Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10605m 66% 6547Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2500m 15% 14390Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1204Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10007m 62% 14336Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2508m 15% 14381Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1268m 7% 2065Mi 3% 06:19:50 DEBUG --- stderr --- 06:19:50 DEBUG 06:20:49 INFO 06:20:49 INFO [loop_until]: kubectl --namespace=xlou top pods 06:20:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:20:49 INFO [loop_until]: OK (rc = 0) 06:20:49 DEBUG --- stdout --- 06:20:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 101m 5788Mi am-55f77847b7-ch6mt 103m 5780Mi am-55f77847b7-gbbjq 92m 5792Mi ds-cts-0 7m 381Mi ds-cts-1 8m 381Mi ds-cts-2 8m 462Mi ds-idrepo-0 9363m 13847Mi ds-idrepo-1 2410m 13853Mi ds-idrepo-2 2819m 13861Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 9983m 5331Mi idm-65858d8c4c-h9wbp 8755m 4967Mi lodemon-65c77dbb64-7jwvp 7m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1174m 546Mi 06:20:49 DEBUG --- stderr --- 06:20:49 DEBUG 06:20:50 INFO 06:20:50 INFO [loop_until]: kubectl --namespace=xlou top node 06:20:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:20:50 INFO [loop_until]: OK (rc = 0) 06:20:50 DEBUG --- stdout --- 06:20:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1388Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 165m 1% 6826Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 159m 1% 6937Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 155m 0% 6962Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9070m 57% 6300Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2348m 14% 2184Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10611m 66% 6600Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2741m 17% 14395Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1204Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 9634m 60% 14371Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2778m 17% 14371Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1252m 7% 2067Mi 3% 06:20:50 DEBUG --- stderr --- 06:20:50 DEBUG 06:21:50 INFO 06:21:50 INFO [loop_until]: kubectl --namespace=xlou top pods 06:21:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:21:50 INFO [loop_until]: OK (rc = 0) 06:21:50 DEBUG --- stdout --- 06:21:50 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 94m 5788Mi am-55f77847b7-ch6mt 97m 5780Mi am-55f77847b7-gbbjq 94m 5792Mi ds-cts-0 6m 381Mi ds-cts-1 8m 381Mi ds-cts-2 6m 461Mi 
ds-idrepo-0 10528m 13867Mi ds-idrepo-1 3032m 13834Mi ds-idrepo-2 3327m 13841Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10073m 5384Mi idm-65858d8c4c-h9wbp 9413m 5014Mi lodemon-65c77dbb64-7jwvp 5m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1167m 547Mi 06:21:50 DEBUG --- stderr --- 06:21:50 DEBUG 06:21:50 INFO 06:21:50 INFO [loop_until]: kubectl --namespace=xlou top node 06:21:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:21:50 INFO [loop_until]: OK (rc = 0) 06:21:50 DEBUG --- stdout --- 06:21:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1385Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 163m 1% 6827Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 158m 0% 6935Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 152m 0% 6963Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9108m 57% 6350Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2454m 15% 2175Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10560m 66% 6652Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 3242m 20% 14382Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1204Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10158m 63% 14355Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2899m 18% 14361Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1266m 7% 2069Mi 3% 06:21:50 DEBUG --- stderr --- 06:21:50 DEBUG 06:22:50 INFO 06:22:50 INFO [loop_until]: kubectl --namespace=xlou top pods 06:22:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:22:50 INFO [loop_until]: OK (rc = 0) 06:22:50 DEBUG --- stdout --- 06:22:50 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 95m 5789Mi am-55f77847b7-ch6mt 99m 5780Mi am-55f77847b7-gbbjq 93m 5793Mi ds-cts-0 8m 381Mi ds-cts-1 8m 381Mi ds-cts-2 8m 461Mi ds-idrepo-0 10091m 13823Mi ds-idrepo-1 2842m 13841Mi ds-idrepo-2 2755m 13865Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10080m 5435Mi idm-65858d8c4c-h9wbp 8860m 5061Mi lodemon-65c77dbb64-7jwvp 1m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1165m 548Mi 06:22:50 DEBUG --- stderr --- 06:22:50 DEBUG 06:22:50 INFO 06:22:50 INFO [loop_until]: kubectl --namespace=xlou top node 06:22:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:22:50 INFO [loop_until]: OK (rc = 0) 06:22:50 DEBUG --- stdout --- 06:22:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 71m 0% 1385Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 159m 1% 6826Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 152m 0% 6939Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 154m 0% 6961Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9231m 58% 6394Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2352m 14% 2173Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10489m 66% 6697Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2873m 18% 14381Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1204Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10426m 65% 14351Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2774m 17% 14367Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1251m 7% 2065Mi 3% 06:22:50 DEBUG --- stderr --- 06:22:50 DEBUG 06:23:50 INFO 06:23:50 INFO [loop_until]: kubectl --namespace=xlou top pods 06:23:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:23:50 INFO [loop_until]: OK (rc = 0) 06:23:50 DEBUG --- stdout --- 06:23:50 DEBUG NAME CPU(cores) MEMORY(bytes) 
admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 80m 5789Mi am-55f77847b7-ch6mt 96m 5780Mi am-55f77847b7-gbbjq 94m 5792Mi ds-cts-0 10m 381Mi ds-cts-1 8m 381Mi ds-cts-2 8m 461Mi ds-idrepo-0 9397m 13835Mi ds-idrepo-1 2577m 13842Mi ds-idrepo-2 2325m 13854Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10400m 5436Mi idm-65858d8c4c-h9wbp 8871m 5099Mi lodemon-65c77dbb64-7jwvp 6m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1181m 549Mi 06:23:50 DEBUG --- stderr --- 06:23:50 DEBUG 06:23:50 INFO 06:23:50 INFO [loop_until]: kubectl --namespace=xlou top node 06:23:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:23:50 INFO [loop_until]: OK (rc = 0) 06:23:50 DEBUG --- stdout --- 06:23:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 153m 0% 6825Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 151m 0% 6948Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 157m 0% 6965Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 8117m 51% 6431Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2386m 15% 2178Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10163m 63% 6704Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2367m 14% 14393Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1201Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 9963m 62% 14361Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2610m 16% 14381Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1118m 7% 2063Mi 3% 06:23:50 DEBUG --- stderr --- 06:23:50 DEBUG 06:24:50 INFO 06:24:50 INFO [loop_until]: kubectl --namespace=xlou top pods 06:24:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:24:50 INFO [loop_until]: OK (rc = 0) 06:24:50 DEBUG --- stdout --- 06:24:50 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 6m 5789Mi am-55f77847b7-ch6mt 8m 5780Mi am-55f77847b7-gbbjq 12m 5793Mi ds-cts-0 15m 381Mi ds-cts-1 6m 381Mi ds-cts-2 9m 464Mi ds-idrepo-0 12m 13825Mi ds-idrepo-1 35m 13822Mi ds-idrepo-2 220m 13806Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 7m 5435Mi idm-65858d8c4c-h9wbp 8m 5103Mi lodemon-65c77dbb64-7jwvp 5m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 41m 112Mi 06:24:50 DEBUG --- stderr --- 06:24:50 DEBUG 06:24:50 INFO 06:24:50 INFO [loop_until]: kubectl --namespace=xlou top node 06:24:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:24:50 INFO [loop_until]: OK (rc = 0) 06:24:50 DEBUG --- stdout --- 06:24:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1382Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 6826Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 72m 0% 6935Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 6963Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 6435Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 124m 0% 2180Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 72m 0% 6706Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 88m 0% 14362Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1207Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 62m 0% 14355Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 85m 0% 14352Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 66m 0% 1638Mi 2% 06:24:50 DEBUG --- stderr --- 06:24:50 DEBUG 127.0.0.1 - - [13/Aug/2023 06:25:34] "GET /monitoring/average?start_time=23-08-13_04:55:02&stop_time=23-08-13_05:23:33 
HTTP/1.1" 200 - 06:25:50 INFO 06:25:50 INFO [loop_until]: kubectl --namespace=xlou top pods 06:25:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:25:50 INFO [loop_until]: OK (rc = 0) 06:25:50 DEBUG --- stdout --- 06:25:50 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 6m 5789Mi am-55f77847b7-ch6mt 9m 5780Mi am-55f77847b7-gbbjq 8m 5792Mi ds-cts-0 8m 381Mi ds-cts-1 7m 381Mi ds-cts-2 8m 464Mi ds-idrepo-0 12m 13826Mi ds-idrepo-1 10m 13822Mi ds-idrepo-2 13m 13807Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 7m 5434Mi idm-65858d8c4c-h9wbp 8m 5102Mi lodemon-65c77dbb64-7jwvp 6m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1m 112Mi 06:25:50 DEBUG --- stderr --- 06:25:50 DEBUG 06:25:50 INFO 06:25:50 INFO [loop_until]: kubectl --namespace=xlou top node 06:25:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:25:50 INFO [loop_until]: OK (rc = 0) 06:25:50 DEBUG --- stdout --- 06:25:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 70m 0% 1381Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 6829Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 6938Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 6964Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 6433Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 125m 0% 2173Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 6705Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 14360Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1204Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 61m 0% 14357Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 59m 0% 14353Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 68m 0% 1637Mi 2% 06:25:50 DEBUG --- stderr --- 06:25:50 DEBUG 06:26:50 INFO 06:26:50 INFO [loop_until]: kubectl --namespace=xlou top pods 06:26:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:26:50 INFO [loop_until]: OK (rc = 0) 06:26:50 DEBUG --- stdout --- 06:26:50 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 90m 5791Mi am-55f77847b7-ch6mt 66m 5781Mi am-55f77847b7-gbbjq 37m 5792Mi ds-cts-0 9m 382Mi ds-cts-1 8m 381Mi ds-cts-2 8m 464Mi ds-idrepo-0 3682m 13867Mi ds-idrepo-1 671m 13849Mi ds-idrepo-2 913m 13850Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 1913m 5448Mi idm-65858d8c4c-h9wbp 3782m 5178Mi lodemon-65c77dbb64-7jwvp 2m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1466m 527Mi 06:26:50 DEBUG --- stderr --- 06:26:50 DEBUG 06:26:51 INFO 06:26:51 INFO [loop_until]: kubectl --namespace=xlou top node 06:26:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:26:51 INFO [loop_until]: OK (rc = 0) 06:26:51 DEBUG --- stdout --- 06:26:51 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1385Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 123m 0% 6830Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 115m 0% 6934Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 136m 0% 6965Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5427m 34% 6528Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1283m 8% 2177Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4295m 27% 6716Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1440m 9% 14415Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1206Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 4355m 27% 14399Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1128Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-x4wx 1271m 7% 14379Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1535m 9% 2054Mi 3% 06:26:51 DEBUG --- stderr --- 06:26:51 DEBUG 06:27:50 INFO 06:27:50 INFO [loop_until]: kubectl --namespace=xlou top pods 06:27:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:27:50 INFO [loop_until]: OK (rc = 0) 06:27:50 DEBUG --- stdout --- 06:27:50 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 99m 5791Mi am-55f77847b7-ch6mt 104m 5781Mi am-55f77847b7-gbbjq 101m 5793Mi ds-cts-0 7m 381Mi ds-cts-1 9m 382Mi ds-cts-2 6m 464Mi ds-idrepo-0 10743m 13850Mi ds-idrepo-1 3533m 13818Mi ds-idrepo-2 3277m 13810Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10165m 5439Mi idm-65858d8c4c-h9wbp 8800m 5249Mi lodemon-65c77dbb64-7jwvp 6m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1324m 540Mi 06:27:50 DEBUG --- stderr --- 06:27:50 DEBUG 06:27:51 INFO 06:27:51 INFO [loop_until]: kubectl --namespace=xlou top node 06:27:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:27:51 INFO [loop_until]: OK (rc = 0) 06:27:51 DEBUG --- stdout --- 06:27:51 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 166m 1% 6827Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 160m 1% 6939Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 157m 0% 6964Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9301m 58% 6570Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2693m 16% 2180Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10398m 65% 6707Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 3941m 24% 14399Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1205Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 11085m 69% 14382Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3617m 22% 14355Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1416m 8% 2057Mi 3% 06:27:51 DEBUG --- stderr --- 06:27:51 DEBUG 06:28:50 INFO 06:28:50 INFO [loop_until]: kubectl --namespace=xlou top pods 06:28:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:28:50 INFO [loop_until]: OK (rc = 0) 06:28:50 DEBUG --- stdout --- 06:28:50 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 89m 5791Mi am-55f77847b7-ch6mt 100m 5781Mi am-55f77847b7-gbbjq 96m 5793Mi ds-cts-0 6m 382Mi ds-cts-1 8m 382Mi ds-cts-2 6m 464Mi ds-idrepo-0 11941m 13825Mi ds-idrepo-1 3510m 13723Mi ds-idrepo-2 4655m 13753Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10444m 5442Mi idm-65858d8c4c-h9wbp 8829m 5309Mi lodemon-65c77dbb64-7jwvp 7m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1311m 585Mi 06:28:50 DEBUG --- stderr --- 06:28:50 DEBUG 06:28:51 INFO 06:28:51 INFO [loop_until]: kubectl --namespace=xlou top node 06:28:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:28:51 INFO [loop_until]: OK (rc = 0) 06:28:51 DEBUG --- stdout --- 06:28:51 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 159m 1% 6828Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 154m 0% 6940Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 153m 0% 6967Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9393m 59% 6631Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2691m 16% 2178Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10761m 67% 6708Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1133Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-4z9d 4873m 30% 14287Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1206Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 12076m 75% 14362Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3587m 22% 14267Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1414m 8% 2100Mi 3% 06:28:51 DEBUG --- stderr --- 06:28:51 DEBUG 06:29:50 INFO 06:29:50 INFO [loop_until]: kubectl --namespace=xlou top pods 06:29:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:29:50 INFO [loop_until]: OK (rc = 0) 06:29:50 DEBUG --- stdout --- 06:29:50 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 98m 5791Mi am-55f77847b7-ch6mt 100m 5781Mi am-55f77847b7-gbbjq 97m 5793Mi ds-cts-0 6m 382Mi ds-cts-1 9m 382Mi ds-cts-2 6m 464Mi ds-idrepo-0 11007m 13799Mi ds-idrepo-1 3673m 13818Mi ds-idrepo-2 3591m 13803Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10408m 5450Mi idm-65858d8c4c-h9wbp 8933m 5365Mi lodemon-65c77dbb64-7jwvp 1m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1285m 591Mi 06:29:50 DEBUG --- stderr --- 06:29:50 DEBUG 06:29:51 INFO 06:29:51 INFO [loop_until]: kubectl --namespace=xlou top node 06:29:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:29:51 INFO [loop_until]: OK (rc = 0) 06:29:51 DEBUG --- stdout --- 06:29:51 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1387Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 163m 1% 6828Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 159m 1% 6938Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 157m 0% 6965Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9321m 58% 6683Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2684m 16% 2178Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10775m 67% 6715Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 3527m 22% 14366Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1206Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 11348m 71% 14319Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3725m 23% 14357Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1399m 8% 2105Mi 3% 06:29:51 DEBUG --- stderr --- 06:29:51 DEBUG 06:30:50 INFO 06:30:50 INFO [loop_until]: kubectl --namespace=xlou top pods 06:30:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:30:50 INFO [loop_until]: OK (rc = 0) 06:30:50 DEBUG --- stdout --- 06:30:50 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 99m 5792Mi am-55f77847b7-ch6mt 107m 5781Mi am-55f77847b7-gbbjq 99m 5794Mi ds-cts-0 7m 382Mi ds-cts-1 8m 383Mi ds-cts-2 6m 464Mi ds-idrepo-0 10899m 13811Mi ds-idrepo-1 4386m 13823Mi ds-idrepo-2 3802m 13828Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10301m 5450Mi idm-65858d8c4c-h9wbp 9097m 5385Mi lodemon-65c77dbb64-7jwvp 6m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1316m 597Mi 06:30:50 DEBUG --- stderr --- 06:30:50 DEBUG 06:30:51 INFO 06:30:51 INFO [loop_until]: kubectl --namespace=xlou top node 06:30:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:30:51 INFO [loop_until]: OK (rc = 0) 06:30:51 DEBUG --- stdout --- 06:30:51 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1388Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 165m 1% 6827Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 159m 1% 6936Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 149m 0% 6968Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9300m 58% 
6707Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2675m 16% 2173Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10707m 67% 6716Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 3938m 24% 14411Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 52m 0% 1206Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 11169m 70% 14375Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4168m 26% 14386Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1415m 8% 2119Mi 3% 06:30:51 DEBUG --- stderr --- 06:30:51 DEBUG 06:31:50 INFO 06:31:50 INFO [loop_until]: kubectl --namespace=xlou top pods 06:31:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:31:51 INFO [loop_until]: OK (rc = 0) 06:31:51 DEBUG --- stdout --- 06:31:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 95m 5792Mi am-55f77847b7-ch6mt 101m 5781Mi am-55f77847b7-gbbjq 97m 5794Mi ds-cts-0 6m 382Mi ds-cts-1 8m 382Mi ds-cts-2 6m 464Mi ds-idrepo-0 11040m 13607Mi ds-idrepo-1 3530m 13772Mi ds-idrepo-2 3720m 13835Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 9937m 5452Mi idm-65858d8c4c-h9wbp 8724m 5385Mi lodemon-65c77dbb64-7jwvp 4m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1332m 604Mi 06:31:51 DEBUG --- stderr --- 06:31:51 DEBUG 06:31:51 INFO 06:31:51 INFO [loop_until]: kubectl --namespace=xlou top node 06:31:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:31:51 INFO [loop_until]: OK (rc = 0) 06:31:51 DEBUG --- stdout --- 06:31:51 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 156m 0% 6829Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 157m 0% 6938Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 154m 0% 6967Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9343m 58% 6711Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2586m 16% 2178Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10671m 67% 6720Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 3514m 22% 14418Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1209Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 11303m 71% 14163Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3587m 22% 14347Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1354m 8% 2122Mi 3% 06:31:51 DEBUG --- stderr --- 06:31:51 DEBUG 06:32:51 INFO 06:32:51 INFO [loop_until]: kubectl --namespace=xlou top pods 06:32:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:32:51 INFO [loop_until]: OK (rc = 0) 06:32:51 DEBUG --- stdout --- 06:32:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 96m 5793Mi am-55f77847b7-ch6mt 103m 5781Mi am-55f77847b7-gbbjq 97m 5794Mi ds-cts-0 7m 382Mi ds-cts-1 8m 382Mi ds-cts-2 6m 465Mi ds-idrepo-0 10992m 13820Mi ds-idrepo-1 3115m 13733Mi ds-idrepo-2 2926m 13715Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10280m 5452Mi idm-65858d8c4c-h9wbp 9205m 5386Mi lodemon-65c77dbb64-7jwvp 7m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1406m 608Mi 06:32:51 DEBUG --- stderr --- 06:32:51 DEBUG 06:32:51 INFO 06:32:51 INFO [loop_until]: kubectl --namespace=xlou top node 06:32:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:32:51 INFO [loop_until]: OK (rc = 0) 06:32:51 DEBUG --- stdout --- 06:32:51 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1388Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 170m 1% 
6830Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 161m 1% 6940Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 153m 0% 6964Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9438m 59% 6711Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2712m 17% 2177Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10693m 67% 6719Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 3009m 18% 14314Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1208Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10951m 68% 14385Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3210m 20% 14303Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1415m 8% 2125Mi 3% 06:32:51 DEBUG --- stderr --- 06:32:51 DEBUG 06:33:51 INFO 06:33:51 INFO [loop_until]: kubectl --namespace=xlou top pods 06:33:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:33:51 INFO [loop_until]: OK (rc = 0) 06:33:51 DEBUG --- stdout --- 06:33:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 100m 5793Mi am-55f77847b7-ch6mt 100m 5781Mi am-55f77847b7-gbbjq 96m 5794Mi ds-cts-0 7m 382Mi ds-cts-1 8m 383Mi ds-cts-2 7m 464Mi ds-idrepo-0 10692m 13830Mi ds-idrepo-1 2841m 13840Mi ds-idrepo-2 2721m 13732Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10559m 5454Mi idm-65858d8c4c-h9wbp 9175m 5386Mi lodemon-65c77dbb64-7jwvp 6m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1347m 662Mi 06:33:51 DEBUG --- stderr --- 06:33:51 DEBUG 06:33:51 INFO 06:33:51 INFO [loop_until]: kubectl --namespace=xlou top node 06:33:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:33:51 INFO [loop_until]: OK (rc = 0) 06:33:51 DEBUG --- stdout --- 06:33:51 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1382Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 160m 1% 6826Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 159m 1% 6939Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 156m 0% 6963Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9452m 59% 6710Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2641m 16% 2179Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10709m 67% 6722Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2904m 18% 14304Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1204Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10872m 68% 14387Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2562m 16% 14422Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1390m 8% 2177Mi 3% 06:33:51 DEBUG --- stderr --- 06:33:51 DEBUG 06:34:51 INFO 06:34:51 INFO [loop_until]: kubectl --namespace=xlou top pods 06:34:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:34:51 INFO [loop_until]: OK (rc = 0) 06:34:51 DEBUG --- stdout --- 06:34:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 102m 5793Mi am-55f77847b7-ch6mt 105m 5782Mi am-55f77847b7-gbbjq 98m 5795Mi ds-cts-0 6m 382Mi ds-cts-1 9m 383Mi ds-cts-2 7m 464Mi ds-idrepo-0 10318m 13866Mi ds-idrepo-1 2639m 13857Mi ds-idrepo-2 3714m 13851Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10605m 5454Mi idm-65858d8c4c-h9wbp 9067m 5393Mi lodemon-65c77dbb64-7jwvp 7m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1317m 668Mi 06:34:51 DEBUG --- stderr --- 06:34:51 DEBUG 06:34:51 INFO 06:34:51 INFO [loop_until]: kubectl --namespace=xlou top node 06:34:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:34:52 INFO [loop_until]: OK (rc = 0) 06:34:52 
DEBUG --- stdout --- 06:34:52 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1382Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 172m 1% 6826Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 157m 0% 6940Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 161m 1% 6963Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9534m 60% 6721Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2711m 17% 2162Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10359m 65% 6722Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 4001m 25% 14433Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1203Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10287m 64% 14429Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2718m 17% 14424Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1376m 8% 2182Mi 3% 06:34:52 DEBUG --- stderr --- 06:34:52 DEBUG 06:35:51 INFO 06:35:51 INFO [loop_until]: kubectl --namespace=xlou top pods 06:35:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:35:51 INFO [loop_until]: OK (rc = 0) 06:35:51 DEBUG --- stdout --- 06:35:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 97m 5793Mi am-55f77847b7-ch6mt 104m 5782Mi am-55f77847b7-gbbjq 105m 5795Mi ds-cts-0 9m 382Mi ds-cts-1 8m 382Mi ds-cts-2 6m 464Mi ds-idrepo-0 10403m 13864Mi ds-idrepo-1 3215m 13840Mi ds-idrepo-2 2732m 13838Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10419m 5455Mi idm-65858d8c4c-h9wbp 8932m 5394Mi lodemon-65c77dbb64-7jwvp 7m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1279m 674Mi 06:35:51 DEBUG --- stderr --- 06:35:51 DEBUG 06:35:52 INFO 06:35:52 INFO [loop_until]: kubectl --namespace=xlou top node 06:35:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:35:52 INFO [loop_until]: OK (rc = 0) 06:35:52 DEBUG --- stdout --- 06:35:52 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 159m 1% 6829Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 165m 1% 6940Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 156m 0% 6966Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9343m 58% 6719Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2601m 16% 2176Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10673m 67% 6720Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2885m 18% 14415Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1205Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10504m 66% 14432Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3329m 20% 14412Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1372m 8% 2188Mi 3% 06:35:52 DEBUG --- stderr --- 06:35:52 DEBUG 06:36:51 INFO 06:36:51 INFO [loop_until]: kubectl --namespace=xlou top pods 06:36:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:36:51 INFO [loop_until]: OK (rc = 0) 06:36:51 DEBUG --- stdout --- 06:36:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 100m 5793Mi am-55f77847b7-ch6mt 104m 5782Mi am-55f77847b7-gbbjq 102m 5795Mi ds-cts-0 6m 379Mi ds-cts-1 8m 382Mi ds-cts-2 7m 464Mi ds-idrepo-0 10048m 13864Mi ds-idrepo-1 2558m 13858Mi ds-idrepo-2 2917m 13856Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10306m 5457Mi idm-65858d8c4c-h9wbp 9021m 5394Mi lodemon-65c77dbb64-7jwvp 5m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1309m 679Mi 06:36:51 DEBUG --- stderr --- 06:36:51 DEBUG 06:36:52 INFO 
06:36:52 INFO [loop_until]: kubectl --namespace=xlou top node 06:36:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:36:52 INFO [loop_until]: OK (rc = 0) 06:36:52 DEBUG --- stdout --- 06:36:52 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 83m 0% 1387Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 167m 1% 6827Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 163m 1% 6938Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 158m 0% 6965Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9327m 58% 6719Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2683m 16% 2173Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10673m 67% 6723Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2880m 18% 14448Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1208Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10047m 63% 14434Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2548m 16% 14434Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1369m 8% 2196Mi 3% 06:36:52 DEBUG --- stderr --- 06:36:52 DEBUG 06:37:51 INFO 06:37:51 INFO [loop_until]: kubectl --namespace=xlou top pods 06:37:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:37:51 INFO [loop_until]: OK (rc = 0) 06:37:51 DEBUG --- stdout --- 06:37:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 96m 5793Mi am-55f77847b7-ch6mt 100m 5782Mi am-55f77847b7-gbbjq 98m 5795Mi ds-cts-0 14m 383Mi ds-cts-1 11m 382Mi ds-cts-2 8m 465Mi ds-idrepo-0 11306m 13830Mi ds-idrepo-1 2654m 13854Mi ds-idrepo-2 2612m 13858Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10531m 5457Mi idm-65858d8c4c-h9wbp 8921m 5394Mi lodemon-65c77dbb64-7jwvp 2m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1311m 684Mi 06:37:51 DEBUG --- stderr --- 06:37:51 DEBUG 06:37:52 INFO 06:37:52 INFO [loop_until]: kubectl --namespace=xlou top node 06:37:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:37:52 INFO [loop_until]: OK (rc = 0) 06:37:52 DEBUG --- stdout --- 06:37:52 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1387Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 159m 1% 6830Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 158m 0% 6938Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 150m 0% 6967Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9339m 58% 6730Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2683m 16% 2173Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10759m 67% 6723Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2741m 17% 14447Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1205Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10877m 68% 14411Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2305m 14% 14431Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1386m 8% 2197Mi 3% 06:37:52 DEBUG --- stderr --- 06:37:52 DEBUG 06:38:51 INFO 06:38:51 INFO [loop_until]: kubectl --namespace=xlou top pods 06:38:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:38:51 INFO [loop_until]: OK (rc = 0) 06:38:51 DEBUG --- stdout --- 06:38:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 100m 5794Mi am-55f77847b7-ch6mt 107m 5782Mi am-55f77847b7-gbbjq 111m 5795Mi ds-cts-0 6m 382Mi ds-cts-1 12m 383Mi ds-cts-2 6m 466Mi ds-idrepo-0 10156m 13691Mi ds-idrepo-1 2484m 13848Mi ds-idrepo-2 3062m 13844Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10246m 5458Mi 
idm-65858d8c4c-h9wbp 8762m 5394Mi lodemon-65c77dbb64-7jwvp 6m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1329m 690Mi 06:38:51 DEBUG --- stderr --- 06:38:51 DEBUG 06:38:52 INFO 06:38:52 INFO [loop_until]: kubectl --namespace=xlou top node 06:38:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:38:52 INFO [loop_until]: OK (rc = 0) 06:38:52 DEBUG --- stdout --- 06:38:52 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 166m 1% 6829Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 166m 1% 6939Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 161m 1% 6967Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 8978m 56% 6721Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2687m 16% 2172Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10396m 65% 6723Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 3023m 19% 14464Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1205Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10543m 66% 14259Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2714m 17% 14425Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1331m 8% 2204Mi 3% 06:38:52 DEBUG --- stderr --- 06:38:52 DEBUG 06:39:51 INFO 06:39:51 INFO [loop_until]: kubectl --namespace=xlou top pods 06:39:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:39:51 INFO [loop_until]: OK (rc = 0) 06:39:51 DEBUG --- stdout --- 06:39:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 101m 5794Mi am-55f77847b7-ch6mt 108m 5782Mi am-55f77847b7-gbbjq 100m 5795Mi ds-cts-0 5m 382Mi ds-cts-1 11m 382Mi ds-cts-2 6m 462Mi ds-idrepo-0 10080m 13692Mi ds-idrepo-1 2831m 13686Mi ds-idrepo-2 2832m 13802Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10590m 5459Mi idm-65858d8c4c-h9wbp 9057m 5394Mi lodemon-65c77dbb64-7jwvp 5m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1338m 696Mi 06:39:51 DEBUG --- stderr --- 06:39:51 DEBUG 06:39:52 INFO 06:39:52 INFO [loop_until]: kubectl --namespace=xlou top node 06:39:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:39:52 INFO [loop_until]: OK (rc = 0) 06:39:52 DEBUG --- stdout --- 06:39:52 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 169m 1% 6829Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 162m 1% 6940Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 160m 1% 6969Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9565m 60% 6720Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2736m 17% 2162Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10710m 67% 6727Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 50m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2926m 18% 14377Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1201Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 9960m 62% 14277Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2883m 18% 14266Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1399m 8% 2213Mi 3% 06:39:52 DEBUG --- stderr --- 06:39:52 DEBUG 06:40:51 INFO 06:40:51 INFO [loop_until]: kubectl --namespace=xlou top pods 06:40:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:40:51 INFO [loop_until]: OK (rc = 0) 06:40:51 DEBUG --- stdout --- 06:40:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 97m 5794Mi am-55f77847b7-ch6mt 106m 5782Mi am-55f77847b7-gbbjq 97m 5795Mi ds-cts-0 5m 382Mi 
ds-cts-1 10m 383Mi ds-cts-2 6m 462Mi ds-idrepo-0 10601m 13823Mi ds-idrepo-1 3399m 13808Mi ds-idrepo-2 3547m 13814Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10015m 5460Mi idm-65858d8c4c-h9wbp 8747m 5394Mi lodemon-65c77dbb64-7jwvp 6m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1304m 701Mi 06:40:51 DEBUG --- stderr --- 06:40:51 DEBUG 06:40:52 INFO 06:40:52 INFO [loop_until]: kubectl --namespace=xlou top node 06:40:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:40:52 INFO [loop_until]: OK (rc = 0) 06:40:52 DEBUG --- stdout --- 06:40:52 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1385Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 165m 1% 6831Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 160m 1% 6941Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 158m 0% 6969Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9255m 58% 6724Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2680m 16% 2171Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10707m 67% 6728Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 3686m 23% 14403Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 52m 0% 1205Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10628m 66% 14396Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3475m 21% 14350Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1382m 8% 2216Mi 3% 06:40:52 DEBUG --- stderr --- 06:40:52 DEBUG 06:41:51 INFO 06:41:51 INFO [loop_until]: kubectl --namespace=xlou top pods 06:41:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:41:52 INFO [loop_until]: OK (rc = 0) 06:41:52 DEBUG --- stdout --- 06:41:52 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 102m 5788Mi am-55f77847b7-ch6mt 109m 5778Mi am-55f77847b7-gbbjq 108m 5796Mi ds-cts-0 6m 382Mi ds-cts-1 11m 383Mi ds-cts-2 6m 462Mi ds-idrepo-0 10225m 13507Mi ds-idrepo-1 2990m 13546Mi ds-idrepo-2 3180m 13475Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10439m 5461Mi idm-65858d8c4c-h9wbp 8974m 5395Mi lodemon-65c77dbb64-7jwvp 6m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1241m 706Mi 06:41:52 DEBUG --- stderr --- 06:41:52 DEBUG 06:41:52 INFO 06:41:52 INFO [loop_until]: kubectl --namespace=xlou top node 06:41:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:41:53 INFO [loop_until]: OK (rc = 0) 06:41:53 DEBUG --- stdout --- 06:41:53 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1386Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 169m 1% 6826Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 159m 1% 6942Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 171m 1% 6963Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9082m 57% 6721Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2717m 17% 2176Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10760m 67% 6729Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 3157m 19% 14081Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1204Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10595m 66% 14126Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 66m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3178m 20% 14119Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1394m 8% 2220Mi 3% 06:41:53 DEBUG --- stderr --- 06:41:53 DEBUG 06:42:52 INFO 06:42:52 INFO [loop_until]: kubectl --namespace=xlou top pods 06:42:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:42:52 INFO [loop_until]: OK (rc = 0) 06:42:52 DEBUG --- stdout --- 
06:42:52 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 103m 5789Mi am-55f77847b7-ch6mt 108m 5779Mi am-55f77847b7-gbbjq 98m 5797Mi ds-cts-0 6m 382Mi ds-cts-1 9m 382Mi ds-cts-2 5m 461Mi ds-idrepo-0 10089m 13416Mi ds-idrepo-1 2687m 13651Mi ds-idrepo-2 2530m 13599Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10256m 5462Mi idm-65858d8c4c-h9wbp 8740m 5395Mi lodemon-65c77dbb64-7jwvp 6m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1295m 712Mi 06:42:52 DEBUG --- stderr --- 06:42:52 DEBUG 06:42:53 INFO 06:42:53 INFO [loop_until]: kubectl --namespace=xlou top node 06:42:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:42:53 INFO [loop_until]: OK (rc = 0) 06:42:53 DEBUG --- stdout --- 06:42:53 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1387Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 164m 1% 6828Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 159m 1% 6942Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 161m 1% 6962Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9263m 58% 6720Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2704m 17% 2173Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10736m 67% 6733Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2449m 15% 14196Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 52m 0% 1203Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10011m 63% 13999Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2469m 15% 14217Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1370m 8% 2224Mi 3% 06:42:53 DEBUG --- stderr --- 06:42:53 DEBUG 06:43:52 INFO 06:43:52 INFO [loop_until]: kubectl --namespace=xlou top pods 06:43:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:43:52 INFO [loop_until]: OK (rc = 0) 06:43:52 DEBUG --- stdout --- 06:43:52 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 97m 5789Mi am-55f77847b7-ch6mt 91m 5779Mi am-55f77847b7-gbbjq 93m 5796Mi ds-cts-0 10m 382Mi ds-cts-1 8m 382Mi ds-cts-2 6m 462Mi ds-idrepo-0 11288m 13573Mi ds-idrepo-1 3130m 13442Mi ds-idrepo-2 3532m 13504Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10126m 5464Mi idm-65858d8c4c-h9wbp 8898m 5395Mi lodemon-65c77dbb64-7jwvp 7m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1296m 718Mi 06:43:52 DEBUG --- stderr --- 06:43:52 DEBUG 06:43:53 INFO 06:43:53 INFO [loop_until]: kubectl --namespace=xlou top node 06:43:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:43:53 INFO [loop_until]: OK (rc = 0) 06:43:53 DEBUG --- stdout --- 06:43:53 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1387Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 161m 1% 6823Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 167m 1% 6943Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 154m 0% 6963Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9371m 58% 6723Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2705m 17% 2162Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10724m 67% 6733Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 3667m 23% 14105Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1202Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 11036m 69% 14199Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3174m 19% 14311Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1347m 8% 2229Mi 3% 06:43:53 DEBUG --- stderr --- 06:43:53 DEBUG 06:44:52 INFO 06:44:52 INFO 
[loop_until]: kubectl --namespace=xlou top pods 06:44:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:44:52 INFO [loop_until]: OK (rc = 0) 06:44:52 DEBUG --- stdout --- 06:44:52 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 94m 5789Mi am-55f77847b7-ch6mt 99m 5779Mi am-55f77847b7-gbbjq 96m 5797Mi ds-cts-0 6m 384Mi ds-cts-1 8m 382Mi ds-cts-2 6m 462Mi ds-idrepo-0 10676m 13435Mi ds-idrepo-1 3393m 13460Mi ds-idrepo-2 3383m 13270Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10040m 5465Mi idm-65858d8c4c-h9wbp 8978m 5396Mi lodemon-65c77dbb64-7jwvp 7m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1285m 723Mi 06:44:52 DEBUG --- stderr --- 06:44:52 DEBUG 06:44:53 INFO 06:44:53 INFO [loop_until]: kubectl --namespace=xlou top node 06:44:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:44:53 INFO [loop_until]: OK (rc = 0) 06:44:53 DEBUG --- stdout --- 06:44:53 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 161m 1% 6824Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 157m 0% 6940Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 156m 0% 6962Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9349m 58% 6720Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2690m 16% 2171Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10697m 67% 6729Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 3267m 20% 13890Mi 23% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1205Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10842m 68% 14026Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3449m 21% 14022Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1354m 8% 2234Mi 3% 06:44:53 DEBUG --- stderr --- 06:44:53 DEBUG 06:45:52 INFO 06:45:52 INFO [loop_until]: kubectl --namespace=xlou top pods 06:45:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:45:52 INFO [loop_until]: OK (rc = 0) 06:45:52 DEBUG --- stdout --- 06:45:52 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 98m 5789Mi am-55f77847b7-ch6mt 102m 5779Mi am-55f77847b7-gbbjq 99m 5797Mi ds-cts-0 7m 382Mi ds-cts-1 8m 382Mi ds-cts-2 6m 462Mi ds-idrepo-0 10069m 13517Mi ds-idrepo-1 2937m 13582Mi ds-idrepo-2 2614m 13330Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 9979m 5465Mi idm-65858d8c4c-h9wbp 8643m 5396Mi lodemon-65c77dbb64-7jwvp 5m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1276m 728Mi 06:45:52 DEBUG --- stderr --- 06:45:52 DEBUG 06:45:53 INFO 06:45:53 INFO [loop_until]: kubectl --namespace=xlou top node 06:45:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:45:53 INFO [loop_until]: OK (rc = 0) 06:45:53 DEBUG --- stdout --- 06:45:53 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 167m 1% 6824Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 161m 1% 6941Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 163m 1% 6961Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9305m 58% 6721Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2691m 16% 2175Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10271m 64% 6732Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2830m 17% 13920Mi 23% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1206Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 9975m 62% 14095Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1127Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-x4wx 3011m 18% 14164Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1312m 8% 2241Mi 3% 06:45:53 DEBUG --- stderr --- 06:45:53 DEBUG 06:46:52 INFO 06:46:52 INFO [loop_until]: kubectl --namespace=xlou top pods 06:46:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:46:52 INFO [loop_until]: OK (rc = 0) 06:46:52 DEBUG --- stdout --- 06:46:52 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 101m 5789Mi am-55f77847b7-ch6mt 102m 5779Mi am-55f77847b7-gbbjq 94m 5797Mi ds-cts-0 6m 382Mi ds-cts-1 8m 382Mi ds-cts-2 7m 462Mi ds-idrepo-0 10850m 13547Mi ds-idrepo-1 2910m 13558Mi ds-idrepo-2 2691m 13280Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10438m 5466Mi idm-65858d8c4c-h9wbp 9044m 5397Mi lodemon-65c77dbb64-7jwvp 7m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1267m 735Mi 06:46:52 DEBUG --- stderr --- 06:46:52 DEBUG 06:46:53 INFO 06:46:53 INFO [loop_until]: kubectl --namespace=xlou top node 06:46:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:46:53 INFO [loop_until]: OK (rc = 0) 06:46:53 DEBUG --- stdout --- 06:46:53 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1385Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 163m 1% 6827Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 158m 0% 6942Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 162m 1% 6963Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9394m 59% 6721Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2721m 17% 2179Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10861m 68% 6732Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2899m 18% 13868Mi 23% gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1206Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10784m 67% 14103Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3072m 19% 14140Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1366m 8% 2256Mi 3% 06:46:53 DEBUG --- stderr --- 06:46:53 DEBUG 06:47:52 INFO 06:47:52 INFO [loop_until]: kubectl --namespace=xlou top pods 06:47:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:47:52 INFO [loop_until]: OK (rc = 0) 06:47:52 DEBUG --- stdout --- 06:47:52 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 99m 5789Mi am-55f77847b7-ch6mt 100m 5779Mi am-55f77847b7-gbbjq 99m 5797Mi ds-cts-0 7m 383Mi ds-cts-1 8m 382Mi ds-cts-2 6m 462Mi ds-idrepo-0 10345m 13462Mi ds-idrepo-1 2946m 13688Mi ds-idrepo-2 2631m 13379Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10299m 5467Mi idm-65858d8c4c-h9wbp 9092m 5397Mi lodemon-65c77dbb64-7jwvp 7m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1288m 739Mi 06:47:52 DEBUG --- stderr --- 06:47:52 DEBUG 06:47:53 INFO 06:47:53 INFO [loop_until]: kubectl --namespace=xlou top node 06:47:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:47:53 INFO [loop_until]: OK (rc = 0) 06:47:53 DEBUG --- stdout --- 06:47:53 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 165m 1% 6824Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 161m 1% 6940Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 159m 1% 6963Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9397m 59% 6724Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2707m 17% 2176Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10802m 67% 6732Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1130Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-4z9d 2479m 15% 13981Mi 23% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1201Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10558m 66% 14031Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3184m 20% 14260Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1369m 8% 2251Mi 3% 06:47:53 DEBUG --- stderr --- 06:47:53 DEBUG 06:48:52 INFO 06:48:52 INFO [loop_until]: kubectl --namespace=xlou top pods 06:48:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:48:52 INFO [loop_until]: OK (rc = 0) 06:48:52 DEBUG --- stdout --- 06:48:52 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 98m 5789Mi am-55f77847b7-ch6mt 100m 5779Mi am-55f77847b7-gbbjq 92m 5797Mi ds-cts-0 7m 383Mi ds-cts-1 8m 383Mi ds-cts-2 6m 462Mi ds-idrepo-0 10238m 13607Mi ds-idrepo-1 2572m 13773Mi ds-idrepo-2 2609m 13359Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10331m 5468Mi idm-65858d8c4c-h9wbp 9404m 5397Mi lodemon-65c77dbb64-7jwvp 6m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1303m 746Mi 06:48:52 DEBUG --- stderr --- 06:48:52 DEBUG 06:48:53 INFO 06:48:53 INFO [loop_until]: kubectl --namespace=xlou top node 06:48:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:48:53 INFO [loop_until]: OK (rc = 0) 06:48:53 DEBUG --- stdout --- 06:48:53 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 87m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 163m 1% 6827Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 152m 0% 6943Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 153m 0% 6964Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 8969m 56% 6725Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2610m 16% 2177Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10708m 67% 6732Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2637m 16% 13959Mi 23% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1201Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10422m 65% 14187Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2630m 16% 14362Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1316m 8% 2255Mi 3% 06:48:53 DEBUG --- stderr --- 06:48:53 DEBUG 06:49:52 INFO 06:49:52 INFO [loop_until]: kubectl --namespace=xlou top pods 06:49:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:49:52 INFO [loop_until]: OK (rc = 0) 06:49:52 DEBUG --- stdout --- 06:49:52 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 101m 5789Mi am-55f77847b7-ch6mt 103m 5779Mi am-55f77847b7-gbbjq 101m 5797Mi ds-cts-0 6m 383Mi ds-cts-1 9m 383Mi ds-cts-2 6m 462Mi ds-idrepo-0 10089m 13670Mi ds-idrepo-1 2589m 13557Mi ds-idrepo-2 2697m 13473Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10304m 5469Mi idm-65858d8c4c-h9wbp 8943m 5398Mi lodemon-65c77dbb64-7jwvp 6m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1281m 751Mi 06:49:52 DEBUG --- stderr --- 06:49:52 DEBUG 06:49:53 INFO 06:49:53 INFO [loop_until]: kubectl --namespace=xlou top node 06:49:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:49:53 INFO [loop_until]: OK (rc = 0) 06:49:53 DEBUG --- stdout --- 06:49:53 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1385Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 163m 1% 6828Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 166m 1% 6941Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 157m 0% 6963Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9351m 
58% 6724Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2702m 17% 2175Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10697m 67% 6745Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2826m 17% 13903Mi 23% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1203Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10262m 64% 14266Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2513m 15% 14145Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1389m 8% 2264Mi 3% 06:49:53 DEBUG --- stderr --- 06:49:53 DEBUG 06:50:52 INFO 06:50:52 INFO [loop_until]: kubectl --namespace=xlou top pods 06:50:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:50:52 INFO [loop_until]: OK (rc = 0) 06:50:52 DEBUG --- stdout --- 06:50:52 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 105m 5789Mi am-55f77847b7-ch6mt 102m 5779Mi am-55f77847b7-gbbjq 107m 5797Mi ds-cts-0 6m 383Mi ds-cts-1 15m 383Mi ds-cts-2 6m 462Mi ds-idrepo-0 10290m 13819Mi ds-idrepo-1 2665m 13678Mi ds-idrepo-2 3436m 13410Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10130m 5470Mi idm-65858d8c4c-h9wbp 8748m 5398Mi lodemon-65c77dbb64-7jwvp 6m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1325m 757Mi 06:50:52 DEBUG --- stderr --- 06:50:52 DEBUG 06:50:53 INFO 06:50:53 INFO [loop_until]: kubectl --namespace=xlou top node 06:50:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:50:54 INFO [loop_until]: OK (rc = 0) 06:50:54 DEBUG --- stdout --- 06:50:54 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 164m 1% 6828Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 163m 1% 6940Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 160m 1% 6963Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9333m 58% 6725Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2618m 16% 2174Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10752m 67% 6739Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 3464m 21% 14021Mi 23% gke-xlou-cdm-ds-32e4dcb1-8bsn 52m 0% 1204Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10714m 67% 14421Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 67m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2809m 17% 14259Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1402m 8% 2266Mi 3% 06:50:54 DEBUG --- stderr --- 06:50:54 DEBUG 06:51:53 INFO 06:51:53 INFO [loop_until]: kubectl --namespace=xlou top pods 06:51:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:51:53 INFO [loop_until]: OK (rc = 0) 06:51:53 DEBUG --- stdout --- 06:51:53 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 101m 5789Mi am-55f77847b7-ch6mt 107m 5779Mi am-55f77847b7-gbbjq 105m 5797Mi ds-cts-0 6m 382Mi ds-cts-1 7m 382Mi ds-cts-2 6m 462Mi ds-idrepo-0 9986m 13867Mi ds-idrepo-1 2588m 13460Mi ds-idrepo-2 2919m 13426Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10447m 5470Mi idm-65858d8c4c-h9wbp 8804m 5399Mi lodemon-65c77dbb64-7jwvp 7m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1272m 762Mi 06:51:53 DEBUG --- stderr --- 06:51:53 DEBUG 06:51:54 INFO 06:51:54 INFO [loop_until]: kubectl --namespace=xlou top node 06:51:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:51:54 INFO [loop_until]: OK (rc = 0) 06:51:54 DEBUG --- stdout --- 06:51:54 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1387Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 
166m 1% 6828Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 169m 1% 6944Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 162m 1% 6965Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9098m 57% 6729Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2715m 17% 2177Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10744m 67% 6736Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 3016m 18% 14023Mi 23% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1206Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 9921m 62% 14458Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2678m 16% 14034Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1389m 8% 2273Mi 3% 06:51:54 DEBUG --- stderr --- 06:51:54 DEBUG 06:52:53 INFO 06:52:53 INFO [loop_until]: kubectl --namespace=xlou top pods 06:52:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:52:53 INFO [loop_until]: OK (rc = 0) 06:52:53 DEBUG --- stdout --- 06:52:53 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 103m 5789Mi am-55f77847b7-ch6mt 108m 5779Mi am-55f77847b7-gbbjq 102m 5797Mi ds-cts-0 5m 383Mi ds-cts-1 7m 384Mi ds-cts-2 17m 462Mi ds-idrepo-0 10225m 13714Mi ds-idrepo-1 2572m 13382Mi ds-idrepo-2 2806m 13277Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10248m 5471Mi idm-65858d8c4c-h9wbp 8962m 5400Mi lodemon-65c77dbb64-7jwvp 7m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1297m 768Mi 06:52:53 DEBUG --- stderr --- 06:52:53 DEBUG 06:52:54 INFO 06:52:54 INFO [loop_until]: kubectl --namespace=xlou top node 06:52:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:52:54 INFO [loop_until]: OK (rc = 0) 06:52:54 DEBUG --- stdout --- 06:52:54 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1388Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 168m 1% 6828Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 166m 1% 6945Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 158m 0% 6966Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9416m 59% 6728Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2630m 16% 2182Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10477m 65% 6741Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2813m 17% 13892Mi 23% gke-xlou-cdm-ds-32e4dcb1-8bsn 65m 0% 1205Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 9839m 61% 14339Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2358m 14% 13955Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1393m 8% 2278Mi 3% 06:52:54 DEBUG --- stderr --- 06:52:54 DEBUG 06:53:53 INFO 06:53:53 INFO [loop_until]: kubectl --namespace=xlou top pods 06:53:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:53:53 INFO [loop_until]: OK (rc = 0) 06:53:53 DEBUG --- stdout --- 06:53:53 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 101m 5792Mi am-55f77847b7-ch6mt 103m 5779Mi am-55f77847b7-gbbjq 96m 5797Mi ds-cts-0 13m 383Mi ds-cts-1 9m 384Mi ds-cts-2 6m 462Mi ds-idrepo-0 10228m 13759Mi ds-idrepo-1 2671m 13544Mi ds-idrepo-2 2650m 13350Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10024m 5470Mi idm-65858d8c4c-h9wbp 8919m 5400Mi lodemon-65c77dbb64-7jwvp 7m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1303m 772Mi 06:53:53 DEBUG --- stderr --- 06:53:53 DEBUG 06:53:54 INFO 06:53:54 INFO [loop_until]: kubectl --namespace=xlou top node 06:53:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:53:54 INFO [loop_until]: OK (rc = 0) 
06:53:54 DEBUG --- stdout --- 06:53:54 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1386Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 163m 1% 6828Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 159m 1% 6946Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 159m 1% 6963Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9262m 58% 6729Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2701m 16% 2176Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10748m 67% 6739Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2497m 15% 14033Mi 23% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1205Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10497m 66% 14322Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2918m 18% 14153Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1387m 8% 2280Mi 3% 06:53:54 DEBUG --- stderr --- 06:53:54 DEBUG 06:54:53 INFO 06:54:53 INFO [loop_until]: kubectl --namespace=xlou top pods 06:54:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:54:53 INFO [loop_until]: OK (rc = 0) 06:54:53 DEBUG --- stdout --- 06:54:53 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 104m 5792Mi am-55f77847b7-ch6mt 105m 5780Mi am-55f77847b7-gbbjq 102m 5797Mi ds-cts-0 5m 383Mi ds-cts-1 8m 384Mi ds-cts-2 6m 462Mi ds-idrepo-0 10356m 13823Mi ds-idrepo-1 2470m 13734Mi ds-idrepo-2 2834m 13570Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10711m 5471Mi idm-65858d8c4c-h9wbp 8841m 5401Mi lodemon-65c77dbb64-7jwvp 9m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1328m 773Mi 06:54:53 DEBUG --- stderr --- 06:54:53 DEBUG 06:54:54 INFO 06:54:54 INFO [loop_until]: kubectl --namespace=xlou top node 06:54:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:54:54 INFO [loop_until]: OK (rc = 0) 06:54:54 DEBUG --- stdout --- 06:54:54 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1387Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 168m 1% 6826Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 166m 1% 6942Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 158m 0% 6966Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9240m 58% 6728Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2624m 16% 2179Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10881m 68% 6738Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2827m 17% 14167Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1205Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10347m 65% 14405Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2411m 15% 14315Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1388m 8% 2283Mi 3% 06:54:54 DEBUG --- stderr --- 06:54:54 DEBUG 06:55:53 INFO 06:55:53 INFO [loop_until]: kubectl --namespace=xlou top pods 06:55:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:55:53 INFO [loop_until]: OK (rc = 0) 06:55:53 DEBUG --- stdout --- 06:55:53 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 98m 5792Mi am-55f77847b7-ch6mt 104m 5780Mi am-55f77847b7-gbbjq 109m 5797Mi ds-cts-0 6m 383Mi ds-cts-1 8m 385Mi ds-cts-2 6m 462Mi ds-idrepo-0 10280m 13790Mi ds-idrepo-1 2195m 13803Mi ds-idrepo-2 2620m 13669Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10229m 5470Mi idm-65858d8c4c-h9wbp 9068m 5401Mi lodemon-65c77dbb64-7jwvp 7m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1348m 773Mi 06:55:53 DEBUG --- stderr --- 06:55:53 DEBUG 
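The capture above repeats the same polling cycle once a minute: a `loop_until` wrapper shells out to `kubectl --namespace=xlou top pods` and `kubectl --namespace=xlou top node`, checks that the return code is 0, and logs stdout/stderr. The lodemon source itself is not part of this capture, so the sketch below is only an approximation of that pattern under stated assumptions; the `kubectl_top` and `parse_pod_usage` helpers, the hard-coded namespace, and the millicore/Mi parsing are illustrative, not the actual lodemon implementation.

```python
# Hypothetical sketch of the polling pattern visible in the log above.
# Assumptions: kubectl is on PATH, the namespace is "xlou", and a one-minute
# cadence; the real lodemon loop_until helper is not shown in this capture.
import subprocess
import time


def kubectl_top(resource, namespace="xlou"):
    """Run `kubectl top <resource>` and return (rc, stdout, stderr)."""
    cmd = ["kubectl", f"--namespace={namespace}", "top", resource]
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return proc.returncode, proc.stdout, proc.stderr


def parse_pod_usage(stdout):
    """Parse `kubectl top pods` output into {pod: (cpu_millicores, memory_mib)}."""
    usage = {}
    for line in stdout.splitlines()[1:]:      # skip the NAME/CPU/MEMORY header
        if not line.strip():
            continue
        name, cpu, mem = line.split()[:3]     # e.g. "am-... 101m 5789Mi"
        usage[name] = (int(cpu.rstrip("m")), int(mem.rstrip("Mi")))
    return usage


if __name__ == "__main__":
    # Poll once per minute, mirroring the 06:xx:5x timestamps in the capture.
    while True:
        rc, out, err = kubectl_top("pods")
        if rc == 0:
            print(parse_pod_usage(out))
        time.sleep(60)
```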
06:55:54 INFO 06:55:54 INFO [loop_until]: kubectl --namespace=xlou top node 06:55:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:55:54 INFO [loop_until]: OK (rc = 0) 06:55:54 DEBUG --- stdout --- 06:55:54 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1387Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 163m 1% 6826Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 161m 1% 6942Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 154m 0% 6961Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9389m 59% 6727Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2715m 17% 2179Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 10774m 67% 6736Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1134Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2277m 14% 14288Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1207Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10587m 66% 14395Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2321m 14% 14413Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1360m 8% 2284Mi 3% 06:55:54 DEBUG --- stderr --- 06:55:54 DEBUG 06:56:53 INFO 06:56:53 INFO [loop_until]: kubectl --namespace=xlou top pods 06:56:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:56:53 INFO [loop_until]: OK (rc = 0) 06:56:53 DEBUG --- stdout --- 06:56:53 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 15m 5792Mi am-55f77847b7-ch6mt 30m 5780Mi am-55f77847b7-gbbjq 23m 5797Mi ds-cts-0 6m 384Mi ds-cts-1 7m 384Mi ds-cts-2 6m 462Mi ds-idrepo-0 12m 13856Mi ds-idrepo-1 376m 13835Mi ds-idrepo-2 12m 13702Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 1336m 5469Mi idm-65858d8c4c-h9wbp 9m 5400Mi lodemon-65c77dbb64-7jwvp 7m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 404m 772Mi 06:56:53 DEBUG --- stderr --- 06:56:53 DEBUG 06:56:54 INFO 06:56:54 INFO [loop_until]: kubectl --namespace=xlou top node 06:56:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:56:54 INFO [loop_until]: OK (rc = 0) 06:56:54 DEBUG --- stdout --- 06:56:54 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1386Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 6825Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 71m 0% 6944Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 6962Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 6728Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 518m 3% 2178Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 866m 5% 6739Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 98m 0% 14314Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1205Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 1318m 8% 14463Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 239m 1% 14441Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 266m 1% 1778Mi 3% 06:56:54 DEBUG --- stderr --- 06:56:54 DEBUG 06:57:53 INFO 06:57:53 INFO [loop_until]: kubectl --namespace=xlou top pods 06:57:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:57:53 INFO [loop_until]: OK (rc = 0) 06:57:53 DEBUG --- stdout --- 06:57:53 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 6m 5792Mi am-55f77847b7-ch6mt 8m 5780Mi am-55f77847b7-gbbjq 8m 5797Mi ds-cts-0 6m 384Mi ds-cts-1 6m 383Mi ds-cts-2 6m 463Mi ds-idrepo-0 14m 13855Mi ds-idrepo-1 9m 13835Mi ds-idrepo-2 12m 13692Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 11m 5469Mi idm-65858d8c4c-h9wbp 9m 5400Mi 
lodemon-65c77dbb64-7jwvp 10m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1m 260Mi 06:57:53 DEBUG --- stderr --- 06:57:53 DEBUG 06:57:54 INFO 06:57:54 INFO [loop_until]: kubectl --namespace=xlou top node 06:57:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:57:54 INFO [loop_until]: OK (rc = 0) 06:57:54 DEBUG --- stdout --- 06:57:54 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1382Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 6826Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 6942Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 6965Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 6731Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 124m 0% 2175Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 74m 0% 6738Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1135Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 67m 0% 14318Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1205Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 63m 0% 14465Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 57m 0% 14438Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 69m 0% 1778Mi 3% 06:57:54 DEBUG --- stderr --- 06:57:54 DEBUG 127.0.0.1 - - [13/Aug/2023 06:58:05] "GET /monitoring/average?start_time=23-08-13_05:27:34&stop_time=23-08-13_05:56:04 HTTP/1.1" 200 - 06:58:53 INFO 06:58:53 INFO [loop_until]: kubectl --namespace=xlou top pods 06:58:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:58:53 INFO [loop_until]: OK (rc = 0) 06:58:53 DEBUG --- stdout --- 06:58:53 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 8m 5792Mi am-55f77847b7-ch6mt 8m 5780Mi am-55f77847b7-gbbjq 10m 5797Mi ds-cts-0 6m 383Mi ds-cts-1 7m 384Mi ds-cts-2 6m 462Mi ds-idrepo-0 11m 13855Mi ds-idrepo-1 9m 13834Mi ds-idrepo-2 10m 13692Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10m 5469Mi idm-65858d8c4c-h9wbp 7m 5399Mi lodemon-65c77dbb64-7jwvp 7m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1695m 541Mi 06:58:53 DEBUG --- stderr --- 06:58:53 DEBUG 06:58:54 INFO 06:58:54 INFO [loop_until]: kubectl --namespace=xlou top node 06:58:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:58:54 INFO [loop_until]: OK (rc = 0) 06:58:54 DEBUG --- stdout --- 06:58:54 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1392Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 6827Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 6944Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 6965Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 6733Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 128m 0% 2172Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 73m 0% 6739Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 14315Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 52m 0% 1205Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 62m 0% 14465Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 60m 0% 14437Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1855m 11% 2053Mi 3% 06:58:54 DEBUG --- stderr --- 06:58:54 DEBUG 06:59:53 INFO 06:59:53 INFO [loop_until]: kubectl --namespace=xlou top pods 06:59:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:59:53 INFO [loop_until]: OK (rc = 0) 06:59:53 DEBUG --- stdout --- 06:59:53 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 68m 5795Mi am-55f77847b7-ch6mt 69m 5780Mi am-55f77847b7-gbbjq 
67m 5798Mi ds-cts-0 6m 383Mi ds-cts-1 8m 384Mi ds-cts-2 5m 462Mi ds-idrepo-0 4365m 13829Mi ds-idrepo-1 2832m 13803Mi ds-idrepo-2 2574m 13823Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 2046m 5503Mi idm-65858d8c4c-h9wbp 1993m 5426Mi lodemon-65c77dbb64-7jwvp 6m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1933m 1328Mi 06:59:53 DEBUG --- stderr --- 06:59:53 DEBUG 06:59:55 INFO 06:59:55 INFO [loop_until]: kubectl --namespace=xlou top node 06:59:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:59:55 INFO [loop_until]: OK (rc = 0) 06:59:55 DEBUG --- stdout --- 06:59:55 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 130m 0% 6827Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 132m 0% 6956Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 133m 0% 6967Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2210m 13% 6747Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2029m 12% 3181Mi 5% gke-xlou-cdm-default-pool-f05840a3-tnc9 2127m 13% 6756Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1135Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2518m 15% 14455Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1206Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 4428m 27% 14444Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2823m 17% 14440Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 2595m 16% 2863Mi 4% 06:59:55 DEBUG --- stderr --- 06:59:55 DEBUG 07:00:53 INFO 07:00:53 INFO [loop_until]: kubectl --namespace=xlou top pods 07:00:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:00:53 INFO [loop_until]: OK (rc = 0) 07:00:53 DEBUG --- stdout --- 07:00:53 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 78m 5795Mi am-55f77847b7-ch6mt 84m 5780Mi am-55f77847b7-gbbjq 85m 5800Mi ds-cts-0 6m 383Mi ds-cts-1 7m 385Mi ds-cts-2 6m 462Mi ds-idrepo-0 5899m 13823Mi ds-idrepo-1 4230m 13853Mi ds-idrepo-2 3637m 13823Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 2430m 5517Mi idm-65858d8c4c-h9wbp 2570m 5442Mi lodemon-65c77dbb64-7jwvp 7m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 2463m 2014Mi 07:00:53 DEBUG --- stderr --- 07:00:53 DEBUG 07:00:55 INFO 07:00:55 INFO [loop_until]: kubectl --namespace=xlou top node 07:00:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:00:55 INFO [loop_until]: OK (rc = 0) 07:00:55 DEBUG --- stdout --- 07:00:55 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 86m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 146m 0% 6826Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 148m 0% 6945Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 146m 0% 6967Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2786m 17% 6748Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2159m 13% 3525Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 2686m 16% 6764Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1136Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 4081m 25% 14458Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1206Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 6027m 37% 14445Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4287m 26% 14457Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 2512m 15% 3453Mi 5% 07:00:55 DEBUG --- stderr --- 07:00:55 DEBUG 07:01:54 INFO 07:01:54 INFO [loop_until]: kubectl --namespace=xlou top pods 07:01:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:01:54 INFO [loop_until]: OK (rc = 0) 07:01:54 DEBUG 
--- stdout --- 07:01:54 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 89m 5795Mi am-55f77847b7-ch6mt 96m 5780Mi am-55f77847b7-gbbjq 93m 5800Mi ds-cts-0 6m 383Mi ds-cts-1 8m 384Mi ds-cts-2 6m 462Mi ds-idrepo-0 6640m 13817Mi ds-idrepo-1 4604m 13796Mi ds-idrepo-2 3939m 13864Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 3078m 5496Mi idm-65858d8c4c-h9wbp 2691m 5426Mi lodemon-65c77dbb64-7jwvp 6m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1459m 1968Mi 07:01:54 DEBUG --- stderr --- 07:01:54 DEBUG 07:01:55 INFO 07:01:55 INFO [loop_until]: kubectl --namespace=xlou top node 07:01:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:01:55 INFO [loop_until]: OK (rc = 0) 07:01:55 DEBUG --- stdout --- 07:01:55 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 158m 0% 6827Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 155m 0% 6946Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 148m 0% 6966Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2845m 17% 6755Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2139m 13% 2210Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3263m 20% 6763Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1134Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 4003m 25% 14458Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1205Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 6602m 41% 14437Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4518m 28% 14417Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1558m 9% 3451Mi 5% 07:01:55 DEBUG --- stderr --- 07:01:55 DEBUG 07:02:54 INFO 07:02:54 INFO [loop_until]: kubectl --namespace=xlou top pods 07:02:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:02:54 INFO [loop_until]: OK (rc = 0) 07:02:54 DEBUG --- stdout --- 07:02:54 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 115m 5795Mi am-55f77847b7-ch6mt 114m 5780Mi am-55f77847b7-gbbjq 118m 5800Mi ds-cts-0 6m 383Mi ds-cts-1 7m 384Mi ds-cts-2 6m 463Mi ds-idrepo-0 8532m 13822Mi ds-idrepo-1 5836m 13824Mi ds-idrepo-2 5951m 13823Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 3970m 5506Mi idm-65858d8c4c-h9wbp 3949m 5436Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1021m 2042Mi 07:02:54 DEBUG --- stderr --- 07:02:54 DEBUG 07:02:55 INFO 07:02:55 INFO [loop_until]: kubectl --namespace=xlou top node 07:02:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:02:55 INFO [loop_until]: OK (rc = 0) 07:02:55 DEBUG --- stdout --- 07:02:55 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 181m 1% 6826Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 172m 1% 6949Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 171m 1% 6967Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4247m 26% 6759Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2200m 13% 2992Mi 5% gke-xlou-cdm-default-pool-f05840a3-tnc9 4197m 26% 6768Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 51m 0% 1137Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 5805m 36% 14454Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 52m 0% 1206Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 8997m 56% 14439Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5795m 36% 14438Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1455m 9% 3471Mi 5% 07:02:55 DEBUG --- stderr --- 07:02:55 DEBUG 07:03:54 INFO 07:03:54 INFO 
[loop_until]: kubectl --namespace=xlou top pods 07:03:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:03:54 INFO [loop_until]: OK (rc = 0) 07:03:54 DEBUG --- stdout --- 07:03:54 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 125m 5796Mi am-55f77847b7-ch6mt 115m 5781Mi am-55f77847b7-gbbjq 112m 5801Mi ds-cts-0 5m 383Mi ds-cts-1 7m 384Mi ds-cts-2 6m 462Mi ds-idrepo-0 9668m 13823Mi ds-idrepo-1 6411m 13822Mi ds-idrepo-2 6640m 13832Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 4159m 5512Mi idm-65858d8c4c-h9wbp 4028m 5444Mi lodemon-65c77dbb64-7jwvp 7m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 2298m 2068Mi 07:03:54 DEBUG --- stderr --- 07:03:54 DEBUG 07:03:55 INFO 07:03:55 INFO [loop_until]: kubectl --namespace=xlou top node 07:03:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:03:55 INFO [loop_until]: OK (rc = 0) 07:03:55 DEBUG --- stdout --- 07:03:55 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 181m 1% 6829Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 178m 1% 6946Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 176m 1% 6969Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4314m 27% 6757Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2349m 14% 3284Mi 5% gke-xlou-cdm-default-pool-f05840a3-tnc9 4742m 29% 6764Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 6627m 41% 14507Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1206Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 9927m 62% 14438Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 6532m 41% 14438Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 2316m 14% 3474Mi 5% 07:03:55 DEBUG --- stderr --- 07:03:55 DEBUG 07:04:54 INFO 07:04:54 INFO [loop_until]: kubectl --namespace=xlou top pods 07:04:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:04:54 INFO [loop_until]: OK (rc = 0) 07:04:54 DEBUG --- stdout --- 07:04:54 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 94m 5796Mi am-55f77847b7-ch6mt 99m 5780Mi am-55f77847b7-gbbjq 67m 5802Mi ds-cts-0 6m 384Mi ds-cts-1 7m 384Mi ds-cts-2 6m 463Mi ds-idrepo-0 7503m 13822Mi ds-idrepo-1 6139m 13790Mi ds-idrepo-2 8734m 13778Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 4926m 5497Mi idm-65858d8c4c-h9wbp 3362m 5431Mi lodemon-65c77dbb64-7jwvp 9m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1107m 2019Mi 07:04:54 DEBUG --- stderr --- 07:04:54 DEBUG 07:04:55 INFO 07:04:55 INFO [loop_until]: kubectl --namespace=xlou top node 07:04:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:04:55 INFO [loop_until]: OK (rc = 0) 07:04:55 DEBUG --- stdout --- 07:04:55 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 87m 0% 1385Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 190m 1% 6829Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 177m 1% 6946Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 153m 0% 6967Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2860m 17% 6756Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2210m 13% 2344Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4736m 29% 6764Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 8596m 54% 14433Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1207Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 7211m 45% 14445Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1132Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-x4wx 7079m 44% 14474Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1242m 7% 3471Mi 5% 07:04:55 DEBUG --- stderr --- 07:04:55 DEBUG 07:05:54 INFO 07:05:54 INFO [loop_until]: kubectl --namespace=xlou top pods 07:05:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:05:54 INFO [loop_until]: OK (rc = 0) 07:05:54 DEBUG --- stdout --- 07:05:54 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 145m 5796Mi am-55f77847b7-ch6mt 139m 5781Mi am-55f77847b7-gbbjq 133m 5802Mi ds-cts-0 6m 383Mi ds-cts-1 7m 384Mi ds-cts-2 6m 463Mi ds-idrepo-0 11190m 13799Mi ds-idrepo-1 7383m 13819Mi ds-idrepo-2 8098m 13768Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 1204m 3390Mi idm-65858d8c4c-h9wbp 11748m 5432Mi lodemon-65c77dbb64-7jwvp 5m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1074m 2025Mi 07:05:54 DEBUG --- stderr --- 07:05:54 DEBUG 07:05:55 INFO 07:05:55 INFO [loop_until]: kubectl --namespace=xlou top node 07:05:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:05:55 INFO [loop_until]: OK (rc = 0) 07:05:55 DEBUG --- stdout --- 07:05:55 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 197m 1% 6826Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 198m 1% 6949Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 202m 1% 6967Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 12553m 78% 6758Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2241m 14% 2397Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 1486m 9% 4690Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1134Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 8251m 51% 14427Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1207Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 11408m 71% 14397Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 7677m 48% 14432Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1112m 6% 3474Mi 5% 07:05:55 DEBUG --- stderr --- 07:05:55 DEBUG 07:06:54 INFO 07:06:54 INFO [loop_until]: kubectl --namespace=xlou top pods 07:06:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:06:54 INFO [loop_until]: OK (rc = 0) 07:06:54 DEBUG --- stdout --- 07:06:54 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 139m 5796Mi am-55f77847b7-ch6mt 149m 5781Mi am-55f77847b7-gbbjq 137m 5802Mi ds-cts-0 7m 383Mi ds-cts-1 7m 384Mi ds-cts-2 5m 462Mi ds-idrepo-0 12191m 13813Mi ds-idrepo-1 8364m 13851Mi ds-idrepo-2 8696m 13852Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 8791m 4361Mi idm-65858d8c4c-h9wbp 5021m 5434Mi lodemon-65c77dbb64-7jwvp 7m 65Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1090m 2041Mi 07:06:54 DEBUG --- stderr --- 07:06:54 DEBUG 07:06:55 INFO 07:06:55 INFO [loop_until]: kubectl --namespace=xlou top node 07:06:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:06:55 INFO [loop_until]: OK (rc = 0) 07:06:55 DEBUG --- stdout --- 07:06:55 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1387Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 183m 1% 6825Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 177m 1% 6950Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 177m 1% 6973Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3276m 20% 6756Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2346m 14% 2748Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 6784m 42% 5713Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1134Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-4z9d 8567m 53% 14450Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1207Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 11753m 73% 14438Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 7981m 50% 14429Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1157m 7% 3477Mi 5% 07:06:55 DEBUG --- stderr --- 07:06:55 DEBUG 07:07:54 INFO 07:07:54 INFO [loop_until]: kubectl --namespace=xlou top pods 07:07:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:07:54 INFO [loop_until]: OK (rc = 0) 07:07:54 DEBUG --- stdout --- 07:07:54 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 147m 5797Mi am-55f77847b7-ch6mt 147m 5781Mi am-55f77847b7-gbbjq 136m 5802Mi ds-cts-0 7m 383Mi ds-cts-1 8m 384Mi ds-cts-2 6m 462Mi ds-idrepo-0 12537m 13823Mi ds-idrepo-1 8106m 13856Mi ds-idrepo-2 8466m 13779Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 5811m 4512Mi idm-65858d8c4c-h9wbp 5617m 5434Mi lodemon-65c77dbb64-7jwvp 7m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1127m 2054Mi 07:07:54 DEBUG --- stderr --- 07:07:54 DEBUG 07:07:55 INFO 07:07:55 INFO [loop_until]: kubectl --namespace=xlou top node 07:07:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:07:56 INFO [loop_until]: OK (rc = 0) 07:07:56 DEBUG --- stdout --- 07:07:56 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 188m 1% 6838Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 185m 1% 6948Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 195m 1% 6970Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6714m 42% 6759Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2341m 14% 2834Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 5074m 31% 5888Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1134Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 8705m 54% 14393Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1209Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 11919m 75% 14425Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 66m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 8913m 56% 14378Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1261m 7% 3479Mi 5% 07:07:56 DEBUG --- stderr --- 07:07:56 DEBUG 07:08:54 INFO 07:08:54 INFO [loop_until]: kubectl --namespace=xlou top pods 07:08:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:08:54 INFO [loop_until]: OK (rc = 0) 07:08:54 DEBUG --- stdout --- 07:08:54 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 135m 5797Mi am-55f77847b7-ch6mt 140m 5781Mi am-55f77847b7-gbbjq 131m 5802Mi ds-cts-0 7m 383Mi ds-cts-1 8m 384Mi ds-cts-2 7m 462Mi ds-idrepo-0 11843m 13821Mi ds-idrepo-1 8638m 13825Mi ds-idrepo-2 7304m 13785Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 5635m 4649Mi idm-65858d8c4c-h9wbp 5837m 5442Mi lodemon-65c77dbb64-7jwvp 8m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1519m 2070Mi 07:08:54 DEBUG --- stderr --- 07:08:54 DEBUG 07:08:56 INFO 07:08:56 INFO [loop_until]: kubectl --namespace=xlou top node 07:08:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:08:56 INFO [loop_until]: OK (rc = 0) 07:08:56 DEBUG --- stdout --- 07:08:56 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1382Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 191m 1% 6828Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 192m 1% 6951Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 183m 1% 6975Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5355m 
33% 6756Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2535m 15% 3002Mi 5% gke-xlou-cdm-default-pool-f05840a3-tnc9 6005m 37% 5985Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 7583m 47% 14500Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1208Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 11521m 72% 14361Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 8360m 52% 14362Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1837m 11% 3479Mi 5% 07:08:56 DEBUG --- stderr --- 07:08:56 DEBUG 07:09:54 INFO 07:09:54 INFO [loop_until]: kubectl --namespace=xlou top pods 07:09:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:09:54 INFO [loop_until]: OK (rc = 0) 07:09:54 DEBUG --- stdout --- 07:09:54 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 157m 5797Mi am-55f77847b7-ch6mt 151m 5781Mi am-55f77847b7-gbbjq 144m 5802Mi ds-cts-0 8m 384Mi ds-cts-1 8m 384Mi ds-cts-2 9m 463Mi ds-idrepo-0 11635m 13748Mi ds-idrepo-1 8722m 13888Mi ds-idrepo-2 9564m 13833Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 6690m 4829Mi idm-65858d8c4c-h9wbp 7037m 5438Mi lodemon-65c77dbb64-7jwvp 6m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1628m 2067Mi 07:09:54 DEBUG --- stderr --- 07:09:54 DEBUG 07:09:56 INFO 07:09:56 INFO [loop_until]: kubectl --namespace=xlou top node 07:09:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:09:56 INFO [loop_until]: OK (rc = 0) 07:09:56 DEBUG --- stdout --- 07:09:56 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1381Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 201m 1% 6825Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 203m 1% 6945Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 205m 1% 6973Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 10254m 64% 6768Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2315m 14% 2993Mi 5% gke-xlou-cdm-default-pool-f05840a3-tnc9 3037m 19% 6207Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1134Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 9292m 58% 14515Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1209Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 11717m 73% 14426Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9887m 62% 14396Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1297m 8% 3481Mi 5% 07:09:56 DEBUG --- stderr --- 07:09:56 DEBUG 07:10:55 INFO 07:10:55 INFO [loop_until]: kubectl --namespace=xlou top pods 07:10:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:10:55 INFO [loop_until]: OK (rc = 0) 07:10:55 DEBUG --- stdout --- 07:10:55 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 145m 5797Mi am-55f77847b7-ch6mt 142m 5781Mi am-55f77847b7-gbbjq 142m 5802Mi ds-cts-0 13m 384Mi ds-cts-1 7m 384Mi ds-cts-2 5m 462Mi ds-idrepo-0 11422m 13822Mi ds-idrepo-1 8165m 13798Mi ds-idrepo-2 7817m 13816Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 4109m 4952Mi idm-65858d8c4c-h9wbp 8897m 5444Mi lodemon-65c77dbb64-7jwvp 8m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1135m 2062Mi 07:10:55 DEBUG --- stderr --- 07:10:55 DEBUG 07:10:56 INFO 07:10:56 INFO [loop_until]: kubectl --namespace=xlou top node 07:10:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:10:56 INFO [loop_until]: OK (rc = 0) 07:10:56 DEBUG --- stdout --- 07:10:56 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1387Mi 2% 
gke-xlou-cdm-default-pool-f05840a3-5pbc 188m 1% 6826Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 167m 1% 6949Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 186m 1% 6972Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 8325m 52% 6758Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2468m 15% 2215Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 5341m 33% 2411Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 7221m 45% 14454Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 1208Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10941m 68% 14446Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 7205m 45% 14496Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1561m 9% 3482Mi 5% 07:10:56 DEBUG --- stderr --- 07:10:56 DEBUG 07:11:55 INFO 07:11:55 INFO [loop_until]: kubectl --namespace=xlou top pods 07:11:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:11:55 INFO [loop_until]: OK (rc = 0) 07:11:55 DEBUG --- stdout --- 07:11:55 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 118m 5797Mi am-55f77847b7-ch6mt 124m 5781Mi am-55f77847b7-gbbjq 97m 5802Mi ds-cts-0 8m 385Mi ds-cts-1 7m 384Mi ds-cts-2 12m 463Mi ds-idrepo-0 8094m 13803Mi ds-idrepo-1 6131m 13814Mi ds-idrepo-2 8373m 13892Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 206m 1243Mi idm-65858d8c4c-h9wbp 9338m 5436Mi lodemon-65c77dbb64-7jwvp 9m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1287m 2028Mi 07:11:55 DEBUG --- stderr --- 07:11:55 DEBUG 07:11:56 INFO 07:11:56 INFO [loop_until]: kubectl --namespace=xlou top node 07:11:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:11:56 INFO [loop_until]: OK (rc = 0) 07:11:56 DEBUG --- stdout --- 07:11:56 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1385Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 193m 1% 6826Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 177m 1% 6949Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 187m 1% 6972Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 8214m 51% 6756Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2276m 14% 2460Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 7977m 50% 5407Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1135Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 8040m 50% 14460Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1210Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 11573m 72% 14416Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 6689m 42% 14446Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1149m 7% 3482Mi 5% 07:11:56 DEBUG --- stderr --- 07:11:56 DEBUG 07:12:55 INFO 07:12:55 INFO [loop_until]: kubectl --namespace=xlou top pods 07:12:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:12:55 INFO [loop_until]: OK (rc = 0) 07:12:55 DEBUG --- stdout --- 07:12:55 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 138m 5797Mi am-55f77847b7-ch6mt 137m 5782Mi am-55f77847b7-gbbjq 140m 5802Mi ds-cts-0 6m 384Mi ds-cts-1 7m 384Mi ds-cts-2 8m 464Mi ds-idrepo-0 11716m 13788Mi ds-idrepo-1 8273m 13877Mi ds-idrepo-2 7732m 13822Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 6231m 4271Mi idm-65858d8c4c-h9wbp 5666m 5432Mi lodemon-65c77dbb64-7jwvp 7m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1142m 2045Mi 07:12:55 DEBUG --- stderr --- 07:12:55 DEBUG 07:12:56 INFO 07:12:56 INFO [loop_until]: kubectl --namespace=xlou top node 07:12:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 
07:12:56 INFO [loop_until]: OK (rc = 0) 07:12:56 DEBUG --- stdout --- 07:12:56 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1390Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 200m 1% 6827Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 169m 1% 6953Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 186m 1% 6973Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5332m 33% 6757Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2278m 14% 2668Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 5338m 33% 5620Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1134Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 9124m 57% 14474Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1216Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 11544m 72% 14449Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 8784m 55% 14488Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1176m 7% 3482Mi 5% 07:12:56 DEBUG --- stderr --- 07:12:56 DEBUG 07:13:55 INFO 07:13:55 INFO [loop_until]: kubectl --namespace=xlou top pods 07:13:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:13:55 INFO [loop_until]: OK (rc = 0) 07:13:55 DEBUG --- stdout --- 07:13:55 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 137m 5798Mi am-55f77847b7-ch6mt 153m 5782Mi am-55f77847b7-gbbjq 136m 5802Mi ds-cts-0 12m 384Mi ds-cts-1 7m 384Mi ds-cts-2 6m 464Mi ds-idrepo-0 11693m 13791Mi ds-idrepo-1 9058m 13823Mi ds-idrepo-2 9109m 13772Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 5951m 4377Mi idm-65858d8c4c-h9wbp 7061m 5437Mi lodemon-65c77dbb64-7jwvp 6m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1182m 2052Mi 07:13:55 DEBUG --- stderr --- 07:13:55 DEBUG 07:13:56 INFO 07:13:56 INFO [loop_until]: kubectl --namespace=xlou top node 07:13:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:13:56 INFO [loop_until]: OK (rc = 0) 07:13:56 DEBUG --- stdout --- 07:13:56 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1392Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 210m 1% 6830Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 202m 1% 6951Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 195m 1% 6974Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 8071m 50% 6755Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2340m 14% 2719Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 4914m 30% 5748Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1135Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 9867m 62% 14446Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 1208Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 11656m 73% 14426Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 7967m 50% 14423Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1198m 7% 3480Mi 5% 07:13:56 DEBUG --- stderr --- 07:13:56 DEBUG 07:14:55 INFO 07:14:55 INFO [loop_until]: kubectl --namespace=xlou top pods 07:14:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:14:55 INFO [loop_until]: OK (rc = 0) 07:14:55 DEBUG --- stdout --- 07:14:55 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 154m 5797Mi am-55f77847b7-ch6mt 150m 5782Mi am-55f77847b7-gbbjq 140m 5802Mi ds-cts-0 6m 384Mi ds-cts-1 7m 384Mi ds-cts-2 6m 464Mi ds-idrepo-0 12092m 13824Mi ds-idrepo-1 7558m 13712Mi ds-idrepo-2 9484m 13791Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 7077m 4503Mi idm-65858d8c4c-h9wbp 6170m 5439Mi lodemon-65c77dbb64-7jwvp 8m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1220m 2056Mi 07:14:55 
DEBUG --- stderr --- 07:14:55 DEBUG 07:14:56 INFO 07:14:56 INFO [loop_until]: kubectl --namespace=xlou top node 07:14:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:14:56 INFO [loop_until]: OK (rc = 0) 07:14:56 DEBUG --- stdout --- 07:14:56 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1388Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 186m 1% 6830Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 175m 1% 6950Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 177m 1% 6972Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3607m 22% 6757Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2317m 14% 2723Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 6307m 39% 5861Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1134Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 7906m 49% 14375Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1207Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 12024m 75% 14409Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 8586m 54% 14424Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1151m 7% 3480Mi 5% 07:14:56 DEBUG --- stderr --- 07:14:56 DEBUG 07:15:55 INFO 07:15:55 INFO [loop_until]: kubectl --namespace=xlou top pods 07:15:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:15:55 INFO [loop_until]: OK (rc = 0) 07:15:55 DEBUG --- stdout --- 07:15:55 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 142m 5797Mi am-55f77847b7-ch6mt 148m 5782Mi am-55f77847b7-gbbjq 137m 5803Mi ds-cts-0 6m 384Mi ds-cts-1 7m 384Mi ds-cts-2 5m 463Mi ds-idrepo-0 12406m 13684Mi ds-idrepo-1 9272m 13588Mi ds-idrepo-2 9797m 13825Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 7006m 4683Mi idm-65858d8c4c-h9wbp 7199m 5439Mi lodemon-65c77dbb64-7jwvp 7m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1133m 2057Mi 07:15:55 DEBUG --- stderr --- 07:15:55 DEBUG 07:15:56 INFO 07:15:56 INFO [loop_until]: kubectl --namespace=xlou top node 07:15:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:15:56 INFO [loop_until]: OK (rc = 0) 07:15:56 DEBUG --- stdout --- 07:15:56 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1391Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 203m 1% 6832Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 196m 1% 6947Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 201m 1% 6972Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7493m 47% 6759Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2219m 13% 2773Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 5376m 33% 6027Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1136Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 9877m 62% 14432Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1207Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 11700m 73% 14413Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1134Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 8598m 54% 14408Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1194m 7% 3482Mi 5% 07:15:56 DEBUG --- stderr --- 07:15:56 DEBUG 07:16:55 INFO 07:16:55 INFO [loop_until]: kubectl --namespace=xlou top pods 07:16:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:16:55 INFO [loop_until]: OK (rc = 0) 07:16:55 DEBUG --- stdout --- 07:16:55 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 141m 5797Mi am-55f77847b7-ch6mt 142m 5782Mi am-55f77847b7-gbbjq 138m 5803Mi ds-cts-0 5m 384Mi ds-cts-1 7m 384Mi ds-cts-2 6m 463Mi ds-idrepo-0 12320m 13827Mi ds-idrepo-1 7245m 13784Mi ds-idrepo-2 8852m 13768Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi 
idm-65858d8c4c-9pfjc 6130m 4771Mi idm-65858d8c4c-h9wbp 6197m 5439Mi lodemon-65c77dbb64-7jwvp 6m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1164m 2056Mi 07:16:55 DEBUG --- stderr --- 07:16:55 DEBUG 07:16:57 INFO 07:16:57 INFO [loop_until]: kubectl --namespace=xlou top node 07:16:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:16:57 INFO [loop_until]: OK (rc = 0) 07:16:57 DEBUG --- stdout --- 07:16:57 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1389Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 181m 1% 6832Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 182m 1% 6951Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 180m 1% 6973Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3347m 21% 6758Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2330m 14% 2783Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 5550m 34% 6126Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1135Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 6933m 43% 14455Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1208Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 11512m 72% 14446Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 8557m 53% 14391Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1165m 7% 3479Mi 5% 07:16:57 DEBUG --- stderr --- 07:16:57 DEBUG 07:17:55 INFO 07:17:55 INFO [loop_until]: kubectl --namespace=xlou top pods 07:17:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:17:55 INFO [loop_until]: OK (rc = 0) 07:17:55 DEBUG --- stdout --- 07:17:55 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 142m 5798Mi am-55f77847b7-ch6mt 133m 5782Mi am-55f77847b7-gbbjq 134m 5803Mi ds-cts-0 6m 384Mi ds-cts-1 8m 384Mi ds-cts-2 6m 464Mi ds-idrepo-0 12502m 13824Mi ds-idrepo-1 6962m 13815Mi ds-idrepo-2 7427m 13823Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 5347m 4954Mi idm-65858d8c4c-h9wbp 5600m 5442Mi lodemon-65c77dbb64-7jwvp 7m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1728m 2082Mi 07:17:55 DEBUG --- stderr --- 07:17:55 DEBUG 07:17:57 INFO 07:17:57 INFO [loop_until]: kubectl --namespace=xlou top node 07:17:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:17:57 INFO [loop_until]: OK (rc = 0) 07:17:57 DEBUG --- stdout --- 07:17:57 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1386Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 210m 1% 6831Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 200m 1% 6952Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 202m 1% 6969Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 7541m 47% 6761Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2363m 14% 2969Mi 5% gke-xlou-cdm-default-pool-f05840a3-tnc9 6570m 41% 6351Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1136Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 9661m 60% 14429Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 1203Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 12384m 77% 14480Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 69m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9584m 60% 14408Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1292m 8% 3481Mi 5% 07:17:57 DEBUG --- stderr --- 07:17:57 DEBUG 07:18:55 INFO 07:18:55 INFO [loop_until]: kubectl --namespace=xlou top pods 07:18:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:18:55 INFO [loop_until]: OK (rc = 0) 07:18:55 DEBUG --- stdout --- 07:18:55 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 147m 5797Mi am-55f77847b7-ch6mt 149m 5782Mi am-55f77847b7-gbbjq 
144m 5803Mi ds-cts-0 6m 384Mi ds-cts-1 8m 384Mi ds-cts-2 6m 464Mi ds-idrepo-0 12466m 13772Mi ds-idrepo-1 8778m 13808Mi ds-idrepo-2 9362m 13862Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 6251m 5113Mi idm-65858d8c4c-h9wbp 6501m 5439Mi lodemon-65c77dbb64-7jwvp 5m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1212m 2071Mi 07:18:55 DEBUG --- stderr --- 07:18:55 DEBUG 07:18:57 INFO 07:18:57 INFO [loop_until]: kubectl --namespace=xlou top node 07:18:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:18:57 INFO [loop_until]: OK (rc = 0) 07:18:57 DEBUG --- stdout --- 07:18:57 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1390Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 185m 1% 6832Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 183m 1% 6949Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 183m 1% 6971Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5048m 31% 6789Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2372m 14% 2971Mi 5% gke-xlou-cdm-default-pool-f05840a3-tnc9 5573m 35% 6465Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1137Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 8834m 55% 14403Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1204Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 12182m 76% 14416Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 7586m 47% 14379Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1318m 8% 3481Mi 5% 07:18:57 DEBUG --- stderr --- 07:18:57 DEBUG 07:19:55 INFO 07:19:55 INFO [loop_until]: kubectl --namespace=xlou top pods 07:19:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:19:56 INFO [loop_until]: OK (rc = 0) 07:19:56 DEBUG --- stdout --- 07:19:56 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 140m 5798Mi am-55f77847b7-ch6mt 138m 5782Mi am-55f77847b7-gbbjq 137m 5803Mi ds-cts-0 6m 384Mi ds-cts-1 7m 384Mi ds-cts-2 6m 464Mi ds-idrepo-0 11551m 13821Mi ds-idrepo-1 10551m 13719Mi ds-idrepo-2 9225m 13826Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 2602m 5413Mi idm-65858d8c4c-h9wbp 8533m 5480Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1767m 2091Mi 07:19:56 DEBUG --- stderr --- 07:19:56 DEBUG 07:19:57 INFO 07:19:57 INFO [loop_until]: kubectl --namespace=xlou top node 07:19:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:19:57 INFO [loop_until]: OK (rc = 0) 07:19:57 DEBUG --- stdout --- 07:19:57 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1391Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 193m 1% 6832Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 186m 1% 6949Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 194m 1% 6974Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 10063m 63% 6787Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2492m 15% 3018Mi 5% gke-xlou-cdm-default-pool-f05840a3-tnc9 4392m 27% 6697Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1138Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 9252m 58% 14439Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1210Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 12114m 76% 14314Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9818m 61% 14242Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1838m 11% 3482Mi 5% 07:19:57 DEBUG --- stderr --- 07:19:57 DEBUG 07:20:56 INFO 07:20:56 INFO [loop_until]: kubectl --namespace=xlou top pods 07:20:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:20:56 INFO [loop_until]: OK (rc = 0) 
07:20:56 DEBUG --- stdout --- 07:20:56 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 97m 5798Mi am-55f77847b7-ch6mt 121m 5782Mi am-55f77847b7-gbbjq 78m 5803Mi ds-cts-0 5m 384Mi ds-cts-1 7m 384Mi ds-cts-2 6m 463Mi ds-idrepo-0 8407m 13706Mi ds-idrepo-1 6250m 13814Mi ds-idrepo-2 8018m 13718Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 3064m 1386Mi idm-65858d8c4c-h9wbp 7819m 5465Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 2014m 2039Mi 07:20:56 DEBUG --- stderr --- 07:20:56 DEBUG 07:20:57 INFO 07:20:57 INFO [loop_until]: kubectl --namespace=xlou top node 07:20:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:20:57 INFO [loop_until]: OK (rc = 0) 07:20:57 DEBUG --- stdout --- 07:20:57 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1389Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 148m 0% 6833Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 156m 0% 6946Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 137m 0% 6971Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4913m 30% 6789Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2662m 16% 2205Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6624m 41% 2022Mi 3% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1137Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 6858m 43% 14211Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1210Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 6980m 43% 14311Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 6240m 39% 14411Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 2143m 13% 3494Mi 5% 07:20:57 DEBUG --- stderr --- 07:20:57 DEBUG 07:21:56 INFO 07:21:56 INFO [loop_until]: kubectl --namespace=xlou top pods 07:21:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:21:56 INFO [loop_until]: OK (rc = 0) 07:21:56 DEBUG --- stdout --- 07:21:56 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 136m 5798Mi am-55f77847b7-ch6mt 125m 5782Mi am-55f77847b7-gbbjq 128m 5803Mi ds-cts-0 6m 384Mi ds-cts-1 8m 384Mi ds-cts-2 6m 464Mi ds-idrepo-0 11561m 13843Mi ds-idrepo-1 7619m 13825Mi ds-idrepo-2 6948m 13822Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 5976m 4130Mi idm-65858d8c4c-h9wbp 8504m 5459Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1042m 2059Mi 07:21:56 DEBUG --- stderr --- 07:21:56 DEBUG 07:21:57 INFO 07:21:57 INFO [loop_until]: kubectl --namespace=xlou top node 07:21:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:21:57 INFO [loop_until]: OK (rc = 0) 07:21:57 DEBUG --- stdout --- 07:21:57 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1386Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 181m 1% 6830Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 187m 1% 6948Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 186m 1% 6972Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6978m 43% 6782Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2156m 13% 2430Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 6287m 39% 5424Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1136Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 7151m 45% 14447Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1209Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 11604m 73% 14419Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 8481m 53% 14447Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1127m 7% 3481Mi 5% 07:21:57 DEBUG --- stderr --- 07:21:57 DEBUG 07:22:56 
INFO 07:22:56 INFO [loop_until]: kubectl --namespace=xlou top pods 07:22:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:22:56 INFO [loop_until]: OK (rc = 0) 07:22:56 DEBUG --- stdout --- 07:22:56 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 151m 5798Mi am-55f77847b7-ch6mt 150m 5783Mi am-55f77847b7-gbbjq 146m 5803Mi ds-cts-0 5m 384Mi ds-cts-1 7m 384Mi ds-cts-2 6m 464Mi ds-idrepo-0 11022m 13795Mi ds-idrepo-1 9091m 13823Mi ds-idrepo-2 8443m 13815Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 6231m 4407Mi idm-65858d8c4c-h9wbp 6361m 5460Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1105m 2066Mi 07:22:56 DEBUG --- stderr --- 07:22:56 DEBUG 07:22:57 INFO 07:22:57 INFO [loop_until]: kubectl --namespace=xlou top node 07:22:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:22:57 INFO [loop_until]: OK (rc = 0) 07:22:57 DEBUG --- stdout --- 07:22:57 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1388Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 208m 1% 6830Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 202m 1% 6951Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 206m 1% 6972Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6292m 39% 6785Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2376m 14% 2568Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 6689m 42% 5714Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1135Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 9200m 57% 14434Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1207Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10979m 69% 14439Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 8424m 53% 14421Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1202m 7% 3481Mi 5% 07:22:57 DEBUG --- stderr --- 07:22:57 DEBUG 07:23:56 INFO 07:23:56 INFO [loop_until]: kubectl --namespace=xlou top pods 07:23:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:23:56 INFO [loop_until]: OK (rc = 0) 07:23:56 DEBUG --- stdout --- 07:23:56 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 97m 5799Mi am-55f77847b7-ch6mt 96m 5783Mi am-55f77847b7-gbbjq 105m 5804Mi ds-cts-0 6m 384Mi ds-cts-1 7m 384Mi ds-cts-2 5m 464Mi ds-idrepo-0 8283m 13835Mi ds-idrepo-1 4973m 13824Mi ds-idrepo-2 6847m 13811Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 4950m 4549Mi idm-65858d8c4c-h9wbp 2366m 5460Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1137m 2060Mi 07:23:56 DEBUG --- stderr --- 07:23:56 DEBUG 07:23:57 INFO 07:23:57 INFO [loop_until]: kubectl --namespace=xlou top node 07:23:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:23:57 INFO [loop_until]: OK (rc = 0) 07:23:57 DEBUG --- stdout --- 07:23:57 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1387Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 158m 0% 6832Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 152m 0% 6953Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 168m 1% 6971Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3855m 24% 6788Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2132m 13% 2357Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 4621m 29% 5840Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1135Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 7184m 45% 14269Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1207Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 9539m 60% 14403Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 
63m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4142m 26% 14434Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1158m 7% 3480Mi 5% 07:23:57 DEBUG --- stderr --- 07:23:57 DEBUG 07:24:56 INFO 07:24:56 INFO [loop_until]: kubectl --namespace=xlou top pods 07:24:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:24:56 INFO [loop_until]: OK (rc = 0) 07:24:56 DEBUG --- stdout --- 07:24:56 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 147m 5799Mi am-55f77847b7-ch6mt 139m 5783Mi am-55f77847b7-gbbjq 138m 5804Mi ds-cts-0 7m 384Mi ds-cts-1 7m 384Mi ds-cts-2 5m 464Mi ds-idrepo-0 11683m 13744Mi ds-idrepo-1 8005m 13860Mi ds-idrepo-2 9593m 13719Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 6338m 4632Mi idm-65858d8c4c-h9wbp 5939m 5463Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1183m 2084Mi 07:24:56 DEBUG --- stderr --- 07:24:56 DEBUG 07:24:57 INFO 07:24:57 INFO [loop_until]: kubectl --namespace=xlou top node 07:24:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:24:58 INFO [loop_until]: OK (rc = 0) 07:24:58 DEBUG --- stdout --- 07:24:58 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1388Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 196m 1% 6831Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 193m 1% 6953Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 194m 1% 6975Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6533m 41% 6787Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2357m 14% 2685Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 6644m 41% 5930Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1134Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 10134m 63% 14323Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1208Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 12064m 75% 14308Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1134Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 8211m 51% 14434Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1239m 7% 3482Mi 5% 07:24:58 DEBUG --- stderr --- 07:24:58 DEBUG 07:25:56 INFO 07:25:56 INFO [loop_until]: kubectl --namespace=xlou top pods 07:25:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:25:56 INFO [loop_until]: OK (rc = 0) 07:25:56 DEBUG --- stdout --- 07:25:56 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 93m 5799Mi am-55f77847b7-ch6mt 92m 5783Mi am-55f77847b7-gbbjq 87m 5804Mi ds-cts-0 5m 384Mi ds-cts-1 7m 384Mi ds-cts-2 6m 464Mi ds-idrepo-0 7515m 13824Mi ds-idrepo-1 6965m 13737Mi ds-idrepo-2 5309m 13855Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 5666m 4761Mi idm-65858d8c4c-h9wbp 2963m 5461Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1211m 2068Mi 07:25:56 DEBUG --- stderr --- 07:25:56 DEBUG 07:25:58 INFO 07:25:58 INFO [loop_until]: kubectl --namespace=xlou top node 07:25:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:25:58 INFO [loop_until]: OK (rc = 0) 07:25:58 DEBUG --- stdout --- 07:25:58 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 156m 0% 6832Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 150m 0% 6951Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 169m 1% 6975Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4226m 26% 6788Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2129m 13% 2377Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 4129m 25% 6042Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1145Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-4z9d 4987m 31% 14418Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1208Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 9578m 60% 14433Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 6538m 41% 14432Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1114m 7% 3483Mi 5% 07:25:58 DEBUG --- stderr --- 07:25:58 DEBUG 07:26:56 INFO 07:26:56 INFO [loop_until]: kubectl --namespace=xlou top pods 07:26:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:26:56 INFO [loop_until]: OK (rc = 0) 07:26:56 DEBUG --- stdout --- 07:26:56 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 136m 5799Mi am-55f77847b7-ch6mt 129m 5783Mi am-55f77847b7-gbbjq 129m 5804Mi ds-cts-0 6m 384Mi ds-cts-1 8m 384Mi ds-cts-2 5m 464Mi ds-idrepo-0 11817m 13795Mi ds-idrepo-1 8106m 13806Mi ds-idrepo-2 9154m 13868Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 5867m 4824Mi idm-65858d8c4c-h9wbp 5336m 5460Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1067m 2087Mi 07:26:56 DEBUG --- stderr --- 07:26:56 DEBUG 07:26:58 INFO 07:26:58 INFO [loop_until]: kubectl --namespace=xlou top node 07:26:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:26:58 INFO [loop_until]: OK (rc = 0) 07:26:58 DEBUG --- stdout --- 07:26:58 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1388Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 183m 1% 6830Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 179m 1% 6948Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 191m 1% 6975Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5450m 34% 6788Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2288m 14% 2601Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 5621m 35% 6125Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1135Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 9389m 59% 14433Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1209Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 12278m 77% 14435Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 8017m 50% 14442Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1145m 7% 3484Mi 5% 07:26:58 DEBUG --- stderr --- 07:26:58 DEBUG 07:27:56 INFO 07:27:56 INFO [loop_until]: kubectl --namespace=xlou top pods 07:27:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:27:56 INFO [loop_until]: OK (rc = 0) 07:27:56 DEBUG --- stdout --- 07:27:56 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 145m 5799Mi am-55f77847b7-ch6mt 141m 5783Mi am-55f77847b7-gbbjq 137m 5804Mi ds-cts-0 5m 384Mi ds-cts-1 7m 384Mi ds-cts-2 6m 464Mi ds-idrepo-0 12402m 13815Mi ds-idrepo-1 8833m 13794Mi ds-idrepo-2 8717m 13840Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 4263m 4957Mi idm-65858d8c4c-h9wbp 7669m 5467Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1174m 2088Mi 07:27:56 DEBUG --- stderr --- 07:27:56 DEBUG 07:27:58 INFO 07:27:58 INFO [loop_until]: kubectl --namespace=xlou top node 07:27:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:27:58 INFO [loop_until]: OK (rc = 0) 07:27:58 DEBUG --- stdout --- 07:27:58 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1387Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 202m 1% 6832Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 196m 1% 6946Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 202m 1% 6972Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6818m 
42% 6788Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2494m 15% 2658Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 4828m 30% 6255Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 8479m 53% 14309Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1208Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 12114m 76% 14414Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 68m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 8486m 53% 14422Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1261m 7% 3483Mi 5% 07:27:58 DEBUG --- stderr --- 07:27:58 DEBUG 07:28:56 INFO 07:28:56 INFO [loop_until]: kubectl --namespace=xlou top pods 07:28:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:28:56 INFO [loop_until]: OK (rc = 0) 07:28:56 DEBUG --- stdout --- 07:28:56 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 93m 5799Mi am-55f77847b7-ch6mt 116m 5783Mi am-55f77847b7-gbbjq 76m 5804Mi ds-cts-0 9m 384Mi ds-cts-1 8m 384Mi ds-cts-2 6m 465Mi ds-idrepo-0 7035m 13819Mi ds-idrepo-1 4990m 13789Mi ds-idrepo-2 7191m 13848Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 8082m 5164Mi idm-65858d8c4c-h9wbp 577m 5457Mi lodemon-65c77dbb64-7jwvp 1m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 537m 909Mi 07:28:56 DEBUG --- stderr --- 07:28:56 DEBUG 07:28:58 INFO 07:28:58 INFO [loop_until]: kubectl --namespace=xlou top node 07:28:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:28:58 INFO [loop_until]: OK (rc = 0) 07:28:58 DEBUG --- stdout --- 07:28:58 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1385Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 134m 0% 6828Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 145m 0% 6948Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 129m 0% 6974Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 85m 0% 6789Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 304m 1% 2207Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6948m 43% 6453Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1137Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 6493m 40% 14437Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1209Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 8654m 54% 14384Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5422m 34% 14400Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 387m 2% 3483Mi 5% 07:28:58 DEBUG --- stderr --- 07:28:58 DEBUG 07:29:57 INFO 07:29:57 INFO [loop_until]: kubectl --namespace=xlou top pods 07:29:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:29:57 INFO [loop_until]: OK (rc = 0) 07:29:57 DEBUG --- stdout --- 07:29:57 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 5m 5799Mi am-55f77847b7-ch6mt 7m 5783Mi am-55f77847b7-gbbjq 7m 5804Mi ds-cts-0 6m 384Mi ds-cts-1 6m 385Mi ds-cts-2 6m 465Mi ds-idrepo-0 14m 13663Mi ds-idrepo-1 9m 13683Mi ds-idrepo-2 12m 13642Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 48m 5163Mi idm-65858d8c4c-h9wbp 8m 5457Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1m 345Mi 07:29:57 DEBUG --- stderr --- 07:29:57 DEBUG 07:29:58 INFO 07:29:58 INFO [loop_until]: kubectl --namespace=xlou top node 07:29:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:29:58 INFO [loop_until]: OK (rc = 0) 07:29:58 DEBUG --- stdout --- 07:29:58 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 62m 0% 6829Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 6949Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 60m 0% 6974Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 6785Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 128m 0% 2196Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 112m 0% 6460Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1138Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 14272Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1211Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 242m 1% 14265Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 57m 0% 14294Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 66m 0% 1783Mi 3% 07:29:58 DEBUG --- stderr --- 07:29:58 DEBUG 07:30:57 INFO 07:30:57 INFO [loop_until]: kubectl --namespace=xlou top pods 07:30:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:30:57 INFO [loop_until]: OK (rc = 0) 07:30:57 DEBUG --- stdout --- 07:30:57 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 6m 5799Mi am-55f77847b7-ch6mt 6m 5783Mi am-55f77847b7-gbbjq 7m 5804Mi ds-cts-0 6m 384Mi ds-cts-1 7m 384Mi ds-cts-2 6m 465Mi ds-idrepo-0 11m 13663Mi ds-idrepo-1 9m 13682Mi ds-idrepo-2 11m 13641Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 8m 5154Mi idm-65858d8c4c-h9wbp 7m 5457Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 4m 345Mi 07:30:57 DEBUG --- stderr --- 07:30:57 DEBUG 07:30:58 INFO 07:30:58 INFO [loop_until]: kubectl --namespace=xlou top node 07:30:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:30:58 INFO [loop_until]: OK (rc = 0) 07:30:58 DEBUG --- stdout --- 07:30:58 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1386Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 61m 0% 6831Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 70m 0% 6948Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 61m 0% 6973Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 6785Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 121m 0% 2202Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 75m 0% 6453Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1138Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 14270Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1213Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 67m 0% 14268Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 57m 0% 14295Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 86m 0% 1784Mi 3% 07:30:58 DEBUG --- stderr --- 07:30:58 DEBUG 127.0.0.1 - - [13/Aug/2023 07:31:33] "GET /monitoring/average?start_time=23-08-13_06:00:05&stop_time=23-08-13_06:29:32 HTTP/1.1" 200 - 07:31:57 INFO 07:31:57 INFO [loop_until]: kubectl --namespace=xlou top pods 07:31:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:31:57 INFO [loop_until]: OK (rc = 0) 07:31:57 DEBUG --- stdout --- 07:31:57 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 6m 5799Mi am-55f77847b7-ch6mt 6m 5783Mi am-55f77847b7-gbbjq 8m 5804Mi ds-cts-0 6m 384Mi ds-cts-1 6m 384Mi ds-cts-2 6m 465Mi ds-idrepo-0 11m 13664Mi ds-idrepo-1 9m 13682Mi ds-idrepo-2 11m 13641Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 8m 5154Mi idm-65858d8c4c-h9wbp 7m 5456Mi lodemon-65c77dbb64-7jwvp 3m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1545m 779Mi 07:31:57 DEBUG --- stderr --- 07:31:57 DEBUG 07:31:58 INFO 07:31:58 INFO [loop_until]: kubectl --namespace=xlou top node 07:31:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:31:58 INFO 
[loop_until]: OK (rc = 0) 07:31:58 DEBUG --- stdout --- 07:31:58 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1387Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 61m 0% 6831Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 6949Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 57m 0% 6971Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 76m 0% 6787Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 122m 0% 2200Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 73m 0% 6453Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1138Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 14269Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1210Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 67m 0% 14267Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 55m 0% 14296Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 2009m 12% 2244Mi 3% 07:31:58 DEBUG --- stderr --- 07:31:58 DEBUG 07:32:57 INFO 07:32:57 INFO [loop_until]: kubectl --namespace=xlou top pods 07:32:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:32:57 INFO [loop_until]: OK (rc = 0) 07:32:57 DEBUG --- stdout --- 07:32:57 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 181m 5813Mi am-55f77847b7-ch6mt 75m 5796Mi am-55f77847b7-gbbjq 160m 5816Mi ds-cts-0 6m 384Mi ds-cts-1 7m 384Mi ds-cts-2 5m 465Mi ds-idrepo-0 7770m 13804Mi ds-idrepo-1 4825m 13823Mi ds-idrepo-2 4406m 13832Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 2979m 5271Mi idm-65858d8c4c-h9wbp 2880m 5478Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1035m 1445Mi 07:32:57 DEBUG --- stderr --- 07:32:57 DEBUG 07:32:58 INFO 07:32:58 INFO [loop_until]: kubectl --namespace=xlou top node 07:32:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:32:58 INFO [loop_until]: OK (rc = 0) 07:32:58 DEBUG --- stdout --- 07:32:58 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1387Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 106m 0% 6842Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 118m 0% 6961Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 97m 0% 6986Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3199m 20% 6801Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2116m 13% 2599Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 3293m 20% 6550Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1136Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 4876m 30% 14457Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1208Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 8266m 52% 14423Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4794m 30% 14469Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1134m 7% 2837Mi 4% 07:32:58 DEBUG --- stderr --- 07:32:58 DEBUG 07:33:57 INFO 07:33:57 INFO [loop_until]: kubectl --namespace=xlou top pods 07:33:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:33:57 INFO [loop_until]: OK (rc = 0) 07:33:57 DEBUG --- stdout --- 07:33:57 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 50m 5815Mi am-55f77847b7-ch6mt 51m 5802Mi am-55f77847b7-gbbjq 47m 5817Mi ds-cts-0 6m 384Mi ds-cts-1 7m 384Mi ds-cts-2 6m 465Mi ds-idrepo-0 8384m 13840Mi ds-idrepo-1 4613m 13844Mi ds-idrepo-2 3728m 13824Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 2539m 5423Mi idm-65858d8c4c-h9wbp 2516m 5488Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 2618m 1652Mi 07:33:57 DEBUG --- stderr --- 07:33:57 DEBUG 
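The samples above follow a fixed pattern: roughly once a minute the monitor runs "kubectl --namespace=xlou top pods" and then "kubectl --namespace=xlou top node", each wrapped in a [loop_until] retry (max_time=180, interval=5, expected_rc=[0]) that re-runs the command until it exits with an accepted return code or the time budget runs out. The loop_until implementation itself is not shown in this log; the Python sketch below is only an illustration of that polling pattern, with the command names, namespace, and retry parameters taken from the log and everything else (function names, the roughly 60-second sample period) assumed for the example.

#!/usr/bin/env python3
# Illustrative sketch only -- not the actual lodemon source. It reproduces the
# polling pattern visible in this log: retry kubectl until rc == 0, sample
# pods and nodes once per minute.
import subprocess
import time

NAMESPACE = "xlou"       # namespace used throughout this log
MAX_TIME = 180           # mirrors max_time=180 in the log
INTERVAL = 5             # mirrors interval=5 in the log
SAMPLE_PERIOD = 60       # samples in the log are about one minute apart


def loop_until(cmd, max_time=MAX_TIME, interval=INTERVAL, expected_rc=(0,)):
    # Re-run cmd until its return code is in expected_rc or max_time elapses.
    # Returns (rc, stdout, stderr) of the last attempt.
    # (Hypothetical reconstruction of the [loop_until] wrapper in the log.)
    deadline = time.monotonic() + max_time
    while True:
        proc = subprocess.run(cmd, capture_output=True, text=True)
        if proc.returncode in expected_rc or time.monotonic() >= deadline:
            return proc.returncode, proc.stdout, proc.stderr
        time.sleep(interval)


def sample_top():
    # Collect one pods + node resource sample, like each minute of this log.
    for resource in ("pods", "node"):
        cmd = ["kubectl", f"--namespace={NAMESPACE}", "top", resource]
        rc, out, err = loop_until(cmd)
        print(f"[loop_until]: {' '.join(cmd)} -> rc={rc}")
        print(out)


if __name__ == "__main__":
    while True:
        sample_top()
        time.sleep(SAMPLE_PERIOD)
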
07:33:58 INFO 07:33:58 INFO [loop_until]: kubectl --namespace=xlou top node 07:33:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:33:59 INFO [loop_until]: OK (rc = 0) 07:33:59 DEBUG --- stdout --- 07:33:59 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1389Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 107m 0% 6851Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 107m 0% 6980Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 109m 0% 6990Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2705m 17% 6800Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2330m 14% 3187Mi 5% gke-xlou-cdm-default-pool-f05840a3-tnc9 2675m 16% 6695Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 4005m 25% 14481Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1210Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 8461m 53% 14453Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4927m 31% 14466Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 2438m 15% 3025Mi 5% 07:33:59 DEBUG --- stderr --- 07:33:59 DEBUG 07:34:57 INFO 07:34:57 INFO [loop_until]: kubectl --namespace=xlou top pods 07:34:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:34:57 INFO [loop_until]: OK (rc = 0) 07:34:57 DEBUG --- stdout --- 07:34:57 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 93m 5815Mi am-55f77847b7-ch6mt 69m 5802Mi am-55f77847b7-gbbjq 125m 5817Mi ds-cts-0 6m 384Mi ds-cts-1 7m 385Mi ds-cts-2 6m 465Mi ds-idrepo-0 7656m 13824Mi ds-idrepo-1 4486m 13835Mi ds-idrepo-2 4501m 13799Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 3058m 5558Mi idm-65858d8c4c-h9wbp 1630m 5477Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1176m 1618Mi 07:34:57 DEBUG --- stderr --- 07:34:57 DEBUG 07:34:59 INFO 07:34:59 INFO [loop_until]: kubectl --namespace=xlou top node 07:34:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:34:59 INFO [loop_until]: OK (rc = 0) 07:34:59 DEBUG --- stdout --- 07:34:59 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1385Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 112m 0% 6852Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 123m 0% 6963Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 100m 0% 6992Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1547m 9% 6802Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2123m 13% 2520Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 2972m 18% 6849Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1136Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 4619m 29% 14452Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 1210Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 7748m 48% 14414Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4601m 28% 14475Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1094m 6% 3026Mi 5% 07:34:59 DEBUG --- stderr --- 07:34:59 DEBUG 07:35:57 INFO 07:35:57 INFO [loop_until]: kubectl --namespace=xlou top pods 07:35:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:35:57 INFO [loop_until]: OK (rc = 0) 07:35:57 DEBUG --- stdout --- 07:35:57 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 31m 5815Mi am-55f77847b7-ch6mt 37m 5802Mi am-55f77847b7-gbbjq 38m 5818Mi ds-cts-0 6m 384Mi ds-cts-1 7m 384Mi ds-cts-2 6m 466Mi ds-idrepo-0 8795m 13824Mi ds-idrepo-1 5063m 13849Mi ds-idrepo-2 5743m 13856Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 2716m 5637Mi 
idm-65858d8c4c-h9wbp 2611m 5484Mi lodemon-65c77dbb64-7jwvp 1m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1867m 1673Mi 07:35:57 DEBUG --- stderr --- 07:35:57 DEBUG 07:35:59 INFO 07:35:59 INFO [loop_until]: kubectl --namespace=xlou top node 07:35:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:35:59 INFO [loop_until]: OK (rc = 0) 07:35:59 DEBUG --- stdout --- 07:35:59 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1388Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 87m 0% 6850Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 94m 0% 6964Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 86m 0% 7004Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2881m 18% 6802Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2216m 13% 3173Mi 5% gke-xlou-cdm-default-pool-f05840a3-tnc9 2968m 18% 6920Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1135Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 5643m 35% 14446Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1213Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 9048m 56% 14454Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4977m 31% 14472Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1872m 11% 3024Mi 5% 07:35:59 DEBUG --- stderr --- 07:35:59 DEBUG 07:36:57 INFO 07:36:57 INFO [loop_until]: kubectl --namespace=xlou top pods 07:36:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:36:57 INFO [loop_until]: OK (rc = 0) 07:36:57 DEBUG --- stdout --- 07:36:57 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 37m 5815Mi am-55f77847b7-ch6mt 24m 5802Mi am-55f77847b7-gbbjq 77m 5818Mi ds-cts-0 6m 384Mi ds-cts-1 7m 385Mi ds-cts-2 6m 465Mi ds-idrepo-0 8377m 13814Mi ds-idrepo-1 4765m 13844Mi ds-idrepo-2 5497m 13780Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 3112m 5632Mi idm-65858d8c4c-h9wbp 2902m 5477Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1498m 1622Mi 07:36:57 DEBUG --- stderr --- 07:36:57 DEBUG 07:36:59 INFO 07:36:59 INFO [loop_until]: kubectl --namespace=xlou top node 07:36:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:36:59 INFO [loop_until]: OK (rc = 0) 07:36:59 DEBUG --- stdout --- 07:36:59 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1388Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 78m 0% 6852Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 127m 0% 6964Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 95m 0% 6990Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2733m 17% 6800Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2303m 14% 2220Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2918m 18% 6921Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1140Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 4904m 30% 14511Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1209Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 7967m 50% 14447Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4985m 31% 14281Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1600m 10% 3039Mi 5% 07:36:59 DEBUG --- stderr --- 07:36:59 DEBUG 07:37:57 INFO 07:37:57 INFO [loop_until]: kubectl --namespace=xlou top pods 07:37:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:37:57 INFO [loop_until]: OK (rc = 0) 07:37:57 DEBUG --- stdout --- 07:37:57 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 119m 5816Mi am-55f77847b7-ch6mt 78m 5802Mi am-55f77847b7-gbbjq 71m 5818Mi ds-cts-0 6m 384Mi ds-cts-1 7m 
384Mi ds-cts-2 6m 465Mi ds-idrepo-0 8281m 13648Mi ds-idrepo-1 5094m 13826Mi ds-idrepo-2 5544m 13853Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 4104m 2282Mi idm-65858d8c4c-h9wbp 4290m 1170Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1173m 1628Mi 07:37:57 DEBUG --- stderr --- 07:37:57 DEBUG 07:37:59 INFO 07:37:59 INFO [loop_until]: kubectl --namespace=xlou top node 07:37:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:37:59 INFO [loop_until]: OK (rc = 0) 07:37:59 DEBUG --- stdout --- 07:37:59 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1387Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 134m 0% 6847Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 123m 0% 6965Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 174m 1% 6990Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3789m 23% 2287Mi 3% gke-xlou-cdm-default-pool-f05840a3-h81k 2387m 15% 2205Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3678m 23% 3450Mi 5% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1140Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 5863m 36% 14505Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 63m 0% 1205Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 8386m 52% 14291Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5162m 32% 14495Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1271m 7% 3044Mi 5% 07:37:59 DEBUG --- stderr --- 07:37:59 DEBUG 07:38:57 INFO 07:38:57 INFO [loop_until]: kubectl --namespace=xlou top pods 07:38:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:38:57 INFO [loop_until]: OK (rc = 0) 07:38:57 DEBUG --- stdout --- 07:38:57 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 79m 5816Mi am-55f77847b7-ch6mt 75m 5802Mi am-55f77847b7-gbbjq 89m 5818Mi ds-cts-0 5m 384Mi ds-cts-1 7m 384Mi ds-cts-2 6m 465Mi ds-idrepo-0 5935m 13708Mi ds-idrepo-1 5328m 13821Mi ds-idrepo-2 4608m 13711Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 4835m 3849Mi idm-65858d8c4c-h9wbp 4269m 3798Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1081m 1640Mi 07:38:57 DEBUG --- stderr --- 07:38:57 DEBUG 07:38:59 INFO 07:38:59 INFO [loop_until]: kubectl --namespace=xlou top node 07:38:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:38:59 INFO [loop_until]: OK (rc = 0) 07:38:59 DEBUG --- stdout --- 07:38:59 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1387Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 145m 0% 6850Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 150m 0% 6962Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 132m 0% 6992Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4584m 28% 5139Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 2105m 13% 2343Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6391m 40% 5122Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1140Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 5480m 34% 14472Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 1208Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 6248m 39% 14325Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5446m 34% 14472Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1121m 7% 3043Mi 5% 07:38:59 DEBUG --- stderr --- 07:38:59 DEBUG 07:39:58 INFO 07:39:58 INFO [loop_until]: kubectl --namespace=xlou top pods 07:39:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:39:58 INFO [loop_until]: OK (rc = 0) 07:39:58 DEBUG --- stdout --- 07:39:58 DEBUG NAME CPU(cores) 
MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 37m 5816Mi am-55f77847b7-ch6mt 39m 5802Mi am-55f77847b7-gbbjq 39m 5818Mi ds-cts-0 6m 384Mi ds-cts-1 7m 384Mi ds-cts-2 6m 465Mi ds-idrepo-0 8614m 13822Mi ds-idrepo-1 4626m 13835Mi ds-idrepo-2 4822m 13805Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 2647m 3969Mi idm-65858d8c4c-h9wbp 2633m 3973Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1010m 1686Mi 07:39:58 DEBUG --- stderr --- 07:39:58 DEBUG 07:39:59 INFO 07:39:59 INFO [loop_until]: kubectl --namespace=xlou top node 07:39:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:39:59 INFO [loop_until]: OK (rc = 0) 07:39:59 DEBUG --- stdout --- 07:39:59 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1388Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 91m 0% 6851Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 95m 0% 6962Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 89m 0% 6991Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2911m 18% 5303Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2237m 14% 2943Mi 5% gke-xlou-cdm-default-pool-f05840a3-tnc9 2974m 18% 5261Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1138Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 4849m 30% 14465Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1206Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 8658m 54% 14458Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 70m 0% 1146Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4439m 27% 14451Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1399m 8% 3044Mi 5% 07:39:59 DEBUG --- stderr --- 07:39:59 DEBUG 07:40:58 INFO 07:40:58 INFO [loop_until]: kubectl --namespace=xlou top pods 07:40:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:40:58 INFO [loop_until]: OK (rc = 0) 07:40:58 DEBUG --- stdout --- 07:40:58 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 29m 5816Mi am-55f77847b7-ch6mt 43m 5802Mi am-55f77847b7-gbbjq 30m 5818Mi ds-cts-0 5m 384Mi ds-cts-1 7m 385Mi ds-cts-2 5m 465Mi ds-idrepo-0 8296m 13797Mi ds-idrepo-1 4871m 13839Mi ds-idrepo-2 4615m 13823Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 2964m 4157Mi idm-65858d8c4c-h9wbp 1552m 4106Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1366m 1635Mi 07:40:58 DEBUG --- stderr --- 07:40:58 DEBUG 07:40:59 INFO 07:40:59 INFO [loop_until]: kubectl --namespace=xlou top node 07:40:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:40:59 INFO [loop_until]: OK (rc = 0) 07:40:59 DEBUG --- stdout --- 07:40:59 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1386Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 90m 0% 6850Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 81m 0% 6964Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 87m 0% 6990Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1852m 11% 5467Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2176m 13% 2216Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2001m 12% 5451Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1140Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 4684m 29% 14457Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1210Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 6988m 43% 14449Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4723m 29% 14437Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1432m 9% 3043Mi 5% 07:40:59 DEBUG --- stderr --- 07:40:59 DEBUG 07:41:58 INFO 07:41:58 INFO [loop_until]: kubectl --namespace=xlou top pods 07:41:58 
INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:41:58 INFO [loop_until]: OK (rc = 0) 07:41:58 DEBUG --- stdout --- 07:41:58 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 39m 5816Mi am-55f77847b7-ch6mt 41m 5802Mi am-55f77847b7-gbbjq 42m 5818Mi ds-cts-0 6m 384Mi ds-cts-1 7m 385Mi ds-cts-2 6m 464Mi ds-idrepo-0 8973m 13824Mi ds-idrepo-1 5645m 13825Mi ds-idrepo-2 5055m 13824Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 2880m 4252Mi idm-65858d8c4c-h9wbp 2767m 4177Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1066m 1686Mi 07:41:58 DEBUG --- stderr --- 07:41:58 DEBUG 07:41:59 INFO 07:41:59 INFO [loop_until]: kubectl --namespace=xlou top node 07:41:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:41:59 INFO [loop_until]: OK (rc = 0) 07:41:59 DEBUG --- stdout --- 07:41:59 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 95m 0% 6853Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 132m 0% 6977Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 92m 0% 6992Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3006m 18% 5514Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2218m 13% 2791Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 3134m 19% 5533Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1138Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 5246m 33% 14484Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1207Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 8650m 54% 14318Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5543m 34% 14444Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1110m 6% 3047Mi 5% 07:41:59 DEBUG --- stderr --- 07:41:59 DEBUG 07:42:58 INFO 07:42:58 INFO [loop_until]: kubectl --namespace=xlou top pods 07:42:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:42:58 INFO [loop_until]: OK (rc = 0) 07:42:58 DEBUG --- stdout --- 07:42:58 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 31m 5816Mi am-55f77847b7-ch6mt 45m 5802Mi am-55f77847b7-gbbjq 54m 5833Mi ds-cts-0 6m 384Mi ds-cts-1 7m 385Mi ds-cts-2 6m 464Mi ds-idrepo-0 5121m 13822Mi ds-idrepo-1 4889m 13823Mi ds-idrepo-2 4556m 13828Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 5469m 1041Mi idm-65858d8c4c-h9wbp 4511m 1250Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 2027m 1640Mi 07:42:58 DEBUG --- stderr --- 07:42:58 DEBUG 07:42:59 INFO 07:42:59 INFO [loop_until]: kubectl --namespace=xlou top node 07:42:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:43:00 INFO [loop_until]: OK (rc = 0) 07:43:00 DEBUG --- stdout --- 07:43:00 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1387Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 101m 0% 6849Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 109m 0% 6976Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 90m 0% 6992Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4221m 26% 2337Mi 3% gke-xlou-cdm-default-pool-f05840a3-h81k 2555m 16% 2209Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4368m 27% 2357Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1136Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 4563m 28% 14469Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 62m 0% 1207Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 5851m 36% 14467Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5152m 32% 14494Mi 24% 
gke-xlou-cdm-frontend-a8771548-k40m 2270m 14% 3045Mi 5% 07:43:00 DEBUG --- stderr --- 07:43:00 DEBUG 07:43:58 INFO 07:43:58 INFO [loop_until]: kubectl --namespace=xlou top pods 07:43:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:43:58 INFO [loop_until]: OK (rc = 0) 07:43:58 DEBUG --- stdout --- 07:43:58 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 87m 5816Mi am-55f77847b7-ch6mt 100m 5802Mi am-55f77847b7-gbbjq 121m 5833Mi ds-cts-0 6m 384Mi ds-cts-1 7m 384Mi ds-cts-2 6m 466Mi ds-idrepo-0 8045m 13738Mi ds-idrepo-1 4179m 13829Mi ds-idrepo-2 3611m 13797Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 5841m 3718Mi idm-65858d8c4c-h9wbp 5071m 3738Mi lodemon-65c77dbb64-7jwvp 1m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1071m 1650Mi 07:43:58 DEBUG --- stderr --- 07:43:58 DEBUG 07:44:00 INFO 07:44:00 INFO [loop_until]: kubectl --namespace=xlou top node 07:44:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:44:00 INFO [loop_until]: OK (rc = 0) 07:44:00 DEBUG --- stdout --- 07:44:00 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1391Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 162m 1% 6861Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 167m 1% 6978Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 151m 0% 6991Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5104m 32% 5115Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 2209m 13% 2324Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7006m 44% 5051Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1139Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 3181m 20% 14459Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1208Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 7207m 45% 14392Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4044m 25% 14465Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1162m 7% 3045Mi 5% 07:44:00 DEBUG --- stderr --- 07:44:00 DEBUG 07:44:58 INFO 07:44:58 INFO [loop_until]: kubectl --namespace=xlou top pods 07:44:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:44:58 INFO [loop_until]: OK (rc = 0) 07:44:58 DEBUG --- stdout --- 07:44:58 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 41m 5816Mi am-55f77847b7-ch6mt 41m 5802Mi am-55f77847b7-gbbjq 38m 5828Mi ds-cts-0 7m 384Mi ds-cts-1 7m 384Mi ds-cts-2 6m 466Mi ds-idrepo-0 9430m 13816Mi ds-idrepo-1 5518m 13815Mi ds-idrepo-2 5053m 13822Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 2825m 3867Mi idm-65858d8c4c-h9wbp 2776m 3934Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1019m 1683Mi 07:44:58 DEBUG --- stderr --- 07:44:58 DEBUG 07:45:00 INFO 07:45:00 INFO [loop_until]: kubectl --namespace=xlou top node 07:45:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:45:00 INFO [loop_until]: OK (rc = 0) 07:45:00 DEBUG --- stdout --- 07:45:00 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1389Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 95m 0% 6846Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 90m 0% 6975Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 93m 0% 6989Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2944m 18% 5289Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2259m 14% 2771Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 2973m 18% 5160Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1139Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 5101m 32% 14479Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 
57m 0% 1209Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 9738m 61% 14464Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5788m 36% 14429Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1101m 6% 3047Mi 5% 07:45:00 DEBUG --- stderr --- 07:45:00 DEBUG 07:45:58 INFO 07:45:58 INFO [loop_until]: kubectl --namespace=xlou top pods 07:45:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:45:58 INFO [loop_until]: OK (rc = 0) 07:45:58 DEBUG --- stdout --- 07:45:58 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 118m 5820Mi am-55f77847b7-ch6mt 68m 5802Mi am-55f77847b7-gbbjq 43m 5828Mi ds-cts-0 5m 384Mi ds-cts-1 7m 385Mi ds-cts-2 6m 467Mi ds-idrepo-0 9960m 13817Mi ds-idrepo-1 5063m 13750Mi ds-idrepo-2 5047m 13812Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 1889m 4016Mi idm-65858d8c4c-h9wbp 4585m 4170Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1204m 1661Mi 07:45:58 DEBUG --- stderr --- 07:45:58 DEBUG 07:46:00 INFO 07:46:00 INFO [loop_until]: kubectl --namespace=xlou top node 07:46:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:46:00 INFO [loop_until]: OK (rc = 0) 07:46:00 DEBUG --- stdout --- 07:46:00 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1391Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 197m 1% 6852Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 115m 0% 6978Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 171m 1% 6994Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4778m 30% 5514Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2252m 14% 2534Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 1322m 8% 5310Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1139Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 5220m 32% 14492Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1211Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 9316m 58% 14451Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5181m 32% 14465Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1232m 7% 3043Mi 5% 07:46:00 DEBUG --- stderr --- 07:46:00 DEBUG 07:46:58 INFO 07:46:58 INFO [loop_until]: kubectl --namespace=xlou top pods 07:46:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:46:58 INFO [loop_until]: OK (rc = 0) 07:46:58 DEBUG --- stdout --- 07:46:58 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 31m 5820Mi am-55f77847b7-ch6mt 38m 5805Mi am-55f77847b7-gbbjq 39m 5828Mi ds-cts-0 6m 384Mi ds-cts-1 7m 384Mi ds-cts-2 6m 468Mi ds-idrepo-0 8971m 13842Mi ds-idrepo-1 5118m 13815Mi ds-idrepo-2 5762m 13836Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 2808m 4067Mi idm-65858d8c4c-h9wbp 2735m 4285Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1371m 1703Mi 07:46:58 DEBUG --- stderr --- 07:46:58 DEBUG 07:47:00 INFO 07:47:00 INFO [loop_until]: kubectl --namespace=xlou top node 07:47:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:47:00 INFO [loop_until]: OK (rc = 0) 07:47:00 DEBUG --- stdout --- 07:47:00 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1390Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 89m 0% 6852Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 98m 0% 6977Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 88m 0% 6995Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3187m 20% 5620Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2265m 14% 2934Mi 5% 
gke-xlou-cdm-default-pool-f05840a3-tnc9 3149m 19% 5375Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1137Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 5847m 36% 14475Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1212Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 9408m 59% 14476Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5583m 35% 14475Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1332m 8% 3044Mi 5% 07:47:00 DEBUG --- stderr --- 07:47:00 DEBUG 07:47:58 INFO 07:47:58 INFO [loop_until]: kubectl --namespace=xlou top pods 07:47:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:47:58 INFO [loop_until]: OK (rc = 0) 07:47:58 DEBUG --- stdout --- 07:47:58 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 119m 5820Mi am-55f77847b7-ch6mt 74m 5805Mi am-55f77847b7-gbbjq 44m 5828Mi ds-cts-0 5m 384Mi ds-cts-1 7m 384Mi ds-cts-2 6m 466Mi ds-idrepo-0 7476m 13837Mi ds-idrepo-1 5622m 13854Mi ds-idrepo-2 5602m 13834Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 4800m 1094Mi idm-65858d8c4c-h9wbp 1853m 4384Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 2353m 2169Mi 07:47:58 DEBUG --- stderr --- 07:47:58 DEBUG 07:48:00 INFO 07:48:00 INFO [loop_until]: kubectl --namespace=xlou top node 07:48:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:48:00 INFO [loop_until]: OK (rc = 0) 07:48:00 DEBUG --- stdout --- 07:48:00 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1388Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 134m 0% 6855Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 99m 0% 6975Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 116m 0% 6995Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1310m 8% 5733Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 3108m 19% 2209Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4357m 27% 2399Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1139Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 5853m 36% 14509Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 1211Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 6697m 42% 14460Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 66m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 6092m 38% 14475Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 2541m 15% 3570Mi 6% 07:48:00 DEBUG --- stderr --- 07:48:00 DEBUG 07:48:59 INFO 07:48:59 INFO [loop_until]: kubectl --namespace=xlou top pods 07:48:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:48:59 INFO [loop_until]: OK (rc = 0) 07:48:59 DEBUG --- stdout --- 07:48:59 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 120m 5820Mi am-55f77847b7-ch6mt 118m 5806Mi am-55f77847b7-gbbjq 136m 5828Mi ds-cts-0 5m 384Mi ds-cts-1 7m 384Mi ds-cts-2 6m 466Mi ds-idrepo-0 10126m 13822Mi ds-idrepo-1 5343m 13816Mi ds-idrepo-2 5604m 13822Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 5506m 3855Mi idm-65858d8c4c-h9wbp 4708m 1175Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1033m 2396Mi 07:48:59 DEBUG --- stderr --- 07:48:59 DEBUG 07:49:00 INFO 07:49:00 INFO [loop_until]: kubectl --namespace=xlou top node 07:49:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:49:00 INFO [loop_until]: OK (rc = 0) 07:49:00 DEBUG --- stdout --- 07:49:00 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1390Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 176m 1% 6854Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 186m 1% 6974Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-9p4b 180m 1% 6998Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5037m 31% 2479Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 2288m 14% 2345Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8600m 54% 5178Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1139Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 5819m 36% 14533Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 1213Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 10019m 63% 14473Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5182m 32% 14481Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1138m 7% 3789Mi 6% 07:49:00 DEBUG --- stderr --- 07:49:00 DEBUG 07:49:59 INFO 07:49:59 INFO [loop_until]: kubectl --namespace=xlou top pods 07:49:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:49:59 INFO [loop_until]: OK (rc = 0) 07:49:59 DEBUG --- stdout --- 07:49:59 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 34m 5820Mi am-55f77847b7-ch6mt 47m 5805Mi am-55f77847b7-gbbjq 56m 5828Mi ds-cts-0 5m 384Mi ds-cts-1 7m 384Mi ds-cts-2 6m 466Mi ds-idrepo-0 8219m 13824Mi ds-idrepo-1 4897m 13823Mi ds-idrepo-2 4703m 13823Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 3792m 4086Mi idm-65858d8c4c-h9wbp 3676m 3830Mi lodemon-65c77dbb64-7jwvp 1m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1096m 2446Mi 07:49:59 DEBUG --- stderr --- 07:49:59 DEBUG 07:50:00 INFO 07:50:00 INFO [loop_until]: kubectl --namespace=xlou top node 07:50:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:50:00 INFO [loop_until]: OK (rc = 0) 07:50:00 DEBUG --- stdout --- 07:50:00 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1388Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 108m 0% 6856Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 97m 0% 6976Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 79m 0% 6994Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4292m 27% 5173Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 2205m 13% 3019Mi 5% gke-xlou-cdm-default-pool-f05840a3-tnc9 3456m 21% 5365Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1138Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 4761m 29% 14499Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1212Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 8107m 51% 14475Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1134Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4716m 29% 14462Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1460m 9% 3798Mi 6% 07:50:00 DEBUG --- stderr --- 07:50:00 DEBUG 07:50:59 INFO 07:50:59 INFO [loop_until]: kubectl --namespace=xlou top pods 07:50:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:50:59 INFO [loop_until]: OK (rc = 0) 07:50:59 DEBUG --- stdout --- 07:50:59 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 31m 5820Mi am-55f77847b7-ch6mt 33m 5806Mi am-55f77847b7-gbbjq 33m 5828Mi ds-cts-0 6m 387Mi ds-cts-1 7m 384Mi ds-cts-2 6m 468Mi ds-idrepo-0 8401m 13831Mi ds-idrepo-1 4854m 13816Mi ds-idrepo-2 4547m 13830Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 2177m 4231Mi idm-65858d8c4c-h9wbp 2172m 4059Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 3940m 2555Mi 07:50:59 DEBUG --- stderr --- 07:50:59 DEBUG 07:51:01 INFO 07:51:01 INFO [loop_until]: kubectl --namespace=xlou top node 07:51:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:51:01 INFO [loop_until]: OK (rc = 0) 07:51:01 DEBUG --- stdout --- 07:51:01 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% 
gke-xlou-cdm-default-pool-f05840a3-2nsn 85m 0% 1386Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 87m 0% 6853Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 91m 0% 6977Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 84m 0% 6997Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2518m 15% 5405Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2894m 18% 3598Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 2475m 15% 5594Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1137Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 4397m 27% 14494Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1208Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 8133m 51% 14487Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4818m 30% 14478Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 4554m 28% 3857Mi 6% 07:51:01 DEBUG --- stderr --- 07:51:01 DEBUG 07:51:59 INFO 07:51:59 INFO [loop_until]: kubectl --namespace=xlou top pods 07:51:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:51:59 INFO [loop_until]: OK (rc = 0) 07:51:59 DEBUG --- stdout --- 07:51:59 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 43m 5820Mi am-55f77847b7-ch6mt 65m 5806Mi am-55f77847b7-gbbjq 43m 5828Mi ds-cts-0 5m 387Mi ds-cts-1 8m 385Mi ds-cts-2 6m 464Mi ds-idrepo-0 9197m 13780Mi ds-idrepo-1 5230m 13810Mi ds-idrepo-2 5449m 13822Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 1466m 4379Mi idm-65858d8c4c-h9wbp 3508m 4444Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1254m 2516Mi 07:51:59 DEBUG --- stderr --- 07:51:59 DEBUG 07:52:01 INFO 07:52:01 INFO [loop_until]: kubectl --namespace=xlou top node 07:52:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:52:01 INFO [loop_until]: OK (rc = 0) 07:52:01 DEBUG --- stdout --- 07:52:01 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1387Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 110m 0% 6855Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 96m 0% 6975Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 99m 0% 6995Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3201m 20% 5793Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2273m 14% 3321Mi 5% gke-xlou-cdm-default-pool-f05840a3-tnc9 2397m 15% 5694Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1139Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 5374m 33% 14468Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1209Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 9291m 58% 14486Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1134Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5166m 32% 14485Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1278m 8% 3871Mi 6% 07:52:01 DEBUG --- stderr --- 07:52:01 DEBUG 07:52:59 INFO 07:52:59 INFO [loop_until]: kubectl --namespace=xlou top pods 07:52:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:52:59 INFO [loop_until]: OK (rc = 0) 07:52:59 DEBUG --- stdout --- 07:52:59 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 41m 5820Mi am-55f77847b7-ch6mt 45m 5805Mi am-55f77847b7-gbbjq 44m 5828Mi ds-cts-0 7m 386Mi ds-cts-1 7m 384Mi ds-cts-2 6m 465Mi ds-idrepo-0 9566m 13668Mi ds-idrepo-1 4816m 13823Mi ds-idrepo-2 5654m 13823Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 3212m 4696Mi idm-65858d8c4c-h9wbp 2983m 4698Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 4337m 2568Mi 07:52:59 DEBUG --- stderr --- 07:52:59 DEBUG 07:53:01 INFO 07:53:01 INFO [loop_until]: kubectl --namespace=xlou top node 07:53:01 INFO [loop_until]: 
(max_time=180, interval=5, expected_rc=[0] 07:53:01 INFO [loop_until]: OK (rc = 0) 07:53:01 DEBUG --- stdout --- 07:53:01 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1385Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 101m 0% 6856Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 100m 0% 6976Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 96m 0% 6994Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3232m 20% 6050Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2081m 13% 3595Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 3585m 22% 6003Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1138Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 5715m 35% 14486Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 1222Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 9296m 58% 14361Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1135Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4720m 29% 14478Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 3397m 21% 3875Mi 6% 07:53:01 DEBUG --- stderr --- 07:53:01 DEBUG 07:53:59 INFO 07:53:59 INFO [loop_until]: kubectl --namespace=xlou top pods 07:53:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:53:59 INFO [loop_until]: OK (rc = 0) 07:53:59 DEBUG --- stdout --- 07:53:59 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 39m 5820Mi am-55f77847b7-ch6mt 40m 5805Mi am-55f77847b7-gbbjq 43m 5828Mi ds-cts-0 5m 386Mi ds-cts-1 7m 384Mi ds-cts-2 6m 466Mi ds-idrepo-0 8917m 13822Mi ds-idrepo-1 4898m 13823Mi ds-idrepo-2 5480m 13756Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 2960m 4958Mi idm-65858d8c4c-h9wbp 3234m 4827Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 2870m 2571Mi 07:53:59 DEBUG --- stderr --- 07:53:59 DEBUG 07:54:01 INFO 07:54:01 INFO [loop_until]: kubectl --namespace=xlou top node 07:54:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:54:01 INFO [loop_until]: OK (rc = 0) 07:54:01 DEBUG --- stdout --- 07:54:01 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1388Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 96m 0% 6857Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 100m 0% 6974Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 93m 0% 6995Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3575m 22% 6189Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 3439m 21% 3549Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 3130m 19% 6272Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1140Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 5564m 35% 14481Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1210Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 9167m 57% 14470Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4838m 30% 14472Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 3026m 19% 3886Mi 6% 07:54:01 DEBUG --- stderr --- 07:54:01 DEBUG 07:54:59 INFO 07:54:59 INFO [loop_until]: kubectl --namespace=xlou top pods 07:54:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:54:59 INFO [loop_until]: OK (rc = 0) 07:54:59 DEBUG --- stdout --- 07:54:59 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 58m 5820Mi am-55f77847b7-ch6mt 61m 5806Mi am-55f77847b7-gbbjq 54m 5828Mi ds-cts-0 5m 386Mi ds-cts-1 7m 385Mi ds-cts-2 6m 467Mi ds-idrepo-0 8838m 13803Mi ds-idrepo-1 4305m 13815Mi ds-idrepo-2 4854m 13768Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 5722m 1149Mi idm-65858d8c4c-h9wbp 3441m 5139Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi 
overseer-0-556966658d-mh4rk 3092m 2531Mi 07:54:59 DEBUG --- stderr --- 07:54:59 DEBUG 07:55:01 INFO 07:55:01 INFO [loop_until]: kubectl --namespace=xlou top node 07:55:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:55:01 INFO [loop_until]: OK (rc = 0) 07:55:01 DEBUG --- stdout --- 07:55:01 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1386Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 114m 0% 6856Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 121m 0% 6976Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 107m 0% 6996Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4869m 30% 6538Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2965m 18% 3552Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 3267m 20% 2154Mi 3% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1141Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 4843m 30% 14474Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 1210Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 9093m 57% 14442Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4426m 27% 14510Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 2892m 18% 3877Mi 6% 07:55:01 DEBUG --- stderr --- 07:55:01 DEBUG 07:55:59 INFO 07:55:59 INFO [loop_until]: kubectl --namespace=xlou top pods 07:55:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:55:59 INFO [loop_until]: OK (rc = 0) 07:55:59 DEBUG --- stdout --- 07:55:59 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 129m 5820Mi am-55f77847b7-ch6mt 129m 5805Mi am-55f77847b7-gbbjq 167m 5836Mi ds-cts-0 6m 387Mi ds-cts-1 7m 384Mi ds-cts-2 6m 465Mi ds-idrepo-0 9852m 13709Mi ds-idrepo-1 5027m 13775Mi ds-idrepo-2 5187m 13806Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 7365m 3897Mi idm-65858d8c4c-h9wbp 3971m 1574Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1130m 2485Mi 07:55:59 DEBUG --- stderr --- 07:55:59 DEBUG 07:56:01 INFO 07:56:01 INFO [loop_until]: kubectl --namespace=xlou top node 07:56:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:56:01 INFO [loop_until]: OK (rc = 0) 07:56:01 DEBUG --- stdout --- 07:56:01 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1392Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 190m 1% 6858Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 228m 1% 6981Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 185m 1% 6994Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4179m 26% 2968Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 2304m 14% 2856Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 6659m 41% 5157Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1141Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 5789m 36% 14499Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 64m 0% 1207Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 9326m 58% 14463Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 66m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4858m 30% 14408Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1208m 7% 3874Mi 6% 07:56:01 DEBUG --- stderr --- 07:56:01 DEBUG 07:56:59 INFO 07:56:59 INFO [loop_until]: kubectl --namespace=xlou top pods 07:56:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:56:59 INFO [loop_until]: OK (rc = 0) 07:56:59 DEBUG --- stdout --- 07:56:59 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 120m 5823Mi am-55f77847b7-ch6mt 122m 5810Mi am-55f77847b7-gbbjq 70m 5836Mi ds-cts-0 5m 386Mi ds-cts-1 7m 384Mi ds-cts-2 5m 465Mi ds-idrepo-0 8278m 13764Mi ds-idrepo-1 5232m 13844Mi ds-idrepo-2 5986m 
13821Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 5847m 4099Mi idm-65858d8c4c-h9wbp 2428m 3832Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1320m 2466Mi 07:56:59 DEBUG --- stderr --- 07:56:59 DEBUG 07:57:01 INFO 07:57:01 INFO [loop_until]: kubectl --namespace=xlou top node 07:57:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:57:01 INFO [loop_until]: OK (rc = 0) 07:57:01 DEBUG --- stdout --- 07:57:01 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1388Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 193m 1% 6860Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 144m 0% 6980Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 183m 1% 6998Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3150m 19% 5211Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 2460m 15% 3038Mi 5% gke-xlou-cdm-default-pool-f05840a3-tnc9 6615m 41% 5439Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1143Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 5639m 35% 14551Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1208Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 8997m 56% 14475Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5188m 32% 14495Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1444m 9% 3876Mi 6% 07:57:01 DEBUG --- stderr --- 07:57:01 DEBUG 07:57:59 INFO 07:57:59 INFO [loop_until]: kubectl --namespace=xlou top pods 07:57:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:57:59 INFO [loop_until]: OK (rc = 0) 07:57:59 DEBUG --- stdout --- 07:57:59 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 5m 5823Mi am-55f77847b7-ch6mt 6m 5810Mi am-55f77847b7-gbbjq 7m 5836Mi ds-cts-0 6m 387Mi ds-cts-1 6m 385Mi ds-cts-2 7m 466Mi ds-idrepo-0 692m 13538Mi ds-idrepo-1 5148m 13826Mi ds-idrepo-2 2460m 13774Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 817m 4227Mi idm-65858d8c4c-h9wbp 685m 3852Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1060m 2467Mi 07:57:59 DEBUG --- stderr --- 07:57:59 DEBUG 07:58:01 INFO 07:58:01 INFO [loop_until]: kubectl --namespace=xlou top node 07:58:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:58:01 INFO [loop_until]: OK (rc = 0) 07:58:01 DEBUG --- stdout --- 07:58:01 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1388Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 60m 0% 6859Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 62m 0% 6980Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 58m 0% 6999Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 827m 5% 5223Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 2163m 13% 2219Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 902m 5% 5518Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1141Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2448m 15% 14455Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1210Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 530m 3% 14194Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1135Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5086m 32% 14511Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1160m 7% 3876Mi 6% 07:58:01 DEBUG --- stderr --- 07:58:01 DEBUG 07:59:00 INFO 07:59:00 INFO [loop_until]: kubectl --namespace=xlou top pods 07:59:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:59:00 INFO [loop_until]: OK (rc = 0) 07:59:00 DEBUG --- stdout --- 07:59:00 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 36m 5823Mi am-55f77847b7-ch6mt 38m 5810Mi 
am-55f77847b7-gbbjq 42m 5836Mi ds-cts-0 7m 387Mi ds-cts-1 7m 385Mi ds-cts-2 6m 466Mi ds-idrepo-0 8162m 13822Mi ds-idrepo-1 4628m 13826Mi ds-idrepo-2 4890m 13811Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 2923m 4313Mi idm-65858d8c4c-h9wbp 2945m 4005Mi lodemon-65c77dbb64-7jwvp 1m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1042m 2492Mi 07:59:00 DEBUG --- stderr --- 07:59:00 DEBUG 07:59:01 INFO 07:59:01 INFO [loop_until]: kubectl --namespace=xlou top node 07:59:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:59:02 INFO [loop_until]: OK (rc = 0) 07:59:02 DEBUG --- stdout --- 07:59:02 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1390Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 95m 0% 6859Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 97m 0% 6978Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 92m 0% 6999Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3190m 20% 5361Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2229m 14% 2544Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 3196m 20% 5593Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1136Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 4998m 31% 14495Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1211Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 8550m 53% 14469Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4397m 27% 14475Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1161m 7% 3876Mi 6% 07:59:02 DEBUG --- stderr --- 07:59:02 DEBUG 08:00:00 INFO 08:00:00 INFO [loop_until]: kubectl --namespace=xlou top pods 08:00:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:00:00 INFO [loop_until]: OK (rc = 0) 08:00:00 DEBUG --- stdout --- 08:00:00 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 7m 5823Mi am-55f77847b7-ch6mt 7m 5810Mi am-55f77847b7-gbbjq 8m 5836Mi ds-cts-0 5m 387Mi ds-cts-1 6m 385Mi ds-cts-2 6m 466Mi ds-idrepo-0 715m 13556Mi ds-idrepo-1 5030m 13828Mi ds-idrepo-2 867m 13596Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 738m 4374Mi idm-65858d8c4c-h9wbp 668m 4048Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1106m 2467Mi 08:00:00 DEBUG --- stderr --- 08:00:00 DEBUG 08:00:02 INFO 08:00:02 INFO [loop_until]: kubectl --namespace=xlou top node 08:00:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:00:02 INFO [loop_until]: OK (rc = 0) 08:00:02 DEBUG --- stdout --- 08:00:02 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1386Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 6859Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 6979Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 60m 0% 6997Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 793m 4% 5396Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2183m 13% 2212Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 871m 5% 5670Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1137Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 907m 5% 14292Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1211Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 479m 3% 14222Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1135Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5055m 31% 14489Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1161m 7% 3878Mi 6% 08:00:02 DEBUG --- stderr --- 08:00:02 DEBUG 08:01:00 INFO 08:01:00 INFO [loop_until]: kubectl --namespace=xlou top pods 08:01:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:01:00 INFO [loop_until]: OK (rc = 0) 08:01:00 DEBUG --- 
stdout --- 08:01:00 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 163m 5823Mi am-55f77847b7-ch6mt 147m 5810Mi am-55f77847b7-gbbjq 85m 5836Mi ds-cts-0 5m 387Mi ds-cts-1 7m 386Mi ds-cts-2 6m 466Mi ds-idrepo-0 6534m 13657Mi ds-idrepo-1 4998m 13823Mi ds-idrepo-2 2324m 13701Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 2288m 4420Mi idm-65858d8c4c-h9wbp 2068m 4050Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1038m 2478Mi 08:01:00 DEBUG --- stderr --- 08:01:00 DEBUG 08:01:02 INFO 08:01:02 INFO [loop_until]: kubectl --namespace=xlou top node 08:01:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:01:02 INFO [loop_until]: OK (rc = 0) 08:01:02 DEBUG --- stdout --- 08:01:02 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1386Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 155m 0% 6860Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 159m 1% 6982Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 162m 1% 6995Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3120m 19% 5432Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2165m 13% 2325Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2864m 18% 5712Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1138Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 3303m 20% 14445Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1212Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 8406m 52% 14463Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1136Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4846m 30% 14504Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1106m 6% 3873Mi 6% 08:01:02 DEBUG --- stderr --- 08:01:02 DEBUG 08:02:00 INFO 08:02:00 INFO [loop_until]: kubectl --namespace=xlou top pods 08:02:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:02:00 INFO [loop_until]: OK (rc = 0) 08:02:00 DEBUG --- stdout --- 08:02:00 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 79m 5823Mi am-55f77847b7-ch6mt 93m 5810Mi am-55f77847b7-gbbjq 134m 5836Mi ds-cts-0 5m 387Mi ds-cts-1 7m 385Mi ds-cts-2 6m 466Mi ds-idrepo-0 7079m 13707Mi ds-idrepo-1 4631m 13806Mi ds-idrepo-2 4157m 13773Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 3038m 4561Mi idm-65858d8c4c-h9wbp 2070m 4177Mi lodemon-65c77dbb64-7jwvp 1m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1204m 2467Mi 08:02:00 DEBUG --- stderr --- 08:02:00 DEBUG 08:02:02 INFO 08:02:02 INFO [loop_until]: kubectl --namespace=xlou top node 08:02:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:02:02 INFO [loop_until]: OK (rc = 0) 08:02:02 DEBUG --- stdout --- 08:02:02 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1386Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 178m 1% 6859Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 182m 1% 6981Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 162m 1% 6995Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1642m 10% 5529Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2268m 14% 2208Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3108m 19% 5859Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1140Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 4179m 26% 14540Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1211Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 6294m 39% 14421Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1137Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4633m 29% 14516Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1155m 7% 3878Mi 6% 08:02:02 DEBUG --- stderr --- 08:02:02 DEBUG 08:03:00 INFO 08:03:00 INFO 
[loop_until]: kubectl --namespace=xlou top pods 08:03:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:03:00 INFO [loop_until]: OK (rc = 0) 08:03:00 DEBUG --- stdout --- 08:03:00 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 7m 5823Mi am-55f77847b7-ch6mt 11m 5810Mi am-55f77847b7-gbbjq 12m 5836Mi ds-cts-0 5m 387Mi ds-cts-1 5m 386Mi ds-cts-2 7m 466Mi ds-idrepo-0 11m 13531Mi ds-idrepo-1 1845m 13593Mi ds-idrepo-2 139m 13634Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 19m 4555Mi idm-65858d8c4c-h9wbp 12m 4175Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 328m 366Mi 08:03:00 DEBUG --- stderr --- 08:03:00 DEBUG 08:03:02 INFO 08:03:02 INFO [loop_until]: kubectl --namespace=xlou top node 08:03:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:03:02 INFO [loop_until]: OK (rc = 0) 08:03:02 DEBUG --- stdout --- 08:03:02 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1388Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 6860Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 6982Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 60m 0% 6999Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 84m 0% 5530Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 129m 0% 2202Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 85m 0% 5857Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1139Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 197m 1% 14335Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1211Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 66m 0% 14193Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1136Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1500m 9% 14284Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 321m 2% 1783Mi 3% 08:03:02 DEBUG --- stderr --- 08:03:02 DEBUG 08:04:00 INFO 08:04:00 INFO [loop_until]: kubectl --namespace=xlou top pods 08:04:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:04:00 INFO [loop_until]: OK (rc = 0) 08:04:00 DEBUG --- stdout --- 08:04:00 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 7m 5823Mi am-55f77847b7-ch6mt 8m 5810Mi am-55f77847b7-gbbjq 9m 5836Mi ds-cts-0 5m 387Mi ds-cts-1 5m 386Mi ds-cts-2 6m 466Mi ds-idrepo-0 10m 13531Mi ds-idrepo-1 9m 13472Mi ds-idrepo-2 10m 13634Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 9m 4556Mi idm-65858d8c4c-h9wbp 8m 4174Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1m 366Mi 08:04:00 DEBUG --- stderr --- 08:04:00 DEBUG 08:04:02 INFO 08:04:02 INFO [loop_until]: kubectl --namespace=xlou top node 08:04:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:04:02 INFO [loop_until]: OK (rc = 0) 08:04:02 DEBUG --- stdout --- 08:04:02 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1386Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 6859Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 6980Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 6996Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 79m 0% 5528Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 120m 0% 2210Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 74m 0% 5857Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1139Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 14335Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1212Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 64m 0% 14194Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1139Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 56m 0% 14153Mi 24% 
gke-xlou-cdm-frontend-a8771548-k40m 67m 0% 1783Mi 3% 08:04:02 DEBUG --- stderr --- 08:04:02 DEBUG 127.0.0.1 - - [13/Aug/2023 08:04:33] "GET /monitoring/average?start_time=23-08-13_06:33:33&stop_time=23-08-13_07:02:32 HTTP/1.1" 200 - 08:05:00 INFO 08:05:00 INFO [loop_until]: kubectl --namespace=xlou top pods 08:05:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:05:00 INFO [loop_until]: OK (rc = 0) 08:05:00 DEBUG --- stdout --- 08:05:00 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 4Mi am-55f77847b7-bb6x8 7m 5823Mi am-55f77847b7-ch6mt 8m 5810Mi am-55f77847b7-gbbjq 10m 5836Mi ds-cts-0 5m 387Mi ds-cts-1 6m 387Mi ds-cts-2 6m 466Mi ds-idrepo-0 11m 13531Mi ds-idrepo-1 10m 13472Mi ds-idrepo-2 10m 13634Mi end-user-ui-6845bc78c7-dxwrr 1m 4Mi idm-65858d8c4c-9pfjc 10m 4557Mi idm-65858d8c4c-h9wbp 8m 4174Mi lodemon-65c77dbb64-7jwvp 4m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1m 366Mi 08:05:00 DEBUG --- stderr --- 08:05:00 DEBUG 08:05:02 INFO 08:05:02 INFO [loop_until]: kubectl --namespace=xlou top node 08:05:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:05:02 INFO [loop_until]: OK (rc = 0) 08:05:02 DEBUG --- stdout --- 08:05:02 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1388Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 62m 0% 6856Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 6978Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 6997Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 5528Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2203Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 77m 0% 5861Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1140Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 67m 0% 14336Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1213Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 66m 0% 14195Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1134Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 63m 0% 14157Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 70m 0% 1782Mi 3% 08:05:02 DEBUG --- stderr --- 08:05:02 DEBUG 08:06:00 INFO 08:06:00 INFO [loop_until]: kubectl --namespace=xlou top pods 08:06:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:06:00 INFO [loop_until]: OK (rc = 0) 08:06:00 DEBUG --- stdout --- 08:06:00 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 5Mi am-55f77847b7-bb6x8 7m 5823Mi am-55f77847b7-ch6mt 8m 5810Mi am-55f77847b7-gbbjq 9m 5836Mi ds-cts-0 6m 387Mi ds-cts-1 6m 386Mi ds-cts-2 6m 466Mi ds-idrepo-0 382m 13531Mi ds-idrepo-1 277m 13472Mi ds-idrepo-2 301m 13635Mi end-user-ui-6845bc78c7-dxwrr 1m 6Mi idm-65858d8c4c-9pfjc 9m 4558Mi idm-65858d8c4c-h9wbp 7m 4174Mi lodemon-65c77dbb64-7jwvp 1m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 982m 767Mi 08:06:00 DEBUG --- stderr --- 08:06:00 DEBUG 08:06:02 INFO 08:06:02 INFO [loop_until]: kubectl --namespace=xlou top node 08:06:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:06:02 INFO [loop_until]: OK (rc = 0) 08:06:02 DEBUG --- stdout --- 08:06:02 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1392Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 6859Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 6982Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 61m 0% 6998Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 76m 0% 5529Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 127m 0% 2212Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 75m 0% 5862Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1154Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-4z9d 261m 1% 14343Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1213Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 423m 2% 14196Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1134Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 295m 1% 14162Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1078m 6% 2210Mi 3% 08:06:02 DEBUG --- stderr --- 08:06:02 DEBUG 08:07:00 INFO 08:07:00 INFO [loop_until]: kubectl --namespace=xlou top pods 08:07:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:07:00 INFO [loop_until]: OK (rc = 0) 08:07:00 DEBUG --- stdout --- 08:07:00 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 5Mi am-55f77847b7-bb6x8 6m 5823Mi am-55f77847b7-ch6mt 7m 5811Mi am-55f77847b7-gbbjq 10m 5836Mi ds-cts-0 6m 388Mi ds-cts-1 6m 386Mi ds-cts-2 6m 467Mi ds-idrepo-0 11m 13531Mi ds-idrepo-1 9m 13472Mi ds-idrepo-2 11m 13635Mi end-user-ui-6845bc78c7-dxwrr 1m 6Mi idm-65858d8c4c-9pfjc 8m 4558Mi idm-65858d8c4c-h9wbp 7m 4174Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 996m 959Mi 08:07:00 DEBUG --- stderr --- 08:07:00 DEBUG 08:07:02 INFO 08:07:02 INFO [loop_until]: kubectl --namespace=xlou top node 08:07:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:07:03 INFO [loop_until]: OK (rc = 0) 08:07:03 DEBUG --- stdout --- 08:07:03 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1389Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 6857Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 62m 0% 6983Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 61m 0% 6994Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 5529Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 127m 0% 2199Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 72m 0% 5858Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1140Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 65m 0% 14342Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1209Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 62m 0% 14194Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 57m 0% 14158Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 661m 4% 2104Mi 3% 08:07:03 DEBUG --- stderr --- 08:07:03 DEBUG 08:08:01 INFO 08:08:01 INFO [loop_until]: kubectl --namespace=xlou top pods 08:08:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:08:01 INFO [loop_until]: OK (rc = 0) 08:08:01 DEBUG --- stdout --- 08:08:01 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 5Mi am-55f77847b7-bb6x8 8m 5823Mi am-55f77847b7-ch6mt 8m 5810Mi am-55f77847b7-gbbjq 9m 5836Mi ds-cts-0 5m 387Mi ds-cts-1 5m 386Mi ds-cts-2 6m 467Mi ds-idrepo-0 11m 13531Mi ds-idrepo-1 9m 13472Mi ds-idrepo-2 12m 13636Mi end-user-ui-6845bc78c7-dxwrr 1m 6Mi idm-65858d8c4c-9pfjc 8m 4557Mi idm-65858d8c4c-h9wbp 7m 4173Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1006m 1134Mi 08:08:01 DEBUG --- stderr --- 08:08:01 DEBUG 08:08:03 INFO 08:08:03 INFO [loop_until]: kubectl --namespace=xlou top node 08:08:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:08:03 INFO [loop_until]: OK (rc = 0) 08:08:03 DEBUG --- stdout --- 08:08:03 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1390Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 6860Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 6978Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 6998Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 5527Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2194Mi 3% 
gke-xlou-cdm-default-pool-f05840a3-tnc9 74m 0% 5861Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1140Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 65m 0% 14345Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1209Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 64m 0% 14195Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 58m 0% 14158Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1083m 6% 2533Mi 4% 08:08:03 DEBUG --- stderr --- 08:08:03 DEBUG 08:09:01 INFO 08:09:01 INFO [loop_until]: kubectl --namespace=xlou top pods 08:09:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:09:01 INFO [loop_until]: OK (rc = 0) 08:09:01 DEBUG --- stdout --- 08:09:01 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 5Mi am-55f77847b7-bb6x8 7m 5823Mi am-55f77847b7-ch6mt 8m 5810Mi am-55f77847b7-gbbjq 8m 5836Mi ds-cts-0 5m 387Mi ds-cts-1 6m 386Mi ds-cts-2 6m 466Mi ds-idrepo-0 11m 13533Mi ds-idrepo-1 9m 13472Mi ds-idrepo-2 10m 13635Mi end-user-ui-6845bc78c7-dxwrr 1m 6Mi idm-65858d8c4c-9pfjc 8m 4558Mi idm-65858d8c4c-h9wbp 7m 4173Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 991m 1341Mi 08:09:01 DEBUG --- stderr --- 08:09:01 DEBUG 08:09:03 INFO 08:09:03 INFO [loop_until]: kubectl --namespace=xlou top node 08:09:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:09:03 INFO [loop_until]: OK (rc = 0) 08:09:03 DEBUG --- stdout --- 08:09:03 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1391Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 62m 0% 6859Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 6982Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 6994Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 77m 0% 5528Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 122m 0% 2201Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 73m 0% 5862Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1137Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 14346Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1211Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 62m 0% 14193Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1134Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 57m 0% 14160Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 930m 5% 2467Mi 4% 08:09:03 DEBUG --- stderr --- 08:09:03 DEBUG 08:10:01 INFO 08:10:01 INFO [loop_until]: kubectl --namespace=xlou top pods 08:10:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:10:01 INFO [loop_until]: OK (rc = 0) 08:10:01 DEBUG --- stdout --- 08:10:01 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 5Mi am-55f77847b7-bb6x8 9m 5823Mi am-55f77847b7-ch6mt 9m 5810Mi am-55f77847b7-gbbjq 10m 5836Mi ds-cts-0 12m 388Mi ds-cts-1 5m 386Mi ds-cts-2 6m 466Mi ds-idrepo-0 11m 13531Mi ds-idrepo-1 9m 13473Mi ds-idrepo-2 10m 13635Mi end-user-ui-6845bc78c7-dxwrr 1m 6Mi idm-65858d8c4c-9pfjc 8m 4558Mi idm-65858d8c4c-h9wbp 7m 4173Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1023m 1483Mi 08:10:01 DEBUG --- stderr --- 08:10:01 DEBUG 08:10:03 INFO 08:10:03 INFO [loop_until]: kubectl --namespace=xlou top node 08:10:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:10:03 INFO [loop_until]: OK (rc = 0) 08:10:03 DEBUG --- stdout --- 08:10:03 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1390Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 6860Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 6990Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 6994Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 5530Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 120m 0% 2202Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 74m 0% 5861Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 1137Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 14342Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1212Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 59m 0% 14193Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 57m 0% 14160Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1083m 6% 2897Mi 4% 08:10:03 DEBUG --- stderr --- 08:10:03 DEBUG 08:11:01 INFO 08:11:01 INFO [loop_until]: kubectl --namespace=xlou top pods 08:11:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:11:01 INFO [loop_until]: OK (rc = 0) 08:11:01 DEBUG --- stdout --- 08:11:01 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 5Mi am-55f77847b7-bb6x8 8m 5823Mi am-55f77847b7-ch6mt 9m 5810Mi am-55f77847b7-gbbjq 9m 5836Mi ds-cts-0 6m 387Mi ds-cts-1 5m 387Mi ds-cts-2 12m 465Mi ds-idrepo-0 11m 13531Mi ds-idrepo-1 9m 13472Mi ds-idrepo-2 11m 13635Mi end-user-ui-6845bc78c7-dxwrr 1m 6Mi idm-65858d8c4c-9pfjc 8m 4557Mi idm-65858d8c4c-h9wbp 6m 4173Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 355m 1359Mi 08:11:01 DEBUG --- stderr --- 08:11:01 DEBUG 08:11:03 INFO 08:11:03 INFO [loop_until]: kubectl --namespace=xlou top node 08:11:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:11:03 INFO [loop_until]: OK (rc = 0) 08:11:03 DEBUG --- stdout --- 08:11:03 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1387Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 6861Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 6980Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 6996Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 5527Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 122m 0% 2198Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 75m 0% 5860Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1140Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 65m 0% 14346Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 65m 0% 1211Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 61m 0% 14195Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1137Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 55m 0% 14163Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 632m 3% 2889Mi 4% 08:11:03 DEBUG --- stderr --- 08:11:03 DEBUG 08:12:01 INFO 08:12:01 INFO [loop_until]: kubectl --namespace=xlou top pods 08:12:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:12:01 INFO [loop_until]: OK (rc = 0) 08:12:01 DEBUG --- stdout --- 08:12:01 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-lmth7 1m 5Mi am-55f77847b7-bb6x8 7m 5823Mi am-55f77847b7-ch6mt 8m 5810Mi am-55f77847b7-gbbjq 9m 5836Mi ds-cts-0 5m 387Mi ds-cts-1 5m 387Mi ds-cts-2 6m 465Mi ds-idrepo-0 11m 13532Mi ds-idrepo-1 12m 13473Mi ds-idrepo-2 11m 13635Mi end-user-ui-6845bc78c7-dxwrr 1m 6Mi idm-65858d8c4c-9pfjc 9m 4559Mi idm-65858d8c4c-h9wbp 7m 4173Mi lodemon-65c77dbb64-7jwvp 2m 66Mi login-ui-74d6fb46c-j9xdm 1m 3Mi overseer-0-556966658d-mh4rk 1038m 1828Mi 08:12:01 DEBUG --- stderr --- 08:12:01 DEBUG 08:12:03 INFO 08:12:03 INFO [loop_until]: kubectl --namespace=xlou top node 08:12:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:12:03 INFO [loop_until]: OK (rc = 0) 08:12:03 DEBUG --- stdout --- 08:12:03 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1388Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 6861Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 6979Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 61m 0% 6996Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 77m 0% 5525Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2202Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 5862Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1138Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 14343Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1211Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 60m 0% 14195Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 60m 0% 14161Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1105m 6% 3284Mi 5%
08:12:03 DEBUG --- stderr ---
08:12:03 DEBUG
08:12:30 INFO Finished: True
08:12:30 INFO Waiting for threads to register finish flag
08:13:03 INFO Done. Have a nice day! :)
127.0.0.1 - - [13/Aug/2023 08:13:03] "GET /monitoring/stop HTTP/1.1" 200 -
08:13:06 INFO File /tmp/lodemon_data-23-08-13_05:29:05/Cpu_cores_used_per_pod.json does not exist. Skipping...
08:13:06 INFO File /tmp/lodemon_data-23-08-13_05:29:05/Memory_usage_per_pod.json does not exist. Skipping...
08:13:06 INFO File /tmp/lodemon_data-23-08-13_05:29:05/Disk_tps_read_per_pod.json does not exist. Skipping...
08:13:06 INFO File /tmp/lodemon_data-23-08-13_05:29:05/Disk_tps_writes_per_pod.json does not exist. Skipping...
08:13:06 INFO File /tmp/lodemon_data-23-08-13_05:29:05/Cpu_cores_used_per_node.json does not exist. Skipping...
08:13:06 INFO File /tmp/lodemon_data-23-08-13_05:29:05/Memory_usage_used_per_node.json does not exist. Skipping...
08:13:06 INFO File /tmp/lodemon_data-23-08-13_05:29:05/Cpu_iowait_per_node.json does not exist. Skipping...
08:13:06 INFO File /tmp/lodemon_data-23-08-13_05:29:05/Network_receive_per_node.json does not exist. Skipping...
08:13:06 INFO File /tmp/lodemon_data-23-08-13_05:29:05/Network_transmit_per_node.json does not exist. Skipping...
08:13:06 INFO File /tmp/lodemon_data-23-08-13_05:29:05/am_cts_task_count_token_session.json does not exist. Skipping...
08:13:06 INFO File /tmp/lodemon_data-23-08-13_05:29:05/am_authentication_rate.json does not exist. Skipping...
08:13:06 INFO File /tmp/lodemon_data-23-08-13_05:29:05/ds_db_cache_misses_internal_nodes(backend=amCts).json does not exist. Skipping...
08:13:06 INFO File /tmp/lodemon_data-23-08-13_05:29:05/ds_db_cache_misses_internal_nodes(backend=amIdentityStore).json does not exist. Skipping...
08:13:06 INFO File /tmp/lodemon_data-23-08-13_05:29:05/ds_db_cache_misses_internal_nodes(backend=cfgStore).json does not exist. Skipping...
08:13:06 INFO File /tmp/lodemon_data-23-08-13_05:29:05/ds_db_cache_misses_internal_nodes(backend=idmRepo).json does not exist. Skipping...
08:13:06 INFO File /tmp/lodemon_data-23-08-13_05:29:05/am_authentication_count_per_pod.json does not exist. Skipping...
08:13:06 INFO File /tmp/lodemon_data-23-08-13_05:29:05/Cts_reaper_Deletion_count.json does not exist. Skipping...
08:13:06 INFO File /tmp/lodemon_data-23-08-13_05:29:05/AM_oauth2_authorization_codes.json does not exist. Skipping...
08:13:06 INFO File /tmp/lodemon_data-23-08-13_05:29:05/ds_backend_entries_deleted_amCts.json does not exist. Skipping...
08:13:06 INFO File /tmp/lodemon_data-23-08-13_05:29:05/ds_pods_replication_delay.json does not exist. Skipping...
08:13:06 INFO File /tmp/lodemon_data-23-08-13_05:29:05/am_cts_reaper_cache_size.json does not exist. Skipping...
08:13:06 INFO File /tmp/lodemon_data-23-08-13_05:29:05/am_cts_reaper_search_seconds_total.json does not exist. Skipping...
08:13:06 INFO File /tmp/lodemon_data-23-08-13_05:29:05/ds_replication_replica_replayed_updates_conflicts_resolved.json does not exist. Skipping...
08:13:06 INFO File /tmp/lodemon_data-23-08-13_05:29:05/node_disk_read_bytes_total.json does not exist. Skipping...
08:13:06 INFO File /tmp/lodemon_data-23-08-13_05:29:05/node_disk_written_bytes_total.json does not exist. Skipping...
08:13:06 INFO File /tmp/lodemon_data-23-08-13_05:29:05/ds_backend_entry_count.json does not exist. Skipping...
08:13:06 INFO File /tmp/lodemon_data-23-08-13_05:29:05/node_disk_io_time_seconds_total.json does not exist. Skipping...
127.0.0.1 - - [13/Aug/2023 08:13:08] "GET /monitoring/process HTTP/1.1" 200 -
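Note: the run above follows a simple pattern. Roughly once a minute a loop_until wrapper retries kubectl top pods and kubectl top node until they return rc=0, and the per-minute samples are what would later be aggregated into the per-metric JSON files listed at shutdown. The sketch below is a minimal, hypothetical reconstruction of that polling loop for illustration only; the function names, output path, and JSON layout are assumptions, not the actual lodemon_run.py.

#!/usr/bin/env python3
"""Illustrative sketch of the polling pattern visible in the log above (not the real lodemon)."""
import json
import subprocess
import time

NAMESPACE = "xlou"                              # namespace seen in the log
SAMPLE_INTERVAL = 60                            # the log shows roughly one sample per minute
OUTPUT = "/tmp/lodemon_data_samples.json"       # hypothetical output path


def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,)):
    """Re-run `cmd` until it exits with an expected return code, or give up after max_time seconds."""
    deadline = time.time() + max_time
    while True:
        proc = subprocess.run(cmd, capture_output=True, text=True)
        if proc.returncode in expected_rc:
            return proc.stdout
        if time.time() >= deadline:
            raise RuntimeError(f"{' '.join(cmd)} failed with rc={proc.returncode}")
        time.sleep(interval)


def top_pods(namespace=NAMESPACE):
    """Parse `kubectl top pods` output into {pod: {"cpu": ..., "memory": ...}}."""
    out = loop_until(["kubectl", f"--namespace={namespace}", "top", "pods"])
    samples = {}
    for line in out.splitlines()[1:]:           # skip the NAME CPU(cores) MEMORY(bytes) header
        name, cpu, memory = line.split()[:3]
        samples[name] = {"cpu": cpu, "memory": memory}
    return samples


if __name__ == "__main__":
    history = []
    try:
        while True:
            history.append({"time": time.strftime("%H:%M:%S"), "pods": top_pods()})
            time.sleep(SAMPLE_INTERVAL)
    except KeyboardInterrupt:
        # On shutdown, dump the raw samples; the real tool instead writes one JSON file per metric.
        with open(OUTPUT, "w") as fh:
            json.dump(history, fh, indent=2)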