====================================================================================================
=========================================  Pod describe  ==========================================
====================================================================================================
Name:             lodemon-86f768796c-ts724
Namespace:        xlou
Priority:         0
Node:             gke-xlou-cdm-default-pool-f05840a3-2nsn/10.142.0.46
Start Time:       Sun, 13 Aug 2023 00:57:27 +0000
Labels:           app=lodemon
                  app.kubernetes.io/name=lodemon
                  pod-template-hash=86f768796c
                  skaffold.dev/run-id=a0a56d59-3916-4342-a42c-adadb56be1d9
Annotations:      <none>
Status:           Running
IP:               10.106.45.95
IPs:
  IP:             10.106.45.95
Controlled By:    ReplicaSet/lodemon-86f768796c
Containers:
  lodemon:
    Container ID:  containerd://9a6258af651d07680685a16e5351d400f367ad68ca2e17557793f8b9daee47f1
    Image:         gcr.io/engineeringpit/lodestar-images/lodestarbox:6c23848450de3f8e82f0a619a86abcd91fc890c6
    Image ID:      gcr.io/engineeringpit/lodestar-images/lodestarbox@sha256:f419b98ce988c016f788d178b318b601ed56b4ebb6e1a8df68b3ff2a986af79d
    Port:          8080/TCP
    Host Port:     0/TCP
    Command:
      python3
    Args:
      /lodestar/scripts/lodemon_run.py
      -W
      default
    State:          Running
      Started:      Sun, 13 Aug 2023 00:57:28 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  2Gi
    Requests:
      cpu:     1
      memory:  1Gi
    Liveness:   exec [cat /tmp/lodemon_alive] delay=20s timeout=1s period=10s #success=1 #failure=3
    Readiness:  exec [cat /tmp/lodemon_alive] delay=20s timeout=1s period=10s #success=1 #failure=3
    Environment:
      SKAFFOLD_PROFILE:  medium
    Mounts:
      /lodestar/config/config.yaml from config (rw,path="config.yaml")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rvjzn (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      lodemon-config
    Optional:  false
  kube-api-access-rvjzn:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
====================================================================================================
===========================================  Pod logs  ============================================
====================================================================================================
01:57:29 INFO
01:57:29 INFO --------------------- Get expected number of pods ---------------------
01:57:29 INFO
01:57:29 INFO [loop_until]: kubectl --namespace=xlou get deployments --selector app=am --output jsonpath={.items[*].spec.replicas}
01:57:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
01:57:29 INFO [loop_until]: OK (rc = 0)
01:57:29 DEBUG --- stdout ---
01:57:29 DEBUG 3
01:57:29 DEBUG --- stderr ---
01:57:29 DEBUG
01:57:29 INFO
01:57:29 INFO ---------------------------- Get pod list ----------------------------
01:57:29 INFO
01:57:29 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=am --output jsonpath={.items[*].metadata.name}
01:57:29 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0]
01:57:29 INFO [loop_until]: OK (rc = 0)
01:57:29 DEBUG --- stdout ---
01:57:29 DEBUG am-55f77847b7-5qsm5 am-55f77847b7-c9bk2 am-55f77847b7-zpsrs
01:57:29 DEBUG --- stderr ---
01:57:29 DEBUG
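Every check in this log goes through the same [loop_until] wrapper: run a kubectl command, compare its return code (and, for the grep-piped variants, its output) against expected_rc, and retry every interval seconds until max_time runs out. The helper itself is not shown here; the sketch below is only a minimal reconstruction of that polling pattern in Python, assuming a plain subprocess call, and may differ from the real code in /lodestar/scripts/lodemon_run.py.

import subprocess
import time

def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,)):
    """Minimal sketch of the [loop_until] pattern from the log: rerun `cmd`
    every `interval` seconds until its return code is in `expected_rc` or
    `max_time` seconds have elapsed."""
    deadline = time.monotonic() + max_time
    while True:
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        if result.returncode in expected_rc:
            print(f"[loop_until]: OK (rc = {result.returncode})")
            return result.stdout.strip()
        if time.monotonic() >= deadline:
            raise TimeoutError(f"[loop_until]: gave up after {max_time}s: {cmd}")
        time.sleep(interval)

# For example, the first check above: expected replica count of the AM deployment.
replicas = loop_until(
    "kubectl --namespace=xlou get deployments --selector app=am "
    "--output jsonpath={.items[*].spec.replicas}"
)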
01:57:29 INFO
01:57:29 INFO -------------- Check pod am-55f77847b7-5qsm5 is running --------------
01:57:29 INFO
01:57:29 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-5qsm5 -o=jsonpath={.status.phase} | grep "Running"
01:57:29 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
01:57:29 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
01:57:29 INFO [loop_until]: OK (rc = 0)
01:57:29 DEBUG --- stdout ---
01:57:29 DEBUG Running
01:57:29 DEBUG --- stderr ---
01:57:29 DEBUG
01:57:29 INFO
01:57:29 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-5qsm5 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
01:57:29 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
01:57:29 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
01:57:29 INFO [loop_until]: OK (rc = 0)
01:57:29 DEBUG --- stdout ---
01:57:29 DEBUG true
01:57:29 DEBUG --- stderr ---
01:57:29 DEBUG
01:57:29 INFO
01:57:29 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-5qsm5 --output jsonpath={.status.startTime}
01:57:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
01:57:29 INFO [loop_until]: OK (rc = 0)
01:57:29 DEBUG --- stdout ---
01:57:29 DEBUG 2023-08-13T00:48:00Z
01:57:29 DEBUG --- stderr ---
01:57:29 DEBUG
01:57:29 INFO
01:57:29 INFO ------- Check pod am-55f77847b7-5qsm5 filesystem is accessible -------
01:57:29 INFO
01:57:29 INFO [loop_until]: kubectl --namespace=xlou exec am-55f77847b7-5qsm5 --container openam -- ls / | grep "bin"
01:57:29 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
01:57:29 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
01:57:29 INFO [loop_until]: OK (rc = 0)
01:57:29 DEBUG --- stdout ---
01:57:29 DEBUG bin boot dev entrypoint.sh etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var
01:57:29 DEBUG --- stderr ---
01:57:29 DEBUG
01:57:29 INFO
01:57:29 INFO ------------- Check pod am-55f77847b7-5qsm5 restart count -------------
01:57:29 INFO
01:57:29 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-5qsm5 --output jsonpath={.status.containerStatuses[*].restartCount}
01:57:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
01:57:29 INFO [loop_until]: OK (rc = 0)
01:57:29 DEBUG --- stdout ---
01:57:29 DEBUG 0
01:57:29 DEBUG --- stderr ---
01:57:29 DEBUG
01:57:29 INFO Pod am-55f77847b7-5qsm5 has been restarted 0 times.
01:57:29 INFO
01:57:29 INFO -------------- Check pod am-55f77847b7-c9bk2 is running --------------
01:57:29 INFO
01:57:29 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-c9bk2 -o=jsonpath={.status.phase} | grep "Running"
01:57:29 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
01:57:29 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
01:57:29 INFO [loop_until]: OK (rc = 0)
01:57:29 DEBUG --- stdout ---
01:57:29 DEBUG Running
01:57:29 DEBUG --- stderr ---
01:57:29 DEBUG
01:57:29 INFO
01:57:29 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-c9bk2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
01:57:29 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
01:57:29 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
01:57:29 INFO [loop_until]: OK (rc = 0)
01:57:29 DEBUG --- stdout ---
01:57:29 DEBUG true
01:57:29 DEBUG --- stderr ---
01:57:29 DEBUG
01:57:29 INFO
01:57:29 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-c9bk2 --output jsonpath={.status.startTime}
01:57:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
01:57:29 INFO [loop_until]: OK (rc = 0)
01:57:29 DEBUG --- stdout ---
01:57:29 DEBUG 2023-08-13T00:48:00Z
01:57:29 DEBUG --- stderr ---
01:57:29 DEBUG
01:57:29 INFO
01:57:29 INFO ------- Check pod am-55f77847b7-c9bk2 filesystem is accessible -------
01:57:29 INFO
01:57:29 INFO [loop_until]: kubectl --namespace=xlou exec am-55f77847b7-c9bk2 --container openam -- ls / | grep "bin"
01:57:29 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
01:57:30 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
01:57:30 INFO [loop_until]: OK (rc = 0)
01:57:30 DEBUG --- stdout ---
01:57:30 DEBUG bin boot dev entrypoint.sh etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var
01:57:30 DEBUG --- stderr ---
01:57:30 DEBUG
01:57:30 INFO
01:57:30 INFO ------------- Check pod am-55f77847b7-c9bk2 restart count -------------
01:57:30 INFO
01:57:30 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-c9bk2 --output jsonpath={.status.containerStatuses[*].restartCount}
01:57:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
01:57:30 INFO [loop_until]: OK (rc = 0)
01:57:30 DEBUG --- stdout ---
01:57:30 DEBUG 0
01:57:30 DEBUG --- stderr ---
01:57:30 DEBUG
01:57:30 INFO Pod am-55f77847b7-c9bk2 has been restarted 0 times.
01:57:30 INFO
01:57:30 INFO -------------- Check pod am-55f77847b7-zpsrs is running --------------
01:57:30 INFO
01:57:30 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-zpsrs -o=jsonpath={.status.phase} | grep "Running"
01:57:30 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
01:57:30 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
01:57:30 INFO [loop_until]: OK (rc = 0)
01:57:30 DEBUG --- stdout ---
01:57:30 DEBUG Running
01:57:30 DEBUG --- stderr ---
01:57:30 DEBUG
01:57:30 INFO
01:57:30 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-zpsrs -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
01:57:30 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
01:57:30 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
01:57:30 INFO [loop_until]: OK (rc = 0)
01:57:30 DEBUG --- stdout ---
01:57:30 DEBUG true
01:57:30 DEBUG --- stderr ---
01:57:30 DEBUG
01:57:30 INFO
01:57:30 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-zpsrs --output jsonpath={.status.startTime}
01:57:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
01:57:30 INFO [loop_until]: OK (rc = 0)
01:57:30 DEBUG --- stdout ---
01:57:30 DEBUG 2023-08-13T00:48:01Z
01:57:30 DEBUG --- stderr ---
01:57:30 DEBUG
01:57:30 INFO
01:57:30 INFO ------- Check pod am-55f77847b7-zpsrs filesystem is accessible -------
01:57:30 INFO
01:57:30 INFO [loop_until]: kubectl --namespace=xlou exec am-55f77847b7-zpsrs --container openam -- ls / | grep "bin"
01:57:30 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
01:57:30 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
01:57:30 INFO [loop_until]: OK (rc = 0)
01:57:30 DEBUG --- stdout ---
01:57:30 DEBUG bin boot dev entrypoint.sh etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var
01:57:30 DEBUG --- stderr ---
01:57:30 DEBUG
01:57:30 INFO
01:57:30 INFO ------------- Check pod am-55f77847b7-zpsrs restart count -------------
01:57:30 INFO
01:57:30 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-zpsrs --output jsonpath={.status.containerStatuses[*].restartCount}
01:57:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
01:57:30 INFO [loop_until]: OK (rc = 0)
01:57:30 DEBUG --- stdout ---
01:57:30 DEBUG 0
01:57:30 DEBUG --- stderr ---
01:57:30 DEBUG
01:57:30 INFO Pod am-55f77847b7-zpsrs has been restarted 0 times.
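Each AM pod above is put through the same five probes: the phase must be Running, every containerStatus must report ready, the startTime is read, ls / is run inside the named container, and the restartCount is reported. Below is a compact sketch of that sequence, reusing the hypothetical loop_until helper sketched earlier; the check_pod name is illustrative rather than the script's actual API, while the pod and container names come from this log.

def check_pod(namespace, pod, container):
    """Sketch of the per-pod checks the log runs for the am, idm and ds pods."""
    base = f"kubectl --namespace={namespace}"
    loop_until(f'{base} get pods {pod} -o=jsonpath={{.status.phase}} | grep "Running"', max_time=360)
    loop_until(f'{base} get pods {pod} -o=jsonpath={{.status.containerStatuses[*].ready}} | grep "true"', max_time=360)
    started = loop_until(f"{base} get pod {pod} --output jsonpath={{.status.startTime}}")
    loop_until(f'{base} exec {pod} --container {container} -- ls / | grep "bin"', max_time=360)
    restarts = loop_until(f"{base} get pod {pod} --output jsonpath={{.status.containerStatuses[*].restartCount}}")
    print(f"Pod {pod} (started {started}) has been restarted {restarts} times.")

for name in ["am-55f77847b7-5qsm5", "am-55f77847b7-c9bk2", "am-55f77847b7-zpsrs"]:
    check_pod("xlou", name, "openam")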
01:57:30 INFO
01:57:30 INFO --------------------- Get expected number of pods ---------------------
01:57:30 INFO
01:57:30 INFO [loop_until]: kubectl --namespace=xlou get deployment --selector app=idm --output jsonpath={.items[*].spec.replicas}
01:57:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
01:57:30 INFO [loop_until]: OK (rc = 0)
01:57:30 DEBUG --- stdout ---
01:57:30 DEBUG 2
01:57:30 DEBUG --- stderr ---
01:57:30 DEBUG
01:57:30 INFO
01:57:30 INFO ---------------------------- Get pod list ----------------------------
01:57:30 INFO
01:57:30 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=idm --output jsonpath={.items[*].metadata.name}
01:57:30 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0]
01:57:30 INFO [loop_until]: OK (rc = 0)
01:57:30 DEBUG --- stdout ---
01:57:30 DEBUG idm-65858d8c4c-4jclh idm-65858d8c4c-97wdf
01:57:30 DEBUG --- stderr ---
01:57:30 DEBUG
01:57:30 INFO
01:57:30 INFO -------------- Check pod idm-65858d8c4c-4jclh is running --------------
01:57:30 INFO
01:57:30 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-4jclh -o=jsonpath={.status.phase} | grep "Running"
01:57:30 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
01:57:30 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
01:57:30 INFO [loop_until]: OK (rc = 0)
01:57:30 DEBUG --- stdout ---
01:57:30 DEBUG Running
01:57:30 DEBUG --- stderr ---
01:57:30 DEBUG
01:57:30 INFO
01:57:30 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-4jclh -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
01:57:30 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
01:57:30 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
01:57:30 INFO [loop_until]: OK (rc = 0)
01:57:30 DEBUG --- stdout ---
01:57:30 DEBUG true
01:57:30 DEBUG --- stderr ---
01:57:30 DEBUG
01:57:30 INFO
01:57:30 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-4jclh --output jsonpath={.status.startTime}
01:57:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
01:57:30 INFO [loop_until]: OK (rc = 0)
01:57:30 DEBUG --- stdout ---
01:57:30 DEBUG 2023-08-13T00:48:01Z
01:57:30 DEBUG --- stderr ---
01:57:30 DEBUG
01:57:30 INFO
01:57:30 INFO ------- Check pod idm-65858d8c4c-4jclh filesystem is accessible -------
01:57:30 INFO
01:57:30 INFO [loop_until]: kubectl --namespace=xlou exec idm-65858d8c4c-4jclh --container openidm -- ls / | grep "bin"
01:57:30 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
01:57:30 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
01:57:30 INFO [loop_until]: OK (rc = 0)
01:57:30 DEBUG --- stdout ---
01:57:30 DEBUG Dockerfile.java-11 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
01:57:30 DEBUG --- stderr ---
01:57:30 DEBUG
01:57:30 INFO
01:57:30 INFO ------------ Check pod idm-65858d8c4c-4jclh restart count ------------
01:57:30 INFO
01:57:30 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-4jclh --output jsonpath={.status.containerStatuses[*].restartCount}
01:57:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
01:57:30 INFO [loop_until]: OK (rc = 0)
01:57:30 DEBUG --- stdout ---
01:57:30 DEBUG 0
01:57:30 DEBUG --- stderr ---
01:57:30 DEBUG
01:57:30 INFO Pod idm-65858d8c4c-4jclh has been restarted 0 times.
01:57:30 INFO
01:57:30 INFO -------------- Check pod idm-65858d8c4c-97wdf is running --------------
01:57:30 INFO
01:57:30 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-97wdf -o=jsonpath={.status.phase} | grep "Running"
01:57:30 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
01:57:30 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
01:57:30 INFO [loop_until]: OK (rc = 0)
01:57:30 DEBUG --- stdout ---
01:57:30 DEBUG Running
01:57:30 DEBUG --- stderr ---
01:57:30 DEBUG
01:57:30 INFO
01:57:30 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-97wdf -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
01:57:30 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
01:57:30 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
01:57:30 INFO [loop_until]: OK (rc = 0)
01:57:30 DEBUG --- stdout ---
01:57:30 DEBUG true
01:57:30 DEBUG --- stderr ---
01:57:30 DEBUG
01:57:30 INFO
01:57:30 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-97wdf --output jsonpath={.status.startTime}
01:57:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
01:57:30 INFO [loop_until]: OK (rc = 0)
01:57:30 DEBUG --- stdout ---
01:57:30 DEBUG 2023-08-13T00:48:01Z
01:57:30 DEBUG --- stderr ---
01:57:30 DEBUG
01:57:30 INFO
01:57:30 INFO ------- Check pod idm-65858d8c4c-97wdf filesystem is accessible -------
01:57:30 INFO
01:57:30 INFO [loop_until]: kubectl --namespace=xlou exec idm-65858d8c4c-97wdf --container openidm -- ls / | grep "bin"
01:57:30 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
01:57:31 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
01:57:31 INFO [loop_until]: OK (rc = 0)
01:57:31 DEBUG --- stdout ---
01:57:31 DEBUG Dockerfile.java-11 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
01:57:31 DEBUG --- stderr ---
01:57:31 DEBUG
01:57:31 INFO
01:57:31 INFO ------------ Check pod idm-65858d8c4c-97wdf restart count ------------
01:57:31 INFO
01:57:31 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-97wdf --output jsonpath={.status.containerStatuses[*].restartCount}
01:57:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
01:57:31 INFO [loop_until]: OK (rc = 0)
01:57:31 DEBUG --- stdout ---
01:57:31 DEBUG 0
01:57:31 DEBUG --- stderr ---
01:57:31 DEBUG
01:57:31 INFO Pod idm-65858d8c4c-97wdf has been restarted 0 times.
01:57:31 INFO
01:57:31 INFO --------------------- Get expected number of pods ---------------------
01:57:31 INFO
01:57:31 INFO [loop_until]: kubectl --namespace=xlou get statefulsets --selector app=ds-idrepo --output jsonpath={.items[*].spec.replicas}
01:57:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
01:57:31 INFO [loop_until]: OK (rc = 0)
01:57:31 DEBUG --- stdout ---
01:57:31 DEBUG 3
01:57:31 DEBUG --- stderr ---
01:57:31 DEBUG
01:57:31 INFO
01:57:31 INFO ---------------------------- Get pod list ----------------------------
01:57:31 INFO
01:57:31 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=ds-idrepo --output jsonpath={.items[*].metadata.name}
01:57:31 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0]
01:57:31 INFO [loop_until]: OK (rc = 0)
01:57:31 DEBUG --- stdout ---
01:57:31 DEBUG ds-idrepo-0 ds-idrepo-1 ds-idrepo-2
01:57:31 DEBUG --- stderr ---
01:57:31 DEBUG
01:57:31 INFO
01:57:31 INFO ------------------ Check pod ds-idrepo-0 is running ------------------
01:57:31 INFO
01:57:31 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.phase} | grep "Running"
01:57:31 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
01:57:31 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
01:57:31 INFO [loop_until]: OK (rc = 0)
01:57:31 DEBUG --- stdout ---
01:57:31 DEBUG Running
01:57:31 DEBUG --- stderr ---
01:57:31 DEBUG
01:57:31 INFO
01:57:31 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
01:57:31 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
01:57:31 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
01:57:31 INFO [loop_until]: OK (rc = 0)
01:57:31 DEBUG --- stdout ---
01:57:31 DEBUG true
01:57:31 DEBUG --- stderr ---
01:57:31 DEBUG
01:57:31 INFO
01:57:31 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 --output jsonpath={.status.startTime}
01:57:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
01:57:31 INFO [loop_until]: OK (rc = 0)
01:57:31 DEBUG --- stdout ---
01:57:31 DEBUG 2023-08-13T00:13:41Z
01:57:31 DEBUG --- stderr ---
01:57:31 DEBUG
01:57:31 INFO
01:57:31 INFO ----------- Check pod ds-idrepo-0 filesystem is accessible -----------
01:57:31 INFO
01:57:31 INFO [loop_until]: kubectl --namespace=xlou exec ds-idrepo-0 --container ds -- ls / | grep "bin"
01:57:31 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
01:57:31 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
01:57:31 INFO [loop_until]: OK (rc = 0)
01:57:31 DEBUG --- stdout ---
01:57:31 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
01:57:31 DEBUG --- stderr ---
01:57:31 DEBUG
01:57:31 INFO
01:57:31 INFO ----------------- Check pod ds-idrepo-0 restart count -----------------
01:57:31 INFO
01:57:31 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 --output jsonpath={.status.containerStatuses[*].restartCount}
01:57:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
01:57:31 INFO [loop_until]: OK (rc = 0)
01:57:31 DEBUG --- stdout ---
01:57:31 DEBUG 0
01:57:31 DEBUG --- stderr ---
01:57:31 DEBUG
01:57:31 INFO Pod ds-idrepo-0 has been restarted 0 times.
01:57:31 INFO
01:57:31 INFO ------------------ Check pod ds-idrepo-1 is running ------------------
01:57:31 INFO
01:57:31 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.phase} | grep "Running"
01:57:31 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
01:57:31 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
01:57:31 INFO [loop_until]: OK (rc = 0)
01:57:31 DEBUG --- stdout ---
01:57:31 DEBUG Running
01:57:31 DEBUG --- stderr ---
01:57:31 DEBUG
01:57:31 INFO
01:57:31 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
01:57:31 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
01:57:31 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
01:57:31 INFO [loop_until]: OK (rc = 0)
01:57:31 DEBUG --- stdout ---
01:57:31 DEBUG true
01:57:31 DEBUG --- stderr ---
01:57:31 DEBUG
01:57:31 INFO
01:57:31 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 --output jsonpath={.status.startTime}
01:57:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
01:57:31 INFO [loop_until]: OK (rc = 0)
01:57:31 DEBUG --- stdout ---
01:57:31 DEBUG 2023-08-13T00:26:05Z
01:57:31 DEBUG --- stderr ---
01:57:31 DEBUG
01:57:31 INFO
01:57:31 INFO ----------- Check pod ds-idrepo-1 filesystem is accessible -----------
01:57:31 INFO
01:57:31 INFO [loop_until]: kubectl --namespace=xlou exec ds-idrepo-1 --container ds -- ls / | grep "bin"
01:57:31 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
01:57:31 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
01:57:31 INFO [loop_until]: OK (rc = 0)
01:57:31 DEBUG --- stdout ---
01:57:31 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
01:57:31 DEBUG --- stderr ---
01:57:31 DEBUG
01:57:31 INFO
01:57:31 INFO ----------------- Check pod ds-idrepo-1 restart count -----------------
01:57:31 INFO
01:57:31 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 --output jsonpath={.status.containerStatuses[*].restartCount}
01:57:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
01:57:31 INFO [loop_until]: OK (rc = 0)
01:57:31 DEBUG --- stdout ---
01:57:31 DEBUG 0
01:57:31 DEBUG --- stderr ---
01:57:31 DEBUG
01:57:31 INFO Pod ds-idrepo-1 has been restarted 0 times.
01:57:31 INFO
01:57:31 INFO ------------------ Check pod ds-idrepo-2 is running ------------------
01:57:31 INFO
01:57:31 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.phase} | grep "Running"
01:57:31 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
01:57:31 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
01:57:31 INFO [loop_until]: OK (rc = 0)
01:57:31 DEBUG --- stdout ---
01:57:31 DEBUG Running
01:57:31 DEBUG --- stderr ---
01:57:31 DEBUG
01:57:31 INFO
01:57:31 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
01:57:31 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
01:57:31 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
01:57:31 INFO [loop_until]: OK (rc = 0)
01:57:31 DEBUG --- stdout ---
01:57:31 DEBUG true
01:57:31 DEBUG --- stderr ---
01:57:31 DEBUG
01:57:31 INFO
01:57:31 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 --output jsonpath={.status.startTime}
01:57:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
01:57:31 INFO [loop_until]: OK (rc = 0)
01:57:31 DEBUG --- stdout ---
01:57:31 DEBUG 2023-08-13T00:37:07Z
01:57:31 DEBUG --- stderr ---
01:57:31 DEBUG
01:57:31 INFO
01:57:31 INFO ----------- Check pod ds-idrepo-2 filesystem is accessible -----------
01:57:31 INFO
01:57:31 INFO [loop_until]: kubectl --namespace=xlou exec ds-idrepo-2 --container ds -- ls / | grep "bin"
01:57:31 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
01:57:32 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
01:57:32 INFO [loop_until]: OK (rc = 0)
01:57:32 DEBUG --- stdout ---
01:57:32 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
01:57:32 DEBUG --- stderr ---
01:57:32 DEBUG
01:57:32 INFO
01:57:32 INFO ----------------- Check pod ds-idrepo-2 restart count -----------------
01:57:32 INFO
01:57:32 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 --output jsonpath={.status.containerStatuses[*].restartCount}
01:57:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
01:57:32 INFO [loop_until]: OK (rc = 0)
01:57:32 DEBUG --- stdout ---
01:57:32 DEBUG 0
01:57:32 DEBUG --- stderr ---
01:57:32 DEBUG
01:57:32 INFO Pod ds-idrepo-2 has been restarted 0 times.
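Before each group of per-pod checks the monitor first reads .spec.replicas from the owning Deployment or StatefulSet and then lists the pods behind the same selector, which is how it knows how many names to expect. Below is a condensed sketch of those "Get expected number of pods" and "Get pod list" steps, again built on the hypothetical loop_until helper; expected_pods and pod_names are illustrative names, not taken from the script.

def expected_pods(namespace, kind, selector):
    """Sum .spec.replicas over the Deployments or StatefulSets matching `selector`."""
    out = loop_until(
        f"kubectl --namespace={namespace} get {kind} --selector {selector} "
        "--output jsonpath={.items[*].spec.replicas}")
    return sum(int(n) for n in out.split())

def pod_names(namespace, selector):
    """List the pod names behind the same selector."""
    out = loop_until(
        f"kubectl --namespace={namespace} get pods --selector {selector} "
        "--output jsonpath={.items[*].metadata.name}", interval=10)
    return out.split()

# e.g. the ds-idrepo StatefulSet above: 3 replicas expected, 3 pod names listed.
assert expected_pods("xlou", "statefulsets", "app=ds-idrepo") == len(pod_names("xlou", "app=ds-idrepo"))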
01:57:32 INFO
01:57:32 INFO --------------------- Get expected number of pods ---------------------
01:57:32 INFO
01:57:32 INFO [loop_until]: kubectl --namespace=xlou get statefulsets --selector app=ds-cts --output jsonpath={.items[*].spec.replicas}
01:57:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
01:57:32 INFO [loop_until]: OK (rc = 0)
01:57:32 DEBUG --- stdout ---
01:57:32 DEBUG 3
01:57:32 DEBUG --- stderr ---
01:57:32 DEBUG
01:57:32 INFO
01:57:32 INFO ---------------------------- Get pod list ----------------------------
01:57:32 INFO
01:57:32 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=ds-cts --output jsonpath={.items[*].metadata.name}
01:57:32 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0]
01:57:32 INFO [loop_until]: OK (rc = 0)
01:57:32 DEBUG --- stdout ---
01:57:32 DEBUG ds-cts-0 ds-cts-1 ds-cts-2
01:57:32 DEBUG --- stderr ---
01:57:32 DEBUG
01:57:32 INFO
01:57:32 INFO -------------------- Check pod ds-cts-0 is running --------------------
01:57:32 INFO
01:57:32 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.phase} | grep "Running"
01:57:32 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
01:57:32 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
01:57:32 INFO [loop_until]: OK (rc = 0)
01:57:32 DEBUG --- stdout ---
01:57:32 DEBUG Running
01:57:32 DEBUG --- stderr ---
01:57:32 DEBUG
01:57:32 INFO
01:57:32 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
01:57:32 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
01:57:32 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
01:57:32 INFO [loop_until]: OK (rc = 0)
01:57:32 DEBUG --- stdout ---
01:57:32 DEBUG true
01:57:32 DEBUG --- stderr ---
01:57:32 DEBUG
01:57:32 INFO
01:57:32 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-0 --output jsonpath={.status.startTime}
01:57:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
01:57:32 INFO [loop_until]: OK (rc = 0)
01:57:32 DEBUG --- stdout ---
01:57:32 DEBUG 2023-08-13T00:13:41Z
01:57:32 DEBUG --- stderr ---
01:57:32 DEBUG
01:57:32 INFO
01:57:32 INFO ------------- Check pod ds-cts-0 filesystem is accessible -------------
01:57:32 INFO
01:57:32 INFO [loop_until]: kubectl --namespace=xlou exec ds-cts-0 --container ds -- ls / | grep "bin"
01:57:32 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
01:57:32 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
01:57:32 INFO [loop_until]: OK (rc = 0)
01:57:32 DEBUG --- stdout ---
01:57:32 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
01:57:32 DEBUG --- stderr ---
01:57:32 DEBUG
01:57:32 INFO
01:57:32 INFO ------------------ Check pod ds-cts-0 restart count ------------------
01:57:32 INFO
01:57:32 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-0 --output jsonpath={.status.containerStatuses[*].restartCount}
01:57:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
01:57:32 INFO [loop_until]: OK (rc = 0)
01:57:32 DEBUG --- stdout ---
01:57:32 DEBUG 0
01:57:32 DEBUG --- stderr ---
01:57:32 DEBUG
01:57:32 INFO Pod ds-cts-0 has been restarted 0 times.
01:57:32 INFO
01:57:32 INFO -------------------- Check pod ds-cts-1 is running --------------------
01:57:32 INFO
01:57:32 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.phase} | grep "Running"
01:57:32 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
01:57:32 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
01:57:32 INFO [loop_until]: OK (rc = 0)
01:57:32 DEBUG --- stdout ---
01:57:32 DEBUG Running
01:57:32 DEBUG --- stderr ---
01:57:32 DEBUG
01:57:32 INFO
01:57:32 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
01:57:32 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
01:57:32 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
01:57:32 INFO [loop_until]: OK (rc = 0)
01:57:32 DEBUG --- stdout ---
01:57:32 DEBUG true
01:57:32 DEBUG --- stderr ---
01:57:32 DEBUG
01:57:32 INFO
01:57:32 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-1 --output jsonpath={.status.startTime}
01:57:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
01:57:32 INFO [loop_until]: OK (rc = 0)
01:57:32 DEBUG --- stdout ---
01:57:32 DEBUG 2023-08-13T00:14:05Z
01:57:32 DEBUG --- stderr ---
01:57:32 DEBUG
01:57:32 INFO
01:57:32 INFO ------------- Check pod ds-cts-1 filesystem is accessible -------------
01:57:32 INFO
01:57:32 INFO [loop_until]: kubectl --namespace=xlou exec ds-cts-1 --container ds -- ls / | grep "bin"
01:57:32 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
01:57:32 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
01:57:32 INFO [loop_until]: OK (rc = 0)
01:57:32 DEBUG --- stdout ---
01:57:32 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
01:57:32 DEBUG --- stderr ---
01:57:32 DEBUG
01:57:32 INFO
01:57:32 INFO ------------------ Check pod ds-cts-1 restart count ------------------
01:57:32 INFO
01:57:32 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-1 --output jsonpath={.status.containerStatuses[*].restartCount}
01:57:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
01:57:32 INFO [loop_until]: OK (rc = 0)
01:57:32 DEBUG --- stdout ---
01:57:32 DEBUG 0
01:57:32 DEBUG --- stderr ---
01:57:32 DEBUG
01:57:32 INFO Pod ds-cts-1 has been restarted 0 times.
01:57:32 INFO
01:57:32 INFO -------------------- Check pod ds-cts-2 is running --------------------
01:57:32 INFO
01:57:32 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.phase} | grep "Running"
01:57:32 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
01:57:32 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
01:57:32 INFO [loop_until]: OK (rc = 0)
01:57:32 DEBUG --- stdout ---
01:57:32 DEBUG Running
01:57:32 DEBUG --- stderr ---
01:57:32 DEBUG
01:57:32 INFO
01:57:32 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
01:57:32 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
01:57:32 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
01:57:32 INFO [loop_until]: OK (rc = 0)
01:57:32 DEBUG --- stdout ---
01:57:32 DEBUG true
01:57:32 DEBUG --- stderr ---
01:57:32 DEBUG
01:57:32 INFO
01:57:32 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-2 --output jsonpath={.status.startTime}
01:57:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
01:57:32 INFO [loop_until]: OK (rc = 0)
01:57:32 DEBUG --- stdout ---
01:57:32 DEBUG 2023-08-13T00:14:29Z
01:57:32 DEBUG --- stderr ---
01:57:32 DEBUG
01:57:32 INFO
01:57:32 INFO ------------- Check pod ds-cts-2 filesystem is accessible -------------
01:57:32 INFO
01:57:32 INFO [loop_until]: kubectl --namespace=xlou exec ds-cts-2 --container ds -- ls / | grep "bin"
01:57:32 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0]
01:57:33 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
01:57:33 INFO [loop_until]: OK (rc = 0)
01:57:33 DEBUG --- stdout ---
01:57:33 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
01:57:33 DEBUG --- stderr ---
01:57:33 DEBUG
01:57:33 INFO
01:57:33 INFO ------------------ Check pod ds-cts-2 restart count ------------------
01:57:33 INFO
01:57:33 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-2 --output jsonpath={.status.containerStatuses[*].restartCount}
01:57:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
01:57:33 INFO [loop_until]: OK (rc = 0)
01:57:33 DEBUG --- stdout ---
01:57:33 DEBUG 0
01:57:33 DEBUG --- stderr ---
01:57:33 DEBUG
01:57:33 INFO Pod ds-cts-2 has been restarted 0 times.
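With every pod verified, the remaining output comes from the monitor itself. The pod description at the top defines both its liveness and readiness probes as exec [cat /tmp/lodemon_alive], and the banner that follows shows the same container starting a Flask development server named 'lodemon_run' on port 8080. How the script ties the two together is not visible in this log; the sketch below only illustrates that probe-plus-web-server pattern, and the /health route and touch_alive_file names are invented for the example.

from pathlib import Path
from flask import Flask

ALIVE_FILE = Path("/tmp/lodemon_alive")  # path taken from the pod's exec probes
app = Flask("lodemon_run")

def touch_alive_file():
    """Create the file that the kubelet probes with `cat /tmp/lodemon_alive`."""
    ALIVE_FILE.touch()

@app.route("/health")  # illustrative route, not confirmed by the log
def health():
    return "ok", 200

if __name__ == "__main__":
    touch_alive_file()
    # The WARNING below comes from exactly this kind of call: Flask's built-in
    # development server listening on 0.0.0.0:8080.
    app.run(host="0.0.0.0", port=8080)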
 * Serving Flask app 'lodemon_run'
 * Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:8080
 * Running on http://10.106.45.95:8080
Press CTRL+C to quit
01:58:04 INFO
01:58:04 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces
01:58:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
01:58:04 INFO [loop_until]: OK (rc = 0)
01:58:04 DEBUG --- stdout ---
01:58:04 DEBUG {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}}
01:58:04 DEBUG --- stderr ---
01:58:04 DEBUG
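The last step shown here locates the Prometheus service of the kube-prometheus-stack by label across all namespaces and gets the whole Service object back as JSON; the fields of interest are its namespace (monitoring), name (prometheus-operator-kube-p-prometheus), clusterIP and port 9090. Below is a small sketch of turning that output into a query URL; the find_prometheus_service name and the choice of the in-cluster DNS form are assumptions, while the kubectl command and the fields referenced come straight from the log.

import json
import subprocess

def find_prometheus_service():
    """Locate the kube-prometheus-stack Prometheus Service, as the monitor does above."""
    cmd = ("kubectl get services -o=jsonpath="
           "'{.items[?(@.metadata.labels.app==\"kube-prometheus-stack-prometheus\")]}' "
           "--all-namespaces")
    out = subprocess.run(cmd, shell=True, capture_output=True, text=True, check=True).stdout
    svc = json.loads(out)
    name = svc["metadata"]["name"]            # prometheus-operator-kube-p-prometheus
    namespace = svc["metadata"]["namespace"]  # monitoring
    port = svc["spec"]["ports"][0]["port"]    # 9090
    # In-cluster DNS form; the clusterIP (10.106.49.67) would work equally well.
    return f"http://{name}.{namespace}.svc:{port}"

print(find_prometheus_service())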
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:58:05 DEBUG --- stderr --- 01:58:05 DEBUG 01:58:05 INFO 01:58:05 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:58:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:58:05 INFO [loop_until]: OK (rc = 0) 01:58:05 DEBUG --- stdout --- 01:58:05 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:58:05 DEBUG --- stderr --- 01:58:05 DEBUG 01:58:06 INFO 01:58:06 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:58:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:58:06 INFO [loop_until]: OK (rc = 0) 01:58:06 DEBUG --- stdout --- 01:58:06 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:58:06 DEBUG --- stderr --- 01:58:06 DEBUG 01:58:06 INFO 01:58:06 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:58:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:58:06 INFO [loop_until]: OK (rc = 0) 01:58:06 DEBUG --- stdout --- 01:58:06 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:58:06 DEBUG --- stderr --- 01:58:06 DEBUG 01:58:06 INFO 01:58:06 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:58:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:58:06 INFO [loop_until]: OK (rc = 0) 01:58:06 DEBUG --- stdout --- 01:58:06 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:58:06 DEBUG --- stderr --- 01:58:06 DEBUG 01:58:06 INFO 01:58:06 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:58:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:58:06 INFO [loop_until]: OK (rc = 0) 01:58:06 DEBUG --- stdout --- 01:58:06 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:58:06 DEBUG --- stderr --- 01:58:06 DEBUG 01:58:06 INFO 01:58:06 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:58:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:58:06 INFO [loop_until]: OK (rc = 0) 01:58:06 DEBUG --- stdout --- 01:58:06 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:58:06 DEBUG --- stderr --- 01:58:06 DEBUG 01:58:06 INFO 01:58:06 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:58:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:58:06 INFO [loop_until]: OK (rc = 0) 01:58:06 DEBUG --- stdout --- 01:58:06 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:58:06 DEBUG --- stderr --- 01:58:06 DEBUG 01:58:06 INFO 01:58:06 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:58:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:58:06 INFO [loop_until]: OK (rc = 0) 01:58:06 DEBUG --- stdout --- 01:58:06 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:58:06 DEBUG --- stderr --- 01:58:06 DEBUG 01:58:06 INFO 01:58:06 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:58:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:58:07 INFO [loop_until]: OK (rc = 0) 01:58:07 DEBUG --- stdout --- 01:58:07 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:58:07 DEBUG --- stderr --- 01:58:07 DEBUG 01:58:07 INFO 01:58:07 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:58:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:58:07 INFO [loop_until]: OK (rc = 0) 01:58:07 DEBUG --- stdout --- 01:58:07 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:58:07 DEBUG --- stderr --- 01:58:07 DEBUG 01:58:07 INFO 01:58:07 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:58:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:58:07 INFO [loop_until]: OK (rc = 0) 01:58:07 DEBUG --- stdout --- 01:58:07 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:58:07 DEBUG --- stderr --- 01:58:07 DEBUG 01:58:07 INFO 01:58:07 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:58:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:58:07 INFO [loop_until]: OK (rc = 0) 01:58:07 DEBUG --- stdout --- 01:58:07 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:58:07 DEBUG --- stderr --- 01:58:07 DEBUG 01:58:07 INFO 01:58:07 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:58:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:58:07 INFO [loop_until]: OK (rc = 0) 01:58:07 DEBUG --- stdout --- 01:58:07 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:58:07 DEBUG --- stderr --- 01:58:07 DEBUG 01:58:07 INFO Initializing monitoring instance threads 01:58:07 DEBUG Monitoring instance thread list: [, , , , , , , , , , , , , , , , , , , , , , , , , , , , ] 01:58:07 INFO Starting instance threads 01:58:07 INFO 01:58:07 INFO Thread started 01:58:07 INFO [loop_until]: kubectl --namespace=xlou top node 01:58:07 INFO 01:58:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:58:07 INFO Thread started 01:58:07 INFO [loop_until]: kubectl --namespace=xlou top pods 01:58:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:58:07 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287" 01:58:07 INFO Thread started 01:58:07 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287" 01:58:07 INFO Thread started 01:58:07 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287" 01:58:07 INFO Thread started 01:58:07 INFO [http_cmd]: curl --insecure -L --request GET 
"http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287" 01:58:07 INFO Thread started 01:58:07 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691888287" 01:58:07 INFO Thread started 01:58:07 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691888287" 01:58:07 INFO Thread started 01:58:07 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691888287" 01:58:07 INFO Thread started 01:58:07 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691888287" 01:58:07 INFO Thread started 01:58:07 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691888287" 01:58:07 INFO Thread started 01:58:07 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287" 01:58:07 INFO Thread started 01:58:07 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287" 01:58:07 INFO Thread started 01:58:07 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287" 01:58:07 INFO Thread started 01:58:07 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287" 01:58:07 INFO Thread started 01:58:07 INFO [http_cmd]: curl --insecure -L --request GET 
"http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287" 01:58:07 INFO Thread started 01:58:07 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287" 01:58:07 INFO Thread started 01:58:07 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287" 01:58:07 INFO Thread started 01:58:07 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287" 01:58:07 INFO Thread started 01:58:07 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287" 01:58:07 INFO Thread started 01:58:07 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287" 01:58:07 INFO Thread started 01:58:07 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287" 01:58:07 INFO Thread started Exception in thread Thread-23: 01:58:07 INFO Thread started Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner Exception in thread Thread-24: 01:58:07 INFO Thread started Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run 01:58:07 INFO Thread started Exception in thread Thread-25: self.run() 01:58:07 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691888287" File "/usr/local/lib/python3.9/threading.py", line 910, in run Traceback (most recent call last): 01:58:07 INFO Thread started 01:58:07 INFO [http_cmd]: curl --insecure -L --request GET 
"http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691888287" self._target(*self._args, **self._kwargs) 01:58:07 INFO Thread started File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop 01:58:07 INFO Thread started Exception in thread Thread-28: self._target(*self._args, **self._kwargs) self.run() 01:58:07 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691888287" instance.run() 01:58:07 INFO Thread started 01:58:07 INFO All threads has been started Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop File "/usr/local/lib/python3.9/threading.py", line 910, in run File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner 127.0.0.1 - - [13/Aug/2023 01:58:07] "GET /monitoring/start HTTP/1.1" 200 - self._target(*self._args, **self._kwargs) instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop self.run() if self.prom_data['functions']: File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run File "/usr/local/lib/python3.9/threading.py", line 910, in run KeyError: 'functions' instance.run() self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop if self.prom_data['functions']: instance.run() KeyError: 'functions' File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run if self.prom_data['functions']: KeyError: 'functions' if self.prom_data['functions']: KeyError: 'functions' 01:58:07 INFO [loop_until]: OK (rc = 0) 01:58:07 DEBUG --- stdout --- 01:58:07 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 19m 4415Mi am-55f77847b7-c9bk2 14m 3375Mi am-55f77847b7-zpsrs 11m 2255Mi ds-cts-0 7m 359Mi ds-cts-1 9m 375Mi ds-cts-2 7m 362Mi ds-idrepo-0 24m 10313Mi ds-idrepo-1 16m 10330Mi ds-idrepo-2 40m 10244Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 7m 1347Mi idm-65858d8c4c-97wdf 8m 1119Mi lodemon-86f768796c-ts724 374m 60Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1m 15Mi 01:58:07 DEBUG --- stderr --- 01:58:07 DEBUG 01:58:07 INFO [loop_until]: OK (rc = 0) 01:58:07 DEBUG --- stdout --- 01:58:07 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 344m 2% 1374Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 72m 0% 5463Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 3401Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 70m 0% 4552Mi 7% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 2688Mi 4% 
gke-xlou-cdm-default-pool-f05840a3-h81k 128m 0% 2118Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 78m 0% 2397Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 65m 0% 1090Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 97m 0% 10914Mi 18% gke-xlou-cdm-ds-32e4dcb1-b374 69m 0% 10984Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 84m 0% 10957Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 66m 0% 1632Mi 2% 01:58:07 DEBUG --- stderr --- 01:58:07 DEBUG 01:58:08 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:58:08 WARNING Response is NONE 01:58:08 DEBUG Exception is preset. Setting retry_loop to true 01:58:08 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:58:10 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:58:10 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:58:10 WARNING Response is NONE 01:58:10 WARNING Response is NONE 01:58:10 DEBUG Exception is preset. Setting retry_loop to true 01:58:10 DEBUG Exception is preset. Setting retry_loop to true 01:58:10 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:58:10 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:58:14 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:58:14 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
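Note: The interleaved tracebacks above all end in KeyError: 'functions', raised at monitoring.py line 285 where each monitoring instance evaluates self.prom_data['functions']. The instances that crash apparently have no 'functions' entry in their configuration. Below is a minimal, hypothetical sketch of the defensive pattern that would avoid the crash; the prom_data layout is assumed for illustration and is not taken from the lodemon source:

    # Hypothetical sketch: treat 'functions' as optional instead of indexing
    # the key directly, so instances without it do not raise KeyError.
    def post_process(prom_data):
        functions = prom_data.get("functions", [])  # [] when the key is absent
        for fn in functions:
            print("would apply", fn)                # placeholder action

    # The second instance omits 'functions' and is simply skipped.
    post_process({"query": "sum(rate(am_authentication_count[60s]))by(pod)",
                  "functions": ["rate"]})
    post_process({"query": "sum(container_memory_working_set_bytes)by(node)"})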
Checking if error is transient one 01:58:14 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:58:14 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:58:14 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:58:14 WARNING Response is NONE 01:58:14 WARNING Response is NONE 01:58:14 WARNING Response is NONE 01:58:14 WARNING Response is NONE 01:58:14 WARNING Response is NONE 01:58:14 DEBUG Exception is preset. Setting retry_loop to true 01:58:14 DEBUG Exception is preset. Setting retry_loop to true 01:58:14 DEBUG Exception is preset. Setting retry_loop to true 01:58:14 DEBUG Exception is preset. Setting retry_loop to true 01:58:14 DEBUG Exception is preset. Setting retry_loop to true 01:58:14 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:58:14 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:58:14 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:58:14 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:58:14 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:58:19 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:58:19 WARNING Response is NONE 01:58:19 DEBUG Exception is preset. Setting retry_loop to true 01:58:19 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
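Note: Each [http_cmd] above is an instant query against the Prometheus HTTP API (/api/v1/query) with the PromQL expression URL-encoded in the query parameter. The sketch below decodes one of the logged URLs and re-issues it with the requests library; it assumes the Service DNS name is reachable (run it in-cluster, or substitute a kubectl port-forward address):

    import urllib.parse
    import requests

    # One of the URLs from the log above, copied verbatim.
    url = ("http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090"
           "/api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total"
           "%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287")

    parsed = urllib.parse.urlparse(url)
    params = urllib.parse.parse_qs(parsed.query)
    # Prints: sum(rate(container_cpu_usage_seconds_total{namespace='xlou'}[60s]))by(pod)
    print(params["query"][0])

    # Re-issue the same instant query; requests re-encodes the parameters.
    base = f"{parsed.scheme}://{parsed.netloc}{parsed.path}"
    resp = requests.get(base, params={"query": params["query"][0],
                                      "time": params["time"][0]}, timeout=10)
    resp.raise_for_status()
    print(resp.json()["status"], len(resp.json()["data"]["result"]), "series")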
01:58:21 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:58:21 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:58:21 WARNING Response is NONE 01:58:21 WARNING Response is NONE 01:58:21 DEBUG Exception is preset. Setting retry_loop to true 01:58:21 DEBUG Exception is preset. Setting retry_loop to true 01:58:21 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:58:21 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:58:22 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:58:22 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:58:22 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:58:22 WARNING Response is NONE 01:58:22 WARNING Response is NONE 01:58:22 WARNING Response is NONE 01:58:22 DEBUG Exception is preset. Setting retry_loop to true 01:58:22 DEBUG Exception is preset. Setting retry_loop to true 01:58:22 DEBUG Exception is preset. Setting retry_loop to true 01:58:22 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:58:22 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:58:22 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
01:58:25 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:58:25 WARNING Response is NONE 01:58:25 DEBUG Exception is preset. Setting retry_loop to true 01:58:25 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:58:27 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:58:27 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:58:27 WARNING Response is NONE 01:58:27 WARNING Response is NONE 01:58:27 DEBUG Exception is preset. Setting retry_loop to true 01:58:27 DEBUG Exception is preset. Setting retry_loop to true 01:58:27 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:58:27 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:58:30 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:58:30 WARNING Response is NONE 01:58:30 DEBUG Exception is preset. Setting retry_loop to true 01:58:30 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:58:32 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:58:32 WARNING Response is NONE 01:58:32 DEBUG Exception is preset. Setting retry_loop to true 01:58:32 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
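Note: The WARNING lines all report the same condition: TCP connections to the Prometheus Service on port 9090 are refused (Errno 111), and lodemon treats this as a known transient error, sleeping 10 seconds before retrying. A minimal sketch of that kind of retry loop is shown below; the function name and limits are illustrative, not lodemon's:

    import time
    import requests

    def query_with_retry(base_url, promql, attempts=5, delay=10):
        # Treat connection errors as transient, as the log's retry loop does.
        for attempt in range(1, attempts + 1):
            try:
                resp = requests.get(f"{base_url}/api/v1/query",
                                    params={"query": promql}, timeout=10)
                resp.raise_for_status()
                return resp.json()
            except requests.exceptions.ConnectionError as exc:
                print(f"attempt {attempt}: {exc}; sleeping {delay}s before retry")
                time.sleep(delay)
        raise RuntimeError("Prometheus still unreachable after all retries")

In this run the refusals recur for every query, which suggests the problem lies with the Prometheus endpoint itself rather than a momentary blip.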
01:58:33 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:58:33 WARNING Response is NONE 01:58:33 DEBUG Exception is preset. Setting retry_loop to true 01:58:33 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:58:34 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:58:34 WARNING Response is NONE 01:58:34 DEBUG Exception is preset. Setting retry_loop to true 01:58:34 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:58:36 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:58:36 WARNING Response is NONE 01:58:36 DEBUG Exception is preset. Setting retry_loop to true 01:58:36 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:58:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:58:38 WARNING Response is NONE 01:58:38 DEBUG Exception is preset. Setting retry_loop to true 01:58:38 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:58:40 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:58:40 WARNING Response is NONE 01:58:40 DEBUG Exception is preset. Setting retry_loop to true 01:58:40 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
01:58:41 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:58:41 WARNING Response is NONE 01:58:41 DEBUG Exception is preset. Setting retry_loop to true 01:58:41 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:58:43 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:58:43 WARNING Response is NONE 01:58:43 DEBUG Exception is preset. Setting retry_loop to true 01:58:43 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:58:45 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:58:45 WARNING Response is NONE 01:58:45 DEBUG Exception is preset. Setting retry_loop to true 01:58:45 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:58:47 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:58:47 WARNING Response is NONE 01:58:47 DEBUG Exception is preset. Setting retry_loop to true 01:58:47 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:58:49 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:58:49 WARNING Response is NONE 01:58:49 DEBUG Exception is preset. Setting retry_loop to true 01:58:49 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
01:58:51 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:58:51 WARNING Response is NONE 01:58:51 DEBUG Exception is preset. Setting retry_loop to true 01:58:51 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:58:52 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:58:52 WARNING Response is NONE 01:58:52 DEBUG Exception is preset. Setting retry_loop to true 01:58:52 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:58:54 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:58:54 WARNING Response is NONE 01:58:54 DEBUG Exception is preset. Setting retry_loop to true 01:58:54 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:58:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:58:56 WARNING Response is NONE 01:58:56 DEBUG Exception is preset. Setting retry_loop to true 01:58:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:58:59 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:58:59 WARNING Response is NONE 01:58:59 DEBUG Exception is preset. Setting retry_loop to true 01:58:59 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
01:59:01 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:59:01 WARNING Response is NONE 01:59:01 DEBUG Exception is preset. Setting retry_loop to true 01:59:01 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:59:02 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:59:02 WARNING Response is NONE 01:59:02 DEBUG Exception is preset. Setting retry_loop to true 01:59:02 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:59:04 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:59:04 WARNING Response is NONE 01:59:04 DEBUG Exception is preset. Setting retry_loop to true 01:59:04 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-11: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 01:59:04 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:59:04 WARNING Response is NONE 01:59:04 DEBUG Exception is preset. Setting retry_loop to true 01:59:04 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:59:06 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:59:06 WARNING Response is NONE 01:59:06 DEBUG Exception is preset. Setting retry_loop to true 01:59:06 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-18: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
01:59:07 INFO
01:59:07 INFO [loop_until]: kubectl --namespace=xlou top pods
01:59:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
01:59:07 INFO
01:59:07 INFO [loop_until]: kubectl --namespace=xlou top node
01:59:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
01:59:07 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
01:59:07 WARNING Response is NONE
01:59:07 DEBUG Exception is preset. Setting retry_loop to true
01:59:07 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
01:59:07 INFO [loop_until]: OK (rc = 0)
01:59:07 DEBUG --- stdout ---
01:59:07 DEBUG NAME                           CPU(cores)   MEMORY(bytes)
               admin-ui-587fc66dd5-f7lgj      1m           4Mi
               am-55f77847b7-5qsm5            9m           4415Mi
               am-55f77847b7-c9bk2            16m          3376Mi
               am-55f77847b7-zpsrs            10m          2255Mi
               ds-cts-0                       12m          364Mi
               ds-cts-1                       76m          377Mi
               ds-cts-2                       8m           365Mi
               ds-idrepo-0                    536m         10317Mi
               ds-idrepo-1                    40m          10334Mi
               ds-idrepo-2                    112m         10250Mi
               end-user-ui-6845bc78c7-ztmfn   1m           4Mi
               idm-65858d8c4c-4jclh           9m           1362Mi
               idm-65858d8c4c-97wdf           9m           1121Mi
               lodemon-86f768796c-ts724       3m           66Mi
               login-ui-74d6fb46c-kprf6       1m           3Mi
               overseer-0-78bdc846-p8mnn      214m         48Mi
01:59:07 DEBUG --- stderr ---
01:59:07 DEBUG
01:59:07 INFO [loop_until]: OK (rc = 0)
01:59:07 DEBUG --- stdout ---
01:59:07 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
               gke-xlou-cdm-default-pool-f05840a3-2nsn   77m          0%     1383Mi          2%
               gke-xlou-cdm-default-pool-f05840a3-5pbc   72m          0%     5462Mi          9%
               gke-xlou-cdm-default-pool-f05840a3-976h   66m          0%     3403Mi          5%
               gke-xlou-cdm-default-pool-f05840a3-9p4b   71m          0%     4553Mi          7%
               gke-xlou-cdm-default-pool-f05840a3-bf2g   80m          0%     2698Mi          4%
               gke-xlou-cdm-default-pool-f05840a3-h81k   127m         0%     2103Mi          3%
               gke-xlou-cdm-default-pool-f05840a3-tnc9   75m          0%     2400Mi          4%
               gke-xlou-cdm-ds-32e4dcb1-1l6p             134m         0%     1121Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-4z9d             133m         0%     1098Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-8bsn             96m          0%     10921Mi         18%
               gke-xlou-cdm-ds-32e4dcb1-b374             86m          0%     10991Mi         18%
               gke-xlou-cdm-ds-32e4dcb1-n920             213m         1%     1106Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-x4wx             915m         5%     10966Mi         18%
               gke-xlou-cdm-frontend-a8771548-k40m       267m         1%     1632Mi          2%
01:59:07 DEBUG --- stderr ---
01:59:07 DEBUG
01:59:10 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691888287 (Caused by NewConnectionError(': Failed
to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:59:10 WARNING Response is NONE 01:59:10 DEBUG Exception is preset. Setting retry_loop to true 01:59:10 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-26: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 01:59:12 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:59:12 WARNING Response is NONE 01:59:12 DEBUG Exception is preset. Setting retry_loop to true 01:59:12 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-8: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 01:59:13 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:59:13 WARNING Response is NONE 01:59:13 DEBUG Exception is preset. Setting retry_loop to true 01:59:13 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:59:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:59:15 WARNING Response is NONE 01:59:15 DEBUG Exception is preset. Setting retry_loop to true 01:59:15 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:59:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:59:15 WARNING Response is NONE 01:59:15 DEBUG Exception is preset. Setting retry_loop to true 01:59:15 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:59:18 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:59:18 WARNING Response is NONE 01:59:18 DEBUG Exception is preset. Setting retry_loop to true 01:59:18 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
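
The tracebacks above show two layered failures. First, monitoring.py line 299 calls http_cmd.get(url=url_encoded, retries=5); after the fifth attempt (the "Hit retry pattern for a 5 time" messages) HttpCmd.request_cmd gives up and raises FailException. Second, the handler at monitoring.py line 315 then invokes the logger object itself, self.logger(f'Query: ... failed with: ...'), and LodestarLogger is not callable, so the handler raises TypeError and the monitoring thread dies. Below is a minimal sketch of that second failure and its likely fix; FakeLodestarLogger and its warning() method are stand-ins, since the real LodestarLogger interface is not visible in this log.

    # Minimal sketch, not Lodestar code: calling a logger *object* that has no
    # __call__ reproduces the TypeError from the tracebacks above.
    import logging

    class FakeLodestarLogger:                # stand-in for LodestarLogger
        def __init__(self):
            self._log = logging.getLogger("lodemon")

        def warning(self, msg):              # assumed method-style interface
            self._log.warning(msg)

    logger = FakeLodestarLogger()
    try:
        logger("Query failed")               # what monitoring.py:315 effectively does
    except TypeError as err:
        print(err)                           # 'FakeLodestarLogger' object is not callable

    logger.warning("Query failed")           # the likely intended call

The practical effect in this run is that the original FailException is never logged and each affected query thread stops collecting metrics from this point on.
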
01:59:24 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:59:24 WARNING Response is NONE 01:59:24 DEBUG Exception is preset. Setting retry_loop to true 01:59:24 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-5: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 01:59:26 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:59:26 WARNING Response is NONE 01:59:26 DEBUG Exception is preset. Setting retry_loop to true 01:59:26 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:59:27 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:59:27 WARNING Response is NONE 01:59:27 DEBUG Exception is preset. Setting retry_loop to true 01:59:27 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
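
While the Prometheus queries keep failing, the [loop_until] checks at 01:59:07 still succeed (rc = 0): kubectl top pods and kubectl top node go through the Kubernetes metrics API rather than through Prometheus, so resource snapshots continue to arrive. The log only exposes loop_until's parameters (max_time=180, interval=5, expected_rc=[0]); the sketch below is a rough reading of that retry contract, not the Lodestar implementation.

    # Sketch of the loop_until contract seen in the log, assuming it simply
    # re-runs the command until the return code is accepted or time runs out.
    import subprocess
    import time

    def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,)):
        deadline = time.monotonic() + max_time
        while True:
            proc = subprocess.run(cmd, capture_output=True, text=True)
            if proc.returncode in expected_rc:
                return proc                  # reported as "OK (rc = 0)" in the log
            if time.monotonic() >= deadline:
                raise TimeoutError(f"{cmd} still failing after {max_time}s")
            time.sleep(interval)

    top_pods = loop_until(["kubectl", "--namespace=xlou", "top", "pods"])
    print(top_pods.stdout)
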
01:59:27 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:59:27 WARNING Response is NONE 01:59:27 DEBUG Exception is preset. Setting retry_loop to true 01:59:27 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:59:28 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:59:28 WARNING Response is NONE 01:59:28 DEBUG Exception is preset. Setting retry_loop to true 01:59:28 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:59:29 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:59:29 WARNING Response is NONE 01:59:29 DEBUG Exception is preset. Setting retry_loop to true 01:59:29 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:59:37 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:59:37 WARNING Response is NONE 01:59:37 DEBUG Exception is preset. Setting retry_loop to true 01:59:37 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:59:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:59:38 WARNING Response is NONE 01:59:38 DEBUG Exception is preset. Setting retry_loop to true 01:59:38 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
01:59:39 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:59:39 WARNING Response is NONE 01:59:39 DEBUG Exception is preset. Setting retry_loop to true 01:59:39 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-7: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 01:59:40 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:59:40 WARNING Response is NONE 01:59:40 DEBUG Exception is preset. Setting retry_loop to true 01:59:40 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:59:41 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:59:41 WARNING Response is NONE 01:59:41 DEBUG Exception is preset. Setting retry_loop to true 01:59:41 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
Exception in thread Thread-10: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 01:59:41 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:59:41 WARNING Response is NONE 01:59:41 DEBUG Exception is preset. Setting retry_loop to true 01:59:41 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:59:48 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:59:48 WARNING Response is NONE 01:59:48 DEBUG Exception is preset. Setting retry_loop to true 01:59:48 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-3: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 01:59:49 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:59:49 WARNING Response is NONE 01:59:49 DEBUG Exception is preset. Setting retry_loop to true 01:59:49 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-16: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 01:59:51 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:59:51 WARNING Response is NONE 01:59:51 DEBUG Exception is preset. Setting retry_loop to true 01:59:51 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
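
By this point threads 3, 5, 7, 8, 10, 11, 16, 18 and 26 have all died with the same TypeError, so one metric query after another silently drops out of the collection for the rest of the run. One defensive pattern, sketched below purely as an illustration and not as the Lodestar implementation, is to catch and log failures inside the per-query loop so a single bad collection attempt cannot kill the thread.

    # Sketch only: keep a per-query monitoring loop alive when one collection
    # attempt fails, instead of letting the thread die as in the tracebacks above.
    import logging
    import threading
    import time

    log = logging.getLogger("lodemon")

    def run_in_loop(collect_once, interval=60):
        while True:
            try:
                collect_once()
            except Exception:
                # use a logging *method*, not the logger object itself, so the
                # handler cannot fail the way monitoring.py:315 does above
                log.exception("metric collection failed; retrying next interval")
            time.sleep(interval)

    thread = threading.Thread(target=run_in_loop, args=(lambda: None,), daemon=True)
    thread.start()
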
01:59:52 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:59:52 WARNING Response is NONE 01:59:52 DEBUG Exception is preset. Setting retry_loop to true 01:59:52 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:00:02 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:00:02 WARNING Response is NONE 02:00:02 DEBUG Exception is preset. Setting retry_loop to true 02:00:02 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:00:03 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:00:03 WARNING Response is NONE 02:00:03 DEBUG Exception is preset. Setting retry_loop to true 02:00:03 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
02:00:07 INFO
02:00:07 INFO [loop_until]: kubectl --namespace=xlou top node
02:00:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:00:07 INFO
02:00:07 INFO [loop_until]: kubectl --namespace=xlou top pods
02:00:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:00:07 INFO [loop_until]: OK (rc = 0)
02:00:07 DEBUG --- stdout ---
02:00:07 DEBUG NAME                           CPU(cores)   MEMORY(bytes)
               admin-ui-587fc66dd5-f7lgj      1m           4Mi
               am-55f77847b7-5qsm5            13m          4416Mi
               am-55f77847b7-c9bk2            12m          3376Mi
               am-55f77847b7-zpsrs            9m           2255Mi
               ds-cts-0                       9m           363Mi
               ds-cts-1                       11m          376Mi
               ds-cts-2                       10m          366Mi
               ds-idrepo-0                    16m          10317Mi
               ds-idrepo-1                    16m          10337Mi
               ds-idrepo-2                    21m          10251Mi
               end-user-ui-6845bc78c7-ztmfn   1m           4Mi
               idm-65858d8c4c-4jclh           10m          1376Mi
               idm-65858d8c4c-97wdf           8m           1122Mi
               lodemon-86f768796c-ts724       3m           66Mi
               login-ui-74d6fb46c-kprf6       1m           3Mi
               overseer-0-78bdc846-p8mnn      1m           48Mi
02:00:07 DEBUG --- stderr ---
02:00:07 DEBUG
02:00:07 INFO [loop_until]: OK (rc = 0)
02:00:07 DEBUG --- stdout ---
02:00:07 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
               gke-xlou-cdm-default-pool-f05840a3-2nsn   77m          0%     1378Mi          2%
               gke-xlou-cdm-default-pool-f05840a3-5pbc   71m          0%     5463Mi          9%
               gke-xlou-cdm-default-pool-f05840a3-976h   67m          0%     3402Mi          5%
               gke-xlou-cdm-default-pool-f05840a3-9p4b   70m          0%     4553Mi          7%
               gke-xlou-cdm-default-pool-f05840a3-bf2g   80m          0%     2710Mi          4%
               gke-xlou-cdm-default-pool-f05840a3-h81k   118m         0%     2116Mi          3%
               gke-xlou-cdm-default-pool-f05840a3-tnc9   74m          0%     2398Mi          4%
               gke-xlou-cdm-ds-32e4dcb1-1l6p             64m          0%     1123Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-4z9d             64m          0%     1098Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-8bsn             73m          0%     10919Mi         18%
               gke-xlou-cdm-ds-32e4dcb1-b374             68m          0%     10992Mi         18%
               gke-xlou-cdm-ds-32e4dcb1-n920             62m          0%     1106Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-x4wx             76m          0%     10966Mi         18%
               gke-xlou-cdm-frontend-a8771548-k40m       66m          0%     1632Mi          2%
02:00:07 DEBUG --- stderr ---
02:00:07 DEBUG
02:00:13 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:00:13 WARNING Response is NONE
02:00:13 DEBUG Exception is preset. Setting retry_loop to true
02:00:13 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-15:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 02:00:14 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:00:14 WARNING Response is NONE 02:00:14 DEBUG Exception is preset. Setting retry_loop to true 02:00:14 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-4: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 02:00:18 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 02:00:18 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). 
Checking if error is transient one 02:00:18 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 02:00:18 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:00:18 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 02:00:18 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 02:00:18 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:00:18 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 02:00:18 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). 
Checking if error is transient one 02:00:18 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 02:00:18 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 02:00:18 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 02:00:18 WARNING Response is NONE 02:00:18 WARNING Response is NONE 02:00:18 WARNING Response is NONE 02:00:18 WARNING Response is NONE 02:00:18 WARNING Response is NONE 02:00:18 WARNING Response is NONE 02:00:18 WARNING Response is NONE 02:00:18 WARNING Response is NONE 02:00:18 WARNING Response is NONE 02:00:18 WARNING Response is NONE 02:00:18 WARNING Response is NONE 02:00:18 WARNING Response is NONE 02:00:18 DEBUG Exception is preset. Setting retry_loop to true 02:00:18 DEBUG Exception is preset. Setting retry_loop to true 02:00:18 DEBUG Exception is preset. Setting retry_loop to true 02:00:18 DEBUG Exception is preset. Setting retry_loop to true 02:00:18 DEBUG Exception is preset. Setting retry_loop to true 02:00:18 DEBUG Exception is preset. Setting retry_loop to true 02:00:18 DEBUG Exception is preset. Setting retry_loop to true 02:00:18 DEBUG Exception is preset. Setting retry_loop to true 02:00:18 DEBUG Exception is preset. Setting retry_loop to true 02:00:18 DEBUG Exception is preset. Setting retry_loop to true 02:00:18 DEBUG Exception is preset. Setting retry_loop to true 02:00:18 DEBUG Exception is preset. Setting retry_loop to true 02:00:18 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:00:18 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:00:18 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:00:18 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:00:18 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:00:18 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:00:18 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:00:18 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
02:00:18 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:00:18 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:00:18 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:00:18 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:00:29 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:00:29 WARNING Response is NONE 02:00:29 DEBUG Exception is preset. Setting retry_loop to true 02:00:29 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:00:31 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:00:31 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:00:31 WARNING Response is NONE 02:00:31 WARNING Response is NONE 02:00:31 DEBUG Exception is preset. Setting retry_loop to true 02:00:31 DEBUG Exception is preset. Setting retry_loop to true 02:00:31 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:00:31 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:00:35 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:00:35 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 02:00:35 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:00:35 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:00:35 WARNING Response is NONE 02:00:35 WARNING Response is NONE 02:00:35 WARNING Response is NONE 02:00:35 WARNING Response is NONE 02:00:35 DEBUG Exception is preset. Setting retry_loop to true 02:00:35 DEBUG Exception is preset. Setting retry_loop to true 02:00:35 DEBUG Exception is preset. Setting retry_loop to true 02:00:35 DEBUG Exception is preset. Setting retry_loop to true 02:00:35 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:00:35 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:00:35 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:00:35 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:00:40 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:00:40 WARNING Response is NONE 02:00:40 DEBUG Exception is preset. Setting retry_loop to true 02:00:40 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:00:42 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:00:42 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:00:42 WARNING Response is NONE 02:00:42 WARNING Response is NONE 02:00:42 DEBUG Exception is preset. 
Setting retry_loop to true 02:00:42 DEBUG Exception is preset. Setting retry_loop to true 02:00:42 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:00:42 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:00:44 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:00:44 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:00:44 WARNING Response is NONE 02:00:44 WARNING Response is NONE 02:00:44 DEBUG Exception is preset. Setting retry_loop to true 02:00:44 DEBUG Exception is preset. Setting retry_loop to true 02:00:44 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:00:44 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:00:46 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:00:46 WARNING Response is NONE 02:00:46 DEBUG Exception is preset. Setting retry_loop to true 02:00:46 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:00:48 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:00:48 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:00:48 WARNING Response is NONE 02:00:48 WARNING Response is NONE 02:00:48 DEBUG Exception is preset. 
Setting retry_loop to true 02:00:48 DEBUG Exception is preset. Setting retry_loop to true 02:00:48 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:00:48 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:00:51 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:00:51 WARNING Response is NONE 02:00:51 DEBUG Exception is preset. Setting retry_loop to true 02:00:51 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:00:53 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:00:53 WARNING Response is NONE 02:00:53 DEBUG Exception is preset. Setting retry_loop to true 02:00:53 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:00:55 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:00:55 WARNING Response is NONE 02:00:55 DEBUG Exception is preset. Setting retry_loop to true 02:00:55 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:00:55 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:00:55 WARNING Response is NONE 02:00:55 DEBUG Exception is preset. Setting retry_loop to true 02:00:55 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:00:57 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 02:00:57 WARNING Response is NONE 02:00:57 DEBUG Exception is preset. Setting retry_loop to true 02:00:57 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:00:59 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:00:59 WARNING Response is NONE 02:00:59 DEBUG Exception is preset. Setting retry_loop to true 02:00:59 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:01:01 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:01:01 WARNING Response is NONE 02:01:01 DEBUG Exception is preset. Setting retry_loop to true 02:01:01 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:01:02 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:01:02 WARNING Response is NONE 02:01:02 DEBUG Exception is preset. Setting retry_loop to true 02:01:02 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:01:04 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:01:04 WARNING Response is NONE 02:01:04 DEBUG Exception is preset. Setting retry_loop to true 02:01:04 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:01:07 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 02:01:07 WARNING Response is NONE 02:01:07 DEBUG Exception is preset. Setting retry_loop to true 02:01:07 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:01:07 INFO 02:01:07 INFO [loop_until]: kubectl --namespace=xlou top pods 02:01:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:01:07 INFO 02:01:07 INFO [loop_until]: kubectl --namespace=xlou top node 02:01:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:01:08 INFO [loop_until]: OK (rc = 0) 02:01:08 DEBUG --- stdout --- 02:01:08 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 12m 4416Mi am-55f77847b7-c9bk2 12m 3377Mi am-55f77847b7-zpsrs 14m 2259Mi ds-cts-0 9m 364Mi ds-cts-1 9m 376Mi ds-cts-2 8m 367Mi ds-idrepo-0 17m 10317Mi ds-idrepo-1 27m 10334Mi ds-idrepo-2 38m 10248Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8m 1385Mi idm-65858d8c4c-97wdf 7m 1132Mi lodemon-86f768796c-ts724 3m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 2m 48Mi 02:01:08 DEBUG --- stderr --- 02:01:08 DEBUG 02:01:08 INFO [loop_until]: OK (rc = 0) 02:01:08 DEBUG --- stdout --- 02:01:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1377Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5462Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 62m 0% 3402Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 68m 0% 4554Mi 7% gke-xlou-cdm-default-pool-f05840a3-bf2g 76m 0% 2723Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 124m 0% 2119Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 73m 0% 2409Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1097Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 93m 0% 10919Mi 18% gke-xlou-cdm-ds-32e4dcb1-b374 82m 0% 10993Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 76m 0% 10966Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 248m 1% 1739Mi 2% 02:01:08 DEBUG --- stderr --- 02:01:08 DEBUG 02:01:09 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:01:09 WARNING Response is NONE 02:01:09 DEBUG Exception is preset. Setting retry_loop to true 02:01:09 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:01:09 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:01:09 WARNING Response is NONE 02:01:09 DEBUG Exception is preset. Setting retry_loop to true 02:01:09 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
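The [loop_until] entries above repeatedly run a kubectl command until it exits with one of the expected return codes or max_time elapses. The actual Lodestar helper is not shown in this log; the sketch below is a simplified stand-in using subprocess, with the same parameters the log reports (max_time=180, interval=5, expected_rc=[0]).

# Simplified stand-in for the [loop_until] helper seen in the log: run a
# shell command repeatedly until its return code is in expected_rc, or fail
# once max_time elapses. Not Lodestar's implementation.
import subprocess
import time

def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,)):
    deadline = time.monotonic() + max_time
    while True:
        proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        if proc.returncode in expected_rc:
            return proc                      # the log's "OK (rc = 0)" case
        if time.monotonic() >= deadline:
            raise TimeoutError(f"{cmd!r} still failing after {max_time}s")
        time.sleep(interval)

pods = loop_until("kubectl --namespace=xlou top pods")
print(pods.stdout)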
02:01:11 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:01:11 WARNING Response is NONE 02:01:11 DEBUG Exception is preset. Setting retry_loop to true 02:01:11 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:01:12 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:01:12 WARNING Response is NONE 02:01:12 DEBUG Exception is preset. Setting retry_loop to true 02:01:12 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:01:13 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:01:13 WARNING Response is NONE 02:01:13 DEBUG Exception is preset. Setting retry_loop to true 02:01:13 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-12: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 02:01:16 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:01:16 WARNING Response is NONE 02:01:16 DEBUG Exception is preset. Setting retry_loop to true 02:01:16 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-21: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 02:01:18 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:01:18 WARNING Response is NONE 02:01:18 DEBUG Exception is preset. Setting retry_loop to true 02:01:18 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
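The recurring traceback ends in TypeError: 'LodestarLogger' object is not callable: monitoring.py line 315 handles the FailException by invoking self.logger(...) as if the logger object were a function, and the unhandled TypeError then kills that monitoring thread. The real LodestarLogger interface is not visible in this log, so the sketch below uses a hypothetical stand-in class only to reproduce the error and show the two obvious fixes (call a logging method on the wrapper, or make the wrapper callable).

# Hypothetical stand-in reproducing the TypeError above: the handler calls
# the logger object itself instead of one of its methods. The real
# LodestarLogger API is not shown here, so both fixes are assumptions.
import logging

class LodestarLogger:                        # stand-in, not the real class
    def __init__(self, name):
        self._log = logging.getLogger(name)

    def warning(self, msg):
        self._log.warning(msg)

    # Alternative fix: make the wrapper callable so self.logger(msg) works.
    # def __call__(self, msg):
    #     self._log.warning(msg)

logger = LodestarLogger("lodemon")
# logger("query failed")                     # TypeError: object is not callable
logger.warning("query failed")               # invoke a method on the wrapper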
Exception in thread Thread-6: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 02:01:20 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:01:20 WARNING Response is NONE 02:01:20 DEBUG Exception is preset. Setting retry_loop to true 02:01:20 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:01:22 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:01:22 WARNING Response is NONE 02:01:22 DEBUG Exception is preset. Setting retry_loop to true 02:01:22 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-19: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 02:01:23 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:01:23 WARNING Response is NONE 02:01:23 DEBUG Exception is preset. Setting retry_loop to true 02:01:23 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:01:24 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:01:24 WARNING Response is NONE 02:01:24 DEBUG Exception is preset. Setting retry_loop to true 02:01:24 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-22: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 02:01:30 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:01:30 WARNING Response is NONE 02:01:30 DEBUG Exception is preset. Setting retry_loop to true 02:01:30 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:01:31 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:01:31 WARNING Response is NONE 02:01:31 DEBUG Exception is preset. Setting retry_loop to true 02:01:31 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:01:32 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:01:32 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:01:32 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 02:01:32 WARNING Response is NONE 02:01:32 WARNING Response is NONE 02:01:32 WARNING Response is NONE 02:01:32 DEBUG Exception is preset. Setting retry_loop to true 02:01:32 DEBUG Exception is preset. Setting retry_loop to true 02:01:32 DEBUG Exception is preset. Setting retry_loop to true 02:01:32 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:01:32 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:01:32 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:01:34 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:01:34 WARNING Response is NONE 02:01:34 DEBUG Exception is preset. Setting retry_loop to true 02:01:34 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-14: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 02:01:36 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:01:36 WARNING Response is NONE 02:01:36 DEBUG Exception is preset. Setting retry_loop to true 02:01:36 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
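Each failure above is followed by "Checking if error is transient one" and, when the error is recognised, by a 10 second sleep and another attempt, giving up after the fifth try ("Hit retry pattern for a 5 time"). A rough sketch of that recover-and-retry behaviour follows; the exception types assume requests/urllib3 underneath (consistent with the NewConnectionError wrapping Errno 110/111 above), and the function names are illustrative rather than Lodestar's.

# Rough sketch of the transient-error check and 10 s recovery loop implied
# by the warnings above. The exception types come from requests/urllib3,
# which the tracebacks suggest are in use; names here are illustrative.
import time
import requests

TRANSIENT_EXCEPTIONS = (
    requests.exceptions.ConnectionError,   # wraps NewConnectionError (Errno 110/111)
    requests.exceptions.Timeout,
)

def is_transient(exc):
    """True for errors worth sleeping 10 s over and retrying."""
    return isinstance(exc, TRANSIENT_EXCEPTIONS)

def query_with_recovery(do_query, attempts=5):
    """Call do_query(); on a known transient error, sleep 10 s and retry,
    giving up after `attempts` tries (cf. http_cmd.get(..., retries=5))."""
    for attempt in range(1, attempts + 1):
        try:
            return do_query()
        except Exception as exc:
            if not is_transient(exc) or attempt == attempts:
                raise
            time.sleep(10)  # "sleeping for 10 secs before retry..."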
02:01:41 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:01:41 WARNING Response is NONE 02:01:41 DEBUG Exception is preset. Setting retry_loop to true 02:01:41 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:01:43 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:01:43 WARNING Response is NONE 02:01:43 DEBUG Exception is preset. Setting retry_loop to true 02:01:43 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-29: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 02:01:43 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 02:01:43 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:01:43 WARNING Response is NONE 02:01:43 WARNING Response is NONE 02:01:43 DEBUG Exception is preset. Setting retry_loop to true 02:01:43 DEBUG Exception is preset. Setting retry_loop to true 02:01:43 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:01:43 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:01:45 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:01:45 WARNING Response is NONE 02:01:45 DEBUG Exception is preset. Setting retry_loop to true 02:01:45 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:01:48 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:01:48 WARNING Response is NONE 02:01:48 DEBUG Exception is preset. Setting retry_loop to true 02:01:48 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:01:52 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:01:52 WARNING Response is NONE 02:01:52 DEBUG Exception is preset. Setting retry_loop to true 02:01:52 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-20: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 02:01:55 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:01:55 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:01:55 WARNING Response is NONE 02:01:55 WARNING Response is NONE 02:01:55 DEBUG Exception is preset. Setting retry_loop to true 02:01:55 DEBUG Exception is preset. Setting retry_loop to true 02:01:55 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:01:55 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:01:57 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:01:57 WARNING Response is NONE 02:01:57 DEBUG Exception is preset. Setting retry_loop to true 02:01:57 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:01:59 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:01:59 WARNING Response is NONE 02:01:59 DEBUG Exception is preset. Setting retry_loop to true 02:01:59 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
Exception in thread Thread-27: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 02:02:06 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:02:06 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:02:06 WARNING Response is NONE 02:02:06 WARNING Response is NONE 02:02:06 DEBUG Exception is preset. Setting retry_loop to true 02:02:06 DEBUG Exception is preset. Setting retry_loop to true 02:02:06 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:02:06 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
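The periodic kubectl top pods snapshots (one follows immediately below) report CPU in millicores ("12m") and memory in MiB ("4416Mi"). If those readings need to be collated into numbers, a small parser along these lines would do; the helper name and the trimmed sample are illustrative only.

# Illustrative parser for `kubectl top pods` rows such as
# "am-55f77847b7-5qsm5  12m  4416Mi": millicores and MiB as integers.
def parse_top_pods(output):
    usage = {}
    for line in output.strip().splitlines()[1:]:   # skip the header row
        name, cpu, mem = line.split()
        usage[name] = (int(cpu.rstrip("m")),       # CPU in millicores
                       int(mem.rstrip("Mi")))      # memory in MiB
    return usage

sample = """NAME                  CPU(cores)   MEMORY(bytes)
am-55f77847b7-5qsm5   12m          4416Mi
ds-idrepo-0           17m          10317Mi"""
print(parse_top_pods(sample))   # {'am-55f77847b7-5qsm5': (12, 4416), 'ds-idrepo-0': (17, 10317)}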
02:02:08 INFO 02:02:08 INFO [loop_until]: kubectl --namespace=xlou top pods 02:02:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:02:08 INFO [loop_until]: OK (rc = 0) 02:02:08 DEBUG --- stdout --- 02:02:08 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 12m 4417Mi am-55f77847b7-c9bk2 9m 3378Mi am-55f77847b7-zpsrs 6m 2252Mi ds-cts-0 7m 363Mi ds-cts-1 7m 376Mi ds-cts-2 7m 368Mi ds-idrepo-0 15m 10317Mi ds-idrepo-1 22m 10334Mi ds-idrepo-2 18m 10251Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8m 1397Mi idm-65858d8c4c-97wdf 8m 1141Mi lodemon-86f768796c-ts724 3m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1m 98Mi 02:02:08 DEBUG --- stderr --- 02:02:08 DEBUG 02:02:08 INFO 02:02:08 INFO [loop_until]: kubectl --namespace=xlou top node 02:02:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:02:08 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:02:08 WARNING Response is NONE 02:02:08 DEBUG Exception is preset. Setting retry_loop to true 02:02:08 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:02:08 INFO [loop_until]: OK (rc = 0) 02:02:08 DEBUG --- stdout --- 02:02:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1389Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5464Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 61m 0% 3398Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 4550Mi 7% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 2734Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 127m 0% 2118Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 74m 0% 2415Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1096Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 74m 0% 10931Mi 18% gke-xlou-cdm-ds-32e4dcb1-b374 77m 0% 10990Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 84m 0% 10963Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 68m 0% 1631Mi 2% 02:02:08 DEBUG --- stderr --- 02:02:08 DEBUG 02:02:17 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:02:17 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 02:02:17 WARNING Response is NONE 02:02:17 WARNING Response is NONE 02:02:17 DEBUG Exception is preset. Setting retry_loop to true 02:02:17 DEBUG Exception is preset. Setting retry_loop to true 02:02:17 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-17: 02:02:17 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-13: Traceback (most recent call last): Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): raise FailException('Failed to obtain response from server...') File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 02:02:19 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691888287 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:02:19 WARNING Response is NONE 02:02:19 DEBUG Exception is preset. Setting retry_loop to true 02:02:19 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
Exception in thread Thread-9: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 02:03:08 INFO 02:03:08 INFO [loop_until]: kubectl --namespace=xlou top pods 02:03:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:03:08 INFO [loop_until]: OK (rc = 0) 02:03:08 DEBUG --- stdout --- 02:03:08 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 131m 4450Mi am-55f77847b7-c9bk2 8m 3379Mi am-55f77847b7-zpsrs 68m 2276Mi ds-cts-0 7m 364Mi ds-cts-1 10m 377Mi ds-cts-2 6m 367Mi ds-idrepo-0 420m 10322Mi ds-idrepo-1 24m 10338Mi ds-idrepo-2 106m 10260Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 7m 1406Mi idm-65858d8c4c-97wdf 8m 1150Mi lodemon-86f768796c-ts724 1m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 2m 98Mi 02:03:08 DEBUG --- stderr --- 02:03:08 DEBUG 02:03:08 INFO 02:03:08 INFO [loop_until]: kubectl --namespace=xlou top node 02:03:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:03:08 INFO [loop_until]: OK (rc = 0) 02:03:08 DEBUG --- stdout --- 02:03:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1378Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 190m 1% 5508Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 123m 0% 3419Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 4554Mi 7% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 2741Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2124Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 73m 0% 2424Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1096Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 180m 1% 10923Mi 18% gke-xlou-cdm-ds-32e4dcb1-b374 150m 0% 10992Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 87m 0% 10965Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 78m 0% 1630Mi 2% 02:03:08 DEBUG --- stderr --- 02:03:08 DEBUG 02:04:08 INFO 02:04:08 INFO [loop_until]: kubectl --namespace=xlou top pods 02:04:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:04:08 INFO [loop_until]: OK (rc = 0) 02:04:08 DEBUG --- stdout --- 02:04:08 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 14m 4450Mi am-55f77847b7-c9bk2 10m 3417Mi am-55f77847b7-zpsrs 21m 2271Mi ds-cts-0 355m 371Mi ds-cts-1 106m 380Mi ds-cts-2 185m 368Mi ds-idrepo-0 3109m 12676Mi ds-idrepo-1 205m 10339Mi ds-idrepo-2 193m 10254Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi 
idm-65858d8c4c-4jclh 8m 1468Mi idm-65858d8c4c-97wdf 8m 1180Mi lodemon-86f768796c-ts724 2m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1052m 375Mi 02:04:08 DEBUG --- stderr --- 02:04:08 DEBUG 02:04:08 INFO 02:04:08 INFO [loop_until]: kubectl --namespace=xlou top node 02:04:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:04:08 INFO [loop_until]: OK (rc = 0) 02:04:08 DEBUG --- stdout --- 02:04:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1379Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 5496Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 74m 0% 3417Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 4592Mi 7% gke-xlou-cdm-default-pool-f05840a3-bf2g 79m 0% 2801Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2119Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 77m 0% 2469Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 208m 1% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 491m 3% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 291m 1% 10927Mi 18% gke-xlou-cdm-ds-32e4dcb1-b374 314m 1% 10993Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 203m 1% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3222m 20% 13220Mi 22% gke-xlou-cdm-frontend-a8771548-k40m 1134m 7% 1906Mi 3% 02:04:08 DEBUG --- stderr --- 02:04:08 DEBUG 02:05:08 INFO 02:05:08 INFO [loop_until]: kubectl --namespace=xlou top pods 02:05:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:05:08 INFO [loop_until]: OK (rc = 0) 02:05:08 DEBUG --- stdout --- 02:05:08 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 20m 4450Mi am-55f77847b7-c9bk2 10m 3417Mi am-55f77847b7-zpsrs 20m 2271Mi ds-cts-0 8m 371Mi ds-cts-1 6m 374Mi ds-cts-2 6m 368Mi ds-idrepo-0 2673m 13370Mi ds-idrepo-1 30m 10340Mi ds-idrepo-2 21m 10256Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 10m 1478Mi idm-65858d8c4c-97wdf 8m 1193Mi lodemon-86f768796c-ts724 2m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1151m 376Mi 02:05:08 DEBUG --- stderr --- 02:05:08 DEBUG 02:05:08 INFO 02:05:08 INFO [loop_until]: kubectl --namespace=xlou top node 02:05:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:05:08 INFO [loop_until]: OK (rc = 0) 02:05:08 DEBUG --- stdout --- 02:05:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1380Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 72m 0% 5496Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 71m 0% 3417Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 4593Mi 7% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 2815Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 119m 0% 2126Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 76m 0% 2470Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 80m 0% 10929Mi 18% gke-xlou-cdm-ds-32e4dcb1-b374 79m 0% 10996Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2767m 17% 13935Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1191m 7% 1903Mi 3% 02:05:08 DEBUG --- stderr --- 02:05:08 DEBUG 02:06:08 INFO 02:06:08 INFO [loop_until]: kubectl --namespace=xlou top pods 02:06:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:06:08 INFO [loop_until]: OK (rc = 0) 02:06:08 DEBUG --- stdout --- 02:06:08 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 14m 4450Mi am-55f77847b7-c9bk2 13m 3417Mi am-55f77847b7-zpsrs 14m 2272Mi ds-cts-0 7m 371Mi ds-cts-1 7m 375Mi ds-cts-2 6m 370Mi 
ds-idrepo-0 2738m 13353Mi ds-idrepo-1 19m 10340Mi ds-idrepo-2 19m 10257Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8m 1487Mi idm-65858d8c4c-97wdf 9m 1207Mi lodemon-86f768796c-ts724 2m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1294m 377Mi 02:06:08 DEBUG --- stderr --- 02:06:08 DEBUG 02:06:08 INFO 02:06:08 INFO [loop_until]: kubectl --namespace=xlou top node 02:06:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:06:08 INFO [loop_until]: OK (rc = 0) 02:06:08 DEBUG --- stdout --- 02:06:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1380Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 5495Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 71m 0% 3422Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 68m 0% 4592Mi 7% gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 2823Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 121m 0% 2126Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 75m 0% 2483Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 74m 0% 10927Mi 18% gke-xlou-cdm-ds-32e4dcb1-b374 66m 0% 10999Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2725m 17% 13913Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1374m 8% 1908Mi 3% 02:06:08 DEBUG --- stderr --- 02:06:08 DEBUG 02:07:08 INFO 02:07:08 INFO [loop_until]: kubectl --namespace=xlou top pods 02:07:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:07:08 INFO [loop_until]: OK (rc = 0) 02:07:08 DEBUG --- stdout --- 02:07:08 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 12m 4450Mi am-55f77847b7-c9bk2 8m 3418Mi am-55f77847b7-zpsrs 12m 2272Mi ds-cts-0 7m 371Mi ds-cts-1 7m 375Mi ds-cts-2 7m 369Mi ds-idrepo-0 3025m 13527Mi ds-idrepo-1 24m 10345Mi ds-idrepo-2 20m 10259Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 9m 1497Mi idm-65858d8c4c-97wdf 9m 1217Mi lodemon-86f768796c-ts724 2m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1373m 377Mi 02:07:08 DEBUG --- stderr --- 02:07:08 DEBUG 02:07:08 INFO 02:07:08 INFO [loop_until]: kubectl --namespace=xlou top node 02:07:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:07:08 INFO [loop_until]: OK (rc = 0) 02:07:08 DEBUG --- stdout --- 02:07:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1377Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5497Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 3423Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 61m 0% 4593Mi 7% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 2836Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 121m 0% 2117Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 68m 0% 2495Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 80m 0% 10932Mi 18% gke-xlou-cdm-ds-32e4dcb1-b374 76m 0% 10998Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3113m 19% 14083Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1417m 8% 1910Mi 3% 02:07:08 DEBUG --- stderr --- 02:07:08 DEBUG 02:08:08 INFO 02:08:08 INFO [loop_until]: kubectl --namespace=xlou top pods 02:08:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:08:08 INFO [loop_until]: OK (rc = 0) 02:08:08 DEBUG --- stdout --- 02:08:08 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 10m 4450Mi 
am-55f77847b7-c9bk2 9m 3418Mi am-55f77847b7-zpsrs 13m 2274Mi ds-cts-0 12m 371Mi ds-cts-1 10m 375Mi ds-cts-2 8m 369Mi ds-idrepo-0 2963m 13576Mi ds-idrepo-1 15m 10346Mi ds-idrepo-2 19m 10261Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8m 1507Mi idm-65858d8c4c-97wdf 11m 1227Mi lodemon-86f768796c-ts724 5m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1422m 377Mi 02:08:08 DEBUG --- stderr --- 02:08:08 DEBUG 02:08:08 INFO 02:08:08 INFO [loop_until]: kubectl --namespace=xlou top node 02:08:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:08:09 INFO [loop_until]: OK (rc = 0) 02:08:09 DEBUG --- stdout --- 02:08:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1382Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5495Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 71m 0% 3420Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 4591Mi 7% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 2846Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2133Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 78m 0% 2506Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 66m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 75m 0% 10933Mi 18% gke-xlou-cdm-ds-32e4dcb1-b374 66m 0% 11002Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3047m 19% 14125Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1488m 9% 1909Mi 3% 02:08:09 DEBUG --- stderr --- 02:08:09 DEBUG 02:09:08 INFO 02:09:08 INFO [loop_until]: kubectl --namespace=xlou top pods 02:09:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:09:08 INFO [loop_until]: OK (rc = 0) 02:09:08 DEBUG --- stdout --- 02:09:08 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 25m 4454Mi am-55f77847b7-c9bk2 9m 3419Mi am-55f77847b7-zpsrs 17m 2278Mi ds-cts-0 7m 371Mi ds-cts-1 7m 375Mi ds-cts-2 6m 369Mi ds-idrepo-0 10m 13575Mi ds-idrepo-1 14m 10346Mi ds-idrepo-2 22m 10261Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8m 1520Mi idm-65858d8c4c-97wdf 7m 1235Mi lodemon-86f768796c-ts724 5m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1m 98Mi 02:09:08 DEBUG --- stderr --- 02:09:08 DEBUG 02:09:09 INFO 02:09:09 INFO [loop_until]: kubectl --namespace=xlou top node 02:09:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:09:09 INFO [loop_until]: OK (rc = 0) 02:09:09 DEBUG --- stdout --- 02:09:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1379Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 84m 0% 5503Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 74m 0% 3426Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 4596Mi 7% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 2856Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 137m 0% 2131Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 74m 0% 2513Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 75m 0% 10934Mi 18% gke-xlou-cdm-ds-32e4dcb1-b374 62m 0% 11001Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 57m 0% 14127Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 67m 0% 1634Mi 2% 02:09:09 DEBUG --- stderr --- 02:09:09 DEBUG 02:10:08 INFO 02:10:08 INFO [loop_until]: kubectl --namespace=xlou top pods 02:10:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:10:08 INFO [loop_until]: OK (rc = 0) 02:10:08 DEBUG --- stdout --- 02:10:08 DEBUG 
NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 12m 4455Mi am-55f77847b7-c9bk2 13m 3420Mi am-55f77847b7-zpsrs 25m 2281Mi ds-cts-0 10m 372Mi ds-cts-1 7m 375Mi ds-cts-2 6m 369Mi ds-idrepo-0 17m 13576Mi ds-idrepo-1 2564m 12695Mi ds-idrepo-2 17m 10262Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 9m 1531Mi idm-65858d8c4c-97wdf 7m 1246Mi lodemon-86f768796c-ts724 2m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1051m 364Mi 02:10:08 DEBUG --- stderr --- 02:10:08 DEBUG 02:10:09 INFO 02:10:09 INFO [loop_until]: kubectl --namespace=xlou top node 02:10:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:10:09 INFO [loop_until]: OK (rc = 0) 02:10:09 DEBUG --- stdout --- 02:10:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1381Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5500Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 79m 0% 3429Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 71m 0% 4593Mi 7% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 2867Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2131Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 2520Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 70m 0% 10935Mi 18% gke-xlou-cdm-ds-32e4dcb1-b374 2655m 16% 12619Mi 21% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 64m 0% 14126Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1103m 6% 1896Mi 3% 02:10:09 DEBUG --- stderr --- 02:10:09 DEBUG 02:11:09 INFO 02:11:09 INFO [loop_until]: kubectl --namespace=xlou top pods 02:11:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:11:09 INFO [loop_until]: OK (rc = 0) 02:11:09 DEBUG --- stdout --- 02:11:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 18m 4456Mi am-55f77847b7-c9bk2 11m 3420Mi am-55f77847b7-zpsrs 9m 2284Mi ds-cts-0 7m 372Mi ds-cts-1 8m 376Mi ds-cts-2 12m 370Mi ds-idrepo-0 11m 13575Mi ds-idrepo-1 2712m 13392Mi ds-idrepo-2 17m 10262Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 7m 1541Mi idm-65858d8c4c-97wdf 8m 1257Mi lodemon-86f768796c-ts724 2m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1158m 364Mi 02:11:09 DEBUG --- stderr --- 02:11:09 DEBUG 02:11:09 INFO 02:11:09 INFO [loop_until]: kubectl --namespace=xlou top node 02:11:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:11:09 INFO [loop_until]: OK (rc = 0) 02:11:09 DEBUG --- stdout --- 02:11:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1380Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 75m 0% 5499Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 3435Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 68m 0% 4593Mi 7% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 2877Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2127Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 77m 0% 2533Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 70m 0% 10934Mi 18% gke-xlou-cdm-ds-32e4dcb1-b374 2795m 17% 13957Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 60m 0% 14122Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1247m 7% 1896Mi 3% 02:11:09 DEBUG --- stderr --- 02:11:09 DEBUG 02:12:09 INFO 02:12:09 INFO [loop_until]: kubectl --namespace=xlou top pods 02:12:09 INFO [loop_until]: (max_time=180, 
interval=5, expected_rc=[0] 02:12:09 INFO [loop_until]: OK (rc = 0) 02:12:09 DEBUG --- stdout --- 02:12:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 11m 4455Mi am-55f77847b7-c9bk2 9m 3420Mi am-55f77847b7-zpsrs 12m 2294Mi ds-cts-0 7m 372Mi ds-cts-1 12m 377Mi ds-cts-2 8m 370Mi ds-idrepo-0 12m 13576Mi ds-idrepo-1 2749m 13376Mi ds-idrepo-2 30m 10267Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8m 1553Mi idm-65858d8c4c-97wdf 7m 1268Mi lodemon-86f768796c-ts724 2m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1203m 367Mi 02:12:09 DEBUG --- stderr --- 02:12:09 DEBUG 02:12:09 INFO 02:12:09 INFO [loop_until]: kubectl --namespace=xlou top node 02:12:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:12:09 INFO [loop_until]: OK (rc = 0) 02:12:09 DEBUG --- stdout --- 02:12:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1379Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 70m 0% 5500Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 71m 0% 3443Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 4594Mi 7% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 2899Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 124m 0% 2130Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 74m 0% 2544Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 90m 0% 10939Mi 18% gke-xlou-cdm-ds-32e4dcb1-b374 2903m 18% 13934Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 62m 0% 14126Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1237m 7% 1895Mi 3% 02:12:09 DEBUG --- stderr --- 02:12:09 DEBUG 02:13:09 INFO 02:13:09 INFO [loop_until]: kubectl --namespace=xlou top pods 02:13:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:13:09 INFO [loop_until]: OK (rc = 0) 02:13:09 DEBUG --- stdout --- 02:13:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 12m 4456Mi am-55f77847b7-c9bk2 13m 3420Mi am-55f77847b7-zpsrs 11m 2304Mi ds-cts-0 6m 372Mi ds-cts-1 6m 377Mi ds-cts-2 7m 370Mi ds-idrepo-0 10m 13576Mi ds-idrepo-1 2914m 13397Mi ds-idrepo-2 19m 10266Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 12m 1564Mi idm-65858d8c4c-97wdf 10m 1279Mi lodemon-86f768796c-ts724 2m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1274m 367Mi 02:13:09 DEBUG --- stderr --- 02:13:09 DEBUG 02:13:09 INFO 02:13:09 INFO [loop_until]: kubectl --namespace=xlou top node 02:13:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:13:09 INFO [loop_until]: OK (rc = 0) 02:13:09 DEBUG --- stdout --- 02:13:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1380Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5504Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 71m 0% 3464Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 70m 0% 4594Mi 7% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 2902Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 122m 0% 2132Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 73m 0% 2555Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 80m 0% 10939Mi 18% gke-xlou-cdm-ds-32e4dcb1-b374 2847m 17% 13965Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 60m 0% 14127Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1376m 8% 1909Mi 3% 02:13:09 DEBUG --- stderr --- 02:13:09 DEBUG 02:14:09 INFO 
02:14:09 INFO [loop_until]: kubectl --namespace=xlou top pods 02:14:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:14:09 INFO [loop_until]: OK (rc = 0) 02:14:09 DEBUG --- stdout --- 02:14:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 9m 4456Mi am-55f77847b7-c9bk2 8m 3421Mi am-55f77847b7-zpsrs 10m 2315Mi ds-cts-0 6m 372Mi ds-cts-1 7m 377Mi ds-cts-2 8m 370Mi ds-idrepo-0 15m 13575Mi ds-idrepo-1 2920m 13627Mi ds-idrepo-2 18m 10267Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8m 1577Mi idm-65858d8c4c-97wdf 8m 1290Mi lodemon-86f768796c-ts724 2m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1348m 367Mi 02:14:09 DEBUG --- stderr --- 02:14:09 DEBUG 02:14:09 INFO 02:14:09 INFO [loop_until]: kubectl --namespace=xlou top node 02:14:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:14:09 INFO [loop_until]: OK (rc = 0) 02:14:09 DEBUG --- stdout --- 02:14:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1377Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5505Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 3465Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 4599Mi 7% gke-xlou-cdm-default-pool-f05840a3-bf2g 76m 0% 2913Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2130Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 2567Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 67m 0% 10943Mi 18% gke-xlou-cdm-ds-32e4dcb1-b374 3013m 18% 14184Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 58m 0% 14125Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1437m 9% 1899Mi 3% 02:14:09 DEBUG --- stderr --- 02:14:09 DEBUG 02:15:09 INFO 02:15:09 INFO [loop_until]: kubectl --namespace=xlou top pods 02:15:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:15:09 INFO [loop_until]: OK (rc = 0) 02:15:09 DEBUG --- stdout --- 02:15:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 9m 4457Mi am-55f77847b7-c9bk2 8m 3420Mi am-55f77847b7-zpsrs 8m 2324Mi ds-cts-0 7m 372Mi ds-cts-1 7m 377Mi ds-cts-2 6m 370Mi ds-idrepo-0 10m 13575Mi ds-idrepo-1 11m 13648Mi ds-idrepo-2 16m 10267Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 7m 1588Mi idm-65858d8c4c-97wdf 7m 1299Mi lodemon-86f768796c-ts724 2m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 22m 99Mi 02:15:09 DEBUG --- stderr --- 02:15:09 DEBUG 02:15:09 INFO 02:15:09 INFO [loop_until]: kubectl --namespace=xlou top node 02:15:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:15:09 INFO [loop_until]: OK (rc = 0) 02:15:09 DEBUG --- stdout --- 02:15:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1374Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5506Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 3473Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 4593Mi 7% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 2923Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 122m 0% 2126Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 75m 0% 2579Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 67m 0% 10940Mi 18% gke-xlou-cdm-ds-32e4dcb1-b374 60m 0% 14207Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 60m 0% 14128Mi 24% 
gke-xlou-cdm-frontend-a8771548-k40m 65m 0% 1633Mi 2% 02:15:09 DEBUG --- stderr --- 02:15:09 DEBUG 02:16:09 INFO 02:16:09 INFO [loop_until]: kubectl --namespace=xlou top pods 02:16:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:16:09 INFO [loop_until]: OK (rc = 0) 02:16:09 DEBUG --- stdout --- 02:16:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 10m 4457Mi am-55f77847b7-c9bk2 8m 3421Mi am-55f77847b7-zpsrs 9m 2334Mi ds-cts-0 8m 372Mi ds-cts-1 7m 377Mi ds-cts-2 6m 370Mi ds-idrepo-0 10m 13576Mi ds-idrepo-1 12m 13650Mi ds-idrepo-2 2474m 12022Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 7m 1599Mi idm-65858d8c4c-97wdf 7m 1310Mi lodemon-86f768796c-ts724 2m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1398m 377Mi 02:16:09 DEBUG --- stderr --- 02:16:09 DEBUG 02:16:09 INFO 02:16:09 INFO [loop_until]: kubectl --namespace=xlou top node 02:16:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:16:09 INFO [loop_until]: OK (rc = 0) 02:16:09 DEBUG --- stdout --- 02:16:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1377Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5505Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 3482Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 4597Mi 7% gke-xlou-cdm-default-pool-f05840a3-bf2g 69m 0% 2934Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2130Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 74m 0% 2587Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2521m 15% 12636Mi 21% gke-xlou-cdm-ds-32e4dcb1-b374 61m 0% 14212Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 59m 0% 14130Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1056m 6% 1906Mi 3% 02:16:09 DEBUG --- stderr --- 02:16:09 DEBUG 02:17:09 INFO 02:17:09 INFO [loop_until]: kubectl --namespace=xlou top pods 02:17:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:17:09 INFO [loop_until]: OK (rc = 0) 02:17:09 DEBUG --- stdout --- 02:17:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 10m 4460Mi am-55f77847b7-c9bk2 9m 3421Mi am-55f77847b7-zpsrs 9m 2344Mi ds-cts-0 9m 374Mi ds-cts-1 9m 377Mi ds-cts-2 8m 370Mi ds-idrepo-0 16m 13576Mi ds-idrepo-1 12m 13649Mi ds-idrepo-2 2544m 13333Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8m 1608Mi idm-65858d8c4c-97wdf 7m 1321Mi lodemon-86f768796c-ts724 2m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1151m 377Mi 02:17:09 DEBUG --- stderr --- 02:17:09 DEBUG 02:17:09 INFO 02:17:09 INFO [loop_until]: kubectl --namespace=xlou top node 02:17:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:17:10 INFO [loop_until]: OK (rc = 0) 02:17:10 DEBUG --- stdout --- 02:17:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1377Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5506Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 3493Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 4594Mi 7% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 2941Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 120m 0% 2133Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 72m 0% 2595Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2646m 16% 13923Mi 23% gke-xlou-cdm-ds-32e4dcb1-b374 61m 0% 14212Mi 24% 
gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 61m 0% 14131Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1243m 7% 1908Mi 3% 02:17:10 DEBUG --- stderr --- 02:17:10 DEBUG 02:18:09 INFO 02:18:09 INFO [loop_until]: kubectl --namespace=xlou top pods 02:18:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:18:09 INFO [loop_until]: OK (rc = 0) 02:18:09 DEBUG --- stdout --- 02:18:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 10m 4460Mi am-55f77847b7-c9bk2 8m 3421Mi am-55f77847b7-zpsrs 11m 2356Mi ds-cts-0 8m 373Mi ds-cts-1 7m 377Mi ds-cts-2 6m 370Mi ds-idrepo-0 16m 13576Mi ds-idrepo-1 11m 13649Mi ds-idrepo-2 2611m 13375Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 11m 1625Mi idm-65858d8c4c-97wdf 11m 1333Mi lodemon-86f768796c-ts724 2m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1186m 380Mi 02:18:09 DEBUG --- stderr --- 02:18:09 DEBUG 02:18:10 INFO 02:18:10 INFO [loop_until]: kubectl --namespace=xlou top node 02:18:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:18:10 INFO [loop_until]: OK (rc = 0) 02:18:10 DEBUG --- stdout --- 02:18:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1380Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 5507Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 3506Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 4598Mi 7% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 2956Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 121m 0% 2134Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 80m 0% 2609Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2760m 17% 13966Mi 23% gke-xlou-cdm-ds-32e4dcb1-b374 62m 0% 14212Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 66m 0% 14126Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1277m 8% 1913Mi 3% 02:18:10 DEBUG --- stderr --- 02:18:10 DEBUG 02:19:09 INFO 02:19:09 INFO [loop_until]: kubectl --namespace=xlou top pods 02:19:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:19:09 INFO [loop_until]: OK (rc = 0) 02:19:09 DEBUG --- stdout --- 02:19:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 13m 4466Mi am-55f77847b7-c9bk2 16m 3428Mi am-55f77847b7-zpsrs 10m 2366Mi ds-cts-0 7m 373Mi ds-cts-1 7m 377Mi ds-cts-2 7m 370Mi ds-idrepo-0 10m 13577Mi ds-idrepo-1 10m 13649Mi ds-idrepo-2 2728m 13492Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 9m 1633Mi idm-65858d8c4c-97wdf 8m 1345Mi lodemon-86f768796c-ts724 2m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1270m 381Mi 02:19:09 DEBUG --- stderr --- 02:19:09 DEBUG 02:19:10 INFO 02:19:10 INFO [loop_until]: kubectl --namespace=xlou top node 02:19:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:19:10 INFO [loop_until]: OK (rc = 0) 02:19:10 DEBUG --- stdout --- 02:19:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 5516Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 72m 0% 3519Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 72m 0% 4605Mi 7% gke-xlou-cdm-default-pool-f05840a3-bf2g 78m 0% 2969Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 122m 0% 2148Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 77m 0% 2623Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1104Mi 
1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2727m 17% 14078Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 65m 0% 14212Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 58m 0% 14126Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1371m 8% 1911Mi 3% 02:19:10 DEBUG --- stderr --- 02:19:10 DEBUG 02:20:09 INFO 02:20:09 INFO [loop_until]: kubectl --namespace=xlou top pods 02:20:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:20:09 INFO [loop_until]: OK (rc = 0) 02:20:09 DEBUG --- stdout --- 02:20:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 11m 4466Mi am-55f77847b7-c9bk2 9m 3428Mi am-55f77847b7-zpsrs 8m 2379Mi ds-cts-0 6m 373Mi ds-cts-1 7m 377Mi ds-cts-2 6m 370Mi ds-idrepo-0 11m 13575Mi ds-idrepo-1 11m 13649Mi ds-idrepo-2 2908m 13518Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 6m 1649Mi idm-65858d8c4c-97wdf 14m 1354Mi lodemon-86f768796c-ts724 2m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1322m 380Mi 02:20:09 DEBUG --- stderr --- 02:20:09 DEBUG 02:20:10 INFO 02:20:10 INFO [loop_until]: kubectl --namespace=xlou top node 02:20:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:20:10 INFO [loop_until]: OK (rc = 0) 02:20:10 DEBUG --- stdout --- 02:20:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1379Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5513Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 3529Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 4602Mi 7% gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 2981Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 120m 0% 2132Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 76m 0% 2633Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 3009m 18% 14098Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 60m 0% 14213Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 60m 0% 14125Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1398m 8% 1912Mi 3% 02:20:10 DEBUG --- stderr --- 02:20:10 DEBUG 02:21:10 INFO 02:21:10 INFO [loop_until]: kubectl --namespace=xlou top pods 02:21:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:21:10 INFO [loop_until]: OK (rc = 0) 02:21:10 DEBUG --- stdout --- 02:21:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 9m 4469Mi am-55f77847b7-c9bk2 9m 3428Mi am-55f77847b7-zpsrs 9m 2394Mi ds-cts-0 6m 373Mi ds-cts-1 6m 377Mi ds-cts-2 5m 370Mi ds-idrepo-0 10m 13575Mi ds-idrepo-1 12m 13648Mi ds-idrepo-2 307m 13717Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8m 1660Mi idm-65858d8c4c-97wdf 6m 1365Mi lodemon-86f768796c-ts724 2m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 387m 99Mi 02:21:10 DEBUG --- stderr --- 02:21:10 DEBUG 02:21:10 INFO 02:21:10 INFO [loop_until]: kubectl --namespace=xlou top node 02:21:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:21:10 INFO [loop_until]: OK (rc = 0) 02:21:10 DEBUG --- stdout --- 02:21:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1382Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 5517Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 3539Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 4603Mi 7% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 2992Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 121m 0% 2135Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 76m 0% 2644Mi 
4% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 139m 0% 14293Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 58m 0% 14214Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 57m 0% 14127Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 68m 0% 1633Mi 2% 02:21:10 DEBUG --- stderr --- 02:21:10 DEBUG 02:22:10 INFO 02:22:10 INFO [loop_until]: kubectl --namespace=xlou top pods 02:22:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:22:10 INFO [loop_until]: OK (rc = 0) 02:22:10 DEBUG --- stdout --- 02:22:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 25m 4465Mi am-55f77847b7-c9bk2 8m 3427Mi am-55f77847b7-zpsrs 8m 2403Mi ds-cts-0 10m 375Mi ds-cts-1 6m 377Mi ds-cts-2 11m 369Mi ds-idrepo-0 9m 13575Mi ds-idrepo-1 11m 13648Mi ds-idrepo-2 14m 13717Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 6m 1669Mi idm-65858d8c4c-97wdf 6m 1377Mi lodemon-86f768796c-ts724 2m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1656m 399Mi 02:22:10 DEBUG --- stderr --- 02:22:10 DEBUG 02:22:10 INFO 02:22:10 INFO [loop_until]: kubectl --namespace=xlou top node 02:22:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:22:10 INFO [loop_until]: OK (rc = 0) 02:22:10 DEBUG --- stdout --- 02:22:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1380Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 80m 0% 5511Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 3552Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 4603Mi 7% gke-xlou-cdm-default-pool-f05840a3-bf2g 69m 0% 3007Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2135Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 73m 0% 2656Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 67m 0% 14294Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 60m 0% 14225Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 57m 0% 14129Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1907m 12% 1928Mi 3% 02:22:10 DEBUG --- stderr --- 02:22:10 DEBUG 02:23:10 INFO 02:23:10 INFO [loop_until]: kubectl --namespace=xlou top pods 02:23:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:23:10 INFO [loop_until]: OK (rc = 0) 02:23:10 DEBUG --- stdout --- 02:23:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 111m 4639Mi am-55f77847b7-c9bk2 104m 3672Mi am-55f77847b7-zpsrs 127m 3809Mi ds-cts-0 7m 375Mi ds-cts-1 7m 379Mi ds-cts-2 7m 370Mi ds-idrepo-0 8179m 13730Mi ds-idrepo-1 1581m 13658Mi ds-idrepo-2 1717m 13671Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 9253m 4236Mi idm-65858d8c4c-97wdf 8671m 3947Mi lodemon-86f768796c-ts724 2m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1165m 527Mi 02:23:10 DEBUG --- stderr --- 02:23:10 DEBUG 02:23:10 INFO 02:23:10 INFO [loop_until]: kubectl --namespace=xlou top node 02:23:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:23:10 INFO [loop_until]: OK (rc = 0) 02:23:10 DEBUG --- stdout --- 02:23:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1392Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 151m 0% 5520Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 176m 1% 5070Mi 8% gke-xlou-cdm-default-pool-f05840a3-9p4b 160m 1% 4845Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 9106m 57% 5554Mi 9% 
gke-xlou-cdm-default-pool-f05840a3-h81k 2156m 13% 2169Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8631m 54% 5208Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 1734m 10% 14252Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1869m 11% 14217Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 8131m 51% 14268Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1220m 7% 2046Mi 3% 02:23:10 DEBUG --- stderr --- 02:23:10 DEBUG 02:24:10 INFO 02:24:10 INFO [loop_until]: kubectl --namespace=xlou top pods 02:24:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:24:10 INFO [loop_until]: OK (rc = 0) 02:24:10 DEBUG --- stdout --- 02:24:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 107m 4638Mi am-55f77847b7-c9bk2 92m 3677Mi am-55f77847b7-zpsrs 111m 5168Mi ds-cts-0 6m 375Mi ds-cts-1 7m 379Mi ds-cts-2 7m 370Mi ds-idrepo-0 8728m 13819Mi ds-idrepo-1 2201m 13735Mi ds-idrepo-2 1898m 13712Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8841m 4264Mi idm-65858d8c4c-97wdf 7780m 3976Mi lodemon-86f768796c-ts724 2m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1112m 522Mi 02:24:10 DEBUG --- stderr --- 02:24:10 DEBUG 02:24:10 INFO 02:24:10 INFO [loop_until]: kubectl --namespace=xlou top node 02:24:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:24:10 INFO [loop_until]: OK (rc = 0) 02:24:10 DEBUG --- stdout --- 02:24:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1380Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 163m 1% 5680Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 170m 1% 6708Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 152m 0% 4849Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 8971m 56% 5585Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2093m 13% 2161Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8235m 51% 5241Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2082m 13% 14267Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2602m 16% 14275Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 8935m 56% 14348Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1196m 7% 2047Mi 3% 02:24:10 DEBUG --- stderr --- 02:24:10 DEBUG 02:25:10 INFO 02:25:10 INFO [loop_until]: kubectl --namespace=xlou top pods 02:25:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:25:10 INFO [loop_until]: OK (rc = 0) 02:25:10 DEBUG --- stdout --- 02:25:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 100m 4638Mi am-55f77847b7-c9bk2 107m 4620Mi am-55f77847b7-zpsrs 90m 5664Mi ds-cts-0 7m 375Mi ds-cts-1 8m 379Mi ds-cts-2 8m 370Mi ds-idrepo-0 8634m 13822Mi ds-idrepo-1 2260m 13823Mi ds-idrepo-2 2151m 13823Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8803m 4276Mi idm-65858d8c4c-97wdf 7765m 4002Mi lodemon-86f768796c-ts724 2m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1114m 523Mi 02:25:10 DEBUG --- stderr --- 02:25:10 DEBUG 02:25:10 INFO 02:25:10 INFO [loop_until]: kubectl --namespace=xlou top node 02:25:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:25:11 INFO [loop_until]: OK (rc = 0) 02:25:11 DEBUG --- stdout --- 02:25:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1373Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 151m 0% 5683Mi 9% 
gke-xlou-cdm-default-pool-f05840a3-976h 149m 0% 6818Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 164m 1% 6105Mi 10% gke-xlou-cdm-default-pool-f05840a3-bf2g 9033m 56% 5601Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2164m 13% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7965m 50% 5273Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2353m 14% 14376Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2326m 14% 14365Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 8934m 56% 14339Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1144m 7% 2046Mi 3% 02:25:11 DEBUG --- stderr --- 02:25:11 DEBUG 02:26:10 INFO 02:26:10 INFO [loop_until]: kubectl --namespace=xlou top pods 02:26:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:26:10 INFO [loop_until]: OK (rc = 0) 02:26:10 DEBUG --- stdout --- 02:26:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 125m 5748Mi am-55f77847b7-c9bk2 103m 5707Mi am-55f77847b7-zpsrs 99m 5686Mi ds-cts-0 7m 375Mi ds-cts-1 9m 379Mi ds-cts-2 8m 371Mi ds-idrepo-0 8362m 13824Mi ds-idrepo-1 2105m 13823Mi ds-idrepo-2 2075m 13831Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8687m 4291Mi idm-65858d8c4c-97wdf 7553m 4033Mi lodemon-86f768796c-ts724 1m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1082m 523Mi 02:26:10 DEBUG --- stderr --- 02:26:10 DEBUG 02:26:11 INFO 02:26:11 INFO [loop_until]: kubectl --namespace=xlou top node 02:26:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:26:11 INFO [loop_until]: OK (rc = 0) 02:26:11 DEBUG --- stdout --- 02:26:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1372Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 170m 1% 6790Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 157m 0% 6827Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 149m 0% 6876Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 8883m 55% 5615Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2094m 13% 2161Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8058m 50% 5297Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2121m 13% 14366Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2044m 12% 14360Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 8435m 53% 14347Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1180m 7% 2049Mi 3% 02:26:11 DEBUG --- stderr --- 02:26:11 DEBUG 02:27:10 INFO 02:27:10 INFO [loop_until]: kubectl --namespace=xlou top pods 02:27:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:27:10 INFO [loop_until]: OK (rc = 0) 02:27:10 DEBUG --- stdout --- 02:27:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 100m 5748Mi am-55f77847b7-c9bk2 88m 5707Mi am-55f77847b7-zpsrs 94m 5719Mi ds-cts-0 7m 375Mi ds-cts-1 6m 379Mi ds-cts-2 7m 370Mi ds-idrepo-0 9741m 13813Mi ds-idrepo-1 2855m 13823Mi ds-idrepo-2 2657m 13822Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8586m 4308Mi idm-65858d8c4c-97wdf 8052m 4063Mi lodemon-86f768796c-ts724 2m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1086m 524Mi 02:27:10 DEBUG --- stderr --- 02:27:10 DEBUG 02:27:11 INFO 02:27:11 INFO [loop_until]: kubectl --namespace=xlou top node 02:27:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:27:11 INFO [loop_until]: OK (rc = 0) 02:27:11 DEBUG --- stdout --- 02:27:11 
DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1374Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 162m 1% 6791Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 149m 0% 6857Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 143m 0% 6877Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 8998m 56% 5631Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2161m 13% 2163Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8234m 51% 5333Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2712m 17% 14374Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2881m 18% 14380Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9614m 60% 14341Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1162m 7% 2050Mi 3% 02:27:11 DEBUG --- stderr --- 02:27:11 DEBUG 02:28:10 INFO 02:28:10 INFO [loop_until]: kubectl --namespace=xlou top pods 02:28:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:28:10 INFO [loop_until]: OK (rc = 0) 02:28:10 DEBUG --- stdout --- 02:28:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 93m 5748Mi am-55f77847b7-c9bk2 87m 5713Mi am-55f77847b7-zpsrs 95m 5724Mi ds-cts-0 6m 375Mi ds-cts-1 13m 380Mi ds-cts-2 7m 370Mi ds-idrepo-0 9112m 13811Mi ds-idrepo-1 2262m 13823Mi ds-idrepo-2 2121m 13840Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8659m 4328Mi idm-65858d8c4c-97wdf 7834m 4092Mi lodemon-86f768796c-ts724 2m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1118m 525Mi 02:28:10 DEBUG --- stderr --- 02:28:10 DEBUG 02:28:11 INFO 02:28:11 INFO [loop_until]: kubectl --namespace=xlou top node 02:28:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:28:11 INFO [loop_until]: OK (rc = 0) 02:28:11 DEBUG --- stdout --- 02:28:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1380Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 149m 0% 6795Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 154m 0% 6865Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 148m 0% 6882Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 8983m 56% 5654Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2083m 13% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8126m 51% 5364Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2204m 13% 14380Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2257m 14% 14358Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9166m 57% 14335Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1204m 7% 2051Mi 3% 02:28:11 DEBUG --- stderr --- 02:28:11 DEBUG 02:29:10 INFO 02:29:10 INFO [loop_until]: kubectl --namespace=xlou top pods 02:29:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:29:10 INFO [loop_until]: OK (rc = 0) 02:29:10 DEBUG --- stdout --- 02:29:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 95m 5755Mi am-55f77847b7-c9bk2 87m 5717Mi am-55f77847b7-zpsrs 87m 5724Mi ds-cts-0 6m 376Mi ds-cts-1 6m 380Mi ds-cts-2 7m 370Mi ds-idrepo-0 9889m 13808Mi ds-idrepo-1 2435m 13812Mi ds-idrepo-2 2823m 13810Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8632m 4350Mi idm-65858d8c4c-97wdf 8019m 4128Mi lodemon-86f768796c-ts724 2m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1101m 525Mi 02:29:10 DEBUG --- stderr --- 02:29:10 DEBUG 02:29:11 INFO 02:29:11 INFO [loop_until]: kubectl 
--namespace=xlou top node 02:29:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:29:11 INFO [loop_until]: OK (rc = 0) 02:29:11 DEBUG --- stdout --- 02:29:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 152m 0% 6800Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 148m 0% 6868Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 140m 0% 6883Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 8999m 56% 5673Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2149m 13% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7854m 49% 5397Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2840m 17% 14366Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2649m 16% 14365Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9779m 61% 14343Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1178m 7% 2050Mi 3% 02:29:11 DEBUG --- stderr --- 02:29:11 DEBUG 02:30:10 INFO 02:30:10 INFO [loop_until]: kubectl --namespace=xlou top pods 02:30:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:30:11 INFO [loop_until]: OK (rc = 0) 02:30:11 DEBUG --- stdout --- 02:30:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 91m 5755Mi am-55f77847b7-c9bk2 86m 5717Mi am-55f77847b7-zpsrs 103m 5730Mi ds-cts-0 7m 375Mi ds-cts-1 6m 380Mi ds-cts-2 8m 371Mi ds-idrepo-0 9163m 13825Mi ds-idrepo-1 2334m 13848Mi ds-idrepo-2 2472m 13836Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8445m 4374Mi idm-65858d8c4c-97wdf 7734m 4163Mi lodemon-86f768796c-ts724 2m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1174m 525Mi 02:30:11 DEBUG --- stderr --- 02:30:11 DEBUG 02:30:11 INFO 02:30:11 INFO [loop_until]: kubectl --namespace=xlou top node 02:30:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:30:11 INFO [loop_until]: OK (rc = 0) 02:30:11 DEBUG --- stdout --- 02:30:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1381Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 147m 0% 6796Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 151m 0% 6872Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 137m 0% 6887Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9019m 56% 5692Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2091m 13% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8214m 51% 5428Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2671m 16% 14364Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2785m 17% 14374Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9364m 58% 14347Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1226m 7% 2052Mi 3% 02:30:11 DEBUG --- stderr --- 02:30:11 DEBUG 02:31:11 INFO 02:31:11 INFO [loop_until]: kubectl --namespace=xlou top pods 02:31:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:31:11 INFO [loop_until]: OK (rc = 0) 02:31:11 DEBUG --- stdout --- 02:31:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 91m 5755Mi am-55f77847b7-c9bk2 86m 5718Mi am-55f77847b7-zpsrs 91m 5730Mi ds-cts-0 6m 375Mi ds-cts-1 9m 380Mi ds-cts-2 6m 371Mi ds-idrepo-0 9751m 13806Mi ds-idrepo-1 2801m 13813Mi ds-idrepo-2 2825m 13842Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8590m 4408Mi idm-65858d8c4c-97wdf 7925m 4204Mi lodemon-86f768796c-ts724 5m 65Mi 
login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1150m 526Mi 02:31:11 DEBUG --- stderr --- 02:31:11 DEBUG 02:31:11 INFO 02:31:11 INFO [loop_until]: kubectl --namespace=xlou top node 02:31:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:31:11 INFO [loop_until]: OK (rc = 0) 02:31:11 DEBUG --- stdout --- 02:31:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1381Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 150m 0% 6797Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 144m 0% 6873Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 146m 0% 6887Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 8961m 56% 5734Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2163m 13% 2169Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8364m 52% 5470Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2862m 18% 14355Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2937m 18% 14383Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9737m 61% 14376Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1222m 7% 2049Mi 3% 02:31:11 DEBUG --- stderr --- 02:31:11 DEBUG 02:32:11 INFO 02:32:11 INFO [loop_until]: kubectl --namespace=xlou top pods 02:32:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:32:11 INFO [loop_until]: OK (rc = 0) 02:32:11 DEBUG --- stdout --- 02:32:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 94m 5756Mi am-55f77847b7-c9bk2 87m 5719Mi am-55f77847b7-zpsrs 88m 5730Mi ds-cts-0 8m 375Mi ds-cts-1 6m 380Mi ds-cts-2 6m 371Mi ds-idrepo-0 9214m 13849Mi ds-idrepo-1 2678m 13828Mi ds-idrepo-2 2581m 13783Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8532m 4446Mi idm-65858d8c4c-97wdf 7989m 4236Mi lodemon-86f768796c-ts724 2m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1182m 527Mi 02:32:11 DEBUG --- stderr --- 02:32:11 DEBUG 02:32:11 INFO 02:32:11 INFO [loop_until]: kubectl --namespace=xlou top node 02:32:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:32:11 INFO [loop_until]: OK (rc = 0) 02:32:11 DEBUG --- stdout --- 02:32:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1380Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 146m 0% 6799Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 148m 0% 6869Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 141m 0% 6890Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9073m 57% 5771Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2169m 13% 2167Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8342m 52% 5506Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2426m 15% 14331Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2725m 17% 14328Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9931m 62% 14331Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1204m 7% 2049Mi 3% 02:32:11 DEBUG --- stderr --- 02:32:11 DEBUG 02:33:11 INFO 02:33:11 INFO [loop_until]: kubectl --namespace=xlou top pods 02:33:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:33:11 INFO [loop_until]: OK (rc = 0) 02:33:11 DEBUG --- stdout --- 02:33:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 103m 5757Mi am-55f77847b7-c9bk2 91m 5719Mi am-55f77847b7-zpsrs 91m 5731Mi ds-cts-0 6m 375Mi ds-cts-1 9m 381Mi ds-cts-2 8m 372Mi ds-idrepo-0 9204m 13814Mi ds-idrepo-1 2588m 13818Mi 
ds-idrepo-2 2344m 13825Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8578m 4483Mi idm-65858d8c4c-97wdf 7776m 4272Mi lodemon-86f768796c-ts724 2m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1122m 527Mi 02:33:11 DEBUG --- stderr --- 02:33:11 DEBUG 02:33:11 INFO 02:33:11 INFO [loop_until]: kubectl --namespace=xlou top node 02:33:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:33:12 INFO [loop_until]: OK (rc = 0) 02:33:12 DEBUG --- stdout --- 02:33:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1382Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 160m 1% 6800Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 146m 0% 6871Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 149m 0% 6892Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9133m 57% 5811Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2161m 13% 2166Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8167m 51% 5537Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2697m 16% 14357Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2581m 16% 14365Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9340m 58% 14334Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1223m 7% 2049Mi 3% 02:33:12 DEBUG --- stderr --- 02:33:12 DEBUG 02:34:11 INFO 02:34:11 INFO [loop_until]: kubectl --namespace=xlou top pods 02:34:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:34:11 INFO [loop_until]: OK (rc = 0) 02:34:11 DEBUG --- stdout --- 02:34:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 88m 5757Mi am-55f77847b7-c9bk2 88m 5722Mi am-55f77847b7-zpsrs 92m 5731Mi ds-cts-0 6m 375Mi ds-cts-1 7m 380Mi ds-cts-2 7m 372Mi ds-idrepo-0 9362m 13825Mi ds-idrepo-1 2059m 13845Mi ds-idrepo-2 1997m 13848Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8557m 4514Mi idm-65858d8c4c-97wdf 7889m 4302Mi lodemon-86f768796c-ts724 2m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1116m 529Mi 02:34:11 DEBUG --- stderr --- 02:34:11 DEBUG 02:34:12 INFO 02:34:12 INFO [loop_until]: kubectl --namespace=xlou top node 02:34:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:34:12 INFO [loop_until]: OK (rc = 0) 02:34:12 DEBUG --- stdout --- 02:34:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1381Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 151m 0% 6798Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 141m 0% 6874Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 146m 0% 6893Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9012m 56% 5845Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2159m 13% 2167Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8195m 51% 5569Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2072m 13% 14376Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1937m 12% 14364Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9362m 58% 14349Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1189m 7% 2053Mi 3% 02:34:12 DEBUG --- stderr --- 02:34:12 DEBUG 02:35:11 INFO 02:35:11 INFO [loop_until]: kubectl --namespace=xlou top pods 02:35:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:35:11 INFO [loop_until]: OK (rc = 0) 02:35:11 DEBUG --- stdout --- 02:35:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 93m 5757Mi 
am-55f77847b7-c9bk2 90m 5723Mi am-55f77847b7-zpsrs 89m 5734Mi ds-cts-0 7m 375Mi ds-cts-1 8m 380Mi ds-cts-2 7m 371Mi ds-idrepo-0 9029m 13815Mi ds-idrepo-1 2901m 13857Mi ds-idrepo-2 2415m 13809Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8891m 4550Mi idm-65858d8c4c-97wdf 7779m 4334Mi lodemon-86f768796c-ts724 2m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1160m 529Mi 02:35:11 DEBUG --- stderr --- 02:35:11 DEBUG 02:35:12 INFO 02:35:12 INFO [loop_until]: kubectl --namespace=xlou top node 02:35:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:35:12 INFO [loop_until]: OK (rc = 0) 02:35:12 DEBUG --- stdout --- 02:35:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1380Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 150m 0% 6799Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 148m 0% 6875Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 149m 0% 6896Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9062m 57% 5875Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2153m 13% 2167Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8109m 51% 5599Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2592m 16% 14359Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2900m 18% 14338Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9763m 61% 14342Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1216m 7% 2050Mi 3% 02:35:12 DEBUG --- stderr --- 02:35:12 DEBUG 02:36:11 INFO 02:36:11 INFO [loop_until]: kubectl --namespace=xlou top pods 02:36:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:36:11 INFO [loop_until]: OK (rc = 0) 02:36:11 DEBUG --- stdout --- 02:36:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 92m 5757Mi am-55f77847b7-c9bk2 88m 5727Mi am-55f77847b7-zpsrs 90m 5734Mi ds-cts-0 7m 375Mi ds-cts-1 7m 381Mi ds-cts-2 7m 371Mi ds-idrepo-0 9007m 13824Mi ds-idrepo-1 2100m 13850Mi ds-idrepo-2 2011m 13833Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8852m 4591Mi idm-65858d8c4c-97wdf 8058m 4365Mi lodemon-86f768796c-ts724 2m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1142m 530Mi 02:36:11 DEBUG --- stderr --- 02:36:11 DEBUG 02:36:12 INFO 02:36:12 INFO [loop_until]: kubectl --namespace=xlou top node 02:36:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:36:12 INFO [loop_until]: OK (rc = 0) 02:36:12 DEBUG --- stdout --- 02:36:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1379Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 149m 0% 6801Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 149m 0% 6879Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 146m 0% 6896Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 8766m 55% 5912Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2170m 13% 2168Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8240m 51% 5625Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2168m 13% 14372Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2333m 14% 14384Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9042m 56% 14338Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1207m 7% 2049Mi 3% 02:36:12 DEBUG --- stderr --- 02:36:12 DEBUG 02:37:11 INFO 02:37:11 INFO [loop_until]: kubectl --namespace=xlou top pods 02:37:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:37:11 INFO 
[loop_until]: OK (rc = 0) 02:37:11 DEBUG --- stdout --- 02:37:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 93m 5758Mi am-55f77847b7-c9bk2 88m 5728Mi am-55f77847b7-zpsrs 94m 5734Mi ds-cts-0 7m 376Mi ds-cts-1 6m 380Mi ds-cts-2 7m 371Mi ds-idrepo-0 10452m 13815Mi ds-idrepo-1 3119m 13829Mi ds-idrepo-2 3047m 13817Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8634m 4627Mi idm-65858d8c4c-97wdf 7888m 4398Mi lodemon-86f768796c-ts724 2m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1170m 530Mi 02:37:11 DEBUG --- stderr --- 02:37:11 DEBUG 02:37:12 INFO 02:37:12 INFO [loop_until]: kubectl --namespace=xlou top node 02:37:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:37:12 INFO [loop_until]: OK (rc = 0) 02:37:12 DEBUG --- stdout --- 02:37:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1382Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 148m 0% 6801Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 152m 0% 6875Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 140m 0% 6898Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9169m 57% 5951Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2160m 13% 2161Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8217m 51% 5658Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2829m 17% 14368Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2898m 18% 14373Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 10359m 65% 14372Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1246m 7% 2049Mi 3% 02:37:12 DEBUG --- stderr --- 02:37:12 DEBUG 02:38:11 INFO 02:38:11 INFO [loop_until]: kubectl --namespace=xlou top pods 02:38:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:38:11 INFO [loop_until]: OK (rc = 0) 02:38:11 DEBUG --- stdout --- 02:38:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 91m 5758Mi am-55f77847b7-c9bk2 82m 5728Mi am-55f77847b7-zpsrs 88m 5735Mi ds-cts-0 7m 377Mi ds-cts-1 6m 380Mi ds-cts-2 8m 371Mi ds-idrepo-0 8954m 13855Mi ds-idrepo-1 2756m 13823Mi ds-idrepo-2 2520m 13804Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8208m 4663Mi idm-65858d8c4c-97wdf 7527m 4423Mi lodemon-86f768796c-ts724 2m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1123m 531Mi 02:38:11 DEBUG --- stderr --- 02:38:11 DEBUG 02:38:12 INFO 02:38:12 INFO [loop_until]: kubectl --namespace=xlou top node 02:38:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:38:12 INFO [loop_until]: OK (rc = 0) 02:38:12 DEBUG --- stdout --- 02:38:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1382Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 148m 0% 6805Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 147m 0% 6875Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 142m 0% 6896Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 8902m 56% 5984Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2135m 13% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8031m 50% 5687Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2649m 16% 14345Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2911m 18% 14353Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 67m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9251m 58% 14339Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1209m 7% 2049Mi 3% 02:38:12 DEBUG --- stderr --- 02:38:12 
DEBUG 02:39:11 INFO 02:39:11 INFO [loop_until]: kubectl --namespace=xlou top pods 02:39:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:39:11 INFO [loop_until]: OK (rc = 0) 02:39:11 DEBUG --- stdout --- 02:39:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 88m 5758Mi am-55f77847b7-c9bk2 85m 5728Mi am-55f77847b7-zpsrs 87m 5735Mi ds-cts-0 6m 376Mi ds-cts-1 8m 381Mi ds-cts-2 6m 371Mi ds-idrepo-0 10033m 13822Mi ds-idrepo-1 2708m 13834Mi ds-idrepo-2 2450m 13800Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8639m 4700Mi idm-65858d8c4c-97wdf 8138m 4460Mi lodemon-86f768796c-ts724 2m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1099m 531Mi 02:39:11 DEBUG --- stderr --- 02:39:11 DEBUG 02:39:12 INFO 02:39:12 INFO [loop_until]: kubectl --namespace=xlou top node 02:39:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:39:12 INFO [loop_until]: OK (rc = 0) 02:39:12 DEBUG --- stdout --- 02:39:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1379Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 143m 0% 6801Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 152m 0% 6875Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 143m 0% 6896Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9104m 57% 6022Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2183m 13% 2155Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8299m 52% 5723Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2628m 16% 14361Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2683m 16% 14355Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9806m 61% 14338Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1211m 7% 2050Mi 3% 02:39:12 DEBUG --- stderr --- 02:39:12 DEBUG 02:40:12 INFO 02:40:12 INFO [loop_until]: kubectl --namespace=xlou top pods 02:40:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:40:12 INFO [loop_until]: OK (rc = 0) 02:40:12 DEBUG --- stdout --- 02:40:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 93m 5759Mi am-55f77847b7-c9bk2 90m 5728Mi am-55f77847b7-zpsrs 89m 5735Mi ds-cts-0 8m 376Mi ds-cts-1 8m 381Mi ds-cts-2 7m 371Mi ds-idrepo-0 9960m 13831Mi ds-idrepo-1 2496m 13790Mi ds-idrepo-2 2652m 13791Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8668m 4740Mi idm-65858d8c4c-97wdf 7916m 4491Mi lodemon-86f768796c-ts724 2m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1165m 532Mi 02:40:12 DEBUG --- stderr --- 02:40:12 DEBUG 02:40:12 INFO 02:40:12 INFO [loop_until]: kubectl --namespace=xlou top node 02:40:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:40:12 INFO [loop_until]: OK (rc = 0) 02:40:12 DEBUG --- stdout --- 02:40:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1382Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 149m 0% 6805Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 147m 0% 6876Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 141m 0% 6896Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9012m 56% 6062Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2170m 13% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8323m 52% 5766Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2514m 15% 14340Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2851m 17% 14355Mi 24% 
gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9830m 61% 14343Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1209m 7% 2053Mi 3% 02:40:12 DEBUG --- stderr --- 02:40:12 DEBUG 02:41:12 INFO 02:41:12 INFO [loop_until]: kubectl --namespace=xlou top pods 02:41:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:41:12 INFO [loop_until]: OK (rc = 0) 02:41:12 DEBUG --- stdout --- 02:41:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 91m 5759Mi am-55f77847b7-c9bk2 88m 5728Mi am-55f77847b7-zpsrs 94m 5735Mi ds-cts-0 7m 376Mi ds-cts-1 8m 383Mi ds-cts-2 7m 372Mi ds-idrepo-0 8801m 13831Mi ds-idrepo-1 2037m 13849Mi ds-idrepo-2 2171m 13827Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8572m 4773Mi idm-65858d8c4c-97wdf 7864m 4521Mi lodemon-86f768796c-ts724 2m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1131m 532Mi 02:41:12 DEBUG --- stderr --- 02:41:12 DEBUG 02:41:12 INFO 02:41:12 INFO [loop_until]: kubectl --namespace=xlou top node 02:41:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:41:13 INFO [loop_until]: OK (rc = 0) 02:41:13 DEBUG --- stdout --- 02:41:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1379Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 150m 0% 6802Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 151m 0% 6876Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 139m 0% 6910Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9008m 56% 6095Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2168m 13% 2167Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8238m 51% 5784Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2171m 13% 14359Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2160m 13% 14388Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9139m 57% 14355Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1210m 7% 2053Mi 3% 02:41:13 DEBUG --- stderr --- 02:41:13 DEBUG 02:42:12 INFO 02:42:12 INFO [loop_until]: kubectl --namespace=xlou top pods 02:42:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:42:12 INFO [loop_until]: OK (rc = 0) 02:42:12 DEBUG --- stdout --- 02:42:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 94m 5759Mi am-55f77847b7-c9bk2 87m 5728Mi am-55f77847b7-zpsrs 95m 5736Mi ds-cts-0 10m 376Mi ds-cts-1 6m 384Mi ds-cts-2 8m 372Mi ds-idrepo-0 9395m 13859Mi ds-idrepo-1 2544m 13825Mi ds-idrepo-2 2684m 13824Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8595m 4822Mi idm-65858d8c4c-97wdf 8023m 4555Mi lodemon-86f768796c-ts724 4m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1140m 532Mi 02:42:12 DEBUG --- stderr --- 02:42:12 DEBUG 02:42:13 INFO 02:42:13 INFO [loop_until]: kubectl --namespace=xlou top node 02:42:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:42:13 INFO [loop_until]: OK (rc = 0) 02:42:13 DEBUG --- stdout --- 02:42:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1381Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 151m 0% 6802Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 151m 0% 6878Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 142m 0% 6897Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9043m 56% 6140Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2187m 13% 2169Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8352m 52% 5814Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 
0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 64m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2638m 16% 14372Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2792m 17% 14399Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9491m 59% 14350Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1197m 7% 2052Mi 3% 02:42:13 DEBUG --- stderr --- 02:42:13 DEBUG 02:43:12 INFO 02:43:12 INFO [loop_until]: kubectl --namespace=xlou top pods 02:43:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:43:12 INFO [loop_until]: OK (rc = 0) 02:43:12 DEBUG --- stdout --- 02:43:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 93m 5759Mi am-55f77847b7-c9bk2 85m 5729Mi am-55f77847b7-zpsrs 91m 5736Mi ds-cts-0 7m 376Mi ds-cts-1 6m 383Mi ds-cts-2 7m 372Mi ds-idrepo-0 9371m 13828Mi ds-idrepo-1 2916m 13840Mi ds-idrepo-2 2535m 13825Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8581m 4863Mi idm-65858d8c4c-97wdf 7800m 4580Mi lodemon-86f768796c-ts724 2m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1136m 534Mi 02:43:12 DEBUG --- stderr --- 02:43:12 DEBUG 02:43:13 INFO 02:43:13 INFO [loop_until]: kubectl --namespace=xlou top node 02:43:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:43:13 INFO [loop_until]: OK (rc = 0) 02:43:13 DEBUG --- stdout --- 02:43:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1382Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 150m 0% 6803Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 150m 0% 6878Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 143m 0% 6898Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 8761m 55% 6178Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2161m 13% 2169Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8305m 52% 5844Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 3057m 19% 14380Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2625m 16% 14377Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9553m 60% 14346Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1219m 7% 2047Mi 3% 02:43:13 DEBUG --- stderr --- 02:43:13 DEBUG 02:44:12 INFO 02:44:12 INFO [loop_until]: kubectl --namespace=xlou top pods 02:44:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:44:12 INFO [loop_until]: OK (rc = 0) 02:44:12 DEBUG --- stdout --- 02:44:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 90m 5759Mi am-55f77847b7-c9bk2 85m 5729Mi am-55f77847b7-zpsrs 90m 5736Mi ds-cts-0 9m 376Mi ds-cts-1 6m 384Mi ds-cts-2 9m 372Mi ds-idrepo-0 8516m 13833Mi ds-idrepo-1 2238m 13859Mi ds-idrepo-2 2339m 13852Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8590m 4893Mi idm-65858d8c4c-97wdf 7814m 4614Mi lodemon-86f768796c-ts724 2m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1109m 534Mi 02:44:12 DEBUG --- stderr --- 02:44:12 DEBUG 02:44:13 INFO 02:44:13 INFO [loop_until]: kubectl --namespace=xlou top node 02:44:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:44:13 INFO [loop_until]: OK (rc = 0) 02:44:13 DEBUG --- stdout --- 02:44:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1378Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 150m 0% 6801Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 151m 0% 6879Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 139m 0% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9023m 56% 
6215Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2145m 13% 2169Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8074m 50% 5877Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2349m 14% 14391Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2198m 13% 14393Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 8631m 54% 14354Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1197m 7% 2054Mi 3% 02:44:13 DEBUG --- stderr --- 02:44:13 DEBUG 02:45:12 INFO 02:45:12 INFO [loop_until]: kubectl --namespace=xlou top pods 02:45:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:45:12 INFO [loop_until]: OK (rc = 0) 02:45:12 DEBUG --- stdout --- 02:45:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 92m 5759Mi am-55f77847b7-c9bk2 88m 5729Mi am-55f77847b7-zpsrs 92m 5736Mi ds-cts-0 8m 376Mi ds-cts-1 6m 385Mi ds-cts-2 19m 373Mi ds-idrepo-0 8779m 13843Mi ds-idrepo-1 1985m 13857Mi ds-idrepo-2 2176m 13838Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8519m 4931Mi idm-65858d8c4c-97wdf 8074m 4643Mi lodemon-86f768796c-ts724 1m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1131m 534Mi 02:45:12 DEBUG --- stderr --- 02:45:12 DEBUG 02:45:13 INFO 02:45:13 INFO [loop_until]: kubectl --namespace=xlou top node 02:45:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:45:13 INFO [loop_until]: OK (rc = 0) 02:45:13 DEBUG --- stdout --- 02:45:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1379Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 149m 0% 6802Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 153m 0% 6881Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 143m 0% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9087m 57% 6250Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2106m 13% 2166Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8251m 51% 5907Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2054m 12% 14374Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2032m 12% 14395Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 76m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 8907m 56% 14360Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1167m 7% 2055Mi 3% 02:45:13 DEBUG --- stderr --- 02:45:13 DEBUG 02:46:12 INFO 02:46:12 INFO [loop_until]: kubectl --namespace=xlou top pods 02:46:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:46:12 INFO [loop_until]: OK (rc = 0) 02:46:12 DEBUG --- stdout --- 02:46:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 88m 5759Mi am-55f77847b7-c9bk2 84m 5729Mi am-55f77847b7-zpsrs 89m 5736Mi ds-cts-0 11m 376Mi ds-cts-1 6m 385Mi ds-cts-2 10m 373Mi ds-idrepo-0 8950m 13851Mi ds-idrepo-1 2063m 13846Mi ds-idrepo-2 2525m 13839Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8770m 4968Mi idm-65858d8c4c-97wdf 7704m 4674Mi lodemon-86f768796c-ts724 5m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1140m 535Mi 02:46:12 DEBUG --- stderr --- 02:46:12 DEBUG 02:46:13 INFO 02:46:13 INFO [loop_until]: kubectl --namespace=xlou top node 02:46:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:46:13 INFO [loop_until]: OK (rc = 0) 02:46:13 DEBUG --- stdout --- 02:46:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1381Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 144m 0% 6802Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-976h 149m 0% 6875Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 139m 0% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 8903m 56% 6300Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2162m 13% 2169Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8153m 51% 5939Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2606m 16% 14364Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2314m 14% 14398Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 8957m 56% 14366Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1196m 7% 2056Mi 3% 02:46:13 DEBUG --- stderr --- 02:46:13 DEBUG 02:47:12 INFO 02:47:12 INFO [loop_until]: kubectl --namespace=xlou top pods 02:47:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:47:12 INFO [loop_until]: OK (rc = 0) 02:47:12 DEBUG --- stdout --- 02:47:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 91m 5759Mi am-55f77847b7-c9bk2 84m 5729Mi am-55f77847b7-zpsrs 93m 5737Mi ds-cts-0 7m 372Mi ds-cts-1 6m 385Mi ds-cts-2 7m 373Mi ds-idrepo-0 9155m 13842Mi ds-idrepo-1 2510m 13856Mi ds-idrepo-2 2614m 13834Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8413m 4997Mi idm-65858d8c4c-97wdf 7953m 4705Mi lodemon-86f768796c-ts724 6m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1145m 536Mi 02:47:12 DEBUG --- stderr --- 02:47:12 DEBUG 02:47:13 INFO 02:47:13 INFO [loop_until]: kubectl --namespace=xlou top node 02:47:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:47:13 INFO [loop_until]: OK (rc = 0) 02:47:13 DEBUG --- stdout --- 02:47:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1382Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 153m 0% 6803Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 152m 0% 6880Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 139m 0% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9053m 56% 6320Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2090m 13% 2167Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8172m 51% 5969Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1143Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 65m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2473m 15% 14369Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2591m 16% 14387Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9295m 58% 14364Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1215m 7% 2054Mi 3% 02:47:13 DEBUG --- stderr --- 02:47:13 DEBUG 02:48:12 INFO 02:48:12 INFO [loop_until]: kubectl --namespace=xlou top pods 02:48:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:48:12 INFO [loop_until]: OK (rc = 0) 02:48:12 DEBUG --- stdout --- 02:48:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 96m 5760Mi am-55f77847b7-c9bk2 89m 5729Mi am-55f77847b7-zpsrs 87m 5737Mi ds-cts-0 7m 372Mi ds-cts-1 6m 384Mi ds-cts-2 8m 373Mi ds-idrepo-0 8364m 13854Mi ds-idrepo-1 2046m 13862Mi ds-idrepo-2 2077m 13840Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8602m 5033Mi idm-65858d8c4c-97wdf 7869m 4736Mi lodemon-86f768796c-ts724 1m 65Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1106m 536Mi 02:48:12 DEBUG --- stderr --- 02:48:12 DEBUG 02:48:13 INFO 02:48:13 INFO [loop_until]: kubectl --namespace=xlou top node 02:48:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:48:13 INFO [loop_until]: OK (rc = 0) 02:48:13 DEBUG --- stdout --- 02:48:13 
DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 154m 0% 6802Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 142m 0% 6890Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 139m 0% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9012m 56% 6356Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2152m 13% 2163Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8139m 51% 5998Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2092m 13% 14378Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2227m 14% 14397Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 8938m 56% 14384Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1157m 7% 2052Mi 3% 02:48:13 DEBUG --- stderr --- 02:48:13 DEBUG 02:49:12 INFO 02:49:12 INFO [loop_until]: kubectl --namespace=xlou top pods 02:49:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:49:12 INFO [loop_until]: OK (rc = 0) 02:49:12 DEBUG --- stdout --- 02:49:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 93m 5760Mi am-55f77847b7-c9bk2 89m 5730Mi am-55f77847b7-zpsrs 94m 5739Mi ds-cts-0 6m 372Mi ds-cts-1 6m 385Mi ds-cts-2 10m 373Mi ds-idrepo-0 9308m 13839Mi ds-idrepo-1 2848m 13859Mi ds-idrepo-2 2561m 13852Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8628m 5075Mi idm-65858d8c4c-97wdf 7809m 4770Mi lodemon-86f768796c-ts724 2m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1140m 536Mi 02:49:12 DEBUG --- stderr --- 02:49:12 DEBUG 02:49:13 INFO 02:49:13 INFO [loop_until]: kubectl --namespace=xlou top node 02:49:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:49:13 INFO [loop_until]: OK (rc = 0) 02:49:13 DEBUG --- stdout --- 02:49:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1382Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 148m 0% 6805Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 158m 0% 6883Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 145m 0% 6903Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9000m 56% 6395Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2071m 13% 2166Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8209m 51% 6035Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1134Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2992m 18% 14389Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2943m 18% 14388Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9429m 59% 14363Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1219m 7% 2055Mi 3% 02:49:13 DEBUG --- stderr --- 02:49:13 DEBUG 02:50:13 INFO 02:50:13 INFO [loop_until]: kubectl --namespace=xlou top pods 02:50:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:50:13 INFO [loop_until]: OK (rc = 0) 02:50:13 DEBUG --- stdout --- 02:50:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 94m 5760Mi am-55f77847b7-c9bk2 82m 5730Mi am-55f77847b7-zpsrs 91m 5739Mi ds-cts-0 7m 372Mi ds-cts-1 5m 385Mi ds-cts-2 6m 374Mi ds-idrepo-0 8857m 13864Mi ds-idrepo-1 2135m 13864Mi ds-idrepo-2 2230m 13854Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8671m 5108Mi idm-65858d8c4c-97wdf 7810m 4797Mi lodemon-86f768796c-ts724 5m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1088m 537Mi 02:50:13 DEBUG --- stderr --- 02:50:13 DEBUG 02:50:14 INFO 02:50:14 INFO [loop_until]: kubectl 
--namespace=xlou top node 02:50:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:50:14 INFO [loop_until]: OK (rc = 0) 02:50:14 DEBUG --- stdout --- 02:50:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 149m 0% 6804Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 150m 0% 6879Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 142m 0% 6899Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9104m 57% 6429Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2165m 13% 2168Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8098m 50% 6066Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2204m 13% 14374Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2232m 14% 14385Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 8911m 56% 14357Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1190m 7% 2058Mi 3% 02:50:14 DEBUG --- stderr --- 02:50:14 DEBUG 02:51:13 INFO 02:51:13 INFO [loop_until]: kubectl --namespace=xlou top pods 02:51:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:51:13 INFO [loop_until]: OK (rc = 0) 02:51:13 DEBUG --- stdout --- 02:51:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 95m 5760Mi am-55f77847b7-c9bk2 90m 5730Mi am-55f77847b7-zpsrs 95m 5739Mi ds-cts-0 7m 372Mi ds-cts-1 6m 385Mi ds-cts-2 6m 373Mi ds-idrepo-0 8582m 13849Mi ds-idrepo-1 2031m 13863Mi ds-idrepo-2 2089m 13854Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8583m 5148Mi idm-65858d8c4c-97wdf 7927m 4830Mi lodemon-86f768796c-ts724 2m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1143m 537Mi 02:51:13 DEBUG --- stderr --- 02:51:13 DEBUG 02:51:14 INFO 02:51:14 INFO [loop_until]: kubectl --namespace=xlou top node 02:51:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:51:14 INFO [loop_until]: OK (rc = 0) 02:51:14 DEBUG --- stdout --- 02:51:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 151m 0% 6807Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 150m 0% 6881Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 143m 0% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 8995m 56% 6462Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2098m 13% 2166Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7977m 50% 6096Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2144m 13% 14386Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2196m 13% 14397Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 8833m 55% 14362Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1163m 7% 2056Mi 3% 02:51:14 DEBUG --- stderr --- 02:51:14 DEBUG 02:52:13 INFO 02:52:13 INFO [loop_until]: kubectl --namespace=xlou top pods 02:52:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:52:13 INFO [loop_until]: OK (rc = 0) 02:52:13 DEBUG --- stdout --- 02:52:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 85m 5760Mi am-55f77847b7-c9bk2 75m 5731Mi am-55f77847b7-zpsrs 70m 5741Mi ds-cts-0 6m 373Mi ds-cts-1 6m 385Mi ds-cts-2 8m 373Mi ds-idrepo-0 7349m 13822Mi ds-idrepo-1 2204m 13856Mi ds-idrepo-2 2275m 13828Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8485m 5177Mi idm-65858d8c4c-97wdf 5948m 4858Mi lodemon-86f768796c-ts724 1m 66Mi 
login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 896m 537Mi 02:52:13 DEBUG --- stderr --- 02:52:13 DEBUG 02:52:14 INFO 02:52:14 INFO [loop_until]: kubectl --namespace=xlou top node 02:52:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:52:14 INFO [loop_until]: OK (rc = 0) 02:52:14 DEBUG --- stdout --- 02:52:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1381Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 115m 0% 6803Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 120m 0% 6884Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 127m 0% 6901Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6887m 43% 6498Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1354m 8% 2166Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6417m 40% 6124Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2217m 13% 14361Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2200m 13% 14405Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 7844m 49% 14342Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 895m 5% 2057Mi 3% 02:52:14 DEBUG --- stderr --- 02:52:14 DEBUG 02:53:13 INFO 02:53:13 INFO [loop_until]: kubectl --namespace=xlou top pods 02:53:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:53:13 INFO [loop_until]: OK (rc = 0) 02:53:13 DEBUG --- stdout --- 02:53:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 7m 5760Mi am-55f77847b7-c9bk2 12m 5731Mi am-55f77847b7-zpsrs 9m 5741Mi ds-cts-0 6m 373Mi ds-cts-1 7m 385Mi ds-cts-2 7m 373Mi ds-idrepo-0 11m 13810Mi ds-idrepo-1 10m 13867Mi ds-idrepo-2 22m 13819Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 6m 5178Mi idm-65858d8c4c-97wdf 8m 4858Mi lodemon-86f768796c-ts724 6m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1m 111Mi 02:53:13 DEBUG --- stderr --- 02:53:13 DEBUG 02:53:14 INFO 02:53:14 INFO [loop_until]: kubectl --namespace=xlou top node 02:53:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:53:14 INFO [loop_until]: OK (rc = 0) 02:53:14 DEBUG --- stdout --- 02:53:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1381Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 61m 0% 6805Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 6885Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 6903Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 6500Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2158Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 6126Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 73m 0% 14351Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 63m 0% 14403Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 62m 0% 14331Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 71m 0% 1635Mi 2% 02:53:14 DEBUG --- stderr --- 02:53:14 DEBUG 127.0.0.1 - - [13/Aug/2023 02:53:57] "GET /monitoring/average?start_time=23-08-13_01:23:26&stop_time=23-08-13_01:51:56 HTTP/1.1" 200 - 02:54:13 INFO 02:54:13 INFO [loop_until]: kubectl --namespace=xlou top pods 02:54:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:54:13 INFO [loop_until]: OK (rc = 0) 02:54:13 DEBUG --- stdout --- 02:54:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 7m 5760Mi am-55f77847b7-c9bk2 7m 5731Mi am-55f77847b7-zpsrs 9m 5741Mi ds-cts-0 5m 
373Mi ds-cts-1 6m 385Mi ds-cts-2 5m 373Mi ds-idrepo-0 19m 13812Mi ds-idrepo-1 12m 13867Mi ds-idrepo-2 11m 13819Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 6m 5178Mi idm-65858d8c4c-97wdf 7m 4857Mi lodemon-86f768796c-ts724 2m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1m 111Mi 02:54:13 DEBUG --- stderr --- 02:54:13 DEBUG 02:54:14 INFO 02:54:14 INFO [loop_until]: kubectl --namespace=xlou top node 02:54:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:54:14 INFO [loop_until]: OK (rc = 0) 02:54:14 DEBUG --- stdout --- 02:54:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1378Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 6807Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 6884Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 6903Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 6503Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 117m 0% 2167Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 73m 0% 6129Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 63m 0% 14352Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 63m 0% 14402Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 66m 0% 14335Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 74m 0% 1636Mi 2% 02:54:14 DEBUG --- stderr --- 02:54:14 DEBUG 02:55:13 INFO 02:55:13 INFO [loop_until]: kubectl --namespace=xlou top pods 02:55:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:55:13 INFO [loop_until]: OK (rc = 0) 02:55:13 DEBUG --- stdout --- 02:55:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 71m 5761Mi am-55f77847b7-c9bk2 75m 5731Mi am-55f77847b7-zpsrs 100m 5766Mi ds-cts-0 7m 373Mi ds-cts-1 8m 385Mi ds-cts-2 16m 373Mi ds-idrepo-0 5017m 13861Mi ds-idrepo-1 509m 13864Mi ds-idrepo-2 790m 13848Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 5342m 5227Mi idm-65858d8c4c-97wdf 5220m 4906Mi lodemon-86f768796c-ts724 2m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1562m 527Mi 02:55:13 DEBUG --- stderr --- 02:55:13 DEBUG 02:55:14 INFO 02:55:14 INFO [loop_until]: kubectl --namespace=xlou top node 02:55:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:55:14 INFO [loop_until]: OK (rc = 0) 02:55:14 DEBUG --- stdout --- 02:55:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1380Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 140m 0% 6805Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 187m 1% 6907Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 115m 0% 6901Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6032m 37% 6551Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1193m 7% 2174Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 5581m 35% 6179Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 1320m 8% 14385Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1331m 8% 14400Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 69m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5701m 35% 14380Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1667m 10% 2046Mi 3% 02:55:14 DEBUG --- stderr --- 02:55:14 DEBUG 02:56:13 INFO 02:56:13 INFO [loop_until]: kubectl --namespace=xlou top pods 02:56:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:56:13 INFO [loop_until]: OK (rc = 0) 02:56:13 DEBUG --- stdout --- 02:56:13 DEBUG NAME CPU(cores) MEMORY(bytes) 
admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 97m 5761Mi am-55f77847b7-c9bk2 91m 5731Mi am-55f77847b7-zpsrs 94m 5767Mi ds-cts-0 9m 373Mi ds-cts-1 6m 385Mi ds-cts-2 7m 373Mi ds-idrepo-0 10729m 13824Mi ds-idrepo-1 3585m 13836Mi ds-idrepo-2 2831m 13841Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8764m 5272Mi idm-65858d8c4c-97wdf 7994m 4951Mi lodemon-86f768796c-ts724 5m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1242m 534Mi 02:56:13 DEBUG --- stderr --- 02:56:13 DEBUG 02:56:14 INFO 02:56:14 INFO [loop_until]: kubectl --namespace=xlou top node 02:56:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:56:14 INFO [loop_until]: OK (rc = 0) 02:56:14 DEBUG --- stdout --- 02:56:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1380Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 151m 0% 6804Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 156m 0% 6907Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 146m 0% 6899Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9457m 59% 6596Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2441m 15% 2160Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8561m 53% 6221Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2927m 18% 14395Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3388m 21% 14384Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 10726m 67% 14347Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1334m 8% 2054Mi 3% 02:56:14 DEBUG --- stderr --- 02:56:14 DEBUG 02:57:13 INFO 02:57:13 INFO [loop_until]: kubectl --namespace=xlou top pods 02:57:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:57:13 INFO [loop_until]: OK (rc = 0) 02:57:13 DEBUG --- stdout --- 02:57:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 98m 5761Mi am-55f77847b7-c9bk2 89m 5732Mi am-55f77847b7-zpsrs 98m 5767Mi ds-cts-0 7m 376Mi ds-cts-1 7m 385Mi ds-cts-2 7m 373Mi ds-idrepo-0 10216m 13823Mi ds-idrepo-1 3623m 13816Mi ds-idrepo-2 3186m 13805Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 9008m 5318Mi idm-65858d8c4c-97wdf 8252m 4992Mi lodemon-86f768796c-ts724 2m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1222m 539Mi 02:57:13 DEBUG --- stderr --- 02:57:13 DEBUG 02:57:14 INFO 02:57:14 INFO [loop_until]: kubectl --namespace=xlou top node 02:57:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:57:14 INFO [loop_until]: OK (rc = 0) 02:57:14 DEBUG --- stdout --- 02:57:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1375Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 150m 0% 6805Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 154m 0% 6907Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 145m 0% 6903Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9356m 58% 6646Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2438m 15% 2167Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8606m 54% 6257Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 67m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 3215m 20% 14379Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3605m 22% 14373Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 67m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 10526m 66% 14353Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1321m 8% 2057Mi 3% 02:57:14 DEBUG --- stderr --- 02:57:14 DEBUG 02:58:13 INFO 02:58:13 INFO [loop_until]: kubectl --namespace=xlou top pods 02:58:13 INFO 
[loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:58:13 INFO [loop_until]: OK (rc = 0) 02:58:13 DEBUG --- stdout --- 02:58:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 99m 5761Mi am-55f77847b7-c9bk2 93m 5731Mi am-55f77847b7-zpsrs 96m 5768Mi ds-cts-0 8m 376Mi ds-cts-1 7m 385Mi ds-cts-2 6m 373Mi ds-idrepo-0 9522m 13739Mi ds-idrepo-1 2612m 13696Mi ds-idrepo-2 2090m 13708Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8894m 5366Mi idm-65858d8c4c-97wdf 8001m 5028Mi lodemon-86f768796c-ts724 2m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1228m 545Mi 02:58:13 DEBUG --- stderr --- 02:58:13 DEBUG 02:58:14 INFO 02:58:14 INFO [loop_until]: kubectl --namespace=xlou top node 02:58:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:58:15 INFO [loop_until]: OK (rc = 0) 02:58:15 DEBUG --- stdout --- 02:58:15 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1382Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 151m 0% 6807Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 156m 0% 6908Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 148m 0% 6904Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9300m 58% 6691Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2347m 14% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8538m 53% 6295Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2480m 15% 14271Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2541m 15% 14255Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9807m 61% 14266Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1325m 8% 2060Mi 3% 02:58:15 DEBUG --- stderr --- 02:58:15 DEBUG 02:59:13 INFO 02:59:13 INFO [loop_until]: kubectl --namespace=xlou top pods 02:59:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:59:14 INFO [loop_until]: OK (rc = 0) 02:59:14 DEBUG --- stdout --- 02:59:14 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 101m 5762Mi am-55f77847b7-c9bk2 95m 5732Mi am-55f77847b7-zpsrs 98m 5763Mi ds-cts-0 6m 376Mi ds-cts-1 6m 385Mi ds-cts-2 7m 373Mi ds-idrepo-0 9277m 13614Mi ds-idrepo-1 2607m 13542Mi ds-idrepo-2 2287m 13822Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8989m 5369Mi idm-65858d8c4c-97wdf 8038m 5067Mi lodemon-86f768796c-ts724 7m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1235m 552Mi 02:59:14 DEBUG --- stderr --- 02:59:14 DEBUG 02:59:15 INFO 02:59:15 INFO [loop_until]: kubectl --namespace=xlou top node 02:59:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:59:15 INFO [loop_until]: OK (rc = 0) 02:59:15 DEBUG --- stdout --- 02:59:15 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 157m 0% 6807Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 154m 0% 6902Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 148m 0% 6906Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9388m 59% 6690Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2416m 15% 2166Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8500m 53% 6335Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2552m 16% 14148Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2678m 16% 14151Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9670m 60% 14122Mi 24% 
gke-xlou-cdm-frontend-a8771548-k40m 1314m 8% 2070Mi 3% 02:59:15 DEBUG --- stderr --- 02:59:15 DEBUG 03:00:14 INFO 03:00:14 INFO [loop_until]: kubectl --namespace=xlou top pods 03:00:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:00:14 INFO [loop_until]: OK (rc = 0) 03:00:14 DEBUG --- stdout --- 03:00:14 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 94m 5762Mi am-55f77847b7-c9bk2 92m 5732Mi am-55f77847b7-zpsrs 102m 5763Mi ds-cts-0 6m 376Mi ds-cts-1 7m 385Mi ds-cts-2 7m 373Mi ds-idrepo-0 10361m 13552Mi ds-idrepo-1 3692m 13477Mi ds-idrepo-2 3049m 13515Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8823m 5370Mi idm-65858d8c4c-97wdf 8277m 5107Mi lodemon-86f768796c-ts724 2m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1230m 557Mi 03:00:14 DEBUG --- stderr --- 03:00:14 DEBUG 03:00:15 INFO 03:00:15 INFO [loop_until]: kubectl --namespace=xlou top node 03:00:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:00:15 INFO [loop_until]: OK (rc = 0) 03:00:15 DEBUG --- stdout --- 03:00:15 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 149m 0% 6806Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 159m 1% 6901Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 147m 0% 6904Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9183m 57% 6693Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2339m 14% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8582m 54% 6371Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 3224m 20% 14070Mi 23% gke-xlou-cdm-ds-32e4dcb1-b374 3694m 23% 14052Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 10133m 63% 14094Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1324m 8% 2075Mi 3% 03:00:15 DEBUG --- stderr --- 03:00:15 DEBUG 03:01:14 INFO 03:01:14 INFO [loop_until]: kubectl --namespace=xlou top pods 03:01:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:01:14 INFO [loop_until]: OK (rc = 0) 03:01:14 DEBUG --- stdout --- 03:01:14 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 98m 5762Mi am-55f77847b7-c9bk2 98m 5732Mi am-55f77847b7-zpsrs 95m 5763Mi ds-cts-0 8m 376Mi ds-cts-1 7m 385Mi ds-cts-2 6m 373Mi ds-idrepo-0 9751m 13493Mi ds-idrepo-1 2957m 13527Mi ds-idrepo-2 2365m 13570Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8877m 5370Mi idm-65858d8c4c-97wdf 8117m 5139Mi lodemon-86f768796c-ts724 2m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1277m 561Mi 03:01:14 DEBUG --- stderr --- 03:01:14 DEBUG 03:01:15 INFO 03:01:15 INFO [loop_until]: kubectl --namespace=xlou top node 03:01:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:01:15 INFO [loop_until]: OK (rc = 0) 03:01:15 DEBUG --- stdout --- 03:01:15 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1382Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 154m 0% 6805Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 153m 0% 6905Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 153m 0% 6905Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9341m 58% 6692Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2408m 15% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8452m 53% 6408Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 
2894m 18% 14130Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3013m 18% 14105Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9968m 62% 14045Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1333m 8% 2076Mi 3% 03:01:15 DEBUG --- stderr --- 03:01:15 DEBUG 03:02:14 INFO 03:02:14 INFO [loop_until]: kubectl --namespace=xlou top pods 03:02:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:02:14 INFO [loop_until]: OK (rc = 0) 03:02:14 DEBUG --- stdout --- 03:02:14 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 96m 5762Mi am-55f77847b7-c9bk2 89m 5732Mi am-55f77847b7-zpsrs 93m 5763Mi ds-cts-0 6m 376Mi ds-cts-1 6m 385Mi ds-cts-2 8m 373Mi ds-idrepo-0 9630m 13664Mi ds-idrepo-1 2484m 13612Mi ds-idrepo-2 2495m 13654Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8966m 5370Mi idm-65858d8c4c-97wdf 8239m 5175Mi lodemon-86f768796c-ts724 1m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1233m 575Mi 03:02:14 DEBUG --- stderr --- 03:02:14 DEBUG 03:02:15 INFO 03:02:15 INFO [loop_until]: kubectl --namespace=xlou top node 03:02:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:02:15 INFO [loop_until]: OK (rc = 0) 03:02:15 DEBUG --- stdout --- 03:02:15 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1379Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 154m 0% 6805Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 157m 0% 6905Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 148m 0% 6901Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9374m 58% 6692Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2376m 14% 2169Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8610m 54% 6443Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2664m 16% 14206Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2219m 13% 14199Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9577m 60% 14252Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1331m 8% 2092Mi 3% 03:02:15 DEBUG --- stderr --- 03:02:15 DEBUG 03:03:14 INFO 03:03:14 INFO [loop_until]: kubectl --namespace=xlou top pods 03:03:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:03:14 INFO [loop_until]: OK (rc = 0) 03:03:14 DEBUG --- stdout --- 03:03:14 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 97m 5762Mi am-55f77847b7-c9bk2 93m 5732Mi am-55f77847b7-zpsrs 97m 5764Mi ds-cts-0 6m 377Mi ds-cts-1 7m 385Mi ds-cts-2 7m 374Mi ds-idrepo-0 9262m 13794Mi ds-idrepo-1 2381m 13552Mi ds-idrepo-2 2468m 13571Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 9114m 5370Mi idm-65858d8c4c-97wdf 8253m 5217Mi lodemon-86f768796c-ts724 4m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1256m 579Mi 03:03:14 DEBUG --- stderr --- 03:03:14 DEBUG 03:03:15 INFO 03:03:15 INFO [loop_until]: kubectl --namespace=xlou top node 03:03:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:03:15 INFO [loop_until]: OK (rc = 0) 03:03:15 DEBUG --- stdout --- 03:03:15 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1378Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 155m 0% 6807Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 153m 0% 6904Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 147m 0% 6903Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9086m 57% 6690Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2428m 15% 2169Mi 3% 
gke-xlou-cdm-default-pool-f05840a3-tnc9 8541m 53% 6486Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2485m 15% 14128Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2366m 14% 14123Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9648m 60% 14367Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1360m 8% 2095Mi 3% 03:03:15 DEBUG --- stderr --- 03:03:15 DEBUG 03:04:14 INFO 03:04:14 INFO [loop_until]: kubectl --namespace=xlou top pods 03:04:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:04:14 INFO [loop_until]: OK (rc = 0) 03:04:14 DEBUG --- stdout --- 03:04:14 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 104m 5762Mi am-55f77847b7-c9bk2 99m 5733Mi am-55f77847b7-zpsrs 100m 5764Mi ds-cts-0 5m 376Mi ds-cts-1 6m 385Mi ds-cts-2 7m 373Mi ds-idrepo-0 9351m 13858Mi ds-idrepo-1 3005m 13744Mi ds-idrepo-2 3020m 13801Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 9068m 5371Mi idm-65858d8c4c-97wdf 8162m 5258Mi lodemon-86f768796c-ts724 2m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1272m 585Mi 03:04:14 DEBUG --- stderr --- 03:04:14 DEBUG 03:04:15 INFO 03:04:15 INFO [loop_until]: kubectl --namespace=xlou top node 03:04:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:04:15 INFO [loop_until]: OK (rc = 0) 03:04:15 DEBUG --- stdout --- 03:04:15 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 154m 0% 6806Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 158m 0% 6906Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 155m 0% 6904Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9386m 59% 6692Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2373m 14% 2166Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8532m 53% 6524Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 3072m 19% 14400Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3392m 21% 14372Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9551m 60% 14415Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1340m 8% 2098Mi 3% 03:04:15 DEBUG --- stderr --- 03:04:15 DEBUG 03:05:14 INFO 03:05:14 INFO [loop_until]: kubectl --namespace=xlou top pods 03:05:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:05:14 INFO [loop_until]: OK (rc = 0) 03:05:14 DEBUG --- stdout --- 03:05:14 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 92m 5762Mi am-55f77847b7-c9bk2 94m 5732Mi am-55f77847b7-zpsrs 102m 5764Mi ds-cts-0 6m 376Mi ds-cts-1 6m 385Mi ds-cts-2 8m 373Mi ds-idrepo-0 10247m 13853Mi ds-idrepo-1 2665m 13858Mi ds-idrepo-2 2409m 13856Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8832m 5375Mi idm-65858d8c4c-97wdf 8055m 5299Mi lodemon-86f768796c-ts724 2m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1221m 591Mi 03:05:14 DEBUG --- stderr --- 03:05:14 DEBUG 03:05:15 INFO 03:05:15 INFO [loop_until]: kubectl --namespace=xlou top node 03:05:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:05:15 INFO [loop_until]: OK (rc = 0) 03:05:15 DEBUG --- stdout --- 03:05:15 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1393Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 150m 0% 6807Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 158m 0% 6906Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-9p4b 145m 0% 6905Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9285m 58% 6699Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2404m 15% 2172Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8513m 53% 6563Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2888m 18% 14428Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2596m 16% 14449Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 10746m 67% 14414Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1313m 8% 2104Mi 3% 03:05:15 DEBUG --- stderr --- 03:05:15 DEBUG 03:06:14 INFO 03:06:14 INFO [loop_until]: kubectl --namespace=xlou top pods 03:06:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:06:14 INFO [loop_until]: OK (rc = 0) 03:06:14 DEBUG --- stdout --- 03:06:14 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 99m 5762Mi am-55f77847b7-c9bk2 98m 5732Mi am-55f77847b7-zpsrs 98m 5764Mi ds-cts-0 6m 376Mi ds-cts-1 8m 385Mi ds-cts-2 8m 373Mi ds-idrepo-0 9316m 13855Mi ds-idrepo-1 2488m 13850Mi ds-idrepo-2 2894m 13853Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 9094m 5380Mi idm-65858d8c4c-97wdf 8095m 5334Mi lodemon-86f768796c-ts724 1m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1204m 596Mi 03:06:14 DEBUG --- stderr --- 03:06:14 DEBUG 03:06:15 INFO 03:06:15 INFO [loop_until]: kubectl --namespace=xlou top node 03:06:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:06:16 INFO [loop_until]: OK (rc = 0) 03:06:16 DEBUG --- stdout --- 03:06:16 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1379Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 153m 0% 6808Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 149m 0% 6906Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 147m 0% 6906Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9117m 57% 6704Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2352m 14% 2170Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8174m 51% 6604Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 3011m 18% 14423Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2970m 18% 14412Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9200m 57% 14415Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1299m 8% 2109Mi 3% 03:06:16 DEBUG --- stderr --- 03:06:16 DEBUG 03:07:14 INFO 03:07:14 INFO [loop_until]: kubectl --namespace=xlou top pods 03:07:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:07:14 INFO [loop_until]: OK (rc = 0) 03:07:14 DEBUG --- stdout --- 03:07:14 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 102m 5762Mi am-55f77847b7-c9bk2 92m 5732Mi am-55f77847b7-zpsrs 95m 5764Mi ds-cts-0 7m 376Mi ds-cts-1 13m 385Mi ds-cts-2 8m 375Mi ds-idrepo-0 9913m 13821Mi ds-idrepo-1 2953m 13842Mi ds-idrepo-2 3493m 13773Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 9001m 5380Mi idm-65858d8c4c-97wdf 8000m 5376Mi lodemon-86f768796c-ts724 5m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1212m 601Mi 03:07:14 DEBUG --- stderr --- 03:07:14 DEBUG 03:07:16 INFO 03:07:16 INFO [loop_until]: kubectl --namespace=xlou top node 03:07:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:07:16 INFO [loop_until]: OK (rc = 0) 03:07:16 DEBUG --- stdout --- 03:07:16 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% 
gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 154m 0% 6806Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 151m 0% 6906Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 145m 0% 6903Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9460m 59% 6704Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2438m 15% 2173Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8580m 53% 6643Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 3120m 19% 14362Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2592m 16% 14431Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 10229m 64% 14388Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1279m 8% 2113Mi 3% 03:07:16 DEBUG --- stderr --- 03:07:16 DEBUG 03:08:14 INFO 03:08:14 INFO [loop_until]: kubectl --namespace=xlou top pods 03:08:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:08:14 INFO [loop_until]: OK (rc = 0) 03:08:14 DEBUG --- stdout --- 03:08:14 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 96m 5762Mi am-55f77847b7-c9bk2 93m 5732Mi am-55f77847b7-zpsrs 96m 5764Mi ds-cts-0 6m 376Mi ds-cts-1 13m 386Mi ds-cts-2 7m 373Mi ds-idrepo-0 9504m 13823Mi ds-idrepo-1 2391m 13854Mi ds-idrepo-2 2458m 13881Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8974m 5385Mi idm-65858d8c4c-97wdf 8078m 5379Mi lodemon-86f768796c-ts724 5m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1240m 606Mi 03:08:14 DEBUG --- stderr --- 03:08:14 DEBUG 03:08:16 INFO 03:08:16 INFO [loop_until]: kubectl --namespace=xlou top node 03:08:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:08:16 INFO [loop_until]: OK (rc = 0) 03:08:16 DEBUG --- stdout --- 03:08:16 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1382Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 150m 0% 6808Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 154m 0% 6906Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 148m 0% 6905Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9333m 58% 6708Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2365m 14% 2169Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8518m 53% 6647Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1134Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2375m 14% 14446Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2613m 16% 14437Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9560m 60% 14388Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1294m 8% 2116Mi 3% 03:08:16 DEBUG --- stderr --- 03:08:16 DEBUG 03:09:15 INFO 03:09:15 INFO [loop_until]: kubectl --namespace=xlou top pods 03:09:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:09:15 INFO [loop_until]: OK (rc = 0) 03:09:15 DEBUG --- stdout --- 03:09:15 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 103m 5764Mi am-55f77847b7-c9bk2 95m 5733Mi am-55f77847b7-zpsrs 94m 5764Mi ds-cts-0 6m 376Mi ds-cts-1 9m 385Mi ds-cts-2 6m 373Mi ds-idrepo-0 9226m 13859Mi ds-idrepo-1 2616m 13836Mi ds-idrepo-2 2561m 13704Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8968m 5385Mi idm-65858d8c4c-97wdf 8179m 5378Mi lodemon-86f768796c-ts724 5m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1187m 611Mi 03:09:15 DEBUG --- stderr --- 03:09:15 DEBUG 03:09:16 INFO 03:09:16 INFO [loop_until]: kubectl --namespace=xlou top node 03:09:16 INFO [loop_until]: 
(max_time=180, interval=5, expected_rc=[0] 03:09:16 INFO [loop_until]: OK (rc = 0) 03:09:16 DEBUG --- stdout --- 03:09:16 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1382Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 153m 0% 6809Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 153m 0% 6907Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 153m 0% 6901Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9364m 58% 6708Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2407m 15% 2171Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8393m 52% 6645Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2389m 15% 14294Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2516m 15% 14445Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9328m 58% 14419Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1295m 8% 2124Mi 3% 03:09:16 DEBUG --- stderr --- 03:09:16 DEBUG 03:10:15 INFO 03:10:15 INFO [loop_until]: kubectl --namespace=xlou top pods 03:10:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:10:15 INFO [loop_until]: OK (rc = 0) 03:10:15 DEBUG --- stdout --- 03:10:15 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 98m 5764Mi am-55f77847b7-c9bk2 99m 5728Mi am-55f77847b7-zpsrs 107m 5764Mi ds-cts-0 6m 376Mi ds-cts-1 7m 385Mi ds-cts-2 7m 373Mi ds-idrepo-0 10002m 13444Mi ds-idrepo-1 3055m 13604Mi ds-idrepo-2 3087m 13558Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8972m 5386Mi idm-65858d8c4c-97wdf 8380m 5379Mi lodemon-86f768796c-ts724 1m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1236m 617Mi 03:10:15 DEBUG --- stderr --- 03:10:15 DEBUG 03:10:16 INFO 03:10:16 INFO [loop_until]: kubectl --namespace=xlou top node 03:10:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:10:16 INFO [loop_until]: OK (rc = 0) 03:10:16 DEBUG --- stdout --- 03:10:16 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1380Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 152m 0% 6809Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 159m 1% 6905Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 147m 0% 6894Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 8893m 55% 6704Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2349m 14% 2172Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8451m 53% 6645Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 3029m 19% 14148Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2941m 18% 14197Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9869m 62% 14015Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1323m 8% 2129Mi 3% 03:10:16 DEBUG --- stderr --- 03:10:16 DEBUG 03:11:15 INFO 03:11:15 INFO [loop_until]: kubectl --namespace=xlou top pods 03:11:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:11:15 INFO [loop_until]: OK (rc = 0) 03:11:15 DEBUG --- stdout --- 03:11:15 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 97m 5764Mi am-55f77847b7-c9bk2 93m 5728Mi am-55f77847b7-zpsrs 96m 5764Mi ds-cts-0 8m 376Mi ds-cts-1 6m 385Mi ds-cts-2 7m 373Mi ds-idrepo-0 9234m 13504Mi ds-idrepo-1 2055m 13615Mi ds-idrepo-2 2386m 13547Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8752m 5388Mi idm-65858d8c4c-97wdf 8132m 5378Mi lodemon-86f768796c-ts724 2m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi 
overseer-0-78bdc846-p8mnn 1231m 622Mi 03:11:15 DEBUG --- stderr --- 03:11:15 DEBUG 03:11:16 INFO 03:11:16 INFO [loop_until]: kubectl --namespace=xlou top node 03:11:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:11:16 INFO [loop_until]: OK (rc = 0) 03:11:16 DEBUG --- stdout --- 03:11:16 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 152m 0% 6807Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 153m 0% 6906Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 153m 0% 6899Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9299m 58% 6707Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2444m 15% 2175Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8579m 53% 6644Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2596m 16% 14130Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2451m 15% 14220Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9436m 59% 14082Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1308m 8% 2134Mi 3% 03:11:16 DEBUG --- stderr --- 03:11:16 DEBUG 03:12:15 INFO 03:12:15 INFO [loop_until]: kubectl --namespace=xlou top pods 03:12:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:12:15 INFO [loop_until]: OK (rc = 0) 03:12:15 DEBUG --- stdout --- 03:12:15 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 100m 5764Mi am-55f77847b7-c9bk2 90m 5728Mi am-55f77847b7-zpsrs 98m 5764Mi ds-cts-0 6m 376Mi ds-cts-1 10m 385Mi ds-cts-2 6m 373Mi ds-idrepo-0 8914m 13643Mi ds-idrepo-1 2244m 13725Mi ds-idrepo-2 2244m 13650Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8982m 5392Mi idm-65858d8c4c-97wdf 8307m 5383Mi lodemon-86f768796c-ts724 2m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1187m 626Mi 03:12:15 DEBUG --- stderr --- 03:12:15 DEBUG 03:12:16 INFO 03:12:16 INFO [loop_until]: kubectl --namespace=xlou top node 03:12:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:12:16 INFO [loop_until]: OK (rc = 0) 03:12:16 DEBUG --- stdout --- 03:12:16 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1380Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 155m 0% 6809Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 157m 0% 6906Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 147m 0% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9422m 59% 6715Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2351m 14% 2167Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8482m 53% 6647Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2590m 16% 14244Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2478m 15% 14333Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9386m 59% 14222Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1300m 8% 2139Mi 3% 03:12:16 DEBUG --- stderr --- 03:12:16 DEBUG 03:13:15 INFO 03:13:15 INFO [loop_until]: kubectl --namespace=xlou top pods 03:13:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:13:15 INFO [loop_until]: OK (rc = 0) 03:13:15 DEBUG --- stdout --- 03:13:15 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 103m 5764Mi am-55f77847b7-c9bk2 96m 5728Mi am-55f77847b7-zpsrs 96m 5764Mi ds-cts-0 6m 376Mi ds-cts-1 6m 386Mi ds-cts-2 6m 375Mi ds-idrepo-0 9190m 13786Mi ds-idrepo-1 2617m 13811Mi ds-idrepo-2 2295m 13737Mi 
end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8916m 5393Mi idm-65858d8c4c-97wdf 7924m 5384Mi lodemon-86f768796c-ts724 2m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1217m 631Mi 03:13:15 DEBUG --- stderr --- 03:13:15 DEBUG 03:13:16 INFO 03:13:16 INFO [loop_until]: kubectl --namespace=xlou top node 03:13:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:13:16 INFO [loop_until]: OK (rc = 0) 03:13:16 DEBUG --- stdout --- 03:13:16 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 163m 1% 6809Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 151m 0% 6909Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 152m 0% 6899Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9135m 57% 6716Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2415m 15% 2168Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8493m 53% 6647Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2588m 16% 14317Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2687m 16% 14405Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9247m 58% 14359Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1310m 8% 2140Mi 3% 03:13:16 DEBUG --- stderr --- 03:13:16 DEBUG 03:14:15 INFO 03:14:15 INFO [loop_until]: kubectl --namespace=xlou top pods 03:14:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:14:15 INFO [loop_until]: OK (rc = 0) 03:14:15 DEBUG --- stdout --- 03:14:15 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 99m 5764Mi am-55f77847b7-c9bk2 97m 5729Mi am-55f77847b7-zpsrs 95m 5764Mi ds-cts-0 6m 376Mi ds-cts-1 6m 385Mi ds-cts-2 6m 375Mi ds-idrepo-0 8972m 13819Mi ds-idrepo-1 2479m 13853Mi ds-idrepo-2 2407m 13817Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8751m 5394Mi idm-65858d8c4c-97wdf 8412m 5385Mi lodemon-86f768796c-ts724 2m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1202m 638Mi 03:14:15 DEBUG --- stderr --- 03:14:15 DEBUG 03:14:17 INFO 03:14:17 INFO [loop_until]: kubectl --namespace=xlou top node 03:14:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:14:17 INFO [loop_until]: OK (rc = 0) 03:14:17 DEBUG --- stdout --- 03:14:17 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1380Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 156m 0% 6808Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 153m 0% 6908Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 150m 0% 6897Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9249m 58% 6715Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2428m 15% 2163Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8507m 53% 6650Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2158m 13% 14401Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2226m 14% 14465Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9015m 56% 14381Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1285m 8% 2149Mi 3% 03:14:17 DEBUG --- stderr --- 03:14:17 DEBUG 03:15:15 INFO 03:15:15 INFO [loop_until]: kubectl --namespace=xlou top pods 03:15:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:15:15 INFO [loop_until]: OK (rc = 0) 03:15:15 DEBUG --- stdout --- 03:15:15 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 96m 5764Mi am-55f77847b7-c9bk2 90m 
5728Mi am-55f77847b7-zpsrs 100m 5764Mi ds-cts-0 6m 376Mi ds-cts-1 6m 385Mi ds-cts-2 6m 375Mi ds-idrepo-0 9984m 13833Mi ds-idrepo-1 2427m 13845Mi ds-idrepo-2 2206m 13823Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 9187m 5395Mi idm-65858d8c4c-97wdf 8258m 5385Mi lodemon-86f768796c-ts724 7m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1214m 643Mi 03:15:15 DEBUG --- stderr --- 03:15:15 DEBUG 03:15:17 INFO 03:15:17 INFO [loop_until]: kubectl --namespace=xlou top node 03:15:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:15:17 INFO [loop_until]: OK (rc = 0) 03:15:17 DEBUG --- stdout --- 03:15:17 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1382Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 148m 0% 6808Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 159m 1% 6904Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 148m 0% 6895Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9489m 59% 6715Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2462m 15% 2167Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8609m 54% 6648Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2241m 14% 14398Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2274m 14% 14447Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 10172m 64% 14358Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1267m 7% 2153Mi 3% 03:15:17 DEBUG --- stderr --- 03:15:17 DEBUG 03:16:15 INFO 03:16:15 INFO [loop_until]: kubectl --namespace=xlou top pods 03:16:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:16:16 INFO [loop_until]: OK (rc = 0) 03:16:16 DEBUG --- stdout --- 03:16:16 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 95m 5764Mi am-55f77847b7-c9bk2 95m 5729Mi am-55f77847b7-zpsrs 101m 5764Mi ds-cts-0 6m 376Mi ds-cts-1 6m 385Mi ds-cts-2 8m 375Mi ds-idrepo-0 8961m 13860Mi ds-idrepo-1 2739m 13821Mi ds-idrepo-2 2597m 13803Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8866m 5395Mi idm-65858d8c4c-97wdf 7959m 5385Mi lodemon-86f768796c-ts724 2m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1230m 648Mi 03:16:16 DEBUG --- stderr --- 03:16:16 DEBUG 03:16:17 INFO 03:16:17 INFO [loop_until]: kubectl --namespace=xlou top node 03:16:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:16:17 INFO [loop_until]: OK (rc = 0) 03:16:17 DEBUG --- stdout --- 03:16:17 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 152m 0% 6806Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 154m 0% 6905Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 150m 0% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9280m 58% 6717Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2430m 15% 2167Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8446m 53% 6650Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2738m 17% 14395Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2581m 16% 14426Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9293m 58% 14428Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1303m 8% 2159Mi 3% 03:16:17 DEBUG --- stderr --- 03:16:17 DEBUG 03:17:16 INFO 03:17:16 INFO [loop_until]: kubectl --namespace=xlou top pods 03:17:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:17:16 INFO [loop_until]: OK (rc 
= 0) 03:17:16 DEBUG --- stdout --- 03:17:16 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 96m 5764Mi am-55f77847b7-c9bk2 89m 5729Mi am-55f77847b7-zpsrs 100m 5764Mi ds-cts-0 6m 376Mi ds-cts-1 8m 385Mi ds-cts-2 7m 375Mi ds-idrepo-0 9150m 13846Mi ds-idrepo-1 2602m 13809Mi ds-idrepo-2 2434m 13804Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 9020m 5396Mi idm-65858d8c4c-97wdf 8145m 5387Mi lodemon-86f768796c-ts724 2m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1204m 653Mi 03:17:16 DEBUG --- stderr --- 03:17:16 DEBUG 03:17:17 INFO 03:17:17 INFO [loop_until]: kubectl --namespace=xlou top node 03:17:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:17:17 INFO [loop_until]: OK (rc = 0) 03:17:17 DEBUG --- stdout --- 03:17:17 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 155m 0% 6809Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 150m 0% 6908Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 145m 0% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9410m 59% 6721Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2445m 15% 2166Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8232m 51% 6652Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2735m 17% 14408Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2803m 17% 14420Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9297m 58% 14416Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1300m 8% 2158Mi 3% 03:17:17 DEBUG --- stderr --- 03:17:17 DEBUG 03:18:16 INFO 03:18:16 INFO [loop_until]: kubectl --namespace=xlou top pods 03:18:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:18:16 INFO [loop_until]: OK (rc = 0) 03:18:16 DEBUG --- stdout --- 03:18:16 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 97m 5764Mi am-55f77847b7-c9bk2 96m 5729Mi am-55f77847b7-zpsrs 98m 5764Mi ds-cts-0 7m 377Mi ds-cts-1 12m 385Mi ds-cts-2 15m 373Mi ds-idrepo-0 9117m 13595Mi ds-idrepo-1 2887m 13629Mi ds-idrepo-2 2842m 13479Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8899m 5397Mi idm-65858d8c4c-97wdf 8135m 5388Mi lodemon-86f768796c-ts724 2m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1240m 658Mi 03:18:16 DEBUG --- stderr --- 03:18:16 DEBUG 03:18:17 INFO 03:18:17 INFO [loop_until]: kubectl --namespace=xlou top node 03:18:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:18:17 INFO [loop_until]: OK (rc = 0) 03:18:17 DEBUG --- stdout --- 03:18:17 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 153m 0% 6810Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 154m 0% 6908Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 155m 0% 6901Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9270m 58% 6722Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2430m 15% 2170Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8563m 53% 6666Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 64m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2478m 15% 14081Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3049m 19% 14231Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 73m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9324m 58% 14164Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1309m 8% 2166Mi 3% 03:18:17 DEBUG --- stderr --- 03:18:17 DEBUG 03:19:16 INFO 
03:19:16 INFO [loop_until]: kubectl --namespace=xlou top pods 03:19:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:19:16 INFO [loop_until]: OK (rc = 0) 03:19:16 DEBUG --- stdout --- 03:19:16 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 97m 5764Mi am-55f77847b7-c9bk2 93m 5729Mi am-55f77847b7-zpsrs 97m 5765Mi ds-cts-0 8m 376Mi ds-cts-1 6m 386Mi ds-cts-2 6m 374Mi ds-idrepo-0 9233m 13671Mi ds-idrepo-1 2134m 13499Mi ds-idrepo-2 2207m 13584Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 9074m 5397Mi idm-65858d8c4c-97wdf 8365m 5388Mi lodemon-86f768796c-ts724 8m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1203m 663Mi 03:19:16 DEBUG --- stderr --- 03:19:16 DEBUG 03:19:17 INFO 03:19:17 INFO [loop_until]: kubectl --namespace=xlou top node 03:19:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:19:17 INFO [loop_until]: OK (rc = 0) 03:19:17 DEBUG --- stdout --- 03:19:17 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 157m 0% 6810Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 154m 0% 6906Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 149m 0% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9251m 58% 6730Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2429m 15% 2195Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8556m 53% 6656Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2155m 13% 14186Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2426m 15% 14105Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9351m 58% 14248Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1298m 8% 2173Mi 3% 03:19:17 DEBUG --- stderr --- 03:19:17 DEBUG 03:20:16 INFO 03:20:16 INFO [loop_until]: kubectl --namespace=xlou top pods 03:20:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:20:16 INFO [loop_until]: OK (rc = 0) 03:20:16 DEBUG --- stdout --- 03:20:16 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 98m 5764Mi am-55f77847b7-c9bk2 91m 5729Mi am-55f77847b7-zpsrs 99m 5764Mi ds-cts-0 14m 376Mi ds-cts-1 6m 386Mi ds-cts-2 8m 374Mi ds-idrepo-0 9206m 13633Mi ds-idrepo-1 2849m 13610Mi ds-idrepo-2 2617m 13497Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8814m 5397Mi idm-65858d8c4c-97wdf 8223m 5388Mi lodemon-86f768796c-ts724 2m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1199m 668Mi 03:20:16 DEBUG --- stderr --- 03:20:16 DEBUG 03:20:17 INFO 03:20:17 INFO [loop_until]: kubectl --namespace=xlou top node 03:20:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:20:17 INFO [loop_until]: OK (rc = 0) 03:20:17 DEBUG --- stdout --- 03:20:17 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1381Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 157m 0% 6808Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 160m 1% 6907Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 142m 0% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 8989m 56% 6718Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2404m 15% 2194Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8426m 53% 6654Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2382m 14% 14085Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3295m 20% 14242Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 1115Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-x4wx 9173m 57% 14199Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1282m 8% 2188Mi 3% 03:20:17 DEBUG --- stderr --- 03:20:17 DEBUG 03:21:16 INFO 03:21:16 INFO [loop_until]: kubectl --namespace=xlou top pods 03:21:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:21:16 INFO [loop_until]: OK (rc = 0) 03:21:16 DEBUG --- stdout --- 03:21:16 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 103m 5765Mi am-55f77847b7-c9bk2 99m 5729Mi am-55f77847b7-zpsrs 98m 5765Mi ds-cts-0 6m 376Mi ds-cts-1 6m 386Mi ds-cts-2 7m 374Mi ds-idrepo-0 8746m 13722Mi ds-idrepo-1 2599m 13720Mi ds-idrepo-2 2325m 13598Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8755m 5397Mi idm-65858d8c4c-97wdf 7967m 5389Mi lodemon-86f768796c-ts724 2m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1217m 673Mi 03:21:16 DEBUG --- stderr --- 03:21:16 DEBUG 03:21:17 INFO 03:21:17 INFO [loop_until]: kubectl --namespace=xlou top node 03:21:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:21:17 INFO [loop_until]: OK (rc = 0) 03:21:17 DEBUG --- stdout --- 03:21:17 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 159m 1% 6809Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 155m 0% 6909Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 150m 0% 6902Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9247m 58% 6720Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2420m 15% 2182Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8338m 52% 6653Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2297m 14% 14183Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2655m 16% 14303Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 8799m 55% 14311Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1292m 8% 2184Mi 3% 03:21:17 DEBUG --- stderr --- 03:21:17 DEBUG 03:22:16 INFO 03:22:16 INFO [loop_until]: kubectl --namespace=xlou top pods 03:22:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:22:16 INFO [loop_until]: OK (rc = 0) 03:22:16 DEBUG --- stdout --- 03:22:16 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 104m 5765Mi am-55f77847b7-c9bk2 96m 5729Mi am-55f77847b7-zpsrs 101m 5765Mi ds-cts-0 6m 377Mi ds-cts-1 8m 386Mi ds-cts-2 7m 373Mi ds-idrepo-0 9231m 13702Mi ds-idrepo-1 2771m 13826Mi ds-idrepo-2 2509m 13819Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8985m 5398Mi idm-65858d8c4c-97wdf 8133m 5389Mi lodemon-86f768796c-ts724 6m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1230m 674Mi 03:22:16 DEBUG --- stderr --- 03:22:16 DEBUG 03:22:17 INFO 03:22:17 INFO [loop_until]: kubectl --namespace=xlou top node 03:22:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:22:18 INFO [loop_until]: OK (rc = 0) 03:22:18 DEBUG --- stdout --- 03:22:18 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 155m 0% 6812Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 157m 0% 6910Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 150m 0% 6902Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9560m 60% 6725Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2474m 15% 2184Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8563m 53% 6652Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 
61m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2645m 16% 14412Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2728m 17% 14438Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9431m 59% 14255Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1311m 8% 2184Mi 3% 03:22:18 DEBUG --- stderr --- 03:22:18 DEBUG 03:23:16 INFO 03:23:16 INFO [loop_until]: kubectl --namespace=xlou top pods 03:23:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:23:16 INFO [loop_until]: OK (rc = 0) 03:23:16 DEBUG --- stdout --- 03:23:16 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 95m 5765Mi am-55f77847b7-c9bk2 90m 5729Mi am-55f77847b7-zpsrs 100m 5765Mi ds-cts-0 7m 376Mi ds-cts-1 7m 386Mi ds-cts-2 7m 373Mi ds-idrepo-0 9380m 13830Mi ds-idrepo-1 2162m 13803Mi ds-idrepo-2 2355m 13813Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 9146m 5398Mi idm-65858d8c4c-97wdf 8027m 5390Mi lodemon-86f768796c-ts724 6m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1222m 675Mi 03:23:16 DEBUG --- stderr --- 03:23:16 DEBUG 03:23:18 INFO 03:23:18 INFO [loop_until]: kubectl --namespace=xlou top node 03:23:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:23:18 INFO [loop_until]: OK (rc = 0) 03:23:18 DEBUG --- stdout --- 03:23:18 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 152m 0% 6807Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 158m 0% 6908Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 149m 0% 6903Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9295m 58% 6718Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2353m 14% 2173Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8474m 53% 6652Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2404m 15% 14407Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2223m 13% 14434Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9269m 58% 14395Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1288m 8% 2184Mi 3% 03:23:18 DEBUG --- stderr --- 03:23:18 DEBUG 03:24:16 INFO 03:24:16 INFO [loop_until]: kubectl --namespace=xlou top pods 03:24:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:24:16 INFO [loop_until]: OK (rc = 0) 03:24:16 DEBUG --- stdout --- 03:24:16 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 100m 5765Mi am-55f77847b7-c9bk2 96m 5729Mi am-55f77847b7-zpsrs 93m 5765Mi ds-cts-0 6m 376Mi ds-cts-1 6m 386Mi ds-cts-2 7m 374Mi ds-idrepo-0 9113m 13821Mi ds-idrepo-1 2233m 13837Mi ds-idrepo-2 2599m 13821Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8799m 5399Mi idm-65858d8c4c-97wdf 8149m 5390Mi lodemon-86f768796c-ts724 8m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1216m 675Mi 03:24:16 DEBUG --- stderr --- 03:24:16 DEBUG 03:24:18 INFO 03:24:18 INFO [loop_until]: kubectl --namespace=xlou top node 03:24:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:24:18 INFO [loop_until]: OK (rc = 0) 03:24:18 DEBUG --- stdout --- 03:24:18 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1380Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 157m 0% 6809Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 154m 0% 6926Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 149m 0% 6906Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 9271m 58% 6720Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-h81k 2422m 15% 2170Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 8568m 53% 6653Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2621m 16% 14400Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2137m 13% 14442Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9455m 59% 14402Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1296m 8% 2184Mi 3% 03:24:18 DEBUG --- stderr --- 03:24:18 DEBUG 03:25:16 INFO 03:25:16 INFO [loop_until]: kubectl --namespace=xlou top pods 03:25:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:25:17 INFO [loop_until]: OK (rc = 0) 03:25:17 DEBUG --- stdout --- 03:25:17 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 6m 5765Mi am-55f77847b7-c9bk2 21m 5729Mi am-55f77847b7-zpsrs 37m 5764Mi ds-cts-0 6m 377Mi ds-cts-1 7m 386Mi ds-cts-2 6m 375Mi ds-idrepo-0 549m 13823Mi ds-idrepo-1 91m 13802Mi ds-idrepo-2 78m 13752Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8m 5397Mi idm-65858d8c4c-97wdf 9m 5388Mi lodemon-86f768796c-ts724 2m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 155m 242Mi 03:25:17 DEBUG --- stderr --- 03:25:17 DEBUG 03:25:18 INFO 03:25:18 INFO [loop_until]: kubectl --namespace=xlou top node 03:25:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:25:18 INFO [loop_until]: OK (rc = 0) 03:25:18 DEBUG --- stdout --- 03:25:18 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 60m 0% 6808Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 93m 0% 6904Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 59m 0% 6899Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 6721Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 117m 0% 2173Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 6655Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 137m 0% 14351Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 108m 0% 14432Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 120m 0% 14409Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 228m 1% 1759Mi 3% 03:25:18 DEBUG --- stderr --- 03:25:18 DEBUG 03:26:17 INFO 03:26:17 INFO [loop_until]: kubectl --namespace=xlou top pods 03:26:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:26:17 INFO [loop_until]: OK (rc = 0) 03:26:17 DEBUG --- stdout --- 03:26:17 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 7m 5765Mi am-55f77847b7-c9bk2 6m 5729Mi am-55f77847b7-zpsrs 9m 5764Mi ds-cts-0 6m 377Mi ds-cts-1 7m 386Mi ds-cts-2 6m 374Mi ds-idrepo-0 9m 13824Mi ds-idrepo-1 10m 13802Mi ds-idrepo-2 15m 13753Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8m 5397Mi idm-65858d8c4c-97wdf 9m 5388Mi lodemon-86f768796c-ts724 2m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1m 242Mi 03:26:17 DEBUG --- stderr --- 03:26:17 DEBUG 03:26:18 INFO 03:26:18 INFO [loop_until]: kubectl --namespace=xlou top node 03:26:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:26:18 INFO [loop_until]: OK (rc = 0) 03:26:18 DEBUG --- stdout --- 03:26:18 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1385Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 6808Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 6905Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-9p4b 58m 0% 6899Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 6719Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 121m 0% 2171Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 72m 0% 6654Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1135Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 71m 0% 14355Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 57m 0% 14433Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 61m 0% 14407Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 65m 0% 1760Mi 3% 03:26:18 DEBUG --- stderr --- 03:26:18 DEBUG 127.0.0.1 - - [13/Aug/2023 03:26:28] "GET /monitoring/average?start_time=23-08-13_01:55:57&stop_time=23-08-13_02:24:27 HTTP/1.1" 200 - 03:27:17 INFO 03:27:17 INFO [loop_until]: kubectl --namespace=xlou top pods 03:27:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:27:17 INFO [loop_until]: OK (rc = 0) 03:27:17 DEBUG --- stdout --- 03:27:17 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 6m 5765Mi am-55f77847b7-c9bk2 6m 5729Mi am-55f77847b7-zpsrs 8m 5764Mi ds-cts-0 6m 377Mi ds-cts-1 6m 386Mi ds-cts-2 6m 374Mi ds-idrepo-0 9m 13823Mi ds-idrepo-1 10m 13803Mi ds-idrepo-2 10m 13752Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 8m 5397Mi idm-65858d8c4c-97wdf 8m 5388Mi lodemon-86f768796c-ts724 3m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 2557m 506Mi 03:27:17 DEBUG --- stderr --- 03:27:17 DEBUG 03:27:18 INFO 03:27:18 INFO [loop_until]: kubectl --namespace=xlou top node 03:27:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:27:18 INFO [loop_until]: OK (rc = 0) 03:27:18 DEBUG --- stdout --- 03:27:18 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1385Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 62m 0% 6807Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 6906Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 59m 0% 6902Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 70m 0% 6720Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2168Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 73m 0% 6652Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 14356Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 59m 0% 14429Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 56m 0% 14410Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1967m 12% 2044Mi 3% 03:27:18 DEBUG --- stderr --- 03:27:18 DEBUG 03:28:17 INFO 03:28:17 INFO [loop_until]: kubectl --namespace=xlou top pods 03:28:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:28:17 INFO [loop_until]: OK (rc = 0) 03:28:17 DEBUG --- stdout --- 03:28:17 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 64m 5765Mi am-55f77847b7-c9bk2 64m 5730Mi am-55f77847b7-zpsrs 68m 5766Mi ds-cts-0 7m 377Mi ds-cts-1 6m 386Mi ds-cts-2 6m 374Mi ds-idrepo-0 4637m 13837Mi ds-idrepo-1 3363m 13842Mi ds-idrepo-2 2847m 13808Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 1847m 5427Mi idm-65858d8c4c-97wdf 1839m 5452Mi lodemon-86f768796c-ts724 7m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1862m 1278Mi 03:28:17 DEBUG --- stderr --- 03:28:17 DEBUG 03:28:18 INFO 03:28:18 INFO [loop_until]: kubectl --namespace=xlou top node 03:28:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:28:18 INFO [loop_until]: OK (rc = 0) 03:28:18 DEBUG 
--- stdout --- 03:28:18 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 116m 0% 6809Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 127m 0% 6908Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 119m 0% 6903Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2071m 13% 6737Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1825m 11% 3253Mi 5% gke-xlou-cdm-default-pool-f05840a3-tnc9 1931m 12% 6701Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2913m 18% 14438Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3570m 22% 14462Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4885m 30% 14446Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 2258m 14% 2743Mi 4% 03:28:18 DEBUG --- stderr --- 03:28:18 DEBUG 03:29:17 INFO 03:29:17 INFO [loop_until]: kubectl --namespace=xlou top pods 03:29:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:29:17 INFO [loop_until]: OK (rc = 0) 03:29:17 DEBUG --- stdout --- 03:29:17 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 92m 5765Mi am-55f77847b7-c9bk2 82m 5730Mi am-55f77847b7-zpsrs 85m 5767Mi ds-cts-0 11m 377Mi ds-cts-1 11m 383Mi ds-cts-2 7m 374Mi ds-idrepo-0 5748m 13851Mi ds-idrepo-1 3888m 13823Mi ds-idrepo-2 3469m 13824Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 2700m 5440Mi idm-65858d8c4c-97wdf 2476m 5446Mi lodemon-86f768796c-ts724 2m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1404m 1670Mi 03:29:17 DEBUG --- stderr --- 03:29:17 DEBUG 03:29:18 INFO 03:29:18 INFO [loop_until]: kubectl --namespace=xlou top node 03:29:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:29:18 INFO [loop_until]: OK (rc = 0) 03:29:18 DEBUG --- stdout --- 03:29:18 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1390Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 149m 0% 6809Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 147m 0% 6910Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 139m 0% 6906Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2893m 18% 6744Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1944m 12% 2937Mi 5% gke-xlou-cdm-default-pool-f05840a3-tnc9 2664m 16% 6700Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 64m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 3591m 22% 14437Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 4088m 25% 14464Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5866m 36% 14442Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1484m 9% 3133Mi 5% 03:29:18 DEBUG --- stderr --- 03:29:18 DEBUG 03:30:17 INFO 03:30:17 INFO [loop_until]: kubectl --namespace=xlou top pods 03:30:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:30:17 INFO [loop_until]: OK (rc = 0) 03:30:17 DEBUG --- stdout --- 03:30:17 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 9m 5765Mi am-55f77847b7-c9bk2 8m 5730Mi am-55f77847b7-zpsrs 7m 5767Mi ds-cts-0 6m 377Mi ds-cts-1 6m 383Mi ds-cts-2 6m 375Mi ds-idrepo-0 450m 13805Mi ds-idrepo-1 1043m 13822Mi ds-idrepo-2 503m 13844Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 2613m 1419Mi idm-65858d8c4c-97wdf 2687m 3295Mi lodemon-86f768796c-ts724 7m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1105m 1654Mi 03:30:17 DEBUG --- stderr --- 03:30:17 DEBUG 03:30:18 INFO 03:30:18 INFO 
[loop_until]: kubectl --namespace=xlou top node 03:30:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:30:18 INFO [loop_until]: OK (rc = 0) 03:30:18 DEBUG --- stdout --- 03:30:18 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 6806Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 6908Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 6903Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1208m 7% 2774Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 1796m 11% 2193Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1886m 11% 4587Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 443m 2% 14455Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 925m 5% 14459Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 281m 1% 14406Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1199m 7% 3138Mi 5% 03:30:18 DEBUG --- stderr --- 03:30:18 DEBUG 03:31:17 INFO 03:31:17 INFO [loop_until]: kubectl --namespace=xlou top pods 03:31:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:31:17 INFO [loop_until]: OK (rc = 0) 03:31:17 DEBUG --- stdout --- 03:31:17 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 98m 5765Mi am-55f77847b7-c9bk2 88m 5731Mi am-55f77847b7-zpsrs 96m 5767Mi ds-cts-0 8m 377Mi ds-cts-1 5m 383Mi ds-cts-2 6m 374Mi ds-idrepo-0 6618m 13826Mi ds-idrepo-1 4123m 13828Mi ds-idrepo-2 4575m 13799Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 3845m 4004Mi idm-65858d8c4c-97wdf 3856m 4160Mi lodemon-86f768796c-ts724 2m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 917m 1713Mi 03:31:17 DEBUG --- stderr --- 03:31:17 DEBUG 03:31:19 INFO 03:31:19 INFO [loop_until]: kubectl --namespace=xlou top node 03:31:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:31:19 INFO [loop_until]: OK (rc = 0) 03:31:19 DEBUG --- stdout --- 03:31:19 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 155m 0% 6812Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 148m 0% 6907Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 146m 0% 6906Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3803m 23% 5381Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1891m 11% 2916Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 3954m 24% 5435Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 4759m 29% 14416Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 4101m 25% 14452Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 6751m 42% 14437Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1150m 7% 3144Mi 5% 03:31:19 DEBUG --- stderr --- 03:31:19 DEBUG 03:32:17 INFO 03:32:17 INFO [loop_until]: kubectl --namespace=xlou top pods 03:32:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:32:17 INFO [loop_until]: OK (rc = 0) 03:32:17 DEBUG --- stdout --- 03:32:17 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 112m 5765Mi am-55f77847b7-c9bk2 109m 5731Mi am-55f77847b7-zpsrs 116m 5767Mi ds-cts-0 14m 377Mi ds-cts-1 6m 383Mi ds-cts-2 7m 374Mi ds-idrepo-0 8666m 13825Mi ds-idrepo-1 6218m 13804Mi ds-idrepo-2 5428m 13826Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 3891m 4231Mi idm-65858d8c4c-97wdf 4062m 4293Mi 
lodemon-86f768796c-ts724 7m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1787m 1729Mi 03:32:17 DEBUG --- stderr --- 03:32:17 DEBUG 03:32:19 INFO 03:32:19 INFO [loop_until]: kubectl --namespace=xlou top node 03:32:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:32:19 INFO [loop_until]: OK (rc = 0) 03:32:19 DEBUG --- stdout --- 03:32:19 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1385Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 172m 1% 6810Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 177m 1% 6910Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 165m 1% 6904Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4230m 26% 5550Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2103m 13% 3239Mi 5% gke-xlou-cdm-default-pool-f05840a3-tnc9 4275m 26% 5587Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 69m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5615m 35% 14415Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 6130m 38% 14460Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 8601m 54% 14414Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1829m 11% 3146Mi 5% 03:32:19 DEBUG --- stderr --- 03:32:19 DEBUG 03:33:17 INFO 03:33:17 INFO [loop_until]: kubectl --namespace=xlou top pods 03:33:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:33:17 INFO [loop_until]: OK (rc = 0) 03:33:17 DEBUG --- stdout --- 03:33:17 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 101m 5765Mi am-55f77847b7-c9bk2 99m 5731Mi am-55f77847b7-zpsrs 98m 5767Mi ds-cts-0 6m 377Mi ds-cts-1 6m 383Mi ds-cts-2 6m 374Mi ds-idrepo-0 7756m 13784Mi ds-idrepo-1 4970m 13810Mi ds-idrepo-2 4375m 13825Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 2794m 4453Mi idm-65858d8c4c-97wdf 3383m 4447Mi lodemon-86f768796c-ts724 2m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1057m 1689Mi 03:33:17 DEBUG --- stderr --- 03:33:17 DEBUG 03:33:19 INFO 03:33:19 INFO [loop_until]: kubectl --namespace=xlou top node 03:33:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:33:19 INFO [loop_until]: OK (rc = 0) 03:33:19 DEBUG --- stdout --- 03:33:19 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 150m 0% 6811Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 154m 0% 6906Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 155m 0% 6909Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3259m 20% 5797Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1934m 12% 2313Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3788m 23% 5732Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5359m 33% 14435Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5080m 31% 14457Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 7318m 46% 14319Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1063m 6% 3146Mi 5% 03:33:19 DEBUG --- stderr --- 03:33:19 DEBUG 03:34:17 INFO 03:34:17 INFO [loop_until]: kubectl --namespace=xlou top pods 03:34:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:34:17 INFO [loop_until]: OK (rc = 0) 03:34:17 DEBUG --- stdout --- 03:34:17 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 143m 5765Mi am-55f77847b7-c9bk2 145m 5734Mi am-55f77847b7-zpsrs 146m 5769Mi ds-cts-0 8m 377Mi ds-cts-1 6m 383Mi ds-cts-2 7m 374Mi ds-idrepo-0 
10836m 13852Mi ds-idrepo-1 7782m 13823Mi ds-idrepo-2 7396m 13808Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 5662m 4519Mi idm-65858d8c4c-97wdf 4970m 4496Mi lodemon-86f768796c-ts724 5m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1051m 1683Mi 03:34:17 DEBUG --- stderr --- 03:34:17 DEBUG 03:34:19 INFO 03:34:19 INFO [loop_until]: kubectl --namespace=xlou top node 03:34:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:34:19 INFO [loop_until]: OK (rc = 0) 03:34:19 DEBUG --- stdout --- 03:34:19 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1385Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 196m 1% 6812Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 209m 1% 6911Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 204m 1% 6907Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6025m 37% 5858Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1993m 12% 2238Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 5466m 34% 5791Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 7313m 46% 14428Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 8474m 53% 14411Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 10968m 69% 14422Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1116m 7% 3143Mi 5% 03:34:19 DEBUG --- stderr --- 03:34:19 DEBUG 03:35:17 INFO 03:35:17 INFO [loop_until]: kubectl --namespace=xlou top pods 03:35:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:35:17 INFO [loop_until]: OK (rc = 0) 03:35:17 DEBUG --- stdout --- 03:35:17 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 143m 5765Mi am-55f77847b7-c9bk2 139m 5734Mi am-55f77847b7-zpsrs 150m 5770Mi ds-cts-0 6m 377Mi ds-cts-1 8m 383Mi ds-cts-2 7m 374Mi ds-idrepo-0 10671m 13823Mi ds-idrepo-1 8821m 13802Mi ds-idrepo-2 6979m 13825Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 388m 1156Mi idm-65858d8c4c-97wdf 11614m 4606Mi lodemon-86f768796c-ts724 8m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 970m 1683Mi 03:35:17 DEBUG --- stderr --- 03:35:17 DEBUG 03:35:19 INFO 03:35:19 INFO [loop_until]: kubectl --namespace=xlou top node 03:35:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:35:19 INFO [loop_until]: OK (rc = 0) 03:35:19 DEBUG --- stdout --- 03:35:19 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 199m 1% 6812Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 209m 1% 6910Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 194m 1% 6907Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 626m 3% 2510Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 1950m 12% 2244Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 12122m 76% 5895Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 7633m 48% 14435Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 8909m 56% 14464Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 10883m 68% 14438Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1023m 6% 3148Mi 5% 03:35:19 DEBUG --- stderr --- 03:35:19 DEBUG 03:36:17 INFO 03:36:17 INFO [loop_until]: kubectl --namespace=xlou top pods 03:36:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:36:18 INFO [loop_until]: OK (rc = 0) 03:36:18 DEBUG --- stdout --- 03:36:18 DEBUG NAME CPU(cores) MEMORY(bytes) 
admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 139m 5765Mi am-55f77847b7-c9bk2 126m 5734Mi am-55f77847b7-zpsrs 138m 5770Mi ds-cts-0 6m 378Mi ds-cts-1 6m 383Mi ds-cts-2 7m 374Mi ds-idrepo-0 11224m 13825Mi ds-idrepo-1 7134m 13888Mi ds-idrepo-2 8012m 13832Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 7249m 4091Mi idm-65858d8c4c-97wdf 4872m 4673Mi lodemon-86f768796c-ts724 6m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 949m 1687Mi 03:36:18 DEBUG --- stderr --- 03:36:18 DEBUG 03:36:19 INFO 03:36:19 INFO [loop_until]: kubectl --namespace=xlou top node 03:36:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:36:19 INFO [loop_until]: OK (rc = 0) 03:36:19 DEBUG --- stdout --- 03:36:19 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1381Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 191m 1% 6809Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 192m 1% 6914Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 178m 1% 6910Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5943m 37% 5425Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1969m 12% 2283Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 5211m 32% 5958Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 7445m 46% 14392Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 7045m 44% 14458Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 11507m 72% 14399Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1042m 6% 3148Mi 5% 03:36:19 DEBUG --- stderr --- 03:36:19 DEBUG 03:37:18 INFO 03:37:18 INFO [loop_until]: kubectl --namespace=xlou top pods 03:37:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:37:18 INFO [loop_until]: OK (rc = 0) 03:37:18 DEBUG --- stdout --- 03:37:18 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 151m 5766Mi am-55f77847b7-c9bk2 143m 5734Mi am-55f77847b7-zpsrs 144m 5771Mi ds-cts-0 6m 377Mi ds-cts-1 6m 383Mi ds-cts-2 8m 374Mi ds-idrepo-0 10807m 13784Mi ds-idrepo-1 7790m 13825Mi ds-idrepo-2 7493m 13780Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 6031m 4148Mi idm-65858d8c4c-97wdf 5452m 4727Mi lodemon-86f768796c-ts724 6m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1001m 1682Mi 03:37:18 DEBUG --- stderr --- 03:37:18 DEBUG 03:37:19 INFO 03:37:19 INFO [loop_until]: kubectl --namespace=xlou top node 03:37:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:37:19 INFO [loop_until]: OK (rc = 0) 03:37:19 DEBUG --- stdout --- 03:37:19 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 206m 1% 6811Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 208m 1% 6916Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 200m 1% 6906Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6358m 40% 5497Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1972m 12% 2196Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 5722m 36% 6017Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 7285m 45% 14354Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 7774m 48% 14459Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 10846m 68% 14426Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1068m 6% 3147Mi 5% 03:37:19 DEBUG --- stderr --- 03:37:19 DEBUG 03:38:18 INFO 03:38:18 INFO [loop_until]: kubectl --namespace=xlou top pods 03:38:18 
INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:38:18 INFO [loop_until]: OK (rc = 0) 03:38:18 DEBUG --- stdout --- 03:38:18 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 133m 5766Mi am-55f77847b7-c9bk2 131m 5734Mi am-55f77847b7-zpsrs 128m 5771Mi ds-cts-0 6m 377Mi ds-cts-1 6m 383Mi ds-cts-2 13m 375Mi ds-idrepo-0 9908m 13819Mi ds-idrepo-1 8682m 13753Mi ds-idrepo-2 6337m 13778Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 4941m 4197Mi idm-65858d8c4c-97wdf 4531m 4780Mi lodemon-86f768796c-ts724 2m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 971m 1687Mi 03:38:18 DEBUG --- stderr --- 03:38:18 DEBUG 03:38:19 INFO 03:38:19 INFO [loop_until]: kubectl --namespace=xlou top node 03:38:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:38:19 INFO [loop_until]: OK (rc = 0) 03:38:19 DEBUG --- stdout --- 03:38:19 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1386Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 192m 1% 6811Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 189m 1% 6917Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 185m 1% 6907Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4934m 31% 5556Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1941m 12% 2277Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4907m 30% 6062Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6702m 42% 14372Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 8841m 55% 14399Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 75m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 10105m 63% 14411Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1043m 6% 3145Mi 5% 03:38:19 DEBUG --- stderr --- 03:38:19 DEBUG 03:39:18 INFO 03:39:18 INFO [loop_until]: kubectl --namespace=xlou top pods 03:39:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:39:18 INFO [loop_until]: OK (rc = 0) 03:39:18 DEBUG --- stdout --- 03:39:18 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 157m 5766Mi am-55f77847b7-c9bk2 148m 5734Mi am-55f77847b7-zpsrs 147m 5771Mi ds-cts-0 5m 377Mi ds-cts-1 6m 383Mi ds-cts-2 10m 375Mi ds-idrepo-0 10249m 13613Mi ds-idrepo-1 8358m 13832Mi ds-idrepo-2 6416m 13823Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 6789m 4309Mi idm-65858d8c4c-97wdf 5396m 4844Mi lodemon-86f768796c-ts724 6m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1022m 1684Mi 03:39:18 DEBUG --- stderr --- 03:39:18 DEBUG 03:39:19 INFO 03:39:19 INFO [loop_until]: kubectl --namespace=xlou top node 03:39:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:39:20 INFO [loop_until]: OK (rc = 0) 03:39:20 DEBUG --- stdout --- 03:39:20 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1379Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 205m 1% 6809Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 208m 1% 6913Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 211m 1% 6904Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6892m 43% 5658Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2023m 12% 2250Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6035m 37% 6134Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6788m 42% 14422Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 8519m 53% 14469Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 66m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 10664m 67% 14176Mi 24% 
gke-xlou-cdm-frontend-a8771548-k40m 1120m 7% 3145Mi 5% 03:39:20 DEBUG --- stderr --- 03:39:20 DEBUG 03:40:18 INFO 03:40:18 INFO [loop_until]: kubectl --namespace=xlou top pods 03:40:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:40:18 INFO [loop_until]: OK (rc = 0) 03:40:18 DEBUG --- stdout --- 03:40:18 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 144m 5766Mi am-55f77847b7-c9bk2 138m 5734Mi am-55f77847b7-zpsrs 148m 5771Mi ds-cts-0 5m 377Mi ds-cts-1 6m 383Mi ds-cts-2 14m 374Mi ds-idrepo-0 10005m 13824Mi ds-idrepo-1 6745m 13825Mi ds-idrepo-2 8802m 13825Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 5212m 4355Mi idm-65858d8c4c-97wdf 5066m 4884Mi lodemon-86f768796c-ts724 5m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 926m 1683Mi 03:40:18 DEBUG --- stderr --- 03:40:18 DEBUG 03:40:20 INFO 03:40:20 INFO [loop_until]: kubectl --namespace=xlou top node 03:40:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:40:20 INFO [loop_until]: OK (rc = 0) 03:40:20 DEBUG --- stdout --- 03:40:20 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1382Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 193m 1% 6812Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 205m 1% 6913Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 191m 1% 6909Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5492m 34% 5698Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1907m 12% 2196Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 5236m 32% 6170Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8781m 55% 14404Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 6061m 38% 14458Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 70m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 10209m 64% 14403Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1005m 6% 3146Mi 5% 03:40:20 DEBUG --- stderr --- 03:40:20 DEBUG 03:41:18 INFO 03:41:18 INFO [loop_until]: kubectl --namespace=xlou top pods 03:41:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:41:18 INFO [loop_until]: OK (rc = 0) 03:41:18 DEBUG --- stdout --- 03:41:18 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 137m 5766Mi am-55f77847b7-c9bk2 135m 5734Mi am-55f77847b7-zpsrs 143m 5771Mi ds-cts-0 6m 377Mi ds-cts-1 6m 383Mi ds-cts-2 7m 374Mi ds-idrepo-0 11037m 13825Mi ds-idrepo-1 6550m 13824Mi ds-idrepo-2 6466m 13827Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 5706m 4413Mi idm-65858d8c4c-97wdf 5175m 4932Mi lodemon-86f768796c-ts724 7m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 953m 1685Mi 03:41:18 DEBUG --- stderr --- 03:41:18 DEBUG 03:41:20 INFO 03:41:20 INFO [loop_until]: kubectl --namespace=xlou top node 03:41:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:41:20 INFO [loop_until]: OK (rc = 0) 03:41:20 DEBUG --- stdout --- 03:41:20 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 196m 1% 6812Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 194m 1% 6912Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 191m 1% 6909Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5589m 35% 5760Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1963m 12% 2196Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 5169m 32% 6221Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 
6726m 42% 14424Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 6613m 41% 14454Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 10965m 69% 14399Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1037m 6% 3144Mi 5% 03:41:20 DEBUG --- stderr --- 03:41:20 DEBUG 03:42:18 INFO 03:42:18 INFO [loop_until]: kubectl --namespace=xlou top pods 03:42:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:42:18 INFO [loop_until]: OK (rc = 0) 03:42:18 DEBUG --- stdout --- 03:42:18 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 135m 5766Mi am-55f77847b7-c9bk2 144m 5736Mi am-55f77847b7-zpsrs 141m 5771Mi ds-cts-0 6m 377Mi ds-cts-1 5m 384Mi ds-cts-2 7m 376Mi ds-idrepo-0 10983m 13706Mi ds-idrepo-1 7039m 13864Mi ds-idrepo-2 8379m 13823Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 5308m 4466Mi idm-65858d8c4c-97wdf 5036m 4976Mi lodemon-86f768796c-ts724 7m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 938m 1684Mi 03:42:18 DEBUG --- stderr --- 03:42:18 DEBUG 03:42:20 INFO 03:42:20 INFO [loop_until]: kubectl --namespace=xlou top node 03:42:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:42:20 INFO [loop_until]: OK (rc = 0) 03:42:20 DEBUG --- stdout --- 03:42:20 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 194m 1% 6809Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 194m 1% 6910Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 196m 1% 6910Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5797m 36% 5803Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1930m 12% 2214Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 5288m 33% 6262Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8265m 52% 14411Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 7564m 47% 14441Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 10996m 69% 14350Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1012m 6% 3143Mi 5% 03:42:20 DEBUG --- stderr --- 03:42:20 DEBUG 03:43:18 INFO 03:43:18 INFO [loop_until]: kubectl --namespace=xlou top pods 03:43:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:43:18 INFO [loop_until]: OK (rc = 0) 03:43:18 DEBUG --- stdout --- 03:43:18 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 138m 5766Mi am-55f77847b7-c9bk2 133m 5736Mi am-55f77847b7-zpsrs 134m 5771Mi ds-cts-0 6m 378Mi ds-cts-1 5m 383Mi ds-cts-2 8m 376Mi ds-idrepo-0 10392m 13823Mi ds-idrepo-1 6798m 13751Mi ds-idrepo-2 6593m 13840Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 5362m 4523Mi idm-65858d8c4c-97wdf 5068m 5023Mi lodemon-86f768796c-ts724 6m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 945m 1684Mi 03:43:18 DEBUG --- stderr --- 03:43:18 DEBUG 03:43:20 INFO 03:43:20 INFO [loop_until]: kubectl --namespace=xlou top node 03:43:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:43:20 INFO [loop_until]: OK (rc = 0) 03:43:20 DEBUG --- stdout --- 03:43:20 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1381Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 193m 1% 6809Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 198m 1% 6912Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 183m 1% 6907Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5565m 35% 5862Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1918m 12% 2195Mi 3% 
gke-xlou-cdm-default-pool-f05840a3-tnc9 5275m 33% 6310Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6794m 42% 14396Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 7559m 47% 14426Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 68m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 10722m 67% 14403Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1013m 6% 3147Mi 5% 03:43:20 DEBUG --- stderr --- 03:43:20 DEBUG 03:44:18 INFO 03:44:18 INFO [loop_until]: kubectl --namespace=xlou top pods 03:44:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:44:18 INFO [loop_until]: OK (rc = 0) 03:44:18 DEBUG --- stdout --- 03:44:18 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 131m 5766Mi am-55f77847b7-c9bk2 133m 5736Mi am-55f77847b7-zpsrs 145m 5771Mi ds-cts-0 5m 377Mi ds-cts-1 5m 383Mi ds-cts-2 11m 376Mi ds-idrepo-0 10388m 13791Mi ds-idrepo-1 8224m 13821Mi ds-idrepo-2 7973m 13702Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 5102m 4590Mi idm-65858d8c4c-97wdf 5299m 5090Mi lodemon-86f768796c-ts724 5m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 943m 1690Mi 03:44:18 DEBUG --- stderr --- 03:44:18 DEBUG 03:44:20 INFO 03:44:20 INFO [loop_until]: kubectl --namespace=xlou top node 03:44:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:44:20 INFO [loop_until]: OK (rc = 0) 03:44:20 DEBUG --- stdout --- 03:44:20 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 192m 1% 6811Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 199m 1% 6913Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 190m 1% 6909Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5567m 35% 5929Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1913m 12% 2289Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 5302m 33% 6375Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 7675m 48% 14343Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 8791m 55% 14449Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 67m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 10936m 68% 14405Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1009m 6% 3149Mi 5% 03:44:20 DEBUG --- stderr --- 03:44:20 DEBUG 03:45:18 INFO 03:45:18 INFO [loop_until]: kubectl --namespace=xlou top pods 03:45:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:45:18 INFO [loop_until]: OK (rc = 0) 03:45:18 DEBUG --- stdout --- 03:45:18 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 144m 5766Mi am-55f77847b7-c9bk2 137m 5736Mi am-55f77847b7-zpsrs 149m 5771Mi ds-cts-0 5m 377Mi ds-cts-1 5m 383Mi ds-cts-2 7m 376Mi ds-idrepo-0 10050m 13823Mi ds-idrepo-1 7173m 13761Mi ds-idrepo-2 6351m 13822Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 5434m 4648Mi idm-65858d8c4c-97wdf 5150m 5131Mi lodemon-86f768796c-ts724 2m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 950m 1686Mi 03:45:18 DEBUG --- stderr --- 03:45:18 DEBUG 03:45:20 INFO 03:45:20 INFO [loop_until]: kubectl --namespace=xlou top node 03:45:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:45:20 INFO [loop_until]: OK (rc = 0) 03:45:20 DEBUG --- stdout --- 03:45:20 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 202m 1% 6809Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 209m 1% 6911Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-9p4b 194m 1% 6909Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5921m 37% 5989Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1942m 12% 2202Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 5001m 31% 6418Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6500m 40% 14426Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 7417m 46% 14462Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9889m 62% 14403Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1036m 6% 3147Mi 5% 03:45:20 DEBUG --- stderr --- 03:45:20 DEBUG 03:46:19 INFO 03:46:19 INFO [loop_until]: kubectl --namespace=xlou top pods 03:46:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:46:19 INFO [loop_until]: OK (rc = 0) 03:46:19 DEBUG --- stdout --- 03:46:19 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 140m 5766Mi am-55f77847b7-c9bk2 134m 5736Mi am-55f77847b7-zpsrs 154m 5771Mi ds-cts-0 6m 377Mi ds-cts-1 5m 383Mi ds-cts-2 7m 377Mi ds-idrepo-0 9432m 13824Mi ds-idrepo-1 7250m 13809Mi ds-idrepo-2 9120m 13747Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 5373m 4716Mi idm-65858d8c4c-97wdf 4892m 5192Mi lodemon-86f768796c-ts724 4m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 925m 1689Mi 03:46:19 DEBUG --- stderr --- 03:46:19 DEBUG 03:46:20 INFO 03:46:20 INFO [loop_until]: kubectl --namespace=xlou top node 03:46:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:46:20 INFO [loop_until]: OK (rc = 0) 03:46:20 DEBUG --- stdout --- 03:46:20 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1380Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 190m 1% 6810Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 214m 1% 6909Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 186m 1% 6909Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5530m 34% 6056Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1911m 12% 2233Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 5068m 31% 6480Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8452m 53% 14391Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 6968m 43% 14432Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 10237m 64% 14393Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 997m 6% 3143Mi 5% 03:46:20 DEBUG --- stderr --- 03:46:20 DEBUG 03:47:19 INFO 03:47:19 INFO [loop_until]: kubectl --namespace=xlou top pods 03:47:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:47:19 INFO [loop_until]: OK (rc = 0) 03:47:19 DEBUG --- stdout --- 03:47:19 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 140m 5766Mi am-55f77847b7-c9bk2 133m 5736Mi am-55f77847b7-zpsrs 149m 5771Mi ds-cts-0 6m 377Mi ds-cts-1 5m 383Mi ds-cts-2 6m 376Mi ds-idrepo-0 10544m 13822Mi ds-idrepo-1 7392m 13793Mi ds-idrepo-2 7576m 13824Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 5243m 4778Mi idm-65858d8c4c-97wdf 4920m 5240Mi lodemon-86f768796c-ts724 2m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 981m 1690Mi 03:47:19 DEBUG --- stderr --- 03:47:19 DEBUG 03:47:20 INFO 03:47:20 INFO [loop_until]: kubectl --namespace=xlou top node 03:47:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:47:21 INFO [loop_until]: OK (rc = 0) 03:47:21 DEBUG --- stdout --- 03:47:21 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% 
gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1380Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 191m 1% 6810Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 202m 1% 6911Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 189m 1% 6904Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5722m 36% 6112Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1958m 12% 2247Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 5357m 33% 6524Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 7662m 48% 14420Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 7412m 46% 14454Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 10702m 67% 14405Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1030m 6% 3142Mi 5% 03:47:21 DEBUG --- stderr --- 03:47:21 DEBUG 03:48:19 INFO 03:48:19 INFO [loop_until]: kubectl --namespace=xlou top pods 03:48:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:48:19 INFO [loop_until]: OK (rc = 0) 03:48:19 DEBUG --- stdout --- 03:48:19 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 140m 5766Mi am-55f77847b7-c9bk2 132m 5737Mi am-55f77847b7-zpsrs 136m 5771Mi ds-cts-0 6m 377Mi ds-cts-1 5m 383Mi ds-cts-2 7m 376Mi ds-idrepo-0 10775m 13826Mi ds-idrepo-1 6692m 13837Mi ds-idrepo-2 5756m 13823Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 249m 1546Mi idm-65858d8c4c-97wdf 11259m 5352Mi lodemon-86f768796c-ts724 2m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 950m 1691Mi 03:48:19 DEBUG --- stderr --- 03:48:19 DEBUG 03:48:21 INFO 03:48:21 INFO [loop_until]: kubectl --namespace=xlou top node 03:48:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:48:21 INFO [loop_until]: OK (rc = 0) 03:48:21 DEBUG --- stdout --- 03:48:21 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1387Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 192m 1% 6809Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 196m 1% 6916Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 191m 1% 6905Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1907m 12% 2902Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 1889m 11% 2269Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 11695m 73% 6624Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6181m 38% 14355Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 6789m 42% 14457Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 66m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 11071m 69% 14401Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1003m 6% 3146Mi 5% 03:48:21 DEBUG --- stderr --- 03:48:21 DEBUG 03:49:19 INFO 03:49:19 INFO [loop_until]: kubectl --namespace=xlou top pods 03:49:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:49:19 INFO [loop_until]: OK (rc = 0) 03:49:19 DEBUG --- stdout --- 03:49:19 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 126m 5766Mi am-55f77847b7-c9bk2 131m 5737Mi am-55f77847b7-zpsrs 140m 5771Mi ds-cts-0 5m 377Mi ds-cts-1 5m 383Mi ds-cts-2 7m 376Mi ds-idrepo-0 11117m 13701Mi ds-idrepo-1 8707m 13823Mi ds-idrepo-2 7098m 13830Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 6178m 4021Mi idm-65858d8c4c-97wdf 5049m 5417Mi lodemon-86f768796c-ts724 8m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 963m 1692Mi 03:49:19 DEBUG --- stderr --- 03:49:19 DEBUG 03:49:21 INFO 03:49:21 INFO [loop_until]: kubectl --namespace=xlou top node 03:49:21 INFO 
[loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:49:21 INFO [loop_until]: OK (rc = 0) 03:49:21 DEBUG --- stdout --- 03:49:21 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 193m 1% 6811Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 190m 1% 6916Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 192m 1% 6906Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6714m 42% 5364Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1988m 12% 2296Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 5333m 33% 6695Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8021m 50% 14369Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 8621m 54% 14396Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 11171m 70% 14330Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1057m 6% 3146Mi 5% 03:49:21 DEBUG --- stderr --- 03:49:21 DEBUG 03:50:19 INFO 03:50:19 INFO [loop_until]: kubectl --namespace=xlou top pods 03:50:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:50:19 INFO [loop_until]: OK (rc = 0) 03:50:19 DEBUG --- stdout --- 03:50:19 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 131m 5766Mi am-55f77847b7-c9bk2 128m 5737Mi am-55f77847b7-zpsrs 146m 5771Mi ds-cts-0 6m 377Mi ds-cts-1 7m 383Mi ds-cts-2 7m 376Mi ds-idrepo-0 11056m 13825Mi ds-idrepo-1 7797m 13753Mi ds-idrepo-2 7514m 13834Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 5278m 4090Mi idm-65858d8c4c-97wdf 5072m 5427Mi lodemon-86f768796c-ts724 2m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 975m 1692Mi 03:50:19 DEBUG --- stderr --- 03:50:19 DEBUG 03:50:21 INFO 03:50:21 INFO [loop_until]: kubectl --namespace=xlou top node 03:50:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:50:21 INFO [loop_until]: OK (rc = 0) 03:50:21 DEBUG --- stdout --- 03:50:21 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1385Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 189m 1% 6811Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 202m 1% 6912Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 183m 1% 6908Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5615m 35% 5423Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1878m 11% 2256Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 5345m 33% 6710Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 7041m 44% 14427Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 6975m 43% 14451Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 11208m 70% 14385Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1041m 6% 3146Mi 5% 03:50:21 DEBUG --- stderr --- 03:50:21 DEBUG 03:51:19 INFO 03:51:19 INFO [loop_until]: kubectl --namespace=xlou top pods 03:51:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:51:19 INFO [loop_until]: OK (rc = 0) 03:51:19 DEBUG --- stdout --- 03:51:19 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 133m 5766Mi am-55f77847b7-c9bk2 138m 5737Mi am-55f77847b7-zpsrs 143m 5771Mi ds-cts-0 7m 377Mi ds-cts-1 5m 383Mi ds-cts-2 7m 376Mi ds-idrepo-0 10670m 13826Mi ds-idrepo-1 8932m 13703Mi ds-idrepo-2 7657m 13805Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 5558m 4202Mi idm-65858d8c4c-97wdf 4983m 5427Mi lodemon-86f768796c-ts724 4m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi 
overseer-0-78bdc846-p8mnn 970m 1688Mi 03:51:19 DEBUG --- stderr --- 03:51:19 DEBUG 03:51:21 INFO 03:51:21 INFO [loop_until]: kubectl --namespace=xlou top node 03:51:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:51:21 INFO [loop_until]: OK (rc = 0) 03:51:21 DEBUG --- stdout --- 03:51:21 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1382Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 194m 1% 6811Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 201m 1% 6913Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 196m 1% 6909Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5959m 37% 5542Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1960m 12% 2243Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 5327m 33% 6711Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 7214m 45% 14410Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 8738m 54% 14337Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 10271m 64% 14397Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1024m 6% 3149Mi 5% 03:51:21 DEBUG --- stderr --- 03:51:21 DEBUG 03:52:19 INFO 03:52:19 INFO [loop_until]: kubectl --namespace=xlou top pods 03:52:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:52:19 INFO [loop_until]: OK (rc = 0) 03:52:19 DEBUG --- stdout --- 03:52:19 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 131m 5767Mi am-55f77847b7-c9bk2 126m 5737Mi am-55f77847b7-zpsrs 133m 5771Mi ds-cts-0 6m 377Mi ds-cts-1 5m 383Mi ds-cts-2 7m 377Mi ds-idrepo-0 10147m 13776Mi ds-idrepo-1 6578m 13799Mi ds-idrepo-2 8070m 13729Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 4821m 4258Mi idm-65858d8c4c-97wdf 4718m 5429Mi lodemon-86f768796c-ts724 6m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 928m 1690Mi 03:52:19 DEBUG --- stderr --- 03:52:19 DEBUG 03:52:21 INFO 03:52:21 INFO [loop_until]: kubectl --namespace=xlou top node 03:52:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:52:21 INFO [loop_until]: OK (rc = 0) 03:52:21 DEBUG --- stdout --- 03:52:21 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 185m 1% 6811Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 191m 1% 6916Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 176m 1% 6908Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5310m 33% 5609Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1885m 11% 2314Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 5023m 31% 6712Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8096m 50% 14374Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5808m 36% 14459Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 11007m 69% 14404Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1031m 6% 3152Mi 5% 03:52:21 DEBUG --- stderr --- 03:52:21 DEBUG 03:53:19 INFO 03:53:19 INFO [loop_until]: kubectl --namespace=xlou top pods 03:53:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:53:19 INFO [loop_until]: OK (rc = 0) 03:53:19 DEBUG --- stdout --- 03:53:19 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 147m 5767Mi am-55f77847b7-c9bk2 135m 5737Mi am-55f77847b7-zpsrs 150m 5771Mi ds-cts-0 7m 377Mi ds-cts-1 5m 383Mi ds-cts-2 7m 377Mi ds-idrepo-0 10572m 13662Mi ds-idrepo-1 7747m 13822Mi ds-idrepo-2 8617m 
13822Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 534m 1242Mi idm-65858d8c4c-97wdf 12402m 5450Mi lodemon-86f768796c-ts724 6m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 995m 1691Mi 03:53:19 DEBUG --- stderr --- 03:53:19 DEBUG 03:53:21 INFO 03:53:21 INFO [loop_until]: kubectl --namespace=xlou top node 03:53:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:53:21 INFO [loop_until]: OK (rc = 0) 03:53:21 DEBUG --- stdout --- 03:53:21 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 209m 1% 6812Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 199m 1% 6917Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 185m 1% 6910Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1740m 10% 2598Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 1972m 12% 2251Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 12603m 79% 6742Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8808m 55% 14441Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 7720m 48% 14459Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 10725m 67% 14244Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1089m 6% 3148Mi 5% 03:53:21 DEBUG --- stderr --- 03:53:21 DEBUG 03:54:19 INFO 03:54:19 INFO [loop_until]: kubectl --namespace=xlou top pods 03:54:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:54:19 INFO [loop_until]: OK (rc = 0) 03:54:19 DEBUG --- stdout --- 03:54:19 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 142m 5767Mi am-55f77847b7-c9bk2 132m 5737Mi am-55f77847b7-zpsrs 142m 5771Mi ds-cts-0 11m 378Mi ds-cts-1 5m 383Mi ds-cts-2 7m 377Mi ds-idrepo-0 10446m 13825Mi ds-idrepo-1 7900m 13756Mi ds-idrepo-2 8400m 13729Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 7114m 4096Mi idm-65858d8c4c-97wdf 4846m 5451Mi lodemon-86f768796c-ts724 6m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 977m 1691Mi 03:54:19 DEBUG --- stderr --- 03:54:19 DEBUG 03:54:21 INFO 03:54:21 INFO [loop_until]: kubectl --namespace=xlou top node 03:54:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:54:21 INFO [loop_until]: OK (rc = 0) 03:54:21 DEBUG --- stdout --- 03:54:21 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1381Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 188m 1% 6813Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 205m 1% 6915Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 196m 1% 6921Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6816m 42% 5431Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1875m 11% 2251Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 5222m 32% 6735Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 7501m 47% 14434Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 7469m 47% 14393Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 10517m 66% 14408Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1013m 6% 3148Mi 5% 03:54:21 DEBUG --- stderr --- 03:54:21 DEBUG 03:55:19 INFO 03:55:19 INFO [loop_until]: kubectl --namespace=xlou top pods 03:55:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:55:20 INFO [loop_until]: OK (rc = 0) 03:55:20 DEBUG --- stdout --- 03:55:20 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 127m 5767Mi 
am-55f77847b7-c9bk2 125m 5738Mi am-55f77847b7-zpsrs 131m 5771Mi ds-cts-0 6m 377Mi ds-cts-1 6m 383Mi ds-cts-2 7m 376Mi ds-idrepo-0 11837m 13811Mi ds-idrepo-1 7344m 13831Mi ds-idrepo-2 6546m 13837Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 4781m 4144Mi idm-65858d8c4c-97wdf 4815m 5457Mi lodemon-86f768796c-ts724 1m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 970m 1697Mi 03:55:20 DEBUG --- stderr --- 03:55:20 DEBUG 03:55:21 INFO 03:55:21 INFO [loop_until]: kubectl --namespace=xlou top node 03:55:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:55:21 INFO [loop_until]: OK (rc = 0) 03:55:21 DEBUG --- stdout --- 03:55:21 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1382Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 183m 1% 6812Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 193m 1% 6911Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 178m 1% 6908Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5216m 32% 5496Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1957m 12% 2320Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 5040m 31% 6739Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6795m 42% 14471Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 7483m 47% 14468Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 11379m 71% 14402Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1047m 6% 3149Mi 5% 03:55:21 DEBUG --- stderr --- 03:55:21 DEBUG 03:56:20 INFO 03:56:20 INFO [loop_until]: kubectl --namespace=xlou top pods 03:56:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:56:20 INFO [loop_until]: OK (rc = 0) 03:56:20 DEBUG --- stdout --- 03:56:20 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 123m 5767Mi am-55f77847b7-c9bk2 126m 5738Mi am-55f77847b7-zpsrs 130m 5771Mi ds-cts-0 6m 377Mi ds-cts-1 5m 383Mi ds-cts-2 7m 377Mi ds-idrepo-0 10639m 13772Mi ds-idrepo-1 6578m 13826Mi ds-idrepo-2 7669m 13787Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 4839m 4251Mi idm-65858d8c4c-97wdf 4770m 5464Mi lodemon-86f768796c-ts724 6m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 985m 1712Mi 03:56:20 DEBUG --- stderr --- 03:56:20 DEBUG 03:56:21 INFO 03:56:21 INFO [loop_until]: kubectl --namespace=xlou top node 03:56:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:56:22 INFO [loop_until]: OK (rc = 0) 03:56:22 DEBUG --- stdout --- 03:56:22 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1382Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 187m 1% 6812Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 187m 1% 6915Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 180m 1% 6910Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5194m 32% 5602Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2021m 12% 2493Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 5123m 32% 6745Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6823m 42% 14430Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 6499m 40% 14460Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 11344m 71% 14412Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1102m 6% 3149Mi 5% 03:56:22 DEBUG --- stderr --- 03:56:22 DEBUG 03:57:20 INFO 03:57:20 INFO [loop_until]: kubectl --namespace=xlou top pods 03:57:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 
03:57:20 INFO [loop_until]: OK (rc = 0) 03:57:20 DEBUG --- stdout --- 03:57:20 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 110m 5767Mi am-55f77847b7-c9bk2 120m 5738Mi am-55f77847b7-zpsrs 131m 5771Mi ds-cts-0 6m 377Mi ds-cts-1 5m 383Mi ds-cts-2 7m 377Mi ds-idrepo-0 11892m 13719Mi ds-idrepo-1 5595m 13772Mi ds-idrepo-2 6065m 13756Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 2235m 4338Mi idm-65858d8c4c-97wdf 7068m 5478Mi lodemon-86f768796c-ts724 4m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 886m 1693Mi 03:57:20 DEBUG --- stderr --- 03:57:20 DEBUG 03:57:22 INFO 03:57:22 INFO [loop_until]: kubectl --namespace=xlou top node 03:57:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:57:22 INFO [loop_until]: OK (rc = 0) 03:57:22 DEBUG --- stdout --- 03:57:22 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1380Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 176m 1% 6811Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 186m 1% 6915Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 172m 1% 6906Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2681m 16% 5680Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1975m 12% 2489Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 7378m 46% 6752Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6076m 38% 14373Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5378m 33% 14382Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 11162m 70% 14410Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 922m 5% 3144Mi 5% 03:57:22 DEBUG --- stderr --- 03:57:22 DEBUG 03:58:20 INFO 03:58:20 INFO [loop_until]: kubectl --namespace=xlou top pods 03:58:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:58:20 INFO [loop_until]: OK (rc = 0) 03:58:20 DEBUG --- stdout --- 03:58:20 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 9m 5767Mi am-55f77847b7-c9bk2 8m 5738Mi am-55f77847b7-zpsrs 10m 5771Mi ds-cts-0 6m 378Mi ds-cts-1 5m 383Mi ds-cts-2 6m 377Mi ds-idrepo-0 543m 13638Mi ds-idrepo-1 185m 13767Mi ds-idrepo-2 212m 13800Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 7m 4339Mi idm-65858d8c4c-97wdf 14m 5476Mi lodemon-86f768796c-ts724 2m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 147m 296Mi 03:58:20 DEBUG --- stderr --- 03:58:20 DEBUG 03:58:22 INFO 03:58:22 INFO [loop_until]: kubectl --namespace=xlou top node 03:58:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:58:22 INFO [loop_until]: OK (rc = 0) 03:58:22 DEBUG --- stdout --- 03:58:22 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 83m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 62m 0% 6813Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 6916Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 6909Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 84m 0% 5686Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 125m 0% 2202Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 79m 0% 6754Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 251m 1% 14406Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 222m 1% 14407Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 260m 1% 14234Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 223m 1% 1763Mi 3% 03:58:22 DEBUG --- stderr --- 03:58:22 DEBUG 03:59:20 
INFO 03:59:20 INFO [loop_until]: kubectl --namespace=xlou top pods 03:59:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:59:20 INFO [loop_until]: OK (rc = 0) 03:59:20 DEBUG --- stdout --- 03:59:20 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 8m 5767Mi am-55f77847b7-c9bk2 8m 5738Mi am-55f77847b7-zpsrs 11m 5771Mi ds-cts-0 5m 378Mi ds-cts-1 5m 383Mi ds-cts-2 5m 377Mi ds-idrepo-0 9m 13638Mi ds-idrepo-1 19m 13767Mi ds-idrepo-2 11m 13801Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 6m 4340Mi idm-65858d8c4c-97wdf 14m 5471Mi lodemon-86f768796c-ts724 8m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1m 296Mi 03:59:20 DEBUG --- stderr --- 03:59:20 DEBUG 03:59:22 INFO 03:59:22 INFO [loop_until]: kubectl --namespace=xlou top node 03:59:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:59:22 INFO [loop_until]: OK (rc = 0) 03:59:22 DEBUG --- stdout --- 03:59:22 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 84m 0% 1382Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 6814Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 6932Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 6909Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 5687Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 128m 0% 2193Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 81m 0% 6754Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 63m 0% 14407Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 74m 0% 14407Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 55m 0% 14236Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 66m 0% 1761Mi 3% 03:59:22 DEBUG --- stderr --- 03:59:22 DEBUG 127.0.0.1 - - [13/Aug/2023 03:59:35] "GET /monitoring/average?start_time=23-08-13_02:28:28&stop_time=23-08-13_02:57:35 HTTP/1.1" 200 - 04:00:20 INFO 04:00:20 INFO [loop_until]: kubectl --namespace=xlou top pods 04:00:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:00:20 INFO [loop_until]: OK (rc = 0) 04:00:20 DEBUG --- stdout --- 04:00:20 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 7m 5767Mi am-55f77847b7-c9bk2 9m 5738Mi am-55f77847b7-zpsrs 10m 5771Mi ds-cts-0 5m 377Mi ds-cts-1 5m 383Mi ds-cts-2 7m 377Mi ds-idrepo-0 10m 13638Mi ds-idrepo-1 10m 13767Mi ds-idrepo-2 10m 13800Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 5m 4340Mi idm-65858d8c4c-97wdf 9m 5471Mi lodemon-86f768796c-ts724 7m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1244m 555Mi 04:00:20 DEBUG --- stderr --- 04:00:20 DEBUG 04:00:22 INFO 04:00:22 INFO [loop_until]: kubectl --namespace=xlou top node 04:00:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:00:22 INFO [loop_until]: OK (rc = 0) 04:00:22 DEBUG --- stdout --- 04:00:22 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 86m 0% 1380Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 62m 0% 6813Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 6913Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 6907Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 5686Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 121m 0% 2193Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 77m 0% 6755Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 63m 0% 14407Mi 24% 
gke-xlou-cdm-ds-32e4dcb1-b374 59m 0% 14406Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 58m 0% 14236Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1962m 12% 2064Mi 3% 04:00:22 DEBUG --- stderr --- 04:00:22 DEBUG 04:01:20 INFO 04:01:20 INFO [loop_until]: kubectl --namespace=xlou top pods 04:01:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:01:20 INFO [loop_until]: OK (rc = 0) 04:01:20 DEBUG --- stdout --- 04:01:20 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 136m 5768Mi am-55f77847b7-c9bk2 217m 5743Mi am-55f77847b7-zpsrs 128m 5789Mi ds-cts-0 6m 378Mi ds-cts-1 5m 383Mi ds-cts-2 6m 377Mi ds-idrepo-0 7812m 13823Mi ds-idrepo-1 5265m 13827Mi ds-idrepo-2 4975m 13823Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 2906m 4457Mi idm-65858d8c4c-97wdf 2931m 5510Mi lodemon-86f768796c-ts724 7m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1034m 1378Mi 04:01:20 DEBUG --- stderr --- 04:01:20 DEBUG 04:01:22 INFO 04:01:22 INFO [loop_until]: kubectl --namespace=xlou top node 04:01:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:01:22 INFO [loop_until]: OK (rc = 0) 04:01:22 DEBUG --- stdout --- 04:01:22 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 84m 0% 1380Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 133m 0% 6812Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 160m 1% 6932Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 184m 1% 6913Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3171m 19% 5798Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1843m 11% 2373Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 3203m 20% 6794Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5231m 32% 14444Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5386m 33% 14475Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 7987m 50% 14423Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1099m 6% 2840Mi 4% 04:01:22 DEBUG --- stderr --- 04:01:22 DEBUG 04:02:20 INFO 04:02:20 INFO [loop_until]: kubectl --namespace=xlou top pods 04:02:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:02:20 INFO [loop_until]: OK (rc = 0) 04:02:20 DEBUG --- stdout --- 04:02:20 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 48m 5785Mi am-55f77847b7-c9bk2 47m 5746Mi am-55f77847b7-zpsrs 39m 5790Mi ds-cts-0 6m 378Mi ds-cts-1 5m 383Mi ds-cts-2 7m 377Mi ds-idrepo-0 7525m 13823Mi ds-idrepo-1 4853m 13836Mi ds-idrepo-2 5251m 13780Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 2615m 4609Mi idm-65858d8c4c-97wdf 2762m 5524Mi lodemon-86f768796c-ts724 4m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 978m 1581Mi 04:02:20 DEBUG --- stderr --- 04:02:20 DEBUG 04:02:22 INFO 04:02:22 INFO [loop_until]: kubectl --namespace=xlou top node 04:02:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:02:22 INFO [loop_until]: OK (rc = 0) 04:02:22 DEBUG --- stdout --- 04:02:22 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 83m 0% 1382Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 107m 0% 6829Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 98m 0% 6934Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 102m 0% 6916Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2901m 18% 5945Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1998m 12% 2843Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 
2884m 18% 6797Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5566m 35% 14335Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5230m 32% 14483Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 7620m 47% 14428Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1131m 7% 3023Mi 5% 04:02:22 DEBUG --- stderr --- 04:02:22 DEBUG 04:03:20 INFO 04:03:20 INFO [loop_until]: kubectl --namespace=xlou top pods 04:03:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:03:20 INFO [loop_until]: OK (rc = 0) 04:03:20 DEBUG --- stdout --- 04:03:20 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 35m 5785Mi am-55f77847b7-c9bk2 36m 5746Mi am-55f77847b7-zpsrs 21m 5790Mi ds-cts-0 12m 379Mi ds-cts-1 5m 383Mi ds-cts-2 7m 377Mi ds-idrepo-0 9036m 13757Mi ds-idrepo-1 5312m 13824Mi ds-idrepo-2 4896m 13823Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 2557m 4718Mi idm-65858d8c4c-97wdf 2576m 5541Mi lodemon-86f768796c-ts724 5m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 2363m 1802Mi 04:03:20 DEBUG --- stderr --- 04:03:20 DEBUG 04:03:22 INFO 04:03:22 INFO [loop_until]: kubectl --namespace=xlou top node 04:03:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:03:23 INFO [loop_until]: OK (rc = 0) 04:03:23 DEBUG --- stdout --- 04:03:23 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 92m 0% 6829Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 83m 0% 6934Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 91m 0% 6918Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2882m 18% 6039Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2425m 15% 3390Mi 5% gke-xlou-cdm-default-pool-f05840a3-tnc9 2806m 17% 6814Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 4797m 30% 14449Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5514m 34% 14472Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 66m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 8915m 56% 14321Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 2510m 15% 3162Mi 5% 04:03:23 DEBUG --- stderr --- 04:03:23 DEBUG 04:04:21 INFO 04:04:21 INFO [loop_until]: kubectl --namespace=xlou top pods 04:04:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:04:21 INFO [loop_until]: OK (rc = 0) 04:04:21 DEBUG --- stdout --- 04:04:21 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 55m 5785Mi am-55f77847b7-c9bk2 91m 5752Mi am-55f77847b7-zpsrs 78m 5790Mi ds-cts-0 7m 379Mi ds-cts-1 5m 383Mi ds-cts-2 7m 377Mi ds-idrepo-0 8745m 13772Mi ds-idrepo-1 6153m 13820Mi ds-idrepo-2 5307m 13734Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 5497m 5024Mi idm-65858d8c4c-97wdf 1504m 3355Mi lodemon-86f768796c-ts724 6m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 2205m 1818Mi 04:04:21 DEBUG --- stderr --- 04:04:21 DEBUG 04:04:23 INFO 04:04:23 INFO [loop_until]: kubectl --namespace=xlou top node 04:04:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:04:23 INFO [loop_until]: OK (rc = 0) 04:04:23 DEBUG --- stdout --- 04:04:23 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 83m 0% 1387Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 110m 0% 6830Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 142m 0% 6937Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 151m 0% 
6926Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5897m 37% 6380Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2160m 13% 3543Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 1913m 12% 4654Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5463m 34% 14414Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 6142m 38% 14484Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 8744m 55% 14352Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 2199m 13% 3211Mi 5% 04:04:23 DEBUG --- stderr --- 04:04:23 DEBUG 04:05:21 INFO 04:05:21 INFO [loop_until]: kubectl --namespace=xlou top pods 04:05:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:05:21 INFO [loop_until]: OK (rc = 0) 04:05:21 DEBUG --- stdout --- 04:05:21 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 43m 5785Mi am-55f77847b7-c9bk2 32m 5752Mi am-55f77847b7-zpsrs 45m 5790Mi ds-cts-0 5m 379Mi ds-cts-1 5m 383Mi ds-cts-2 7m 377Mi ds-idrepo-0 8429m 13823Mi ds-idrepo-1 5218m 13823Mi ds-idrepo-2 4993m 13823Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 2553m 5350Mi idm-65858d8c4c-97wdf 3181m 3953Mi lodemon-86f768796c-ts724 7m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 2191m 1838Mi 04:05:21 DEBUG --- stderr --- 04:05:21 DEBUG 04:05:23 INFO 04:05:23 INFO [loop_until]: kubectl --namespace=xlou top node 04:05:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:05:23 INFO [loop_until]: OK (rc = 0) 04:05:23 DEBUG --- stdout --- 04:05:23 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1382Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 93m 0% 6828Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 99m 0% 6936Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 86m 0% 6925Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2765m 17% 6718Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2577m 16% 3449Mi 5% gke-xlou-cdm-default-pool-f05840a3-tnc9 3120m 19% 5233Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5073m 31% 14455Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5270m 33% 14473Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 8772m 55% 14436Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 2671m 16% 3206Mi 5% 04:05:23 DEBUG --- stderr --- 04:05:23 DEBUG 04:06:21 INFO 04:06:21 INFO [loop_until]: kubectl --namespace=xlou top pods 04:06:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:06:21 INFO [loop_until]: OK (rc = 0) 04:06:21 DEBUG --- stdout --- 04:06:21 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 68m 5785Mi am-55f77847b7-c9bk2 86m 5752Mi am-55f77847b7-zpsrs 90m 5791Mi ds-cts-0 6m 379Mi ds-cts-1 5m 383Mi ds-cts-2 7m 377Mi ds-idrepo-0 8230m 13743Mi ds-idrepo-1 5608m 13824Mi ds-idrepo-2 6111m 13823Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 2458m 1302Mi idm-65858d8c4c-97wdf 4826m 4302Mi lodemon-86f768796c-ts724 2m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 2185m 1860Mi 04:06:21 DEBUG --- stderr --- 04:06:21 DEBUG 04:06:23 INFO 04:06:23 INFO [loop_until]: kubectl --namespace=xlou top node 04:06:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:06:23 INFO [loop_until]: OK (rc = 0) 04:06:23 DEBUG --- stdout --- 04:06:23 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1384Mi 2% 
gke-xlou-cdm-default-pool-f05840a3-5pbc 118m 0% 6828Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 155m 0% 6932Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 146m 0% 6925Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 689m 4% 2665Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 2472m 15% 3576Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 5559m 34% 5614Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6120m 38% 14452Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 6005m 37% 14466Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 8602m 54% 14326Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 2280m 14% 3212Mi 5% 04:06:23 DEBUG --- stderr --- 04:06:23 DEBUG 04:07:21 INFO 04:07:21 INFO [loop_until]: kubectl --namespace=xlou top pods 04:07:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:07:21 INFO [loop_until]: OK (rc = 0) 04:07:21 DEBUG --- stdout --- 04:07:21 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 39m 5785Mi am-55f77847b7-c9bk2 30m 5752Mi am-55f77847b7-zpsrs 28m 5790Mi ds-cts-0 6m 379Mi ds-cts-1 5m 383Mi ds-cts-2 7m 378Mi ds-idrepo-0 7791m 13824Mi ds-idrepo-1 4949m 13831Mi ds-idrepo-2 5429m 13762Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 2831m 3970Mi idm-65858d8c4c-97wdf 2394m 4732Mi lodemon-86f768796c-ts724 7m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 2402m 1851Mi 04:07:21 DEBUG --- stderr --- 04:07:21 DEBUG 04:07:23 INFO 04:07:23 INFO [loop_until]: kubectl --namespace=xlou top node 04:07:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:07:23 INFO [loop_until]: OK (rc = 0) 04:07:23 DEBUG --- stdout --- 04:07:23 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1381Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 94m 0% 6827Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 86m 0% 6934Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 83m 0% 6926Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3085m 19% 5299Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2659m 16% 3455Mi 5% gke-xlou-cdm-default-pool-f05840a3-tnc9 2660m 16% 5991Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5507m 34% 14436Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5364m 33% 14458Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 7913m 49% 14439Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1973m 12% 3211Mi 5% 04:07:23 DEBUG --- stderr --- 04:07:23 DEBUG 04:08:21 INFO 04:08:21 INFO [loop_until]: kubectl --namespace=xlou top pods 04:08:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:08:21 INFO [loop_until]: OK (rc = 0) 04:08:21 DEBUG --- stdout --- 04:08:21 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 76m 5798Mi am-55f77847b7-c9bk2 44m 5752Mi am-55f77847b7-zpsrs 67m 5790Mi ds-cts-0 6m 379Mi ds-cts-1 5m 383Mi ds-cts-2 7m 377Mi ds-idrepo-0 8611m 13821Mi ds-idrepo-1 5140m 13823Mi ds-idrepo-2 5728m 13824Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 3728m 4286Mi idm-65858d8c4c-97wdf 1608m 4912Mi lodemon-86f768796c-ts724 7m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 2489m 1868Mi 04:08:21 DEBUG --- stderr --- 04:08:21 DEBUG 04:08:23 INFO 04:08:23 INFO [loop_until]: kubectl --namespace=xlou top node 04:08:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:08:23 INFO 
[loop_until]: OK (rc = 0) 04:08:23 DEBUG --- stdout --- 04:08:23 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 87m 0% 1391Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 133m 0% 6839Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 115m 0% 6935Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 99m 0% 6925Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3792m 23% 5630Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2424m 15% 3573Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 1898m 11% 6209Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1142Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5674m 35% 14459Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5038m 31% 14499Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 8520m 53% 14441Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 2549m 16% 3215Mi 5% 04:08:23 DEBUG --- stderr --- 04:08:23 DEBUG 04:09:21 INFO 04:09:21 INFO [loop_until]: kubectl --namespace=xlou top pods 04:09:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:09:21 INFO [loop_until]: OK (rc = 0) 04:09:21 DEBUG --- stdout --- 04:09:21 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 97m 5798Mi am-55f77847b7-c9bk2 100m 5753Mi am-55f77847b7-zpsrs 83m 5790Mi ds-cts-0 6m 379Mi ds-cts-1 5m 383Mi ds-cts-2 7m 383Mi ds-idrepo-0 9565m 13804Mi ds-idrepo-1 5755m 13823Mi ds-idrepo-2 5067m 13823Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 5975m 4892Mi idm-65858d8c4c-97wdf 2698m 1167Mi lodemon-86f768796c-ts724 5m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1095m 1766Mi 04:09:21 DEBUG --- stderr --- 04:09:21 DEBUG 04:09:23 INFO 04:09:23 INFO [loop_until]: kubectl --namespace=xlou top node 04:09:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:09:23 INFO [loop_until]: OK (rc = 0) 04:09:23 DEBUG --- stdout --- 04:09:23 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 86m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 147m 0% 6842Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 163m 1% 6935Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 157m 0% 6925Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6165m 38% 6247Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1845m 11% 2236Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1535m 9% 2469Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5276m 33% 14467Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5228m 32% 14487Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 69m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9121m 57% 14432Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1197m 7% 3215Mi 5% 04:09:23 DEBUG --- stderr --- 04:09:23 DEBUG 04:10:21 INFO 04:10:21 INFO [loop_until]: kubectl --namespace=xlou top pods 04:10:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:10:21 INFO [loop_until]: OK (rc = 0) 04:10:21 DEBUG --- stdout --- 04:10:21 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 42m 5798Mi am-55f77847b7-c9bk2 37m 5753Mi am-55f77847b7-zpsrs 39m 5790Mi ds-cts-0 5m 379Mi ds-cts-1 5m 383Mi ds-cts-2 7m 383Mi ds-idrepo-0 8806m 13795Mi ds-idrepo-1 6164m 13833Mi ds-idrepo-2 5648m 13825Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 2851m 4992Mi idm-65858d8c4c-97wdf 3532m 3920Mi lodemon-86f768796c-ts724 5m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1010m 1799Mi 04:10:21 DEBUG --- stderr --- 04:10:21 
DEBUG 04:10:23 INFO 04:10:23 INFO [loop_until]: kubectl --namespace=xlou top node 04:10:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:10:23 INFO [loop_until]: OK (rc = 0) 04:10:23 DEBUG --- stdout --- 04:10:23 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1382Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 97m 0% 6843Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 93m 0% 6934Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 91m 0% 6924Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3261m 20% 6350Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2041m 12% 2663Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 3559m 22% 5228Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5637m 35% 14458Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 6034m 37% 14493Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9037m 56% 14424Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1085m 6% 3213Mi 5% 04:10:23 DEBUG --- stderr --- 04:10:23 DEBUG 04:11:21 INFO 04:11:21 INFO [loop_until]: kubectl --namespace=xlou top pods 04:11:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:11:21 INFO [loop_until]: OK (rc = 0) 04:11:21 DEBUG --- stdout --- 04:11:21 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 122m 5802Mi am-55f77847b7-c9bk2 71m 5753Mi am-55f77847b7-zpsrs 101m 5790Mi ds-cts-0 6m 379Mi ds-cts-1 5m 383Mi ds-cts-2 7m 376Mi ds-idrepo-0 9885m 13779Mi ds-idrepo-1 5717m 13823Mi ds-idrepo-2 5368m 13823Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 2388m 3389Mi idm-65858d8c4c-97wdf 5551m 4086Mi lodemon-86f768796c-ts724 5m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1156m 1767Mi 04:11:21 DEBUG --- stderr --- 04:11:21 DEBUG 04:11:23 INFO 04:11:23 INFO [loop_until]: kubectl --namespace=xlou top node 04:11:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:11:23 INFO [loop_until]: OK (rc = 0) 04:11:23 DEBUG --- stdout --- 04:11:23 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 158m 0% 6846Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 185m 1% 6935Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 110m 0% 6926Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1419m 8% 4745Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 1965m 12% 2209Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 5966m 37% 5466Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5649m 35% 14431Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5442m 34% 14494Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9842m 61% 14426Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1162m 7% 3215Mi 5% 04:11:23 DEBUG --- stderr --- 04:11:23 DEBUG 04:12:21 INFO 04:12:21 INFO [loop_until]: kubectl --namespace=xlou top pods 04:12:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:12:21 INFO [loop_until]: OK (rc = 0) 04:12:21 DEBUG --- stdout --- 04:12:21 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 46m 5802Mi am-55f77847b7-c9bk2 43m 5754Mi am-55f77847b7-zpsrs 43m 5790Mi ds-cts-0 6m 380Mi ds-cts-1 6m 383Mi ds-cts-2 7m 376Mi ds-idrepo-0 9240m 13824Mi ds-idrepo-1 5375m 13824Mi ds-idrepo-2 4797m 13820Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 3488m 3973Mi 
idm-65858d8c4c-97wdf 3088m 4226Mi lodemon-86f768796c-ts724 7m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 961m 1776Mi 04:12:21 DEBUG --- stderr --- 04:12:21 DEBUG 04:12:24 INFO 04:12:24 INFO [loop_until]: kubectl --namespace=xlou top node 04:12:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:12:24 INFO [loop_until]: OK (rc = 0) 04:12:24 DEBUG --- stdout --- 04:12:24 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1381Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 99m 0% 6848Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 102m 0% 6936Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 92m 0% 6926Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3911m 24% 5323Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2030m 12% 2316Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3423m 21% 5516Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5163m 32% 14469Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5665m 35% 14498Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9424m 59% 14445Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1030m 6% 3211Mi 5% 04:12:24 DEBUG --- stderr --- 04:12:24 DEBUG 04:13:21 INFO 04:13:21 INFO [loop_until]: kubectl --namespace=xlou top pods 04:13:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:13:21 INFO [loop_until]: OK (rc = 0) 04:13:21 DEBUG --- stdout --- 04:13:21 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 145m 5802Mi am-55f77847b7-c9bk2 36m 5754Mi am-55f77847b7-zpsrs 99m 5795Mi ds-cts-0 6m 379Mi ds-cts-1 5m 383Mi ds-cts-2 7m 376Mi ds-idrepo-0 9583m 13796Mi ds-idrepo-1 5998m 13738Mi ds-idrepo-2 4956m 13809Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 3097m 4127Mi idm-65858d8c4c-97wdf 2747m 4287Mi lodemon-86f768796c-ts724 9m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 957m 1785Mi 04:13:21 DEBUG --- stderr --- 04:13:21 DEBUG 04:13:24 INFO 04:13:24 INFO [loop_until]: kubectl --namespace=xlou top node 04:13:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:13:24 INFO [loop_until]: OK (rc = 0) 04:13:24 DEBUG --- stdout --- 04:13:24 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 84m 0% 1382Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 109m 0% 6847Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 216m 1% 6940Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 145m 0% 6924Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3407m 21% 5466Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1962m 12% 2208Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2048m 12% 5574Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5392m 33% 14441Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 6090m 38% 14435Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9749m 61% 14445Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1104m 6% 3209Mi 5% 04:13:24 DEBUG --- stderr --- 04:13:24 DEBUG 04:14:22 INFO 04:14:22 INFO [loop_until]: kubectl --namespace=xlou top pods 04:14:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:14:22 INFO [loop_until]: OK (rc = 0) 04:14:22 DEBUG --- stdout --- 04:14:22 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 46m 5802Mi am-55f77847b7-c9bk2 40m 5754Mi am-55f77847b7-zpsrs 40m 5795Mi ds-cts-0 6m 379Mi ds-cts-1 5m 383Mi 
ds-cts-2 7m 376Mi ds-idrepo-0 9271m 13741Mi ds-idrepo-1 5183m 13773Mi ds-idrepo-2 4955m 13823Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 3137m 4167Mi idm-65858d8c4c-97wdf 3056m 4341Mi lodemon-86f768796c-ts724 6m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 985m 1782Mi 04:14:22 DEBUG --- stderr --- 04:14:22 DEBUG 04:14:24 INFO 04:14:24 INFO [loop_until]: kubectl --namespace=xlou top node 04:14:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:14:24 INFO [loop_until]: OK (rc = 0) 04:14:24 DEBUG --- stdout --- 04:14:24 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1382Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 100m 0% 6848Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 100m 0% 6938Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 97m 0% 6924Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3546m 22% 5506Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2031m 12% 2361Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 3459m 21% 5641Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5017m 31% 14464Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5063m 31% 14431Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9617m 60% 14365Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1034m 6% 3209Mi 5% 04:14:24 DEBUG --- stderr --- 04:14:24 DEBUG 04:15:22 INFO 04:15:22 INFO [loop_until]: kubectl --namespace=xlou top pods 04:15:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:15:22 INFO [loop_until]: OK (rc = 0) 04:15:22 DEBUG --- stdout --- 04:15:22 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 56m 5802Mi am-55f77847b7-c9bk2 99m 5770Mi am-55f77847b7-zpsrs 29m 5795Mi ds-cts-0 5m 379Mi ds-cts-1 5m 383Mi ds-cts-2 10m 377Mi ds-idrepo-0 8788m 13818Mi ds-idrepo-1 6285m 13809Mi ds-idrepo-2 5840m 13751Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 3299m 4269Mi idm-65858d8c4c-97wdf 2056m 4401Mi lodemon-86f768796c-ts724 6m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1032m 1781Mi 04:15:22 DEBUG --- stderr --- 04:15:22 DEBUG 04:15:24 INFO 04:15:24 INFO [loop_until]: kubectl --namespace=xlou top node 04:15:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:15:24 INFO [loop_until]: OK (rc = 0) 04:15:24 DEBUG --- stdout --- 04:15:24 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 111m 0% 6848Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 88m 0% 6941Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 159m 1% 6943Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3442m 21% 5601Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1984m 12% 2215Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2051m 12% 5692Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6072m 38% 14409Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 6525m 41% 14479Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 67m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9172m 57% 14443Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1130m 7% 3210Mi 5% 04:15:24 DEBUG --- stderr --- 04:15:24 DEBUG 04:16:22 INFO 04:16:22 INFO [loop_until]: kubectl --namespace=xlou top pods 04:16:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:16:22 INFO [loop_until]: OK (rc = 0) 04:16:22 DEBUG --- stdout --- 04:16:22 DEBUG NAME CPU(cores) MEMORY(bytes) 
admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 54m 5812Mi am-55f77847b7-c9bk2 69m 5770Mi am-55f77847b7-zpsrs 73m 5795Mi ds-cts-0 6m 379Mi ds-cts-1 5m 383Mi ds-cts-2 7m 379Mi ds-idrepo-0 11306m 13632Mi ds-idrepo-1 5768m 13817Mi ds-idrepo-2 4823m 13822Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 3591m 4332Mi idm-65858d8c4c-97wdf 3232m 4440Mi lodemon-86f768796c-ts724 7m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 956m 1773Mi 04:16:22 DEBUG --- stderr --- 04:16:22 DEBUG 04:16:24 INFO 04:16:24 INFO [loop_until]: kubectl --namespace=xlou top node 04:16:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:16:24 INFO [loop_until]: OK (rc = 0) 04:16:24 DEBUG --- stdout --- 04:16:24 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1386Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 113m 0% 6858Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 101m 0% 6940Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 126m 0% 6945Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3877m 24% 5677Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1956m 12% 2223Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3381m 21% 5731Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 4708m 29% 14481Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5734m 36% 14503Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 10900m 68% 14319Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1046m 6% 3211Mi 5% 04:16:24 DEBUG --- stderr --- 04:16:24 DEBUG 04:17:22 INFO 04:17:22 INFO [loop_until]: kubectl --namespace=xlou top pods 04:17:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:17:22 INFO [loop_until]: OK (rc = 0) 04:17:22 DEBUG --- stdout --- 04:17:22 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 41m 5812Mi am-55f77847b7-c9bk2 34m 5770Mi am-55f77847b7-zpsrs 41m 5795Mi ds-cts-0 6m 380Mi ds-cts-1 5m 383Mi ds-cts-2 7m 377Mi ds-idrepo-0 9200m 13823Mi ds-idrepo-1 5874m 13753Mi ds-idrepo-2 5469m 13823Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 2902m 4401Mi idm-65858d8c4c-97wdf 2708m 4492Mi lodemon-86f768796c-ts724 5m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 925m 1789Mi 04:17:22 DEBUG --- stderr --- 04:17:22 DEBUG 04:17:24 INFO 04:17:24 INFO [loop_until]: kubectl --namespace=xlou top node 04:17:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:17:24 INFO [loop_until]: OK (rc = 0) 04:17:24 DEBUG --- stdout --- 04:17:24 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1386Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 95m 0% 6857Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 94m 0% 6937Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 91m 0% 6945Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3167m 19% 5745Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1978m 12% 2439Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 3137m 19% 5803Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5504m 34% 14483Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5917m 37% 14502Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9539m 60% 14453Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1041m 6% 3209Mi 5% 04:17:24 DEBUG --- stderr --- 04:17:24 DEBUG 04:18:22 INFO 04:18:22 INFO [loop_until]: kubectl --namespace=xlou top pods 04:18:22 INFO 
[loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:18:22 INFO [loop_until]: OK (rc = 0) 04:18:22 DEBUG --- stdout --- 04:18:22 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 102m 5812Mi am-55f77847b7-c9bk2 57m 5770Mi am-55f77847b7-zpsrs 48m 5795Mi ds-cts-0 6m 379Mi ds-cts-1 5m 383Mi ds-cts-2 7m 378Mi ds-idrepo-0 9375m 13823Mi ds-idrepo-1 5690m 13823Mi ds-idrepo-2 5661m 13824Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 3773m 4516Mi idm-65858d8c4c-97wdf 2987m 4582Mi lodemon-86f768796c-ts724 6m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1010m 1792Mi 04:18:22 DEBUG --- stderr --- 04:18:22 DEBUG 04:18:24 INFO 04:18:24 INFO [loop_until]: kubectl --namespace=xlou top node 04:18:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:18:24 INFO [loop_until]: OK (rc = 0) 04:18:24 DEBUG --- stdout --- 04:18:24 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 104m 0% 6858Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 101m 0% 6941Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 113m 0% 6943Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3492m 21% 5848Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 2130m 13% 2553Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 2350m 14% 5863Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 64m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5943m 37% 14482Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 6020m 37% 14510Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 75m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9947m 62% 14334Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1195m 7% 3210Mi 5% 04:18:24 DEBUG --- stderr --- 04:18:24 DEBUG 04:19:22 INFO 04:19:22 INFO [loop_until]: kubectl --namespace=xlou top pods 04:19:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:19:22 INFO [loop_until]: OK (rc = 0) 04:19:22 DEBUG --- stdout --- 04:19:22 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 69m 5813Mi am-55f77847b7-c9bk2 38m 5770Mi am-55f77847b7-zpsrs 43m 5795Mi ds-cts-0 5m 381Mi ds-cts-1 5m 383Mi ds-cts-2 7m 378Mi ds-idrepo-0 10195m 13809Mi ds-idrepo-1 6664m 13815Mi ds-idrepo-2 5998m 13823Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 3336m 4570Mi idm-65858d8c4c-97wdf 3136m 4619Mi lodemon-86f768796c-ts724 7m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1000m 1781Mi 04:19:22 DEBUG --- stderr --- 04:19:22 DEBUG 04:19:24 INFO 04:19:24 INFO [loop_until]: kubectl --namespace=xlou top node 04:19:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:19:24 INFO [loop_until]: OK (rc = 0) 04:19:24 DEBUG --- stdout --- 04:19:24 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 86m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 100m 0% 6858Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 99m 0% 6942Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 96m 0% 6943Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3491m 21% 5922Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1974m 12% 2287Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3425m 21% 5910Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5992m 37% 14481Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 6458m 40% 14482Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 10379m 65% 14453Mi 24% 
gke-xlou-cdm-frontend-a8771548-k40m 1062m 6% 3213Mi 5% 04:19:24 DEBUG --- stderr --- 04:19:24 DEBUG 04:20:22 INFO 04:20:22 INFO [loop_until]: kubectl --namespace=xlou top pods 04:20:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:20:22 INFO [loop_until]: OK (rc = 0) 04:20:22 DEBUG --- stdout --- 04:20:22 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 30m 5812Mi am-55f77847b7-c9bk2 31m 5770Mi am-55f77847b7-zpsrs 34m 5795Mi ds-cts-0 6m 379Mi ds-cts-1 5m 383Mi ds-cts-2 8m 377Mi ds-idrepo-0 9567m 13822Mi ds-idrepo-1 5795m 13823Mi ds-idrepo-2 4972m 13825Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 2238m 4625Mi idm-65858d8c4c-97wdf 2319m 4679Mi lodemon-86f768796c-ts724 7m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 913m 1801Mi 04:20:22 DEBUG --- stderr --- 04:20:22 DEBUG 04:20:24 INFO 04:20:24 INFO [loop_until]: kubectl --namespace=xlou top node 04:20:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:20:24 INFO [loop_until]: OK (rc = 0) 04:20:24 DEBUG --- stdout --- 04:20:24 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1385Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 81m 0% 6856Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 90m 0% 6939Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 86m 0% 6943Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2611m 16% 5982Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1937m 12% 2560Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 2461m 15% 5966Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 4868m 30% 14481Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 6076m 38% 14438Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9486m 59% 14454Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1006m 6% 3211Mi 5% 04:20:24 DEBUG --- stderr --- 04:20:24 DEBUG 04:21:22 INFO 04:21:22 INFO [loop_until]: kubectl --namespace=xlou top pods 04:21:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:21:22 INFO [loop_until]: OK (rc = 0) 04:21:22 DEBUG --- stdout --- 04:21:22 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 44m 5812Mi am-55f77847b7-c9bk2 35m 5770Mi am-55f77847b7-zpsrs 44m 5795Mi ds-cts-0 6m 379Mi ds-cts-1 5m 383Mi ds-cts-2 7m 377Mi ds-idrepo-0 9497m 13823Mi ds-idrepo-1 5567m 13820Mi ds-idrepo-2 5879m 13823Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 3121m 4769Mi idm-65858d8c4c-97wdf 3057m 4805Mi lodemon-86f768796c-ts724 5m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1057m 1806Mi 04:21:22 DEBUG --- stderr --- 04:21:22 DEBUG 04:21:25 INFO 04:21:25 INFO [loop_until]: kubectl --namespace=xlou top node 04:21:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:21:25 INFO [loop_until]: OK (rc = 0) 04:21:25 DEBUG --- stdout --- 04:21:25 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1384Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 100m 0% 6857Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 97m 0% 6939Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 94m 0% 6942Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3456m 21% 6105Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2105m 13% 2803Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 3104m 19% 6092Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5840m 36% 
14486Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5746m 36% 14519Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9621m 60% 14448Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1186m 7% 3213Mi 5% 04:21:25 DEBUG --- stderr --- 04:21:25 DEBUG 04:22:22 INFO 04:22:22 INFO [loop_until]: kubectl --namespace=xlou top pods 04:22:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:22:22 INFO [loop_until]: OK (rc = 0) 04:22:22 DEBUG --- stdout --- 04:22:22 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 45m 5812Mi am-55f77847b7-c9bk2 37m 5770Mi am-55f77847b7-zpsrs 36m 5795Mi ds-cts-0 5m 379Mi ds-cts-1 5m 383Mi ds-cts-2 6m 377Mi ds-idrepo-0 9458m 13824Mi ds-idrepo-1 5718m 13823Mi ds-idrepo-2 4625m 13823Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 2971m 4875Mi idm-65858d8c4c-97wdf 2873m 4874Mi lodemon-86f768796c-ts724 11m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 953m 1791Mi 04:22:22 DEBUG --- stderr --- 04:22:22 DEBUG 04:22:25 INFO 04:22:25 INFO [loop_until]: kubectl --namespace=xlou top node 04:22:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:22:25 INFO [loop_until]: OK (rc = 0) 04:22:25 DEBUG --- stdout --- 04:22:25 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 103m 0% 1382Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 99m 0% 6857Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 98m 0% 6943Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 92m 0% 6945Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3167m 19% 6216Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1990m 12% 2386Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 3170m 19% 6147Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5058m 31% 14493Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 6012m 37% 14514Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9704m 61% 14455Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1025m 6% 3213Mi 5% 04:22:25 DEBUG --- stderr --- 04:22:25 DEBUG 04:23:23 INFO 04:23:23 INFO [loop_until]: kubectl --namespace=xlou top pods 04:23:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:23:23 INFO [loop_until]: OK (rc = 0) 04:23:23 DEBUG --- stdout --- 04:23:23 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 25m 5812Mi am-55f77847b7-c9bk2 44m 5770Mi am-55f77847b7-zpsrs 43m 5795Mi ds-cts-0 6m 379Mi ds-cts-1 5m 383Mi ds-cts-2 7m 377Mi ds-idrepo-0 9410m 13823Mi ds-idrepo-1 6407m 13809Mi ds-idrepo-2 5549m 13823Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 3781m 4983Mi idm-65858d8c4c-97wdf 2345m 4941Mi lodemon-86f768796c-ts724 5m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1063m 1779Mi 04:23:23 DEBUG --- stderr --- 04:23:23 DEBUG 04:23:25 INFO 04:23:25 INFO [loop_until]: kubectl --namespace=xlou top node 04:23:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:23:25 INFO [loop_until]: OK (rc = 0) 04:23:25 DEBUG --- stdout --- 04:23:25 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 104m 0% 1385Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 95m 0% 6856Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 106m 0% 6940Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 145m 0% 6942Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3870m 24% 6312Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 2129m 13% 2548Mi 4% 
gke-xlou-cdm-default-pool-f05840a3-tnc9 3234m 20% 6231Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5699m 35% 14488Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 6780m 42% 14459Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 67m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9879m 62% 14358Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1185m 7% 3212Mi 5% 04:23:25 DEBUG --- stderr --- 04:23:25 DEBUG 04:24:23 INFO 04:24:23 INFO [loop_until]: kubectl --namespace=xlou top pods 04:24:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:24:23 INFO [loop_until]: OK (rc = 0) 04:24:23 DEBUG --- stdout --- 04:24:23 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 94m 5812Mi am-55f77847b7-c9bk2 123m 5771Mi am-55f77847b7-zpsrs 75m 5797Mi ds-cts-0 6m 379Mi ds-cts-1 5m 383Mi ds-cts-2 7m 378Mi ds-idrepo-0 10958m 13823Mi ds-idrepo-1 5688m 13799Mi ds-idrepo-2 4800m 13800Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 3466m 5029Mi idm-65858d8c4c-97wdf 3200m 4977Mi lodemon-86f768796c-ts724 6m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 972m 1781Mi 04:24:23 DEBUG --- stderr --- 04:24:23 DEBUG 04:24:25 INFO 04:24:25 INFO [loop_until]: kubectl --namespace=xlou top node 04:24:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:24:25 INFO [loop_until]: OK (rc = 0) 04:24:25 DEBUG --- stdout --- 04:24:25 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 105m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 140m 0% 6856Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 135m 0% 6945Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 135m 0% 6943Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3820m 24% 6385Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1958m 12% 2227Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3509m 22% 6268Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5230m 32% 14498Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5849m 36% 14484Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 10453m 65% 14386Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1034m 6% 3215Mi 5% 04:24:25 DEBUG --- stderr --- 04:24:25 DEBUG 04:25:23 INFO 04:25:23 INFO [loop_until]: kubectl --namespace=xlou top pods 04:25:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:25:23 INFO [loop_until]: OK (rc = 0) 04:25:23 DEBUG --- stdout --- 04:25:23 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 30m 5812Mi am-55f77847b7-c9bk2 29m 5770Mi am-55f77847b7-zpsrs 31m 5797Mi ds-cts-0 5m 381Mi ds-cts-1 5m 384Mi ds-cts-2 7m 378Mi ds-idrepo-0 10495m 13825Mi ds-idrepo-1 5756m 13823Mi ds-idrepo-2 6152m 13675Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 2809m 5090Mi idm-65858d8c4c-97wdf 2893m 5020Mi lodemon-86f768796c-ts724 6m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 945m 1794Mi 04:25:23 DEBUG --- stderr --- 04:25:23 DEBUG 04:25:25 INFO 04:25:25 INFO [loop_until]: kubectl --namespace=xlou top node 04:25:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:25:25 INFO [loop_until]: OK (rc = 0) 04:25:25 DEBUG --- stdout --- 04:25:25 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 105m 0% 1387Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 82m 0% 6856Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 88m 0% 6944Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-9p4b 90m 0% 6941Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3093m 19% 6431Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1978m 12% 2434Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 3109m 19% 6309Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5870m 36% 14417Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5586m 35% 14526Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9853m 62% 14394Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1024m 6% 3227Mi 5% 04:25:25 DEBUG --- stderr --- 04:25:25 DEBUG 04:26:23 INFO 04:26:23 INFO [loop_until]: kubectl --namespace=xlou top pods 04:26:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:26:23 INFO [loop_until]: OK (rc = 0) 04:26:23 DEBUG --- stdout --- 04:26:23 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 28m 5812Mi am-55f77847b7-c9bk2 33m 5771Mi am-55f77847b7-zpsrs 37m 5797Mi ds-cts-0 6m 381Mi ds-cts-1 5m 384Mi ds-cts-2 6m 377Mi ds-idrepo-0 9235m 13796Mi ds-idrepo-1 5022m 13823Mi ds-idrepo-2 5408m 13787Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 2976m 5202Mi idm-65858d8c4c-97wdf 2993m 5155Mi lodemon-86f768796c-ts724 5m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 955m 1826Mi 04:26:23 DEBUG --- stderr --- 04:26:23 DEBUG 04:26:25 INFO 04:26:25 INFO [loop_until]: kubectl --namespace=xlou top node 04:26:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:26:25 INFO [loop_until]: OK (rc = 0) 04:26:25 DEBUG --- stdout --- 04:26:25 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 108m 0% 1385Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 94m 0% 6856Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 93m 0% 6946Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 90m 0% 6944Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3374m 21% 6575Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2059m 12% 2785Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 3141m 19% 6417Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5642m 35% 14489Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 4910m 30% 14513Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 9490m 59% 14469Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1051m 6% 3217Mi 5% 04:26:25 DEBUG --- stderr --- 04:26:25 DEBUG 04:27:23 INFO 04:27:23 INFO [loop_until]: kubectl --namespace=xlou top pods 04:27:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:27:23 INFO [loop_until]: OK (rc = 0) 04:27:23 DEBUG --- stdout --- 04:27:23 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 61m 5812Mi am-55f77847b7-c9bk2 58m 5771Mi am-55f77847b7-zpsrs 86m 5797Mi ds-cts-0 6m 381Mi ds-cts-1 5m 384Mi ds-cts-2 7m 377Mi ds-idrepo-0 5231m 13698Mi ds-idrepo-1 5391m 13835Mi ds-idrepo-2 5031m 13823Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 1869m 5281Mi idm-65858d8c4c-97wdf 3314m 5243Mi lodemon-86f768796c-ts724 6m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 1119m 1781Mi 04:27:23 DEBUG --- stderr --- 04:27:23 DEBUG 04:27:25 INFO 04:27:25 INFO [loop_until]: kubectl --namespace=xlou top node 04:27:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:27:25 INFO [loop_until]: OK (rc = 0) 04:27:25 DEBUG --- stdout --- 04:27:25 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% 
gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1387Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 119m 0% 6856Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 136m 0% 6940Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 110m 0% 6944Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1783m 11% 6631Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 2106m 13% 2210Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3608m 22% 6516Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 4972m 31% 14507Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5145m 32% 14525Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5397m 33% 14336Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1150m 7% 3212Mi 5% 04:27:25 DEBUG --- stderr --- 04:27:25 DEBUG 04:28:23 INFO 04:28:23 INFO [loop_until]: kubectl --namespace=xlou top pods 04:28:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:28:23 INFO [loop_until]: OK (rc = 0) 04:28:23 DEBUG --- stdout --- 04:28:23 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 10m 5812Mi am-55f77847b7-c9bk2 8m 5771Mi am-55f77847b7-zpsrs 6m 5797Mi ds-cts-0 6m 381Mi ds-cts-1 5m 384Mi ds-cts-2 5m 377Mi ds-idrepo-0 204m 13652Mi ds-idrepo-1 320m 13398Mi ds-idrepo-2 4057m 13681Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 661m 5291Mi idm-65858d8c4c-97wdf 637m 5255Mi lodemon-86f768796c-ts724 7m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 932m 1782Mi 04:28:23 DEBUG --- stderr --- 04:28:23 DEBUG 04:28:25 INFO 04:28:25 INFO [loop_until]: kubectl --namespace=xlou top node 04:28:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:28:25 INFO [loop_until]: OK (rc = 0) 04:28:25 DEBUG --- stdout --- 04:28:25 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1388Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 6860Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 61m 0% 6941Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 6942Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 783m 4% 6640Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1912m 12% 2203Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 756m 4% 6540Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1134Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 3674m 23% 14362Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 374m 2% 14106Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 250m 1% 14290Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1034m 6% 3210Mi 5% 04:28:25 DEBUG --- stderr --- 04:28:25 DEBUG 04:29:23 INFO 04:29:23 INFO [loop_until]: kubectl --namespace=xlou top pods 04:29:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:29:23 INFO [loop_until]: OK (rc = 0) 04:29:23 DEBUG --- stdout --- 04:29:23 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 10m 5813Mi am-55f77847b7-c9bk2 8m 5770Mi am-55f77847b7-zpsrs 6m 5797Mi ds-cts-0 6m 381Mi ds-cts-1 5m 384Mi ds-cts-2 5m 377Mi ds-idrepo-0 186m 13660Mi ds-idrepo-1 11m 13399Mi ds-idrepo-2 11m 13582Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 634m 5292Mi idm-65858d8c4c-97wdf 597m 5256Mi lodemon-86f768796c-ts724 7m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 938m 1783Mi 04:29:23 DEBUG --- stderr --- 04:29:23 DEBUG 04:29:26 INFO 04:29:26 INFO [loop_until]: kubectl --namespace=xlou top node 04:29:26 INFO [loop_until]: (max_time=180, interval=5, 
expected_rc=[0] 04:29:26 INFO [loop_until]: OK (rc = 0) 04:29:26 DEBUG --- stdout --- 04:29:26 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1386Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 141m 0% 6858Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 96m 0% 6940Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 77m 0% 6945Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1033m 6% 6642Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1905m 11% 2201Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 915m 5% 6559Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 286m 1% 14292Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 558m 3% 14139Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1086m 6% 14331Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1002m 6% 3214Mi 5% 04:29:26 DEBUG --- stderr --- 04:29:26 DEBUG 04:30:23 INFO 04:30:23 INFO [loop_until]: kubectl --namespace=xlou top pods 04:30:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:30:23 INFO [loop_until]: OK (rc = 0) 04:30:23 DEBUG --- stdout --- 04:30:23 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 120m 5815Mi am-55f77847b7-c9bk2 71m 5771Mi am-55f77847b7-zpsrs 91m 5797Mi ds-cts-0 5m 381Mi ds-cts-1 7m 384Mi ds-cts-2 6m 377Mi ds-idrepo-0 3134m 13756Mi ds-idrepo-1 2347m 13588Mi ds-idrepo-2 61m 13608Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 1356m 5294Mi idm-65858d8c4c-97wdf 1652m 5268Mi lodemon-86f768796c-ts724 7m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 887m 1782Mi 04:30:23 DEBUG --- stderr --- 04:30:23 DEBUG 04:30:26 INFO 04:30:26 INFO [loop_until]: kubectl --namespace=xlou top node 04:30:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:30:26 INFO [loop_until]: OK (rc = 0) 04:30:26 DEBUG --- stdout --- 04:30:26 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1387Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 138m 0% 6860Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 95m 0% 6942Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 205m 1% 6955Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 447m 2% 6641Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 218m 1% 2202Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 723m 4% 6555Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 216m 1% 14437Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 464m 2% 14305Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2074m 13% 14315Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 261m 1% 3213Mi 5% 04:30:26 DEBUG --- stderr --- 04:30:26 DEBUG 04:31:23 INFO 04:31:23 INFO [loop_until]: kubectl --namespace=xlou top pods 04:31:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:31:23 INFO [loop_until]: OK (rc = 0) 04:31:23 DEBUG --- stdout --- 04:31:23 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-f7lgj 1m 4Mi am-55f77847b7-5qsm5 8m 5815Mi am-55f77847b7-c9bk2 5m 5774Mi am-55f77847b7-zpsrs 6m 5797Mi ds-cts-0 5m 381Mi ds-cts-1 7m 384Mi ds-cts-2 6m 377Mi ds-idrepo-0 13m 13662Mi ds-idrepo-1 10m 13588Mi ds-idrepo-2 18m 13745Mi end-user-ui-6845bc78c7-ztmfn 1m 4Mi idm-65858d8c4c-4jclh 10m 5293Mi idm-65858d8c4c-97wdf 9m 5268Mi lodemon-86f768796c-ts724 7m 66Mi login-ui-74d6fb46c-kprf6 1m 3Mi overseer-0-78bdc846-p8mnn 174m 325Mi 04:31:23 DEBUG --- stderr --- 04:31:23 DEBUG 
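The recurring "[loop_until]" entries in this log reflect a simple polling wrapper around each kubectl call. The following is a minimal sketch of that pattern, assuming only the names visible in the log (loop_until, max_time, interval, expected_rc); it is illustrative and not the actual lodemon implementation.

    # Hypothetical sketch of the polling wrapper suggested by the "[loop_until]"
    # log entries. Parameter names are taken from the log output; the real
    # implementation may differ.
    import subprocess
    import time

    def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,), pattern=None):
        """Run `cmd` repeatedly until it exits with an expected return code
        (and, optionally, `pattern` appears in stdout) or `max_time` elapses."""
        deadline = time.monotonic() + max_time
        while True:
            result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
            rc_ok = result.returncode in expected_rc
            pattern_ok = pattern is None or pattern in result.stdout
            if rc_ok and pattern_ok:
                print(f"[loop_until]: OK (rc = {result.returncode})")
                return result
            if time.monotonic() >= deadline:
                raise TimeoutError(f"[loop_until]: gave up after {max_time}s: {cmd}")
            time.sleep(interval)

    # Example: one monitoring sample, mirroring the commands in the log.
    pods = loop_until("kubectl --namespace=xlou top pods")
    nodes = loop_until("kubectl --namespace=xlou top node")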
04:31:26 INFO
04:31:26 INFO [loop_until]: kubectl --namespace=xlou top node
04:31:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
04:31:26 INFO [loop_until]: OK (rc = 0)
04:31:26 DEBUG --- stdout ---
04:31:26 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1384Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 6861Mi 11%
gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 6942Mi 11%
gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 6944Mi 11%
gke-xlou-cdm-default-pool-f05840a3-bf2g 79m 0% 6638Mi 11%
gke-xlou-cdm-default-pool-f05840a3-h81k 130m 0% 2200Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 78m 0% 6553Mi 11%
gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1134Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1119Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 14439Mi 24%
gke-xlou-cdm-ds-32e4dcb1-b374 63m 0% 14308Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1119Mi 1%
gke-xlou-cdm-ds-32e4dcb1-x4wx 79m 0% 14314Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 71m 0% 1766Mi 3%
04:31:26 DEBUG --- stderr ---
04:31:26 DEBUG
127.0.0.1 - - [13/Aug/2023 04:32:23] "GET /monitoring/average?start_time=23-08-13_03:01:35&stop_time=23-08-13_03:30:22 HTTP/1.1" 200 -
04:32:23 INFO
04:32:23 INFO [loop_until]: kubectl --namespace=xlou top pods
04:32:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
04:32:23 INFO [loop_until]: OK (rc = 0)
04:32:23 DEBUG --- stdout ---
04:32:23 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-f7lgj 1m 4Mi
am-55f77847b7-5qsm5 6m 5815Mi
am-55f77847b7-c9bk2 5m 5774Mi
am-55f77847b7-zpsrs 7m 5797Mi
ds-cts-0 6m 381Mi
ds-cts-1 7m 384Mi
ds-cts-2 7m 378Mi
ds-idrepo-0 10m 13661Mi
ds-idrepo-1 11m 13588Mi
ds-idrepo-2 10m 13745Mi
end-user-ui-6845bc78c7-ztmfn 1m 4Mi
idm-65858d8c4c-4jclh 9m 5293Mi
idm-65858d8c4c-97wdf 7m 5267Mi
lodemon-86f768796c-ts724 11m 66Mi
login-ui-74d6fb46c-kprf6 1m 3Mi
overseer-0-78bdc846-p8mnn 1m 325Mi
04:32:23 DEBUG --- stderr ---
04:32:23 DEBUG
04:32:26 INFO
04:32:26 INFO [loop_until]: kubectl --namespace=xlou top node
04:32:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
04:32:26 INFO [loop_until]: OK (rc = 0)
04:32:26 DEBUG --- stdout ---
04:32:26 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1385Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 6861Mi 11%
gke-xlou-cdm-default-pool-f05840a3-976h 72m 0% 6940Mi 11%
gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 6945Mi 11%
gke-xlou-cdm-default-pool-f05840a3-bf2g 81m 0% 6638Mi 11%
gke-xlou-cdm-default-pool-f05840a3-h81k 131m 0% 2200Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 76m 0% 6555Mi 11%
gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1134Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1121Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 14435Mi 24%
gke-xlou-cdm-ds-32e4dcb1-b374 63m 0% 14308Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1122Mi 1%
gke-xlou-cdm-ds-32e4dcb1-x4wx 62m 0% 14318Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 83m 0% 1766Mi 3%
04:32:26 DEBUG --- stderr ---
04:32:26 DEBUG
04:33:24 INFO
04:33:24 INFO [loop_until]: kubectl --namespace=xlou top pods
04:33:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
04:33:24 INFO [loop_until]: OK (rc = 0)
04:33:24 DEBUG --- stdout ---
04:33:24 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-f7lgj 1m 4Mi
am-55f77847b7-5qsm5 8m 5815Mi
am-55f77847b7-c9bk2 6m 5774Mi
am-55f77847b7-zpsrs 7m 5797Mi
ds-cts-0 6m 381Mi
ds-cts-1 7m 384Mi
ds-cts-2 5m 378Mi
ds-idrepo-0 9m 13662Mi
ds-idrepo-1 11m 13589Mi
ds-idrepo-2 10m 13745Mi
end-user-ui-6845bc78c7-ztmfn 1m 4Mi
idm-65858d8c4c-4jclh 9m 5293Mi
idm-65858d8c4c-97wdf 8m 5267Mi
lodemon-86f768796c-ts724 5m 67Mi
login-ui-74d6fb46c-kprf6 1m 3Mi
overseer-0-78bdc846-p8mnn 1m 325Mi
04:33:24 DEBUG --- stderr ---
04:33:24 DEBUG
04:33:26 INFO
04:33:26 INFO [loop_until]: kubectl --namespace=xlou top node
04:33:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
04:33:26 INFO [loop_until]: OK (rc = 0)
04:33:26 DEBUG --- stdout ---
04:33:26 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 87m 0% 1389Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 6859Mi 11%
gke-xlou-cdm-default-pool-f05840a3-976h 78m 0% 6941Mi 11%
gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 6945Mi 11%
gke-xlou-cdm-default-pool-f05840a3-bf2g 78m 0% 6640Mi 11%
gke-xlou-cdm-default-pool-f05840a3-h81k 132m 0% 2194Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 76m 0% 6550Mi 11%
gke-xlou-cdm-ds-32e4dcb1-1l6p 138m 0% 1133Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1120Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 402m 2% 14441Mi 24%
gke-xlou-cdm-ds-32e4dcb1-b374 373m 2% 14310Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 123m 0% 1121Mi 1%
gke-xlou-cdm-ds-32e4dcb1-x4wx 539m 3% 14320Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 1418m 8% 2123Mi 3%
04:33:26 DEBUG --- stderr ---
04:33:26 DEBUG
04:34:24 INFO
04:34:24 INFO [loop_until]: kubectl --namespace=xlou top pods
04:34:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
04:34:24 INFO [loop_until]: OK (rc = 0)
04:34:24 DEBUG --- stdout ---
04:34:24 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-f7lgj 1m 5Mi
am-55f77847b7-5qsm5 6m 5815Mi
am-55f77847b7-c9bk2 5m 5774Mi
am-55f77847b7-zpsrs 7m 5797Mi
ds-cts-0 5m 381Mi
ds-cts-1 8m 384Mi
ds-cts-2 5m 378Mi
ds-idrepo-0 246m 13661Mi
ds-idrepo-1 136m 13587Mi
ds-idrepo-2 140m 13744Mi
end-user-ui-6845bc78c7-ztmfn 1m 6Mi
idm-65858d8c4c-4jclh 8m 5292Mi
idm-65858d8c4c-97wdf 7m 5267Mi
lodemon-86f768796c-ts724 8m 67Mi
login-ui-74d6fb46c-kprf6 1m 3Mi
overseer-0-78bdc846-p8mnn 1014m 788Mi
04:34:24 DEBUG --- stderr ---
04:34:24 DEBUG
04:34:26 INFO
04:34:26 INFO [loop_until]: kubectl --namespace=xlou top node
04:34:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
04:34:26 INFO [loop_until]: OK (rc = 0)
04:34:26 DEBUG --- stdout ---
04:34:26 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1388Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 6861Mi 11%
gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 6940Mi 11%
gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 6942Mi 11%
gke-xlou-cdm-default-pool-f05840a3-bf2g 77m 0% 6641Mi 11%
gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2196Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 77m 0% 6550Mi 11%
gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1131Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1122Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 14443Mi 24%
gke-xlou-cdm-ds-32e4dcb1-b374 60m 0% 14307Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1120Mi 1%
gke-xlou-cdm-ds-32e4dcb1-x4wx 60m 0% 14318Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 1123m 7% 2056Mi 3%
04:34:26 DEBUG --- stderr ---
04:34:26 DEBUG
04:35:24 INFO
04:35:24 INFO [loop_until]: kubectl --namespace=xlou top pods
04:35:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
04:35:24 INFO [loop_until]: OK (rc = 0)
04:35:24 DEBUG --- stdout ---
04:35:24 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-f7lgj 1m 5Mi
am-55f77847b7-5qsm5 7m 5815Mi
am-55f77847b7-c9bk2 5m 5774Mi
am-55f77847b7-zpsrs 7m 5797Mi
ds-cts-0 6m 381Mi
ds-cts-1 7m 384Mi
ds-cts-2 5m 378Mi
ds-idrepo-0 9m 13662Mi
ds-idrepo-1 10m 13587Mi
ds-idrepo-2 10m 13744Mi
end-user-ui-6845bc78c7-ztmfn 1m 6Mi
idm-65858d8c4c-4jclh 8m 5292Mi
idm-65858d8c4c-97wdf 8m 5266Mi
lodemon-86f768796c-ts724 8m 67Mi
login-ui-74d6fb46c-kprf6 1m 3Mi
overseer-0-78bdc846-p8mnn 525m 895Mi
04:35:24 DEBUG --- stderr ---
04:35:24 DEBUG
04:35:26 INFO
04:35:26 INFO [loop_until]: kubectl --namespace=xlou top node
04:35:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
04:35:26 INFO [loop_until]: OK (rc = 0)
04:35:26 DEBUG --- stdout ---
04:35:26 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1387Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 6858Mi 11%
gke-xlou-cdm-default-pool-f05840a3-976h 70m 0% 6951Mi 11%
gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 6942Mi 11%
gke-xlou-cdm-default-pool-f05840a3-bf2g 79m 0% 6640Mi 11%
gke-xlou-cdm-default-pool-f05840a3-h81k 129m 0% 2195Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 75m 0% 6552Mi 11%
gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1131Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1120Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 62m 0% 14442Mi 24%
gke-xlou-cdm-ds-32e4dcb1-b374 60m 0% 14312Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 51m 0% 1119Mi 1%
gke-xlou-cdm-ds-32e4dcb1-x4wx 57m 0% 14316Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 1081m 6% 2464Mi 4%
04:35:26 DEBUG --- stderr ---
04:35:26 DEBUG
04:36:24 INFO
04:36:24 INFO [loop_until]: kubectl --namespace=xlou top pods
04:36:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
04:36:24 INFO [loop_until]: OK (rc = 0)
04:36:24 DEBUG --- stdout ---
04:36:24 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-f7lgj 1m 5Mi
am-55f77847b7-5qsm5 7m 5815Mi
am-55f77847b7-c9bk2 5m 5774Mi
am-55f77847b7-zpsrs 7m 5797Mi
ds-cts-0 6m 381Mi
ds-cts-1 8m 384Mi
ds-cts-2 5m 378Mi
ds-idrepo-0 10m 13661Mi
ds-idrepo-1 10m 13587Mi
ds-idrepo-2 10m 13745Mi
end-user-ui-6845bc78c7-ztmfn 1m 6Mi
idm-65858d8c4c-4jclh 8m 5292Mi
idm-65858d8c4c-97wdf 8m 5266Mi
lodemon-86f768796c-ts724 7m 67Mi
login-ui-74d6fb46c-kprf6 1m 3Mi
overseer-0-78bdc846-p8mnn 1025m 1089Mi
04:36:24 DEBUG --- stderr ---
04:36:24 DEBUG
04:36:26 INFO
04:36:26 INFO [loop_until]: kubectl --namespace=xlou top node
04:36:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
04:36:26 INFO [loop_until]: OK (rc = 0)
04:36:26 DEBUG --- stdout ---
04:36:26 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1388Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 6858Mi 11%
gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 6938Mi 11%
gke-xlou-cdm-default-pool-f05840a3-9p4b 61m 0% 6946Mi 11%
gke-xlou-cdm-default-pool-f05840a3-bf2g 80m 0% 6648Mi 11%
gke-xlou-cdm-default-pool-f05840a3-h81k 127m 0% 2198Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 74m 0% 6551Mi 11%
gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1132Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1121Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 14442Mi 24%
gke-xlou-cdm-ds-32e4dcb1-b374 61m 0% 14315Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 50m 0% 1124Mi 1%
gke-xlou-cdm-ds-32e4dcb1-x4wx 59m 0% 14315Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 736m 4% 2380Mi 4%
04:36:26 DEBUG --- stderr ---
04:36:26 DEBUG
04:37:24 INFO
04:37:24 INFO [loop_until]: kubectl --namespace=xlou top pods
04:37:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
04:37:24 INFO [loop_until]: OK (rc = 0)
04:37:24 DEBUG --- stdout ---
04:37:24 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-f7lgj 1m 5Mi
am-55f77847b7-5qsm5 6m 5815Mi
am-55f77847b7-c9bk2 6m 5774Mi
am-55f77847b7-zpsrs 8m 5797Mi
ds-cts-0 6m 381Mi
ds-cts-1 8m 384Mi
ds-cts-2 5m 380Mi
ds-idrepo-0 10m 13661Mi
ds-idrepo-1 10m 13588Mi
ds-idrepo-2 10m 13744Mi
end-user-ui-6845bc78c7-ztmfn 1m 6Mi
idm-65858d8c4c-4jclh 9m 5292Mi
idm-65858d8c4c-97wdf 8m 5266Mi
lodemon-86f768796c-ts724 6m 67Mi
login-ui-74d6fb46c-kprf6 1m 3Mi
overseer-0-78bdc846-p8mnn 688m 1224Mi
04:37:24 DEBUG --- stderr ---
04:37:24 DEBUG
04:37:26 INFO
04:37:26 INFO [loop_until]: kubectl --namespace=xlou top node
04:37:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
04:37:27 INFO [loop_until]: OK (rc = 0)
04:37:27 DEBUG --- stdout ---
04:37:27 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1388Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 6859Mi 11%
gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 6936Mi 11%
gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 6944Mi 11%
gke-xlou-cdm-default-pool-f05840a3-bf2g 77m 0% 6639Mi 11%
gke-xlou-cdm-default-pool-f05840a3-h81k 129m 0% 2182Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 75m 0% 6551Mi 11%
gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1133Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1116Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 14443Mi 24%
gke-xlou-cdm-ds-32e4dcb1-b374 62m 0% 14311Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1122Mi 1%
gke-xlou-cdm-ds-32e4dcb1-x4wx 61m 0% 14313Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 1081m 6% 2859Mi 4%
04:37:27 DEBUG --- stderr ---
04:37:27 DEBUG
04:38:24 INFO
04:38:24 INFO [loop_until]: kubectl --namespace=xlou top pods
04:38:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
04:38:24 INFO [loop_until]: OK (rc = 0)
04:38:24 DEBUG --- stdout ---
04:38:24 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-f7lgj 1m 5Mi
am-55f77847b7-5qsm5 7m 5815Mi
am-55f77847b7-c9bk2 5m 5774Mi
am-55f77847b7-zpsrs 8m 5797Mi
ds-cts-0 6m 381Mi
ds-cts-1 17m 385Mi
ds-cts-2 5m 378Mi
ds-idrepo-0 9m 13662Mi
ds-idrepo-1 10m 13587Mi
ds-idrepo-2 10m 13744Mi
end-user-ui-6845bc78c7-ztmfn 1m 6Mi
idm-65858d8c4c-4jclh 9m 5292Mi
idm-65858d8c4c-97wdf 8m 5266Mi
lodemon-86f768796c-ts724 7m 67Mi
login-ui-74d6fb46c-kprf6 1m 3Mi
overseer-0-78bdc846-p8mnn 689m 1202Mi
04:38:24 DEBUG --- stderr ---
04:38:24 DEBUG
04:38:27 INFO
04:38:27 INFO [loop_until]: kubectl --namespace=xlou top node
04:38:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
04:38:27 INFO [loop_until]: OK (rc = 0)
04:38:27 DEBUG --- stdout ---
04:38:27 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1389Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 6859Mi 11%
gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 6941Mi 11%
gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 6943Mi 11%
gke-xlou-cdm-default-pool-f05840a3-bf2g 80m 0% 6639Mi 11%
gke-xlou-cdm-default-pool-f05840a3-h81k 145m 0% 2188Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 76m 0% 6549Mi 11%
gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1132Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1117Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 14442Mi 24%
gke-xlou-cdm-ds-32e4dcb1-b374 70m 0% 14313Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1121Mi 1%
gke-xlou-cdm-ds-32e4dcb1-x4wx 58m 0% 14317Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 1100m 6% 2982Mi 5%
04:38:27 DEBUG --- stderr ---
04:38:27 DEBUG
04:39:24 INFO
04:39:24 INFO [loop_until]: kubectl --namespace=xlou top pods
04:39:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
04:39:24 INFO [loop_until]: OK (rc = 0)
04:39:24 DEBUG --- stdout ---
04:39:24 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-f7lgj 1m 5Mi
am-55f77847b7-5qsm5 7m 5815Mi
am-55f77847b7-c9bk2 5m 5774Mi
am-55f77847b7-zpsrs 8m 5797Mi
ds-cts-0 6m 381Mi
ds-cts-1 7m 385Mi
ds-cts-2 5m 377Mi
ds-idrepo-0 10m 13661Mi
ds-idrepo-1 10m 13588Mi
ds-idrepo-2 10m 13744Mi
end-user-ui-6845bc78c7-ztmfn 1m 6Mi
idm-65858d8c4c-4jclh 8m 5291Mi
idm-65858d8c4c-97wdf 9m 5266Mi
lodemon-86f768796c-ts724 6m 67Mi
login-ui-74d6fb46c-kprf6 1m 3Mi
overseer-0-78bdc846-p8mnn 997m 1647Mi
04:39:24 DEBUG --- stderr ---
04:39:24 DEBUG
04:39:27 INFO
04:39:27 INFO [loop_until]: kubectl --namespace=xlou top node
04:39:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
04:39:27 INFO [loop_until]: OK (rc = 0)
04:39:27 DEBUG --- stdout ---
04:39:27 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1388Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 6861Mi 11%
gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 6940Mi 11%
gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 6945Mi 11%
gke-xlou-cdm-default-pool-f05840a3-bf2g 79m 0% 6638Mi 11%
gke-xlou-cdm-default-pool-f05840a3-h81k 124m 0% 2193Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 6550Mi 11%
gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1133Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1119Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 14442Mi 24%
gke-xlou-cdm-ds-32e4dcb1-b374 63m 0% 14312Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1122Mi 1%
gke-xlou-cdm-ds-32e4dcb1-x4wx 57m 0% 14317Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 1127m 7% 3182Mi 5%
04:39:27 DEBUG --- stderr ---
04:39:27 DEBUG
04:39:37 INFO Finished: True
04:39:37 INFO Waiting for threads to register finish flag
04:40:27 INFO Done. Have a nice day! :)
127.0.0.1 - - [13/Aug/2023 04:40:27] "GET /monitoring/stop HTTP/1.1" 200 -
04:40:30 INFO File /tmp/lodemon_data-23-08-13_01:57:29/Cpu_cores_used_per_pod.json does not exist. Skipping...
04:40:30 INFO File /tmp/lodemon_data-23-08-13_01:57:29/Memory_usage_per_pod.json does not exist. Skipping...
04:40:30 INFO File /tmp/lodemon_data-23-08-13_01:57:29/Disk_tps_read_per_pod.json does not exist. Skipping...
04:40:30 INFO File /tmp/lodemon_data-23-08-13_01:57:29/Disk_tps_writes_per_pod.json does not exist. Skipping...
04:40:30 INFO File /tmp/lodemon_data-23-08-13_01:57:29/Cpu_cores_used_per_node.json does not exist. Skipping...
04:40:30 INFO File /tmp/lodemon_data-23-08-13_01:57:29/Memory_usage_used_per_node.json does not exist. Skipping...
04:40:30 INFO File /tmp/lodemon_data-23-08-13_01:57:29/Cpu_iowait_per_node.json does not exist. Skipping...
04:40:30 INFO File /tmp/lodemon_data-23-08-13_01:57:29/Network_receive_per_node.json does not exist. Skipping...
04:40:30 INFO File /tmp/lodemon_data-23-08-13_01:57:29/Network_transmit_per_node.json does not exist. Skipping...
04:40:30 INFO File /tmp/lodemon_data-23-08-13_01:57:29/am_cts_task_count_token_session.json does not exist. Skipping...
04:40:30 INFO File /tmp/lodemon_data-23-08-13_01:57:29/am_authentication_rate.json does not exist. Skipping...
04:40:30 INFO File /tmp/lodemon_data-23-08-13_01:57:29/ds_db_cache_misses_internal_nodes(backend=amCts).json does not exist. Skipping...
04:40:30 INFO File /tmp/lodemon_data-23-08-13_01:57:29/ds_db_cache_misses_internal_nodes(backend=amIdentityStore).json does not exist. Skipping...
04:40:30 INFO File /tmp/lodemon_data-23-08-13_01:57:29/ds_db_cache_misses_internal_nodes(backend=cfgStore).json does not exist. Skipping...
04:40:30 INFO File /tmp/lodemon_data-23-08-13_01:57:29/ds_db_cache_misses_internal_nodes(backend=idmRepo).json does not exist. Skipping...
04:40:30 INFO File /tmp/lodemon_data-23-08-13_01:57:29/am_authentication_count_per_pod.json does not exist. Skipping...
04:40:30 INFO File /tmp/lodemon_data-23-08-13_01:57:29/Cts_reaper_Deletion_count.json does not exist. Skipping...
04:40:30 INFO File /tmp/lodemon_data-23-08-13_01:57:29/AM_oauth2_authorization_codes.json does not exist. Skipping...
04:40:30 INFO File /tmp/lodemon_data-23-08-13_01:57:29/ds_backend_entries_deleted_amCts.json does not exist. Skipping...
04:40:30 INFO File /tmp/lodemon_data-23-08-13_01:57:29/ds_pods_replication_delay.json does not exist. Skipping...
04:40:30 INFO File /tmp/lodemon_data-23-08-13_01:57:29/am_cts_reaper_cache_size.json does not exist. Skipping...
04:40:30 INFO File /tmp/lodemon_data-23-08-13_01:57:29/am_cts_reaper_search_seconds_total.json does not exist. Skipping...
04:40:30 INFO File /tmp/lodemon_data-23-08-13_01:57:29/ds_replication_replica_replayed_updates_conflicts_resolved.json does not exist. Skipping...
04:40:30 INFO File /tmp/lodemon_data-23-08-13_01:57:29/node_disk_read_bytes_total.json does not exist. Skipping...
04:40:30 INFO File /tmp/lodemon_data-23-08-13_01:57:29/node_disk_written_bytes_total.json does not exist. Skipping...
04:40:30 INFO File /tmp/lodemon_data-23-08-13_01:57:29/ds_backend_entry_count.json does not exist. Skipping...
04:40:30 INFO File /tmp/lodemon_data-23-08-13_01:57:29/node_disk_io_time_seconds_total.json does not exist. Skipping...
127.0.0.1 - - [13/Aug/2023 04:40:32] "GET /monitoring/process HTTP/1.1" 200 -
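The "File ... does not exist. Skipping..." messages above come from the post-run collection step, which looks for one JSON file per metric under the run's data directory and skips any that were never produced. A minimal sketch of that behaviour is shown below; the directory layout and file names are taken from the log, while the loading logic itself is a hypothetical placeholder rather than the actual lodemon code.

    # Hypothetical sketch of the post-run metric collection implied by the
    # "does not exist. Skipping..." messages. Metric file names are copied
    # from the log; everything else is illustrative.
    import json
    from pathlib import Path

    METRIC_FILES = [
        "Cpu_cores_used_per_pod.json",
        "Memory_usage_per_pod.json",
        "Cpu_cores_used_per_node.json",
        "ds_backend_entry_count.json",
    ]

    def collect_metrics(data_dir):
        """Load whichever metric files exist under data_dir, skipping the rest."""
        results = {}
        for name in METRIC_FILES:
            path = Path(data_dir) / name
            if not path.exists():
                print(f"INFO File {path} does not exist. Skipping...")
                continue
            with path.open() as handle:
                results[name] = json.load(handle)
        return results

    # Example: collect_metrics("/tmp/lodemon_data-23-08-13_01:57:29")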