====================================================================================================
=========================================  Pod describe  ==========================================
====================================================================================================
Name:         lodemon-97b6d75b7-fknft
Namespace:    xlou
Priority:     0
Node:         gke-xlou-cdm-default-pool-f05840a3-2nsn/10.142.0.46
Start Time:   Sat, 12 Aug 2023 04:09:06 +0000
Labels:       app=lodemon
              app.kubernetes.io/name=lodemon
              pod-template-hash=97b6d75b7
              skaffold.dev/run-id=9aa4da98-aaed-4128-8e01-03c0e4192b0a
Annotations:
Status:       Running
IP:           10.106.45.44
IPs:
  IP:  10.106.45.44
Controlled By:  ReplicaSet/lodemon-97b6d75b7
Containers:
  lodemon:
    Container ID:  containerd://58cf49fb164d36dce79f74beb5253bce1ca7c67afa4e7b69299e8b8c89d27d17
    Image:         gcr.io/engineeringpit/lodestar-images/lodestarbox:6c23848450de3f8e82f0a619a86abcd91fc890c6
    Image ID:      gcr.io/engineeringpit/lodestar-images/lodestarbox@sha256:f419b98ce988c016f788d178b318b601ed56b4ebb6e1a8df68b3ff2a986af79d
    Port:          8080/TCP
    Host Port:     0/TCP
    Command:
      python3
    Args:
      /lodestar/scripts/lodemon_run.py -W default
    State:          Running
      Started:      Sat, 12 Aug 2023 04:09:07 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  2Gi
    Requests:
      cpu:     1
      memory:  1Gi
    Liveness:   exec [cat /tmp/lodemon_alive] delay=20s timeout=1s period=10s #success=1 #failure=3
    Readiness:  exec [cat /tmp/lodemon_alive] delay=20s timeout=1s period=10s #success=1 #failure=3
    Environment:
      SKAFFOLD_PROFILE:  medium
    Mounts:
      /lodestar/config/config.yaml from config (rw,path="config.yaml")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9xkkm (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      lodemon-config
    Optional:  false
  kube-api-access-9xkkm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
====================================================================================================
===========================================  Pod logs  ============================================
====================================================================================================
05:09:08 INFO 05:09:08 INFO --------------------- Get expected number of pods --------------------- 05:09:08 INFO 05:09:08 INFO [loop_until]: kubectl --namespace=xlou get deployments --selector app=am --output jsonpath={.items[*].spec.replicas} 05:09:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:08 INFO [loop_until]: OK (rc = 0) 05:09:08 DEBUG --- stdout --- 05:09:08 DEBUG 3 05:09:08 DEBUG --- stderr --- 05:09:08 DEBUG 05:09:08 INFO 05:09:08 INFO ---------------------------- Get pod list ---------------------------- 05:09:08 INFO 05:09:08 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=am --output jsonpath={.items[*].metadata.name} 05:09:08 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0] 05:09:08 INFO [loop_until]: OK (rc = 0) 05:09:08 DEBUG --- stdout --- 05:09:08 DEBUG am-55f77847b7-778wv am-55f77847b7-nv9k2 am-55f77847b7-v7x55 05:09:08 DEBUG --- stderr --- 05:09:08 DEBUG 05:09:08 INFO 05:09:08 INFO -------------- Check pod 
am-55f77847b7-778wv is running -------------- 05:09:08 INFO 05:09:08 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-778wv -o=jsonpath={.status.phase} | grep "Running" 05:09:08 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:09:09 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:09:09 INFO [loop_until]: OK (rc = 0) 05:09:09 DEBUG --- stdout --- 05:09:09 DEBUG Running 05:09:09 DEBUG --- stderr --- 05:09:09 DEBUG 05:09:09 INFO 05:09:09 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-778wv -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 05:09:09 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:09:09 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:09:09 INFO [loop_until]: OK (rc = 0) 05:09:09 DEBUG --- stdout --- 05:09:09 DEBUG true 05:09:09 DEBUG --- stderr --- 05:09:09 DEBUG 05:09:09 INFO 05:09:09 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-778wv --output jsonpath={.status.startTime} 05:09:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:09 INFO [loop_until]: OK (rc = 0) 05:09:09 DEBUG --- stdout --- 05:09:09 DEBUG 2023-08-12T03:59:45Z 05:09:09 DEBUG --- stderr --- 05:09:09 DEBUG 05:09:09 INFO 05:09:09 INFO ------- Check pod am-55f77847b7-778wv filesystem is accessible ------- 05:09:09 INFO 05:09:09 INFO [loop_until]: kubectl --namespace=xlou exec am-55f77847b7-778wv --container openam -- ls / | grep "bin" 05:09:09 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:09:09 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:09:09 INFO [loop_until]: OK (rc = 0) 05:09:09 DEBUG --- stdout --- 05:09:09 DEBUG bin boot dev entrypoint.sh etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var 05:09:09 DEBUG --- stderr --- 05:09:09 DEBUG 05:09:09 INFO 05:09:09 INFO ------------- Check pod am-55f77847b7-778wv restart count ------------- 05:09:09 INFO 05:09:09 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-778wv --output jsonpath={.status.containerStatuses[*].restartCount} 05:09:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:09 INFO [loop_until]: OK (rc = 0) 05:09:09 DEBUG --- stdout --- 05:09:09 DEBUG 0 05:09:09 DEBUG --- stderr --- 05:09:09 DEBUG 05:09:09 INFO Pod am-55f77847b7-778wv has been restarted 0 times. 
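Every check in this log goes through the same [loop_until] wrapper: run a kubectl command, accept it when the return code (and, where a grep is piped in, the expected pattern) matches, otherwise retry every interval seconds until max_time elapses. The parameter names below (max_time, interval, expected_rc) are taken from the log output; the implementation itself is a minimal sketch for illustration, not lodemon's actual code.

    # Minimal sketch of a loop_until-style poller (assumed implementation).
    import subprocess
    import time

    def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,)):
        """Run a shell command repeatedly until its return code is accepted or max_time elapses."""
        deadline = time.monotonic() + max_time
        while True:
            proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
            if proc.returncode in expected_rc:
                return proc.returncode, proc.stdout, proc.stderr
            if time.monotonic() >= deadline:
                raise TimeoutError(f"{cmd!r} did not succeed within {max_time}s")
            time.sleep(interval)

    # Example: the phase check logged above for pod am-55f77847b7-778wv.
    rc, out, err = loop_until(
        'kubectl --namespace=xlou get pods am-55f77847b7-778wv '
        '-o=jsonpath={.status.phase} | grep "Running"',
        max_time=360, interval=5,
    )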
05:09:09 INFO 05:09:09 INFO -------------- Check pod am-55f77847b7-nv9k2 is running -------------- 05:09:09 INFO 05:09:09 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-nv9k2 -o=jsonpath={.status.phase} | grep "Running" 05:09:09 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:09:09 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:09:09 INFO [loop_until]: OK (rc = 0) 05:09:09 DEBUG --- stdout --- 05:09:09 DEBUG Running 05:09:09 DEBUG --- stderr --- 05:09:09 DEBUG 05:09:09 INFO 05:09:09 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-nv9k2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 05:09:09 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:09:09 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:09:09 INFO [loop_until]: OK (rc = 0) 05:09:09 DEBUG --- stdout --- 05:09:09 DEBUG true 05:09:09 DEBUG --- stderr --- 05:09:09 DEBUG 05:09:09 INFO 05:09:09 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-nv9k2 --output jsonpath={.status.startTime} 05:09:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:09 INFO [loop_until]: OK (rc = 0) 05:09:09 DEBUG --- stdout --- 05:09:09 DEBUG 2023-08-12T03:59:45Z 05:09:09 DEBUG --- stderr --- 05:09:09 DEBUG 05:09:09 INFO 05:09:09 INFO ------- Check pod am-55f77847b7-nv9k2 filesystem is accessible ------- 05:09:09 INFO 05:09:09 INFO [loop_until]: kubectl --namespace=xlou exec am-55f77847b7-nv9k2 --container openam -- ls / | grep "bin" 05:09:09 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:09:09 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:09:09 INFO [loop_until]: OK (rc = 0) 05:09:09 DEBUG --- stdout --- 05:09:09 DEBUG bin boot dev entrypoint.sh etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var 05:09:09 DEBUG --- stderr --- 05:09:09 DEBUG 05:09:09 INFO 05:09:09 INFO ------------- Check pod am-55f77847b7-nv9k2 restart count ------------- 05:09:09 INFO 05:09:09 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-nv9k2 --output jsonpath={.status.containerStatuses[*].restartCount} 05:09:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:09 INFO [loop_until]: OK (rc = 0) 05:09:09 DEBUG --- stdout --- 05:09:09 DEBUG 0 05:09:09 DEBUG --- stderr --- 05:09:09 DEBUG 05:09:09 INFO Pod am-55f77847b7-nv9k2 has been restarted 0 times. 
05:09:09 INFO 05:09:09 INFO -------------- Check pod am-55f77847b7-v7x55 is running -------------- 05:09:09 INFO 05:09:09 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-v7x55 -o=jsonpath={.status.phase} | grep "Running" 05:09:09 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:09:09 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:09:09 INFO [loop_until]: OK (rc = 0) 05:09:09 DEBUG --- stdout --- 05:09:09 DEBUG Running 05:09:09 DEBUG --- stderr --- 05:09:09 DEBUG 05:09:09 INFO 05:09:09 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-v7x55 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 05:09:09 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:09:09 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:09:09 INFO [loop_until]: OK (rc = 0) 05:09:09 DEBUG --- stdout --- 05:09:09 DEBUG true 05:09:09 DEBUG --- stderr --- 05:09:09 DEBUG 05:09:09 INFO 05:09:09 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-v7x55 --output jsonpath={.status.startTime} 05:09:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:09 INFO [loop_until]: OK (rc = 0) 05:09:09 DEBUG --- stdout --- 05:09:09 DEBUG 2023-08-12T03:59:45Z 05:09:09 DEBUG --- stderr --- 05:09:09 DEBUG 05:09:09 INFO 05:09:09 INFO ------- Check pod am-55f77847b7-v7x55 filesystem is accessible ------- 05:09:09 INFO 05:09:09 INFO [loop_until]: kubectl --namespace=xlou exec am-55f77847b7-v7x55 --container openam -- ls / | grep "bin" 05:09:09 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:09:09 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:09:09 INFO [loop_until]: OK (rc = 0) 05:09:09 DEBUG --- stdout --- 05:09:09 DEBUG bin boot dev entrypoint.sh etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var 05:09:09 DEBUG --- stderr --- 05:09:09 DEBUG 05:09:09 INFO 05:09:09 INFO ------------- Check pod am-55f77847b7-v7x55 restart count ------------- 05:09:09 INFO 05:09:09 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-v7x55 --output jsonpath={.status.containerStatuses[*].restartCount} 05:09:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:09 INFO [loop_until]: OK (rc = 0) 05:09:09 DEBUG --- stdout --- 05:09:09 DEBUG 0 05:09:09 DEBUG --- stderr --- 05:09:09 DEBUG 05:09:09 INFO Pod am-55f77847b7-v7x55 has been restarted 0 times. 
05:09:09 INFO 05:09:09 INFO --------------------- Get expected number of pods --------------------- 05:09:09 INFO 05:09:09 INFO [loop_until]: kubectl --namespace=xlou get deployment --selector app=idm --output jsonpath={.items[*].spec.replicas} 05:09:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:09 INFO [loop_until]: OK (rc = 0) 05:09:09 DEBUG --- stdout --- 05:09:09 DEBUG 2 05:09:09 DEBUG --- stderr --- 05:09:09 DEBUG 05:09:09 INFO 05:09:09 INFO ---------------------------- Get pod list ---------------------------- 05:09:09 INFO 05:09:09 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=idm --output jsonpath={.items[*].metadata.name} 05:09:09 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0] 05:09:10 INFO [loop_until]: OK (rc = 0) 05:09:10 DEBUG --- stdout --- 05:09:10 DEBUG idm-65858d8c4c-5tvr8 idm-65858d8c4c-zvhxh 05:09:10 DEBUG --- stderr --- 05:09:10 DEBUG 05:09:10 INFO 05:09:10 INFO -------------- Check pod idm-65858d8c4c-5tvr8 is running -------------- 05:09:10 INFO 05:09:10 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-5tvr8 -o=jsonpath={.status.phase} | grep "Running" 05:09:10 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:09:10 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:09:10 INFO [loop_until]: OK (rc = 0) 05:09:10 DEBUG --- stdout --- 05:09:10 DEBUG Running 05:09:10 DEBUG --- stderr --- 05:09:10 DEBUG 05:09:10 INFO 05:09:10 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-5tvr8 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 05:09:10 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:09:10 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:09:10 INFO [loop_until]: OK (rc = 0) 05:09:10 DEBUG --- stdout --- 05:09:10 DEBUG true 05:09:10 DEBUG --- stderr --- 05:09:10 DEBUG 05:09:10 INFO 05:09:10 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-5tvr8 --output jsonpath={.status.startTime} 05:09:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:10 INFO [loop_until]: OK (rc = 0) 05:09:10 DEBUG --- stdout --- 05:09:10 DEBUG 2023-08-12T03:59:45Z 05:09:10 DEBUG --- stderr --- 05:09:10 DEBUG 05:09:10 INFO 05:09:10 INFO ------- Check pod idm-65858d8c4c-5tvr8 filesystem is accessible ------- 05:09:10 INFO 05:09:10 INFO [loop_until]: kubectl --namespace=xlou exec idm-65858d8c4c-5tvr8 --container openidm -- ls / | grep "bin" 05:09:10 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:09:10 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:09:10 INFO [loop_until]: OK (rc = 0) 05:09:10 DEBUG --- stdout --- 05:09:10 DEBUG Dockerfile.java-11 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 05:09:10 DEBUG --- stderr --- 05:09:10 DEBUG 05:09:10 INFO 05:09:10 INFO ------------ Check pod idm-65858d8c4c-5tvr8 restart count ------------ 05:09:10 INFO 05:09:10 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-5tvr8 --output jsonpath={.status.containerStatuses[*].restartCount} 05:09:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:10 INFO [loop_until]: OK (rc = 0) 05:09:10 DEBUG --- stdout --- 05:09:10 DEBUG 0 05:09:10 DEBUG --- stderr --- 05:09:10 DEBUG 05:09:10 INFO Pod idm-65858d8c4c-5tvr8 has been restarted 0 times. 
05:09:10 INFO 05:09:10 INFO -------------- Check pod idm-65858d8c4c-zvhxh is running -------------- 05:09:10 INFO 05:09:10 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-zvhxh -o=jsonpath={.status.phase} | grep "Running" 05:09:10 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:09:10 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:09:10 INFO [loop_until]: OK (rc = 0) 05:09:10 DEBUG --- stdout --- 05:09:10 DEBUG Running 05:09:10 DEBUG --- stderr --- 05:09:10 DEBUG 05:09:10 INFO 05:09:10 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-zvhxh -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 05:09:10 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:09:10 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:09:10 INFO [loop_until]: OK (rc = 0) 05:09:10 DEBUG --- stdout --- 05:09:10 DEBUG true 05:09:10 DEBUG --- stderr --- 05:09:10 DEBUG 05:09:10 INFO 05:09:10 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-zvhxh --output jsonpath={.status.startTime} 05:09:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:10 INFO [loop_until]: OK (rc = 0) 05:09:10 DEBUG --- stdout --- 05:09:10 DEBUG 2023-08-12T03:59:45Z 05:09:10 DEBUG --- stderr --- 05:09:10 DEBUG 05:09:10 INFO 05:09:10 INFO ------- Check pod idm-65858d8c4c-zvhxh filesystem is accessible ------- 05:09:10 INFO 05:09:10 INFO [loop_until]: kubectl --namespace=xlou exec idm-65858d8c4c-zvhxh --container openidm -- ls / | grep "bin" 05:09:10 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:09:10 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:09:10 INFO [loop_until]: OK (rc = 0) 05:09:10 DEBUG --- stdout --- 05:09:10 DEBUG Dockerfile.java-11 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 05:09:10 DEBUG --- stderr --- 05:09:10 DEBUG 05:09:10 INFO 05:09:10 INFO ------------ Check pod idm-65858d8c4c-zvhxh restart count ------------ 05:09:10 INFO 05:09:10 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-zvhxh --output jsonpath={.status.containerStatuses[*].restartCount} 05:09:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:10 INFO [loop_until]: OK (rc = 0) 05:09:10 DEBUG --- stdout --- 05:09:10 DEBUG 0 05:09:10 DEBUG --- stderr --- 05:09:10 DEBUG 05:09:10 INFO Pod idm-65858d8c4c-zvhxh has been restarted 0 times. 
05:09:10 INFO 05:09:10 INFO --------------------- Get expected number of pods --------------------- 05:09:10 INFO 05:09:10 INFO [loop_until]: kubectl --namespace=xlou get statefulsets --selector app=ds-idrepo --output jsonpath={.items[*].spec.replicas} 05:09:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:10 INFO [loop_until]: OK (rc = 0) 05:09:10 DEBUG --- stdout --- 05:09:10 DEBUG 3 05:09:10 DEBUG --- stderr --- 05:09:10 DEBUG 05:09:10 INFO 05:09:10 INFO ---------------------------- Get pod list ---------------------------- 05:09:10 INFO 05:09:10 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=ds-idrepo --output jsonpath={.items[*].metadata.name} 05:09:10 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0] 05:09:10 INFO [loop_until]: OK (rc = 0) 05:09:10 DEBUG --- stdout --- 05:09:10 DEBUG ds-idrepo-0 ds-idrepo-1 ds-idrepo-2 05:09:10 DEBUG --- stderr --- 05:09:10 DEBUG 05:09:10 INFO 05:09:10 INFO ------------------ Check pod ds-idrepo-0 is running ------------------ 05:09:10 INFO 05:09:10 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.phase} | grep "Running" 05:09:10 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:09:10 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:09:10 INFO [loop_until]: OK (rc = 0) 05:09:10 DEBUG --- stdout --- 05:09:10 DEBUG Running 05:09:10 DEBUG --- stderr --- 05:09:10 DEBUG 05:09:10 INFO 05:09:10 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 05:09:10 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:09:10 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:09:10 INFO [loop_until]: OK (rc = 0) 05:09:10 DEBUG --- stdout --- 05:09:10 DEBUG true 05:09:10 DEBUG --- stderr --- 05:09:10 DEBUG 05:09:10 INFO 05:09:10 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 --output jsonpath={.status.startTime} 05:09:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:10 INFO [loop_until]: OK (rc = 0) 05:09:10 DEBUG --- stdout --- 05:09:10 DEBUG 2023-08-12T03:25:47Z 05:09:10 DEBUG --- stderr --- 05:09:10 DEBUG 05:09:10 INFO 05:09:10 INFO ----------- Check pod ds-idrepo-0 filesystem is accessible ----------- 05:09:10 INFO 05:09:10 INFO [loop_until]: kubectl --namespace=xlou exec ds-idrepo-0 --container ds -- ls / | grep "bin" 05:09:10 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:09:10 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:09:10 INFO [loop_until]: OK (rc = 0) 05:09:10 DEBUG --- stdout --- 05:09:10 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 05:09:10 DEBUG --- stderr --- 05:09:10 DEBUG 05:09:10 INFO 05:09:10 INFO ----------------- Check pod ds-idrepo-0 restart count ----------------- 05:09:10 INFO 05:09:10 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 --output jsonpath={.status.containerStatuses[*].restartCount} 05:09:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:11 INFO [loop_until]: OK (rc = 0) 05:09:11 DEBUG --- stdout --- 05:09:11 DEBUG 0 05:09:11 DEBUG --- stderr --- 05:09:11 DEBUG 05:09:11 INFO Pod ds-idrepo-0 has been restarted 0 times. 
05:09:11 INFO 05:09:11 INFO ------------------ Check pod ds-idrepo-1 is running ------------------ 05:09:11 INFO 05:09:11 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.phase} | grep "Running" 05:09:11 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:09:11 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:09:11 INFO [loop_until]: OK (rc = 0) 05:09:11 DEBUG --- stdout --- 05:09:11 DEBUG Running 05:09:11 DEBUG --- stderr --- 05:09:11 DEBUG 05:09:11 INFO 05:09:11 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 05:09:11 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:09:11 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:09:11 INFO [loop_until]: OK (rc = 0) 05:09:11 DEBUG --- stdout --- 05:09:11 DEBUG true 05:09:11 DEBUG --- stderr --- 05:09:11 DEBUG 05:09:11 INFO 05:09:11 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 --output jsonpath={.status.startTime} 05:09:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:11 INFO [loop_until]: OK (rc = 0) 05:09:11 DEBUG --- stdout --- 05:09:11 DEBUG 2023-08-12T03:37:47Z 05:09:11 DEBUG --- stderr --- 05:09:11 DEBUG 05:09:11 INFO 05:09:11 INFO ----------- Check pod ds-idrepo-1 filesystem is accessible ----------- 05:09:11 INFO 05:09:11 INFO [loop_until]: kubectl --namespace=xlou exec ds-idrepo-1 --container ds -- ls / | grep "bin" 05:09:11 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:09:11 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:09:11 INFO [loop_until]: OK (rc = 0) 05:09:11 DEBUG --- stdout --- 05:09:11 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 05:09:11 DEBUG --- stderr --- 05:09:11 DEBUG 05:09:11 INFO 05:09:11 INFO ----------------- Check pod ds-idrepo-1 restart count ----------------- 05:09:11 INFO 05:09:11 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 --output jsonpath={.status.containerStatuses[*].restartCount} 05:09:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:11 INFO [loop_until]: OK (rc = 0) 05:09:11 DEBUG --- stdout --- 05:09:11 DEBUG 0 05:09:11 DEBUG --- stderr --- 05:09:11 DEBUG 05:09:11 INFO Pod ds-idrepo-1 has been restarted 0 times. 
05:09:11 INFO 05:09:11 INFO ------------------ Check pod ds-idrepo-2 is running ------------------ 05:09:11 INFO 05:09:11 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.phase} | grep "Running" 05:09:11 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:09:11 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:09:11 INFO [loop_until]: OK (rc = 0) 05:09:11 DEBUG --- stdout --- 05:09:11 DEBUG Running 05:09:11 DEBUG --- stderr --- 05:09:11 DEBUG 05:09:11 INFO 05:09:11 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 05:09:11 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:09:11 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:09:11 INFO [loop_until]: OK (rc = 0) 05:09:11 DEBUG --- stdout --- 05:09:11 DEBUG true 05:09:11 DEBUG --- stderr --- 05:09:11 DEBUG 05:09:11 INFO 05:09:11 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 --output jsonpath={.status.startTime} 05:09:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:11 INFO [loop_until]: OK (rc = 0) 05:09:11 DEBUG --- stdout --- 05:09:11 DEBUG 2023-08-12T03:48:50Z 05:09:11 DEBUG --- stderr --- 05:09:11 DEBUG 05:09:11 INFO 05:09:11 INFO ----------- Check pod ds-idrepo-2 filesystem is accessible ----------- 05:09:11 INFO 05:09:11 INFO [loop_until]: kubectl --namespace=xlou exec ds-idrepo-2 --container ds -- ls / | grep "bin" 05:09:11 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:09:11 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:09:11 INFO [loop_until]: OK (rc = 0) 05:09:11 DEBUG --- stdout --- 05:09:11 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 05:09:11 DEBUG --- stderr --- 05:09:11 DEBUG 05:09:11 INFO 05:09:11 INFO ----------------- Check pod ds-idrepo-2 restart count ----------------- 05:09:11 INFO 05:09:11 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 --output jsonpath={.status.containerStatuses[*].restartCount} 05:09:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:11 INFO [loop_until]: OK (rc = 0) 05:09:11 DEBUG --- stdout --- 05:09:11 DEBUG 0 05:09:11 DEBUG --- stderr --- 05:09:11 DEBUG 05:09:11 INFO Pod ds-idrepo-2 has been restarted 0 times. 
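The blocks above repeat one fixed sequence per component: read the expected replica count from the Deployment or StatefulSet, list the pods by selector, then check phase, readiness, filesystem access and restart count for each pod. The following is a rough, self-contained sketch of that sequence using plain subprocess calls; the helper names are hypothetical and the loop_until retries are omitted.

    # Sketch of the per-component check sequence shown in this log (assumed helpers).
    import subprocess

    def kubectl(args: str) -> str:
        """Run a kubectl command in the xlou namespace and return stdout."""
        out = subprocess.run(f"kubectl --namespace=xlou {args}",
                             shell=True, capture_output=True, text=True, check=True)
        return out.stdout.strip()

    def check_component(kind: str, selector: str, container: str) -> None:
        expected = int(kubectl(f"get {kind} --selector app={selector} "
                               "--output jsonpath={.items[*].spec.replicas}"))
        pods = kubectl(f"get pods --selector app={selector} "
                       "--output jsonpath={.items[*].metadata.name}").split()
        assert len(pods) == expected, f"{selector}: expected {expected} pods, found {len(pods)}"
        for pod in pods:
            assert kubectl(f"get pod {pod} -o=jsonpath={{.status.phase}}") == "Running"
            assert "false" not in kubectl(
                f"get pod {pod} -o=jsonpath={{.status.containerStatuses[*].ready}}")
            assert "bin" in kubectl(f"exec {pod} --container {container} -- ls /")
            restarts = kubectl(f"get pod {pod} "
                               "--output jsonpath={.status.containerStatuses[*].restartCount}")
            print(f"Pod {pod} has been restarted {restarts} times.")

    # The four components checked in this run:
    check_component("deployments", "am", "openam")
    check_component("deployments", "idm", "openidm")
    check_component("statefulsets", "ds-idrepo", "ds")
    check_component("statefulsets", "ds-cts", "ds")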
05:09:11 INFO 05:09:11 INFO --------------------- Get expected number of pods --------------------- 05:09:11 INFO 05:09:11 INFO [loop_until]: kubectl --namespace=xlou get statefulsets --selector app=ds-cts --output jsonpath={.items[*].spec.replicas} 05:09:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:11 INFO [loop_until]: OK (rc = 0) 05:09:11 DEBUG --- stdout --- 05:09:11 DEBUG 3 05:09:11 DEBUG --- stderr --- 05:09:11 DEBUG 05:09:11 INFO 05:09:11 INFO ---------------------------- Get pod list ---------------------------- 05:09:11 INFO 05:09:11 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=ds-cts --output jsonpath={.items[*].metadata.name} 05:09:11 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0] 05:09:11 INFO [loop_until]: OK (rc = 0) 05:09:11 DEBUG --- stdout --- 05:09:11 DEBUG ds-cts-0 ds-cts-1 ds-cts-2 05:09:11 DEBUG --- stderr --- 05:09:11 DEBUG 05:09:11 INFO 05:09:11 INFO -------------------- Check pod ds-cts-0 is running -------------------- 05:09:11 INFO 05:09:11 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.phase} | grep "Running" 05:09:11 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:09:11 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:09:11 INFO [loop_until]: OK (rc = 0) 05:09:11 DEBUG --- stdout --- 05:09:11 DEBUG Running 05:09:11 DEBUG --- stderr --- 05:09:11 DEBUG 05:09:11 INFO 05:09:11 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 05:09:11 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:09:11 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:09:11 INFO [loop_until]: OK (rc = 0) 05:09:11 DEBUG --- stdout --- 05:09:11 DEBUG true 05:09:11 DEBUG --- stderr --- 05:09:11 DEBUG 05:09:11 INFO 05:09:11 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-0 --output jsonpath={.status.startTime} 05:09:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:11 INFO [loop_until]: OK (rc = 0) 05:09:11 DEBUG --- stdout --- 05:09:11 DEBUG 2023-08-12T03:25:47Z 05:09:11 DEBUG --- stderr --- 05:09:11 DEBUG 05:09:11 INFO 05:09:11 INFO ------------- Check pod ds-cts-0 filesystem is accessible ------------- 05:09:11 INFO 05:09:11 INFO [loop_until]: kubectl --namespace=xlou exec ds-cts-0 --container ds -- ls / | grep "bin" 05:09:11 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:09:11 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:09:11 INFO [loop_until]: OK (rc = 0) 05:09:11 DEBUG --- stdout --- 05:09:11 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 05:09:11 DEBUG --- stderr --- 05:09:11 DEBUG 05:09:11 INFO 05:09:11 INFO ------------------ Check pod ds-cts-0 restart count ------------------ 05:09:11 INFO 05:09:11 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-0 --output jsonpath={.status.containerStatuses[*].restartCount} 05:09:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:12 INFO [loop_until]: OK (rc = 0) 05:09:12 DEBUG --- stdout --- 05:09:12 DEBUG 0 05:09:12 DEBUG --- stderr --- 05:09:12 DEBUG 05:09:12 INFO Pod ds-cts-0 has been restarted 0 times. 
05:09:12 INFO 05:09:12 INFO -------------------- Check pod ds-cts-1 is running -------------------- 05:09:12 INFO 05:09:12 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.phase} | grep "Running" 05:09:12 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:09:12 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:09:12 INFO [loop_until]: OK (rc = 0) 05:09:12 DEBUG --- stdout --- 05:09:12 DEBUG Running 05:09:12 DEBUG --- stderr --- 05:09:12 DEBUG 05:09:12 INFO 05:09:12 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 05:09:12 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:09:12 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:09:12 INFO [loop_until]: OK (rc = 0) 05:09:12 DEBUG --- stdout --- 05:09:12 DEBUG true 05:09:12 DEBUG --- stderr --- 05:09:12 DEBUG 05:09:12 INFO 05:09:12 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-1 --output jsonpath={.status.startTime} 05:09:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:12 INFO [loop_until]: OK (rc = 0) 05:09:12 DEBUG --- stdout --- 05:09:12 DEBUG 2023-08-12T03:26:15Z 05:09:12 DEBUG --- stderr --- 05:09:12 DEBUG 05:09:12 INFO 05:09:12 INFO ------------- Check pod ds-cts-1 filesystem is accessible ------------- 05:09:12 INFO 05:09:12 INFO [loop_until]: kubectl --namespace=xlou exec ds-cts-1 --container ds -- ls / | grep "bin" 05:09:12 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:09:12 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:09:12 INFO [loop_until]: OK (rc = 0) 05:09:12 DEBUG --- stdout --- 05:09:12 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 05:09:12 DEBUG --- stderr --- 05:09:12 DEBUG 05:09:12 INFO 05:09:12 INFO ------------------ Check pod ds-cts-1 restart count ------------------ 05:09:12 INFO 05:09:12 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-1 --output jsonpath={.status.containerStatuses[*].restartCount} 05:09:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:12 INFO [loop_until]: OK (rc = 0) 05:09:12 DEBUG --- stdout --- 05:09:12 DEBUG 0 05:09:12 DEBUG --- stderr --- 05:09:12 DEBUG 05:09:12 INFO Pod ds-cts-1 has been restarted 0 times. 
05:09:12 INFO 05:09:12 INFO -------------------- Check pod ds-cts-2 is running -------------------- 05:09:12 INFO 05:09:12 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.phase} | grep "Running" 05:09:12 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:09:12 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:09:12 INFO [loop_until]: OK (rc = 0) 05:09:12 DEBUG --- stdout --- 05:09:12 DEBUG Running 05:09:12 DEBUG --- stderr --- 05:09:12 DEBUG 05:09:12 INFO 05:09:12 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 05:09:12 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:09:12 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:09:12 INFO [loop_until]: OK (rc = 0) 05:09:12 DEBUG --- stdout --- 05:09:12 DEBUG true 05:09:12 DEBUG --- stderr --- 05:09:12 DEBUG 05:09:12 INFO 05:09:12 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-2 --output jsonpath={.status.startTime} 05:09:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:12 INFO [loop_until]: OK (rc = 0) 05:09:12 DEBUG --- stdout --- 05:09:12 DEBUG 2023-08-12T03:26:40Z 05:09:12 DEBUG --- stderr --- 05:09:12 DEBUG 05:09:12 INFO 05:09:12 INFO ------------- Check pod ds-cts-2 filesystem is accessible ------------- 05:09:12 INFO 05:09:12 INFO [loop_until]: kubectl --namespace=xlou exec ds-cts-2 --container ds -- ls / | grep "bin" 05:09:12 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 05:09:12 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 05:09:12 INFO [loop_until]: OK (rc = 0) 05:09:12 DEBUG --- stdout --- 05:09:12 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 05:09:12 DEBUG --- stderr --- 05:09:12 DEBUG 05:09:12 INFO 05:09:12 INFO ------------------ Check pod ds-cts-2 restart count ------------------ 05:09:12 INFO 05:09:12 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-2 --output jsonpath={.status.containerStatuses[*].restartCount} 05:09:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:12 INFO [loop_until]: OK (rc = 0) 05:09:12 DEBUG --- stdout --- 05:09:12 DEBUG 0 05:09:12 DEBUG --- stderr --- 05:09:12 DEBUG 05:09:12 INFO Pod ds-cts-2 has been restarted 0 times. * Serving Flask app 'lodemon_run' * Debug mode: off WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead. 
* Running on all addresses (0.0.0.0) * Running on http://127.0.0.1:8080 * Running on http://10.106.45.44:8080 Press CTRL+C to quit 05:09:33 INFO 05:09:33 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:09:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:33 INFO [loop_until]: OK (rc = 0) 05:09:33 DEBUG --- stdout --- 05:09:33 DEBUG {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:09:33 DEBUG --- stderr --- 05:09:33 DEBUG 05:09:33 INFO 05:09:33 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:09:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:33 INFO [loop_until]: OK (rc = 0) 05:09:33 DEBUG --- stdout --- 05:09:33 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:09:33 DEBUG --- stderr --- 05:09:33 DEBUG 05:09:33 INFO 05:09:33 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:09:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:33 INFO [loop_until]: OK (rc = 0) 05:09:33 DEBUG --- stdout --- 05:09:33 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:09:33 DEBUG --- stderr --- 05:09:33 DEBUG 05:09:33 INFO 05:09:33 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:09:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:34 INFO [loop_until]: OK (rc = 0) 05:09:34 DEBUG --- stdout --- 05:09:34 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:09:34 DEBUG --- stderr --- 05:09:34 DEBUG 05:09:34 INFO 05:09:34 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:09:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:34 INFO [loop_until]: OK (rc = 0) 05:09:34 DEBUG --- stdout --- 05:09:34 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:09:34 DEBUG --- stderr --- 05:09:34 DEBUG 05:09:34 INFO 05:09:34 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:09:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:34 INFO [loop_until]: OK (rc = 0) 05:09:34 DEBUG --- stdout --- 05:09:34 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:09:34 DEBUG --- stderr --- 05:09:34 DEBUG 05:09:34 INFO 05:09:34 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:09:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:34 INFO [loop_until]: OK (rc = 0) 05:09:34 DEBUG --- stdout --- 05:09:34 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:09:34 DEBUG --- stderr --- 05:09:34 DEBUG 05:09:34 INFO 05:09:34 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:09:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:34 INFO [loop_until]: OK (rc = 0) 05:09:34 DEBUG --- stdout --- 05:09:34 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:09:34 DEBUG --- stderr --- 05:09:34 DEBUG 05:09:34 INFO 05:09:34 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:09:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:34 INFO [loop_until]: OK (rc = 0) 05:09:34 DEBUG --- stdout --- 05:09:34 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:09:34 DEBUG --- stderr --- 05:09:34 DEBUG 05:09:34 INFO 05:09:34 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:09:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:34 INFO [loop_until]: OK (rc = 0) 05:09:34 DEBUG --- stdout --- 05:09:34 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:09:34 DEBUG --- stderr --- 05:09:34 DEBUG 05:09:34 INFO 05:09:34 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:09:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:34 INFO [loop_until]: OK (rc = 0) 05:09:34 DEBUG --- stdout --- 05:09:34 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:09:34 DEBUG --- stderr --- 05:09:34 DEBUG 05:09:35 INFO 05:09:35 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:09:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:35 INFO [loop_until]: OK (rc = 0) 05:09:35 DEBUG --- stdout --- 05:09:35 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:09:35 DEBUG --- stderr --- 05:09:35 DEBUG 05:09:35 INFO 05:09:35 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:09:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:35 INFO [loop_until]: OK (rc = 0) 05:09:35 DEBUG --- stdout --- 05:09:35 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:09:35 DEBUG --- stderr --- 05:09:35 DEBUG 05:09:35 INFO 05:09:35 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:09:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:35 INFO [loop_until]: OK (rc = 0) 05:09:35 DEBUG --- stdout --- 05:09:35 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:09:35 DEBUG --- stderr --- 05:09:35 DEBUG 05:09:35 INFO 05:09:35 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:09:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:35 INFO [loop_until]: OK (rc = 0) 05:09:35 DEBUG --- stdout --- 05:09:35 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:09:35 DEBUG --- stderr --- 05:09:35 DEBUG 05:09:35 INFO 05:09:35 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:09:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:35 INFO [loop_until]: OK (rc = 0) 05:09:35 DEBUG --- stdout --- 05:09:35 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:09:35 DEBUG --- stderr --- 05:09:35 DEBUG 05:09:35 INFO 05:09:35 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:09:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:35 INFO [loop_until]: OK (rc = 0) 05:09:35 DEBUG --- stdout --- 05:09:35 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:09:35 DEBUG --- stderr --- 05:09:35 DEBUG 05:09:35 INFO 05:09:35 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:09:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:35 INFO [loop_until]: OK (rc = 0) 05:09:35 DEBUG --- stdout --- 05:09:35 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:09:35 DEBUG --- stderr --- 05:09:35 DEBUG 05:09:35 INFO 05:09:35 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:09:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:35 INFO [loop_until]: OK (rc = 0) 05:09:35 DEBUG --- stdout --- 05:09:35 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:09:35 DEBUG --- stderr --- 05:09:35 DEBUG 05:09:36 INFO 05:09:36 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:09:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:36 INFO [loop_until]: OK (rc = 0) 05:09:36 DEBUG --- stdout --- 05:09:36 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:09:36 DEBUG --- stderr --- 05:09:36 DEBUG 05:09:36 INFO 05:09:36 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:09:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:36 INFO [loop_until]: OK (rc = 0) 05:09:36 DEBUG --- stdout --- 05:09:36 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:09:36 DEBUG --- stderr --- 05:09:36 DEBUG 05:09:36 INFO 05:09:36 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:09:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:36 INFO [loop_until]: OK (rc = 0) 05:09:36 DEBUG --- stdout --- 05:09:36 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:09:36 DEBUG --- stderr --- 05:09:36 DEBUG 05:09:36 INFO 05:09:36 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:09:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:36 INFO [loop_until]: OK (rc = 0) 05:09:36 DEBUG --- stdout --- 05:09:36 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:09:36 DEBUG --- stderr --- 05:09:36 DEBUG 05:09:36 INFO 05:09:36 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:09:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:36 INFO [loop_until]: OK (rc = 0) 05:09:36 DEBUG --- stdout --- 05:09:36 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:09:36 DEBUG --- stderr --- 05:09:36 DEBUG 05:09:36 INFO 05:09:36 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:09:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:36 INFO [loop_until]: OK (rc = 0) 05:09:36 DEBUG --- stdout --- 05:09:36 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:09:36 DEBUG --- stderr --- 05:09:36 DEBUG 05:09:36 INFO 05:09:36 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:09:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:36 INFO [loop_until]: OK (rc = 0) 05:09:36 DEBUG --- stdout --- 05:09:36 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:09:36 DEBUG --- stderr --- 05:09:36 DEBUG 05:09:36 INFO 05:09:36 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 05:09:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:37 INFO [loop_until]: OK (rc = 0) 05:09:37 DEBUG --- stdout --- 05:09:37 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 05:09:37 DEBUG --- stderr --- 05:09:37 DEBUG 05:09:37 INFO Initializing monitoring instance threads 05:09:37 DEBUG Monitoring instance thread list: [, , , , , , , , , , , , , , , , , , , , , , , , , , , , ] 05:09:37 INFO Starting instance threads 05:09:37 INFO 05:09:37 INFO Thread started 05:09:37 INFO [loop_until]: kubectl --namespace=xlou top node 05:09:37 INFO 05:09:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:37 INFO Thread started 05:09:37 INFO [loop_until]: kubectl --namespace=xlou top pods 05:09:37 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377" 05:09:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:09:37 INFO Thread started 05:09:37 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377" 05:09:37 INFO Thread started 05:09:37 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377" 05:09:37 INFO Thread started 05:09:37 INFO [http_cmd]: curl --insecure -L --request GET 
"http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377" 05:09:37 INFO Thread started 05:09:37 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691813377" 05:09:37 INFO Thread started 05:09:37 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691813377" 05:09:37 INFO Thread started 05:09:37 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691813377" 05:09:37 INFO Thread started 05:09:37 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691813377" 05:09:37 INFO Thread started 05:09:37 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691813377" 05:09:37 INFO Thread started 05:09:37 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377" 05:09:37 INFO Thread started 05:09:37 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377" 05:09:37 INFO Thread started 05:09:37 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377" 05:09:37 INFO Thread started 05:09:37 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377" 05:09:37 INFO Thread started 05:09:37 INFO [http_cmd]: curl --insecure -L --request GET 
"http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377" 05:09:37 INFO Thread started 05:09:37 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377" 05:09:37 INFO Thread started 05:09:37 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377" 05:09:37 INFO Thread started 05:09:37 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377" 05:09:37 INFO Thread started 05:09:37 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377" 05:09:37 INFO Thread started 05:09:37 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377" 05:09:37 INFO Thread started 05:09:37 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377" 05:09:37 INFO Thread started Exception in thread Thread-23: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner 05:09:37 INFO Thread started Exception in thread Thread-24: 05:09:37 INFO Thread started Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner Exception in thread Thread-25: Traceback (most recent call last): 05:09:37 INFO Thread started File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() 05:09:37 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691813377" self.run() 05:09:37 INFO Thread started self.run() 05:09:37 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691813377" File 
"/usr/local/lib/python3.9/threading.py", line 910, in run 05:09:37 INFO Thread started File "/usr/local/lib/python3.9/threading.py", line 910, in run File "/usr/local/lib/python3.9/threading.py", line 910, in run 05:09:37 INFO Thread started Exception in thread Thread-28: 05:09:37 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691813377" self._target(*self._args, **self._kwargs) 05:09:37 INFO Thread started 05:09:37 INFO All threads has been started Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop self._target(*self._args, **self._kwargs) File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner 127.0.0.1 - - [12/Aug/2023 05:09:37] "GET /monitoring/start HTTP/1.1" 200 - File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop self._target(*self._args, **self._kwargs) instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run if self.prom_data['functions']: self.run() if self.prom_data['functions']: KeyError: 'functions' KeyError: 'functions' if self.prom_data['functions']: 05:09:37 INFO [loop_until]: OK (rc = 0) 05:09:37 DEBUG --- stdout --- File "/usr/local/lib/python3.9/threading.py", line 910, in run 05:09:37 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 17m 2603Mi am-55f77847b7-nv9k2 20m 4357Mi am-55f77847b7-v7x55 20m 4350Mi ds-cts-0 7m 358Mi ds-cts-1 7m 352Mi ds-cts-2 9m 359Mi ds-idrepo-0 23m 10352Mi ds-idrepo-1 21m 10284Mi ds-idrepo-2 54m 10260Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 7m 1614Mi idm-65858d8c4c-zvhxh 6m 1412Mi lodemon-97b6d75b7-fknft 573m 60Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 1m 15Mi 05:09:37 DEBUG --- stderr --- KeyError: 'functions' 05:09:37 DEBUG self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run if self.prom_data['functions']: KeyError: 'functions' 05:09:37 INFO [loop_until]: OK (rc = 0) 05:09:37 DEBUG --- stdout --- 05:09:37 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 343m 2% 1327Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 91m 0% 5370Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 84m 0% 5479Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 79m 0% 3713Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 2932Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 127m 0% 2103Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 2667Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1056Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 69m 0% 11004Mi 18% gke-xlou-cdm-ds-32e4dcb1-b374 101m 0% 
10899Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1073Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 76m 0% 10913Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 67m 0% 1626Mi 2% 05:09:37 DEBUG --- stderr --- 05:09:37 DEBUG 05:09:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:09:38 WARNING Response is NONE 05:09:38 DEBUG Exception is preset. Setting retry_loop to true 05:09:38 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:09:40 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:09:40 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:09:40 WARNING Response is NONE 05:09:40 WARNING Response is NONE 05:09:40 DEBUG Exception is preset. Setting retry_loop to true 05:09:40 DEBUG Exception is preset. Setting retry_loop to true 05:09:40 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:09:40 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:09:44 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:09:44 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 05:09:44 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:09:44 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:09:44 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:09:44 WARNING Response is NONE 05:09:44 WARNING Response is NONE 05:09:44 WARNING Response is NONE 05:09:44 WARNING Response is NONE 05:09:44 WARNING Response is NONE 05:09:44 DEBUG Exception is preset. Setting retry_loop to true 05:09:44 DEBUG Exception is preset. Setting retry_loop to true 05:09:44 DEBUG Exception is preset. Setting retry_loop to true 05:09:44 DEBUG Exception is preset. Setting retry_loop to true 05:09:44 DEBUG Exception is preset. Setting retry_loop to true 05:09:44 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:09:44 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:09:44 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:09:44 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:09:44 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:09:49 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:09:49 WARNING Response is NONE 05:09:49 DEBUG Exception is preset. Setting retry_loop to true 05:09:49 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
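The [http_cmd] entries above are plain HTTP GETs against the Prometheus instant-query endpoint (/api/v1/query), with the PromQL expression URL-encoded into the query parameter and the evaluation timestamp passed as time. A minimal Python sketch of an equivalent call (illustrative only, not the lodemon implementation; the service hostname and query text are copied from the log, the requests dependency is an assumption):

    import requests
    from urllib.parse import quote

    PROM = "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090"
    # PromQL taken from one of the [http_cmd] lines above, in decoded form.
    promql = "sum(rate(container_cpu_usage_seconds_total{namespace='xlou'}[60s]))by(pod)"

    # /api/v1/query evaluates the expression at a single instant (Unix seconds).
    url = f"{PROM}/api/v1/query?query={quote(promql, safe='')}&time=1691813377"
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    body = resp.json()

    # An instant query answers with {"status": "success", "data": {"result": [...]}};
    # each result item carries its label set and a [timestamp, value] pair.
    for sample in body["data"]["result"]:
        print(sample["metric"].get("pod"), sample["value"][1])
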
05:09:51 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:09:51 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:09:51 WARNING Response is NONE 05:09:51 WARNING Response is NONE 05:09:51 DEBUG Exception is preset. Setting retry_loop to true 05:09:51 DEBUG Exception is preset. Setting retry_loop to true 05:09:51 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:09:51 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:09:52 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:09:52 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:09:52 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:09:52 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 05:09:52 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:09:52 WARNING Response is NONE 05:09:52 WARNING Response is NONE 05:09:52 WARNING Response is NONE 05:09:52 WARNING Response is NONE 05:09:52 WARNING Response is NONE 05:09:52 DEBUG Exception is preset. Setting retry_loop to true 05:09:52 DEBUG Exception is preset. Setting retry_loop to true 05:09:52 DEBUG Exception is preset. Setting retry_loop to true 05:09:52 DEBUG Exception is preset. Setting retry_loop to true 05:09:52 DEBUG Exception is preset. Setting retry_loop to true 05:09:52 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:09:52 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:09:52 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:09:52 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:09:52 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:09:55 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:09:55 WARNING Response is NONE 05:09:55 DEBUG Exception is preset. Setting retry_loop to true 05:09:55 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:09:57 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:09:57 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:09:57 WARNING Response is NONE 05:09:57 WARNING Response is NONE 05:09:57 DEBUG Exception is preset. Setting retry_loop to true 05:09:57 DEBUG Exception is preset. Setting retry_loop to true 05:09:57 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:09:57 WARNING We received known exception. 
Trying to recover, sleeping for 10 secs before retry... 05:10:00 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:10:00 WARNING Response is NONE 05:10:00 DEBUG Exception is preset. Setting retry_loop to true 05:10:00 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:10:02 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:10:02 WARNING Response is NONE 05:10:02 DEBUG Exception is preset. Setting retry_loop to true 05:10:02 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:10:03 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:10:03 WARNING Response is NONE 05:10:03 DEBUG Exception is preset. Setting retry_loop to true 05:10:03 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:10:04 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:10:04 WARNING Response is NONE 05:10:04 DEBUG Exception is preset. Setting retry_loop to true 05:10:04 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:10:06 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:10:06 WARNING Response is NONE 05:10:06 DEBUG Exception is preset. Setting retry_loop to true 05:10:06 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
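The warnings above all trace back to the same operation: lodemon issuing Prometheus instant queries against /api/v1/query on prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090, each of which fails with "Connection refused" before any response is obtained. A minimal sketch of such a query is shown below, assuming the standard requests library; the host, endpoint, PromQL expression and timestamp are taken from the log, while the helper name, timeout and error handling are illustrative only.

    # Sketch of the Prometheus instant query behind the warnings above.
    # Host, path, query and timestamp come from the log; the function name,
    # timeout and error handling are assumptions.
    import requests

    PROMETHEUS = "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090"

    def instant_query(promql, ts, timeout=5.0):
        """Run one /api/v1/query call; return decoded JSON, or None if the server is unreachable."""
        try:
            resp = requests.get(
                f"{PROMETHEUS}/api/v1/query",
                params={"query": promql, "time": ts},
                timeout=timeout,
            )
            resp.raise_for_status()
            return resp.json()
        except requests.exceptions.ConnectionError:
            # The "[Errno 111] Connection refused" case seen in the log.
            return None

    # Example: the iowait query from the 05:09:51 warning.
    result = instant_query(
        "sum(rate(node_cpu_seconds_total{mode='iowait'}[60s]))by(instance)",
        1691813377,
    )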
05:10:08 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:10:08 WARNING Response is NONE 05:10:08 DEBUG Exception is preset. Setting retry_loop to true 05:10:08 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:10:09 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:10:09 WARNING Response is NONE 05:10:09 DEBUG Exception is preset. Setting retry_loop to true 05:10:09 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:10:11 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:10:11 WARNING Response is NONE 05:10:11 DEBUG Exception is preset. Setting retry_loop to true 05:10:11 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:10:13 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:10:13 WARNING Response is NONE 05:10:13 DEBUG Exception is preset. Setting retry_loop to true 05:10:13 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:10:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:10:15 WARNING Response is NONE 05:10:15 DEBUG Exception is preset. Setting retry_loop to true 05:10:15 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
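The "Checking if error is transient one", "sleeping for 10 secs before retry" and "Hit retry pattern for a 5 time" messages describe a bounded retry loop: a known connection error (or a missing response) triggers a 10-second sleep and another attempt, and after five attempts the code gives up, inspects whatever response it has, and fails because it is still None. The sketch below reproduces that control flow under those assumptions; the names are illustrative and not the actual HttpCmd implementation.

    # Sketch of the retry behaviour implied by the log messages; not the real
    # HttpCmd code. fetch is any callable that returns a response or raises a
    # connection error; the built-in ConnectionError stands in for the "known
    # exception" in the log.
    import time

    class FailException(Exception):
        """Stand-in for shared.lib.utils.exception.FailException."""

    def get_with_retries(fetch, retries=5, sleep_secs=10):
        response = None
        for attempt in range(1, retries + 1):
            try:
                response = fetch()
            except ConnectionError as exc:
                print(f"Got connection reset error: {exc}. Checking if error is transient one")
                response = None
            if response is not None:
                return response
            if attempt < retries:
                print("We received known exception. Trying to recover, "
                      f"sleeping for {sleep_secs} secs before retry...")
                time.sleep(sleep_secs)
            else:
                print(f"Hit retry pattern for a {retries} time. Proceeding to check response anyway.")
        # Mirrors HttpCmd.request_cmd raising once the response is still None.
        raise FailException("Failed to obtain response from server...")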
05:10:17 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:10:17 WARNING Response is NONE 05:10:17 DEBUG Exception is preset. Setting retry_loop to true 05:10:17 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:10:18 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:10:18 WARNING Response is NONE 05:10:18 DEBUG Exception is preset. Setting retry_loop to true 05:10:18 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:10:20 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:10:20 WARNING Response is NONE 05:10:20 DEBUG Exception is preset. Setting retry_loop to true 05:10:20 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:10:22 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:10:22 WARNING Response is NONE 05:10:22 DEBUG Exception is preset. Setting retry_loop to true 05:10:22 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:10:24 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:10:24 WARNING Response is NONE 05:10:24 DEBUG Exception is preset. Setting retry_loop to true 05:10:24 WARNING We received known exception. 
Trying to recover, sleeping for 10 secs before retry... 05:10:26 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:10:26 WARNING Response is NONE 05:10:26 DEBUG Exception is preset. Setting retry_loop to true 05:10:26 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:10:29 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:10:29 WARNING Response is NONE 05:10:29 DEBUG Exception is preset. Setting retry_loop to true 05:10:29 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:10:31 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:10:31 WARNING Response is NONE 05:10:31 DEBUG Exception is preset. Setting retry_loop to true 05:10:31 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:10:33 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:10:33 WARNING Response is NONE 05:10:33 DEBUG Exception is preset. Setting retry_loop to true 05:10:33 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-13: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 05:10:35 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:10:35 WARNING Response is NONE 05:10:35 DEBUG Exception is preset. Setting retry_loop to true 05:10:35 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-9: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 05:10:37 INFO 05:10:37 INFO [loop_until]: kubectl --namespace=xlou top pods 05:10:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:10:37 INFO 05:10:37 INFO [loop_until]: kubectl --namespace=xlou top node 05:10:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:10:37 INFO [loop_until]: OK (rc = 0) 05:10:37 DEBUG --- stdout --- 05:10:37 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 17m 2604Mi am-55f77847b7-nv9k2 15m 4358Mi am-55f77847b7-v7x55 18m 4347Mi ds-cts-0 10m 361Mi ds-cts-1 160m 353Mi ds-cts-2 64m 361Mi ds-idrepo-0 740m 10361Mi ds-idrepo-1 31m 10290Mi ds-idrepo-2 238m 10276Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 10m 1626Mi idm-65858d8c4c-zvhxh 9m 1412Mi lodemon-97b6d75b7-fknft 3m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 163m 48Mi 05:10:37 DEBUG --- stderr --- 05:10:37 DEBUG 05:10:37 INFO [loop_until]: OK (rc = 0) 05:10:37 DEBUG --- stdout --- 05:10:37 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1334Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 74m 0% 5370Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 79m 0% 5476Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 76m 0% 3714Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 77m 0% 2941Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2102Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 72m 0% 2666Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1056Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 614m 3% 11016Mi 18% gke-xlou-cdm-ds-32e4dcb1-b374 85m 0% 10909Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 123m 0% 1077Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 95m 0% 10920Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 272m 1% 1626Mi 2% 05:10:37 DEBUG --- stderr --- 05:10:37 DEBUG 05:10:37 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:10:37 WARNING Response is NONE 05:10:37 DEBUG Exception is preset. Setting retry_loop to true 05:10:37 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
Exception in thread Thread-14: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 05:10:39 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:10:39 WARNING Response is NONE 05:10:39 DEBUG Exception is preset. Setting retry_loop to true 05:10:39 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:10:40 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:10:40 WARNING Response is NONE 05:10:40 DEBUG Exception is preset. Setting retry_loop to true 05:10:40 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:10:42 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:10:42 WARNING Response is NONE 05:10:42 DEBUG Exception is preset. Setting retry_loop to true 05:10:42 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
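Each of the tracebacks above ends the same way. The underlying failure is the FailException ("Failed to obtain response from server...") raised by HttpCmd.request_cmd, but while handling it monitoring.py line 315 calls self.logger(...) as though the logger object were a function, so the worker thread dies with TypeError: 'LodestarLogger' object is not callable and the original error is never logged. The snippet below illustrates the pattern and the likely shape of a fix, calling a logging method on the object instead; LodestarLogger's real API is not visible in the log, so the method name used here is an assumption.

    # Illustration of the TypeError in the tracebacks above. LodestarLogger is
    # a stand-in: its real interface is not shown in the log, so warning() is
    # an assumed method name.
    class LodestarLogger:
        def warning(self, msg):
            print(f"WARNING {msg}")

    logger = LodestarLogger()
    query = "sum(rate(ds_backend_db_cache_misses_internal_nodes{...}[60s]))by(pod)"
    e = "Failed to obtain response from server..."

    # monitoring.py:315 as shown in the traceback calls the object itself:
    #     self.logger(f'Query: {query} failed with: {e}')
    # which raises: TypeError: 'LodestarLogger' object is not callable

    # Likely intended form: invoke a logging method on the object instead.
    logger.warning(f"Query: {query} failed with: {e}")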
05:10:45 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:10:45 WARNING Response is NONE 05:10:45 DEBUG Exception is preset. Setting retry_loop to true 05:10:45 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:10:50 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:10:50 WARNING Response is NONE 05:10:50 DEBUG Exception is preset. Setting retry_loop to true 05:10:50 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:10:50 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:10:50 WARNING Response is NONE 05:10:50 DEBUG Exception is preset. Setting retry_loop to true 05:10:50 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:10:51 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:10:51 WARNING Response is NONE 05:10:51 DEBUG Exception is preset. Setting retry_loop to true 05:10:51 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:10:53 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:10:53 WARNING Response is NONE 05:10:53 DEBUG Exception is preset. Setting retry_loop to true 05:10:53 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
05:10:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:10:56 WARNING Response is NONE 05:10:56 DEBUG Exception is preset. Setting retry_loop to true 05:10:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:10:58 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:10:58 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:10:58 WARNING Response is NONE 05:10:58 WARNING Response is NONE 05:10:58 DEBUG Exception is preset. Setting retry_loop to true 05:10:58 DEBUG Exception is preset. Setting retry_loop to true 05:10:58 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:10:58 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:10:59 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:10:59 WARNING Response is NONE 05:10:59 DEBUG Exception is preset. Setting retry_loop to true 05:10:59 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:11:01 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:11:01 WARNING Response is NONE 05:11:01 DEBUG Exception is preset. Setting retry_loop to true 05:11:01 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
05:11:01 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:11:01 WARNING Response is NONE 05:11:01 DEBUG Exception is preset. Setting retry_loop to true 05:11:01 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:11:02 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:11:02 WARNING Response is NONE 05:11:02 DEBUG Exception is preset. Setting retry_loop to true 05:11:02 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-29: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 05:11:04 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:11:04 WARNING Response is NONE 05:11:04 DEBUG Exception is preset. Setting retry_loop to true 05:11:04 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
Exception in thread Thread-6: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 05:11:06 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:11:06 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:11:06 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:11:06 WARNING Response is NONE 05:11:06 WARNING Response is NONE 05:11:06 WARNING Response is NONE 05:11:06 DEBUG Exception is preset. Setting retry_loop to true 05:11:06 DEBUG Exception is preset. Setting retry_loop to true 05:11:06 DEBUG Exception is preset. Setting retry_loop to true 05:11:06 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:11:06 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:11:06 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
05:11:07 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:11:07 WARNING Response is NONE 05:11:07 DEBUG Exception is preset. Setting retry_loop to true 05:11:07 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:11:10 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:11:10 WARNING Response is NONE 05:11:10 DEBUG Exception is preset. Setting retry_loop to true 05:11:10 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-5: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 05:11:11 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:11:11 WARNING Response is NONE 05:11:11 DEBUG Exception is preset. Setting retry_loop to true 05:11:11 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
05:11:12 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:11:12 WARNING Response is NONE 05:11:12 DEBUG Exception is preset. Setting retry_loop to true 05:11:12 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-16: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 05:11:14 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:11:14 WARNING Response is NONE 05:11:14 DEBUG Exception is preset. Setting retry_loop to true 05:11:14 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-10: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 05:11:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:11:15 WARNING Response is NONE 05:11:15 DEBUG Exception is preset. Setting retry_loop to true 05:11:15 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:11:18 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:11:18 WARNING Response is NONE 05:11:18 DEBUG Exception is preset. Setting retry_loop to true 05:11:18 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-4: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 05:11:20 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:11:20 WARNING Response is NONE 05:11:20 DEBUG Exception is preset. Setting retry_loop to true 05:11:20 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:11:22 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:11:22 WARNING Response is NONE 05:11:22 DEBUG Exception is preset. Setting retry_loop to true 05:11:22 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:11:24 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:11:24 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:11:24 WARNING Response is NONE 05:11:24 WARNING Response is NONE 05:11:24 DEBUG Exception is preset. Setting retry_loop to true 05:11:24 DEBUG Exception is preset. Setting retry_loop to true 05:11:24 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:11:24 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
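Every query fails with "[Errno 111] Connection refused", which points at the Prometheus service itself being unreachable rather than at any particular metric. A quick, read-only way to confirm that is sketched below; the monitoring namespace and service name are derived from the DNS name in the warnings, and nothing here modifies cluster state.

    # Read-only checks for the Prometheus service the monitor cannot reach.
    # Namespace and service name are derived from
    # prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local.
    import subprocess

    def show(cmd):
        print("$", " ".join(cmd))
        print(subprocess.run(cmd, capture_output=True, text=True).stdout)

    show(["kubectl", "--namespace=monitoring", "get", "pods"])
    show(["kubectl", "--namespace=monitoring", "get", "endpoints",
          "prometheus-operator-kube-p-prometheus"])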
05:11:26 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:11:26 WARNING Response is NONE 05:11:26 DEBUG Exception is preset. Setting retry_loop to true 05:11:26 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:11:31 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:11:31 WARNING Response is NONE 05:11:31 DEBUG Exception is preset. Setting retry_loop to true 05:11:31 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:11:33 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:11:33 WARNING Response is NONE 05:11:33 DEBUG Exception is preset. Setting retry_loop to true 05:11:33 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:11:35 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:11:35 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:11:35 WARNING Response is NONE 05:11:35 WARNING Response is NONE 05:11:35 DEBUG Exception is preset. Setting retry_loop to true 05:11:35 DEBUG Exception is preset. Setting retry_loop to true 05:11:35 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:11:35 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
05:11:37 INFO 05:11:37 INFO [loop_until]: kubectl --namespace=xlou top pods 05:11:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:11:37 INFO 05:11:37 INFO [loop_until]: kubectl --namespace=xlou top node 05:11:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:11:37 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:11:37 WARNING Response is NONE 05:11:37 DEBUG Exception is preset. Setting retry_loop to true 05:11:37 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:11:37 INFO [loop_until]: OK (rc = 0) 05:11:37 DEBUG --- stdout --- 05:11:37 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 16m 2604Mi am-55f77847b7-nv9k2 11m 4358Mi am-55f77847b7-v7x55 12m 4347Mi ds-cts-0 11m 362Mi ds-cts-1 11m 354Mi ds-cts-2 8m 361Mi ds-idrepo-0 17m 10361Mi ds-idrepo-1 23m 10293Mi ds-idrepo-2 21m 10278Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 8m 1640Mi idm-65858d8c4c-zvhxh 7m 1414Mi lodemon-97b6d75b7-fknft 3m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 1m 48Mi 05:11:37 DEBUG --- stderr --- 05:11:37 DEBUG 05:11:37 INFO [loop_until]: OK (rc = 0) 05:11:37 DEBUG --- stdout --- 05:11:37 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1335Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 79m 0% 5371Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 75m 0% 5489Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 74m 0% 3714Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 2948Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2108Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 2666Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1059Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 65m 0% 11015Mi 18% gke-xlou-cdm-ds-32e4dcb1-b374 72m 0% 10911Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 79m 0% 10920Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 69m 0% 1626Mi 2% 05:11:37 DEBUG --- stderr --- 05:11:37 DEBUG 05:11:42 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:11:42 WARNING Response is NONE 05:11:42 DEBUG Exception is preset. Setting retry_loop to true 05:11:42 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
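The [loop_until] lines wrap each kubectl call in a poll-until-success helper whose parameters (max_time, interval, expected_rc) are printed alongside the command. A minimal sketch of that pattern is below; the name loop_until matches the log tag, but its real signature and return value are not shown, so those details are assumptions.

    # Sketch of the [loop_until] polling pattern around "kubectl top pods" and
    # "kubectl top node" above. The name matches the log tag; signature,
    # return value and logging are assumptions.
    import subprocess
    import time

    def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,)):
        """Re-run cmd until its return code is in expected_rc or max_time elapses."""
        deadline = time.monotonic() + max_time
        while True:
            proc = subprocess.run(cmd, capture_output=True, text=True)
            if proc.returncode in expected_rc:
                print(f"[loop_until]: OK (rc = {proc.returncode})")
                return proc.stdout
            if time.monotonic() >= deadline:
                raise TimeoutError(f"{' '.join(cmd)} did not succeed within {max_time}s")
            time.sleep(interval)

    # Example: the resource snapshot taken at 05:11:37.
    # loop_until(["kubectl", "--namespace=xlou", "top", "pods"])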
05:11:44 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:11:44 WARNING Response is NONE 05:11:44 DEBUG Exception is preset. Setting retry_loop to true 05:11:44 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-3: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 05:11:46 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:11:46 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:11:46 WARNING Response is NONE 05:11:46 WARNING Response is NONE 05:11:46 DEBUG Exception is preset. Setting retry_loop to true 05:11:46 DEBUG Exception is preset. Setting retry_loop to true 05:11:46 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:11:46 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
05:11:46 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:11:46 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 05:11:46 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 05:11:46 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 05:11:46 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 05:11:46 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 05:11:46 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). 
Checking if error is transient one 05:11:46 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 05:11:46 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 05:11:46 WARNING Response is NONE 05:11:46 WARNING Response is NONE 05:11:46 WARNING Response is NONE 05:11:46 WARNING Response is NONE 05:11:46 WARNING Response is NONE 05:11:46 WARNING Response is NONE 05:11:46 WARNING Response is NONE 05:11:46 WARNING Response is NONE 05:11:46 WARNING Response is NONE 05:11:46 DEBUG Exception is preset. Setting retry_loop to true 05:11:46 DEBUG Exception is preset. Setting retry_loop to true 05:11:46 DEBUG Exception is preset. Setting retry_loop to true 05:11:46 DEBUG Exception is preset. Setting retry_loop to true 05:11:46 DEBUG Exception is preset. Setting retry_loop to true 05:11:46 DEBUG Exception is preset. Setting retry_loop to true 05:11:46 DEBUG Exception is preset. Setting retry_loop to true 05:11:46 DEBUG Exception is preset. Setting retry_loop to true 05:11:46 DEBUG Exception is preset. Setting retry_loop to true 05:11:46 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:11:46 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:11:46 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:11:46 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:11:46 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:11:46 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:11:46 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:11:46 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:11:46 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:11:48 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:11:48 WARNING Response is NONE 05:11:48 DEBUG Exception is preset. Setting retry_loop to true 05:11:48 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
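
Each "Hit retry pattern for a 5 time" warning marks the point where HttpCmd has exhausted its retries for one query and gives up; request_cmd then raises FailException, which is what produces tracebacks like the one immediately below. The real HttpCmd code is not in this log, so the following is only a rough sketch of that control flow, using the retries=5 argument visible in the tracebacks and the 10-second sleep reported in the warnings.

    # Rough sketch (assumed, not the real HttpCmd) of the retry cycle visible in
    # the warnings: attempt the query, treat connection errors as transient,
    # sleep between attempts, and raise FailException once the budget is spent.
    import time
    from urllib.request import urlopen
    from urllib.error import URLError

    class FailException(Exception):
        pass

    def get_with_retries(url, retries=5, sleep_secs=10, timeout=5):
        response = None
        for attempt in range(1, retries + 1):
            try:
                response = urlopen(url, timeout=timeout)
                break
            except URLError as exc:
                print(f"WARNING Got connection reset error: {exc}. Checking if error is transient one")
                print("WARNING Response is NONE")
                if attempt < retries:
                    print(f"WARNING We received known exception. Trying to recover, sleeping for {sleep_secs} secs before retry...")
                    time.sleep(sleep_secs)
                else:
                    print(f"WARNING Hit retry pattern for a {attempt} time. Proceeding to check response anyway.")
        if response is None:
            raise FailException('Failed to obtain response from server...')
        return response
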
Exception in thread Thread-11: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 05:11:53 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:11:53 WARNING Response is NONE 05:11:53 DEBUG Exception is preset. Setting retry_loop to true 05:11:53 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-26: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 05:11:57 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 05:11:57 WARNING Response is NONE 05:11:57 DEBUG Exception is preset. Setting retry_loop to true 05:11:57 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-18: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 05:11:59 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:11:59 WARNING Response is NONE 05:11:59 DEBUG Exception is preset. Setting retry_loop to true 05:11:59 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-20: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 05:12:00 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:12:00 WARNING Response is NONE 05:12:00 DEBUG Exception is preset. Setting retry_loop to true 05:12:00 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:12:04 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:12:04 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:12:04 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:12:04 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:12:04 WARNING Response is NONE 05:12:04 WARNING Response is NONE 05:12:04 WARNING Response is NONE 05:12:04 WARNING Response is NONE 05:12:04 DEBUG Exception is preset. Setting retry_loop to true 05:12:04 DEBUG Exception is preset. 
Setting retry_loop to true 05:12:04 DEBUG Exception is preset. Setting retry_loop to true 05:12:04 DEBUG Exception is preset. Setting retry_loop to true 05:12:04 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:12:04 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:12:04 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:12:04 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:12:11 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:12:11 WARNING Response is NONE 05:12:11 DEBUG Exception is preset. Setting retry_loop to true 05:12:11 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:12:12 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:12:12 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:12:12 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:12:12 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:12:12 WARNING Response is NONE 05:12:12 WARNING Response is NONE 05:12:12 WARNING Response is NONE 05:12:12 WARNING Response is NONE 05:12:12 DEBUG Exception is preset. Setting retry_loop to true 05:12:12 DEBUG Exception is preset. Setting retry_loop to true 05:12:12 DEBUG Exception is preset. Setting retry_loop to true 05:12:12 DEBUG Exception is preset. 
Setting retry_loop to true 05:12:12 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:12:12 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:12:12 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:12:12 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:12:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:12:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:12:15 WARNING Response is NONE 05:12:15 WARNING Response is NONE 05:12:15 DEBUG Exception is preset. Setting retry_loop to true 05:12:15 DEBUG Exception is preset. Setting retry_loop to true 05:12:15 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:12:15 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:12:17 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:12:17 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:12:17 WARNING Response is NONE 05:12:17 WARNING Response is NONE 05:12:17 DEBUG Exception is preset. Setting retry_loop to true 05:12:17 DEBUG Exception is preset. Setting retry_loop to true 05:12:17 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:12:17 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
05:12:22 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:12:22 WARNING Response is NONE 05:12:22 DEBUG Exception is preset. Setting retry_loop to true 05:12:22 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:12:23 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:12:23 WARNING Response is NONE 05:12:23 DEBUG Exception is preset. Setting retry_loop to true 05:12:23 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:12:25 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:12:25 WARNING Response is NONE 05:12:25 DEBUG Exception is preset. Setting retry_loop to true 05:12:25 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:12:28 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:12:28 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:12:28 WARNING Response is NONE 05:12:28 WARNING Response is NONE 05:12:28 DEBUG Exception is preset. Setting retry_loop to true 05:12:28 DEBUG Exception is preset. Setting retry_loop to true 05:12:28 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:12:28 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
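
For readability: the failing requests are ordinary Prometheus instant-query API calls, and the query parameter in each URL is URL-encoded PromQL. A small standard-library snippet to decode one of them (the am_cts_reaper_search_count query from the warnings above) is shown below; note also that every URL in this stretch carries the same evaluation timestamp, time=1691813377, which decodes to 2023-08-12 04:09:37 UTC, shortly after the pod's start time in the describe output.

    # Decode one of the URL-encoded PromQL queries and its evaluation timestamp,
    # using only the standard library. The URL is copied from the warning above.
    from datetime import datetime, timezone
    from urllib.parse import urlsplit, parse_qs

    url = ("/api/v1/query?query=sum%28rate%28am_cts_reaper_search_count"
           "%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377")

    params = parse_qs(urlsplit(url).query)
    print(params["query"][0])
    # -> sum(rate(am_cts_reaper_search_count{namespace='xlou'}[60s]))by(pod)
    print(datetime.fromtimestamp(int(params["time"][0]), tz=timezone.utc))
    # -> 2023-08-12 04:09:37+00:00
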
05:12:29 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:12:29 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:12:29 WARNING Response is NONE 05:12:29 WARNING Response is NONE 05:12:29 DEBUG Exception is preset. Setting retry_loop to true 05:12:29 DEBUG Exception is preset. Setting retry_loop to true 05:12:29 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:12:29 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:12:32 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:12:32 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:12:32 WARNING Response is NONE 05:12:32 WARNING Response is NONE 05:12:32 DEBUG Exception is preset. Setting retry_loop to true 05:12:32 DEBUG Exception is preset. Setting retry_loop to true 05:12:32 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:12:32 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:12:33 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:12:33 WARNING Response is NONE 05:12:33 DEBUG Exception is preset. Setting retry_loop to true 05:12:33 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
05:12:34 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:12:34 WARNING Response is NONE 05:12:34 DEBUG Exception is preset. Setting retry_loop to true 05:12:34 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:12:36 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:12:36 WARNING Response is NONE 05:12:36 DEBUG Exception is preset. Setting retry_loop to true 05:12:36 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:12:37 INFO 05:12:37 INFO [loop_until]: kubectl --namespace=xlou top pods 05:12:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:12:37 INFO [loop_until]: OK (rc = 0) 05:12:37 DEBUG --- stdout --- 05:12:37 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 13m 2605Mi am-55f77847b7-nv9k2 12m 4359Mi am-55f77847b7-v7x55 10m 4348Mi ds-cts-0 9m 363Mi ds-cts-1 9m 354Mi ds-cts-2 11m 361Mi ds-idrepo-0 26m 10364Mi ds-idrepo-1 38m 10302Mi ds-idrepo-2 41m 10278Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 9m 1649Mi idm-65858d8c4c-zvhxh 6m 1414Mi lodemon-97b6d75b7-fknft 3m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 200m 99Mi 05:12:37 DEBUG --- stderr --- 05:12:37 DEBUG 05:12:37 INFO 05:12:37 INFO [loop_until]: kubectl --namespace=xlou top node 05:12:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:12:37 INFO [loop_until]: OK (rc = 0) 05:12:37 DEBUG --- stdout --- 05:12:37 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1330Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 74m 0% 5372Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 70m 0% 5479Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 68m 0% 3710Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 2960Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 127m 0% 2105Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 2669Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1062Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 78m 0% 11018Mi 18% gke-xlou-cdm-ds-32e4dcb1-b374 91m 0% 10907Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1074Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 88m 0% 10931Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 289m 1% 1730Mi 2% 05:12:37 DEBUG --- stderr --- 05:12:37 DEBUG 05:12:39 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: 
/api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:12:39 WARNING Response is NONE 05:12:39 DEBUG Exception is preset. Setting retry_loop to true 05:12:39 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:12:40 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:12:40 WARNING Response is NONE 05:12:40 DEBUG Exception is preset. Setting retry_loop to true 05:12:40 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:12:42 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:12:42 WARNING Response is NONE 05:12:42 DEBUG Exception is preset. Setting retry_loop to true 05:12:42 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:12:44 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:12:44 WARNING Response is NONE 05:12:44 DEBUG Exception is preset. Setting retry_loop to true 05:12:44 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-19: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 05:12:45 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:12:45 WARNING Response is NONE 05:12:45 DEBUG Exception is preset. Setting retry_loop to true 05:12:45 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:12:47 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:12:47 WARNING Response is NONE 05:12:47 DEBUG Exception is preset. Setting retry_loop to true 05:12:47 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:12:50 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:12:50 WARNING Response is NONE 05:12:50 DEBUG Exception is preset. Setting retry_loop to true 05:12:50 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-22: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 05:12:51 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:12:51 WARNING Response is NONE 05:12:51 DEBUG Exception is preset. Setting retry_loop to true 05:12:51 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:12:53 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:12:53 WARNING Response is NONE 05:12:53 DEBUG Exception is preset. Setting retry_loop to true 05:12:53 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:12:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:12:56 WARNING Response is NONE 05:12:56 DEBUG Exception is preset. Setting retry_loop to true 05:12:56 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-7: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 05:12:57 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:12:57 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:12:57 WARNING Response is NONE 05:12:57 WARNING Response is NONE 05:12:57 DEBUG Exception is preset. Setting retry_loop to true 05:12:57 DEBUG Exception is preset. Setting retry_loop to true 05:12:57 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:12:57 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:12:58 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:12:58 WARNING Response is NONE 05:12:58 DEBUG Exception is preset. Setting retry_loop to true 05:12:58 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-27: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 05:13:02 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:13:02 WARNING Response is NONE 05:13:02 DEBUG Exception is preset. Setting retry_loop to true 05:13:02 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-12: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 05:13:04 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:13:04 WARNING Response is NONE 05:13:04 DEBUG Exception is preset. Setting retry_loop to true 05:13:04 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
Exception in thread Thread-8: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 05:13:08 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:13:08 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:13:08 WARNING Response is NONE 05:13:08 WARNING Response is NONE 05:13:08 DEBUG Exception is preset. Setting retry_loop to true 05:13:08 DEBUG Exception is preset. Setting retry_loop to true 05:13:08 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 05:13:08 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-21: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable
Exception in thread Thread-15: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable
05:13:09 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:13:09 WARNING Response is NONE 05:13:09 DEBUG Exception is preset. Setting retry_loop to true 05:13:09 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 05:13:20 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691813377 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 05:13:20 WARNING Response is NONE 05:13:20 DEBUG Exception is preset. Setting retry_loop to true 05:13:20 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-17: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server...
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 05:13:37 INFO 05:13:37 INFO [loop_until]: kubectl --namespace=xlou top pods 05:13:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:13:37 INFO 05:13:37 INFO [loop_until]: kubectl --namespace=xlou top node 05:13:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:13:37 INFO [loop_until]: OK (rc = 0) 05:13:37 DEBUG --- stdout --- 05:13:37 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 12m 2606Mi am-55f77847b7-nv9k2 11m 4359Mi am-55f77847b7-v7x55 10m 4348Mi ds-cts-0 8m 362Mi ds-cts-1 8m 354Mi ds-cts-2 9m 361Mi ds-idrepo-0 26m 10360Mi ds-idrepo-1 24m 10292Mi ds-idrepo-2 23m 10279Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 9m 1661Mi idm-65858d8c4c-zvhxh 5m 1414Mi lodemon-97b6d75b7-fknft 3m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 1m 98Mi 05:13:37 DEBUG --- stderr --- 05:13:37 DEBUG 05:13:37 INFO [loop_until]: OK (rc = 0) 05:13:37 DEBUG --- stdout --- 05:13:37 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1333Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 71m 0% 5371Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 5475Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 3713Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 2974Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2107Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 68m 0% 2669Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 65m 0% 1058Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 84m 0% 11030Mi 18% gke-xlou-cdm-ds-32e4dcb1-b374 72m 0% 10912Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1077Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 73m 0% 10920Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 66m 0% 1624Mi 2% 05:13:37 DEBUG --- stderr --- 05:13:37 DEBUG 05:14:37 INFO 05:14:37 INFO [loop_until]: kubectl --namespace=xlou top node 05:14:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:14:37 INFO 05:14:37 INFO [loop_until]: kubectl --namespace=xlou top pods 05:14:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:14:37 INFO [loop_until]: OK (rc = 0) 05:14:37 DEBUG --- stdout --- 05:14:37 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 102m 2834Mi am-55f77847b7-nv9k2 57m 4396Mi am-55f77847b7-v7x55 9m 4349Mi ds-cts-0 84m 364Mi ds-cts-1 8m 355Mi ds-cts-2 8m 361Mi ds-idrepo-0 330m 10374Mi ds-idrepo-1 55m 10340Mi ds-idrepo-2 28m 10284Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 7m 1670Mi idm-65858d8c4c-zvhxh 43m 1438Mi lodemon-97b6d75b7-fknft 2m 65Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 1m 98Mi 05:14:37 DEBUG --- stderr --- 05:14:37 DEBUG 05:14:37 INFO [loop_until]: OK (rc = 0) 05:14:37 DEBUG --- stdout --- 05:14:37 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1329Mi 2% 
gke-xlou-cdm-default-pool-f05840a3-5pbc 113m 0% 5407Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 72m 0% 5477Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 94m 0% 3721Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 115m 0% 3047Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 131m 0% 2115Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 2668Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 133m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 135m 0% 1058Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 490m 3% 11031Mi 18% gke-xlou-cdm-ds-32e4dcb1-b374 70m 0% 10916Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1076Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 156m 0% 10925Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 75m 0% 1624Mi 2% 05:14:37 DEBUG --- stderr --- 05:14:37 DEBUG 05:15:37 INFO 05:15:37 INFO [loop_until]: kubectl --namespace=xlou top node 05:15:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:15:37 INFO 05:15:37 INFO [loop_until]: kubectl --namespace=xlou top pods 05:15:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:15:37 INFO [loop_until]: OK (rc = 0) 05:15:37 DEBUG --- stdout --- 05:15:37 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1343Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 89m 0% 5398Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 71m 0% 5477Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 72m 0% 3942Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 77m 0% 3049Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 125m 0% 2109Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 72m 0% 2683Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 417m 2% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 134m 0% 1059Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 3234m 20% 13832Mi 23% gke-xlou-cdm-ds-32e4dcb1-b374 341m 2% 10918Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 192m 1% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 278m 1% 10929Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1134m 7% 1870Mi 3% 05:15:37 DEBUG --- stderr --- 05:15:37 DEBUG 05:15:37 INFO [loop_until]: OK (rc = 0) 05:15:37 DEBUG --- stdout --- 05:15:37 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 15m 2836Mi am-55f77847b7-nv9k2 31m 4386Mi am-55f77847b7-v7x55 10m 4349Mi ds-cts-0 346m 365Mi ds-cts-1 127m 356Mi ds-cts-2 140m 363Mi ds-idrepo-0 3084m 13259Mi ds-idrepo-1 231m 10299Mi ds-idrepo-2 278m 10284Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 11m 1735Mi idm-65858d8c4c-zvhxh 7m 1433Mi lodemon-97b6d75b7-fknft 2m 65Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 1075m 347Mi 05:15:37 DEBUG --- stderr --- 05:15:37 DEBUG 05:16:38 INFO 05:16:38 INFO [loop_until]: kubectl --namespace=xlou top node 05:16:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:16:38 INFO 05:16:38 INFO [loop_until]: kubectl --namespace=xlou top pods 05:16:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:16:38 INFO [loop_until]: OK (rc = 0) 05:16:38 DEBUG --- stdout --- 05:16:38 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1334Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 71m 0% 5399Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 70m 0% 5476Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 75m 0% 3951Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 3051Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2111Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 75m 0% 2695Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1090Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1059Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2871m 18% 13982Mi 23% 
gke-xlou-cdm-ds-32e4dcb1-b374 72m 0% 10922Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 51m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 82m 0% 10927Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1220m 7% 1870Mi 3% 05:16:38 DEBUG --- stderr --- 05:16:38 DEBUG 05:16:38 INFO [loop_until]: OK (rc = 0) 05:16:38 DEBUG --- stdout --- 05:16:38 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 19m 2849Mi am-55f77847b7-nv9k2 14m 4387Mi am-55f77847b7-v7x55 16m 4351Mi ds-cts-0 7m 364Mi ds-cts-1 6m 356Mi ds-cts-2 7m 363Mi ds-idrepo-0 2809m 13407Mi ds-idrepo-1 33m 10297Mi ds-idrepo-2 23m 10287Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 6m 1735Mi idm-65858d8c4c-zvhxh 8m 1435Mi lodemon-97b6d75b7-fknft 2m 65Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 1140m 347Mi 05:16:38 DEBUG --- stderr --- 05:16:38 DEBUG 05:17:38 INFO 05:17:38 INFO [loop_until]: kubectl --namespace=xlou top node 05:17:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:17:38 INFO 05:17:38 INFO [loop_until]: kubectl --namespace=xlou top pods 05:17:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:17:38 INFO [loop_until]: OK (rc = 0) 05:17:38 DEBUG --- stdout --- 05:17:38 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1333Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 72m 0% 5398Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 73m 0% 5476Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 68m 0% 3965Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 3048Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 128m 0% 2114Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 74m 0% 2688Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1063Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2829m 17% 13984Mi 23% gke-xlou-cdm-ds-32e4dcb1-b374 82m 0% 10920Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1078Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 75m 0% 10930Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1288m 8% 1870Mi 3% 05:17:38 DEBUG --- stderr --- 05:17:38 DEBUG 05:17:38 INFO [loop_until]: OK (rc = 0) 05:17:38 DEBUG --- stdout --- 05:17:38 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 11m 2860Mi am-55f77847b7-nv9k2 12m 4388Mi am-55f77847b7-v7x55 12m 4350Mi ds-cts-0 7m 366Mi ds-cts-1 7m 359Mi ds-cts-2 8m 363Mi ds-idrepo-0 2745m 13414Mi ds-idrepo-1 19m 10298Mi ds-idrepo-2 33m 10287Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 8m 1735Mi idm-65858d8c4c-zvhxh 9m 1436Mi lodemon-97b6d75b7-fknft 2m 65Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 1194m 348Mi 05:17:38 DEBUG --- stderr --- 05:17:38 DEBUG 05:18:38 INFO 05:18:38 INFO [loop_until]: kubectl --namespace=xlou top node 05:18:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:18:38 INFO 05:18:38 INFO [loop_until]: kubectl --namespace=xlou top pods 05:18:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:18:38 INFO [loop_until]: OK (rc = 0) 05:18:38 DEBUG --- stdout --- 05:18:38 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 13m 2870Mi am-55f77847b7-nv9k2 12m 4387Mi am-55f77847b7-v7x55 11m 4350Mi ds-cts-0 7m 366Mi ds-cts-1 6m 358Mi ds-cts-2 7m 363Mi ds-idrepo-0 2979m 13414Mi ds-idrepo-1 22m 10299Mi ds-idrepo-2 18m 10288Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 6m 1735Mi idm-65858d8c4c-zvhxh 12m 1436Mi lodemon-97b6d75b7-fknft 2m 65Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 1286m 
348Mi 05:18:38 DEBUG --- stderr --- 05:18:38 DEBUG 05:18:38 INFO [loop_until]: OK (rc = 0) 05:18:38 DEBUG --- stdout --- 05:18:38 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 71m 0% 1333Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 70m 0% 5400Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 5477Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 73m 0% 3974Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 3045Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 132m 0% 2103Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 76m 0% 2688Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 64m 0% 1062Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 3200m 20% 14166Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 69m 0% 10920Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 73m 0% 10929Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1368m 8% 1869Mi 3% 05:18:38 DEBUG --- stderr --- 05:18:38 DEBUG 05:19:38 INFO 05:19:38 INFO [loop_until]: kubectl --namespace=xlou top node 05:19:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:19:38 INFO 05:19:38 INFO [loop_until]: kubectl --namespace=xlou top pods 05:19:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:19:38 INFO [loop_until]: OK (rc = 0) 05:19:38 DEBUG --- stdout --- 05:19:38 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1335Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 5402Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 5477Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 71m 0% 3985Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 3048Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 124m 0% 2115Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 74m 0% 2689Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1061Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 3113m 19% 14198Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 72m 0% 10924Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 72m 0% 10928Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1422m 8% 1870Mi 3% 05:19:38 DEBUG --- stderr --- 05:19:38 DEBUG 05:19:38 INFO [loop_until]: OK (rc = 0) 05:19:38 DEBUG --- stdout --- 05:19:38 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 14m 2879Mi am-55f77847b7-nv9k2 10m 4388Mi am-55f77847b7-v7x55 10m 4352Mi ds-cts-0 7m 366Mi ds-cts-1 7m 358Mi ds-cts-2 8m 363Mi ds-idrepo-0 3033m 13625Mi ds-idrepo-1 22m 10300Mi ds-idrepo-2 24m 10289Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 6m 1736Mi idm-65858d8c4c-zvhxh 10m 1436Mi lodemon-97b6d75b7-fknft 5m 65Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 1355m 348Mi 05:19:38 DEBUG --- stderr --- 05:19:38 DEBUG 05:20:38 INFO 05:20:38 INFO [loop_until]: kubectl --namespace=xlou top pods 05:20:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:20:38 INFO 05:20:38 INFO [loop_until]: kubectl --namespace=xlou top node 05:20:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:20:38 INFO [loop_until]: OK (rc = 0) 05:20:38 DEBUG --- stdout --- 05:20:38 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 10m 2891Mi am-55f77847b7-nv9k2 9m 4388Mi am-55f77847b7-v7x55 8m 4352Mi ds-cts-0 8m 366Mi ds-cts-1 6m 358Mi ds-cts-2 7m 364Mi ds-idrepo-0 13m 13626Mi ds-idrepo-1 17m 10300Mi ds-idrepo-2 17m 10294Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 7m 1736Mi idm-65858d8c4c-zvhxh 7m 
1436Mi lodemon-97b6d75b7-fknft 2m 65Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 1m 98Mi 05:20:38 DEBUG --- stderr --- 05:20:38 DEBUG 05:20:38 INFO [loop_until]: OK (rc = 0) 05:20:38 DEBUG --- stdout --- 05:20:38 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1339Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 71m 0% 5401Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 5479Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 3994Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 3046Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 125m 0% 2114Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 2691Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1060Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 63m 0% 14187Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 66m 0% 10927Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 67m 0% 10930Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 66m 0% 1624Mi 2% 05:20:38 DEBUG --- stderr --- 05:20:38 DEBUG 05:21:38 INFO 05:21:38 INFO 05:21:38 INFO [loop_until]: kubectl --namespace=xlou top pods 05:21:38 INFO [loop_until]: kubectl --namespace=xlou top node 05:21:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:21:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:21:38 INFO [loop_until]: OK (rc = 0) 05:21:38 DEBUG --- stdout --- 05:21:38 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 9m 2900Mi am-55f77847b7-nv9k2 12m 4388Mi am-55f77847b7-v7x55 10m 4354Mi ds-cts-0 8m 366Mi ds-cts-1 5m 359Mi ds-cts-2 8m 364Mi ds-idrepo-0 16m 13627Mi ds-idrepo-1 2421m 13053Mi ds-idrepo-2 15m 10297Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 15m 1735Mi idm-65858d8c4c-zvhxh 15m 1433Mi lodemon-97b6d75b7-fknft 2m 65Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 955m 374Mi 05:21:38 DEBUG --- stderr --- 05:21:38 DEBUG 05:21:38 INFO [loop_until]: OK (rc = 0) 05:21:38 DEBUG --- stdout --- 05:21:38 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1333Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 72m 0% 5402Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 71m 0% 5478Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 68m 0% 4006Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 3049Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 128m 0% 2119Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 79m 0% 2688Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1092Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1060Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 65m 0% 14185Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 68m 0% 10930Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1078Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2689m 16% 13624Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1031m 6% 1896Mi 3% 05:21:38 DEBUG --- stderr --- 05:21:38 DEBUG 05:22:38 INFO 05:22:38 INFO [loop_until]: kubectl --namespace=xlou top node 05:22:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:22:38 INFO 05:22:38 INFO [loop_until]: kubectl --namespace=xlou top pods 05:22:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:22:38 INFO [loop_until]: OK (rc = 0) 05:22:38 DEBUG --- stdout --- 05:22:38 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 25m 2918Mi am-55f77847b7-nv9k2 10m 4388Mi am-55f77847b7-v7x55 8m 4354Mi ds-cts-0 13m 366Mi ds-cts-1 6m 360Mi ds-cts-2 7m 364Mi ds-idrepo-0 15m 13627Mi ds-idrepo-1 2715m 13354Mi ds-idrepo-2 15m 
10291Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 7m 1735Mi idm-65858d8c4c-zvhxh 5m 1433Mi lodemon-97b6d75b7-fknft 2m 65Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 1063m 374Mi 05:22:38 DEBUG --- stderr --- 05:22:38 DEBUG 05:22:38 INFO [loop_until]: OK (rc = 0) 05:22:38 DEBUG --- stdout --- 05:22:38 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1334Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 72m 0% 5399Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 5481Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 82m 0% 4021Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 3048Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 128m 0% 2123Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 68m 0% 2688Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 67m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1058Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 69m 0% 14187Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 66m 0% 10927Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2747m 17% 13899Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1134m 7% 1897Mi 3% 05:22:38 DEBUG --- stderr --- 05:22:38 DEBUG 05:23:38 INFO 05:23:38 INFO 05:23:38 INFO [loop_until]: kubectl --namespace=xlou top pods 05:23:38 INFO [loop_until]: kubectl --namespace=xlou top node 05:23:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:23:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:23:38 INFO [loop_until]: OK (rc = 0) 05:23:38 DEBUG --- stdout --- 05:23:38 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 9m 2928Mi am-55f77847b7-nv9k2 10m 4388Mi am-55f77847b7-v7x55 16m 4359Mi ds-cts-0 8m 366Mi ds-cts-1 5m 358Mi ds-cts-2 15m 364Mi ds-idrepo-0 13m 13627Mi ds-idrepo-1 2681m 13386Mi ds-idrepo-2 15m 10291Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 7m 1735Mi idm-65858d8c4c-zvhxh 8m 1433Mi lodemon-97b6d75b7-fknft 2m 65Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 1077m 380Mi 05:23:38 DEBUG --- stderr --- 05:23:38 DEBUG 05:23:38 INFO [loop_until]: OK (rc = 0) 05:23:38 DEBUG --- stdout --- 05:23:38 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1333Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 74m 0% 5401Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 79m 0% 5484Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 4033Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 3044Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 128m 0% 2121Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 2688Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1092Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 65m 0% 1058Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 62m 0% 14186Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 67m 0% 10926Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 50m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2723m 17% 13934Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1169m 7% 1904Mi 3% 05:23:38 DEBUG --- stderr --- 05:23:38 DEBUG 05:24:38 INFO 05:24:38 INFO [loop_until]: kubectl --namespace=xlou top pods 05:24:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:24:38 INFO 05:24:38 INFO [loop_until]: kubectl --namespace=xlou top node 05:24:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:24:38 INFO [loop_until]: OK (rc = 0) 05:24:38 DEBUG --- stdout --- 05:24:38 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 12m 2939Mi am-55f77847b7-nv9k2 13m 4389Mi am-55f77847b7-v7x55 9m 4359Mi ds-cts-0 8m 366Mi 
ds-cts-1 14m 359Mi ds-cts-2 13m 363Mi ds-idrepo-0 16m 13627Mi ds-idrepo-1 2851m 13500Mi ds-idrepo-2 15m 10293Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 6m 1735Mi idm-65858d8c4c-zvhxh 7m 1433Mi lodemon-97b6d75b7-fknft 2m 65Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 1159m 380Mi 05:24:38 DEBUG --- stderr --- 05:24:38 DEBUG 05:24:38 INFO [loop_until]: OK (rc = 0) 05:24:38 DEBUG --- stdout --- 05:24:38 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1332Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 72m 0% 5400Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 5485Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 70m 0% 4045Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 3047Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2117Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 76m 0% 2687Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1058Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 64m 0% 14186Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 68m 0% 10926Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 66m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2993m 18% 14041Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1271m 7% 1902Mi 3% 05:24:38 DEBUG --- stderr --- 05:24:38 DEBUG 05:25:39 INFO 05:25:39 INFO 05:25:39 INFO [loop_until]: kubectl --namespace=xlou top pods 05:25:39 INFO [loop_until]: kubectl --namespace=xlou top node 05:25:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:25:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:25:39 INFO [loop_until]: OK (rc = 0) 05:25:39 DEBUG --- stdout --- 05:25:39 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 9m 2950Mi am-55f77847b7-nv9k2 13m 4388Mi am-55f77847b7-v7x55 8m 4359Mi ds-cts-0 7m 366Mi ds-cts-1 8m 360Mi ds-cts-2 7m 363Mi ds-idrepo-0 14m 13627Mi ds-idrepo-1 3468m 13649Mi ds-idrepo-2 20m 10298Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 6m 1736Mi idm-65858d8c4c-zvhxh 6m 1434Mi lodemon-97b6d75b7-fknft 2m 65Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 1279m 381Mi 05:25:39 DEBUG --- stderr --- 05:25:39 DEBUG 05:25:39 INFO [loop_until]: OK (rc = 0) 05:25:39 DEBUG --- stdout --- 05:25:39 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1334Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 75m 0% 5401Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 5489Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 4052Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 3047Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2117Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 2686Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1058Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 63m 0% 14189Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 74m 0% 10932Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3337m 21% 14193Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1360m 8% 1902Mi 3% 05:25:39 DEBUG --- stderr --- 05:25:39 DEBUG 05:26:39 INFO 05:26:39 INFO [loop_until]: kubectl --namespace=xlou top pods 05:26:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:26:39 INFO 05:26:39 INFO [loop_until]: kubectl --namespace=xlou top node 05:26:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:26:39 INFO [loop_until]: OK (rc = 0) 05:26:39 DEBUG --- stdout --- 05:26:39 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi 
am-55f77847b7-778wv 9m 2961Mi am-55f77847b7-nv9k2 8m 4389Mi am-55f77847b7-v7x55 9m 4359Mi ds-cts-0 8m 366Mi ds-cts-1 6m 360Mi ds-cts-2 8m 363Mi ds-idrepo-0 13m 13627Mi ds-idrepo-1 15m 13660Mi ds-idrepo-2 16m 10299Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 7m 1736Mi idm-65858d8c4c-zvhxh 7m 1434Mi lodemon-97b6d75b7-fknft 2m 65Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 1m 98Mi 05:26:39 DEBUG --- stderr --- 05:26:39 DEBUG 05:26:39 INFO [loop_until]: OK (rc = 0) 05:26:39 DEBUG --- stdout --- 05:26:39 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1333Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 5403Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 5489Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 4066Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 3048Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2117Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 67m 0% 2686Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1090Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1059Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 64m 0% 14191Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 67m 0% 10931Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 68m 0% 14195Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 71m 0% 1622Mi 2% 05:26:39 DEBUG --- stderr --- 05:26:39 DEBUG 05:27:39 INFO 05:27:39 INFO [loop_until]: kubectl --namespace=xlou top pods 05:27:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:27:39 INFO 05:27:39 INFO [loop_until]: kubectl --namespace=xlou top node 05:27:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:27:39 INFO [loop_until]: OK (rc = 0) 05:27:39 DEBUG --- stdout --- 05:27:39 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 10m 2972Mi am-55f77847b7-nv9k2 8m 4389Mi am-55f77847b7-v7x55 10m 4359Mi ds-cts-0 8m 366Mi ds-cts-1 7m 359Mi ds-cts-2 6m 363Mi ds-idrepo-0 14m 13627Mi ds-idrepo-1 23m 13661Mi ds-idrepo-2 2536m 12342Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 6m 1736Mi idm-65858d8c4c-zvhxh 6m 1439Mi lodemon-97b6d75b7-fknft 2m 65Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 1301m 371Mi 05:27:39 DEBUG --- stderr --- 05:27:39 DEBUG 05:27:39 INFO [loop_until]: OK (rc = 0) 05:27:39 DEBUG --- stdout --- 05:27:39 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1334Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5403Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 5489Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 4078Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 3048Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 125m 0% 2130Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 68m 0% 2697Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1061Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 64m 0% 14190Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2706m 17% 12819Mi 21% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1078Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 86m 0% 14202Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1282m 8% 1894Mi 3% 05:27:39 DEBUG --- stderr --- 05:27:39 DEBUG 05:28:39 INFO 05:28:39 INFO [loop_until]: kubectl --namespace=xlou top pods 05:28:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:28:39 INFO [loop_until]: OK (rc = 0) 05:28:39 DEBUG --- stdout --- 05:28:39 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 9m 2989Mi am-55f77847b7-nv9k2 9m 
4389Mi am-55f77847b7-v7x55 11m 4360Mi ds-cts-0 7m 367Mi ds-cts-1 7m 360Mi ds-cts-2 8m 363Mi ds-idrepo-0 13m 13627Mi ds-idrepo-1 12m 13661Mi ds-idrepo-2 2596m 13336Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 6m 1736Mi idm-65858d8c4c-zvhxh 7m 1434Mi lodemon-97b6d75b7-fknft 5m 65Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 1103m 370Mi 05:28:39 DEBUG --- stderr --- 05:28:39 DEBUG 05:28:39 INFO 05:28:39 INFO [loop_until]: kubectl --namespace=xlou top node 05:28:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:28:39 INFO [loop_until]: OK (rc = 0) 05:28:39 DEBUG --- stdout --- 05:28:39 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1330Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 5404Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 75m 0% 5489Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 4093Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 67m 0% 3045Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 130m 0% 2118Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 2691Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1056Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 14189Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2574m 16% 13883Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 65m 0% 14198Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1171m 7% 1892Mi 3% 05:28:39 DEBUG --- stderr --- 05:28:39 DEBUG 05:29:39 INFO 05:29:39 INFO [loop_until]: kubectl --namespace=xlou top pods 05:29:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:39 INFO [loop_until]: OK (rc = 0) 05:29:39 DEBUG --- stdout --- 05:29:39 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 9m 3000Mi am-55f77847b7-nv9k2 8m 4389Mi am-55f77847b7-v7x55 8m 4360Mi ds-cts-0 8m 367Mi ds-cts-1 8m 359Mi ds-cts-2 6m 363Mi ds-idrepo-0 13m 13627Mi ds-idrepo-1 14m 13661Mi ds-idrepo-2 2731m 13374Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 7m 1736Mi idm-65858d8c4c-zvhxh 10m 1434Mi lodemon-97b6d75b7-fknft 2m 65Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 1157m 373Mi 05:29:39 DEBUG --- stderr --- 05:29:39 DEBUG 05:29:39 INFO 05:29:39 INFO [loop_until]: kubectl --namespace=xlou top node 05:29:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:39 INFO [loop_until]: OK (rc = 0) 05:29:39 DEBUG --- stdout --- 05:29:39 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1333Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 5405Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 70m 0% 5490Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 4104Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 66m 0% 3049Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 127m 0% 2114Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 68m 0% 2688Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1091Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1058Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 62m 0% 14190Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2786m 17% 13921Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 65m 0% 14200Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1213m 7% 1894Mi 3% 05:29:39 DEBUG --- stderr --- 05:29:39 DEBUG 05:30:39 INFO 05:30:39 INFO [loop_until]: kubectl --namespace=xlou top pods 05:30:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:30:39 INFO [loop_until]: OK (rc = 0) 05:30:39 DEBUG --- stdout --- 05:30:39 DEBUG NAME CPU(cores) 
MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 9m 3011Mi am-55f77847b7-nv9k2 15m 4390Mi am-55f77847b7-v7x55 11m 4366Mi ds-cts-0 8m 367Mi ds-cts-1 8m 359Mi ds-cts-2 7m 363Mi ds-idrepo-0 14m 13627Mi ds-idrepo-1 15m 13662Mi ds-idrepo-2 2817m 13481Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 5m 1737Mi idm-65858d8c4c-zvhxh 6m 1437Mi lodemon-97b6d75b7-fknft 2m 65Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 1160m 373Mi 05:30:39 DEBUG --- stderr --- 05:30:39 DEBUG 05:30:39 INFO 05:30:39 INFO [loop_until]: kubectl --namespace=xlou top node 05:30:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:30:39 INFO [loop_until]: OK (rc = 0) 05:30:39 DEBUG --- stdout --- 05:30:39 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1335Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 75m 0% 5402Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 5490Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 68m 0% 4139Mi 7% gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 3050Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2116Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 2689Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1091Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1058Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 64m 0% 14192Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2858m 17% 14033Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 1092Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 71m 0% 14201Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1250m 7% 1894Mi 3% 05:30:39 DEBUG --- stderr --- 05:30:39 DEBUG 05:31:39 INFO 05:31:39 INFO [loop_until]: kubectl --namespace=xlou top pods 05:31:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:31:39 INFO [loop_until]: OK (rc = 0) 05:31:39 DEBUG --- stdout --- 05:31:39 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 12m 3044Mi am-55f77847b7-nv9k2 12m 4390Mi am-55f77847b7-v7x55 9m 4366Mi ds-cts-0 8m 371Mi ds-cts-1 7m 360Mi ds-cts-2 8m 365Mi ds-idrepo-0 14m 13626Mi ds-idrepo-1 29m 13662Mi ds-idrepo-2 2924m 13510Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 16m 1737Mi idm-65858d8c4c-zvhxh 10m 1438Mi lodemon-97b6d75b7-fknft 2m 65Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 1322m 374Mi 05:31:39 DEBUG --- stderr --- 05:31:39 DEBUG 05:31:39 INFO 05:31:39 INFO [loop_until]: kubectl --namespace=xlou top node 05:31:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:31:39 INFO [loop_until]: OK (rc = 0) 05:31:39 DEBUG --- stdout --- 05:31:39 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1334Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 71m 0% 5402Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 5495Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 4150Mi 7% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 3048Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 129m 0% 2120Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 72m 0% 2694Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1097Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1062Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 63m 0% 14193Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3054m 19% 14048Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 94m 0% 14199Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1373m 8% 1896Mi 3% 05:31:39 DEBUG --- stderr --- 05:31:39 DEBUG 05:32:39 INFO 05:32:39 INFO [loop_until]: kubectl --namespace=xlou top pods 05:32:39 INFO [loop_until]: (max_time=180, interval=5, 
expected_rc=[0] 05:32:39 INFO [loop_until]: OK (rc = 0) 05:32:39 DEBUG --- stdout --- 05:32:39 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 8m 3056Mi am-55f77847b7-nv9k2 14m 4398Mi am-55f77847b7-v7x55 8m 4366Mi ds-cts-0 10m 371Mi ds-cts-1 6m 361Mi ds-cts-2 8m 365Mi ds-idrepo-0 13m 13626Mi ds-idrepo-1 12m 13649Mi ds-idrepo-2 11m 13706Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 7m 1738Mi idm-65858d8c4c-zvhxh 10m 1439Mi lodemon-97b6d75b7-fknft 2m 65Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 1m 99Mi 05:32:39 DEBUG --- stderr --- 05:32:39 DEBUG 05:32:39 INFO 05:32:39 INFO [loop_until]: kubectl --namespace=xlou top node 05:32:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:32:40 INFO [loop_until]: OK (rc = 0) 05:32:40 DEBUG --- stdout --- 05:32:40 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1331Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 72m 0% 5409Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 5493Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 4161Mi 7% gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 3050Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 129m 0% 2116Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 2694Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1091Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1063Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 14190Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 66m 0% 14240Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 66m 0% 14187Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 76m 0% 1623Mi 2% 05:32:40 DEBUG --- stderr --- 05:32:40 DEBUG 05:33:39 INFO 05:33:39 INFO [loop_until]: kubectl --namespace=xlou top pods 05:33:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:33:39 INFO [loop_until]: OK (rc = 0) 05:33:39 DEBUG --- stdout --- 05:33:39 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 34m 3102Mi am-55f77847b7-nv9k2 9m 4398Mi am-55f77847b7-v7x55 15m 4376Mi ds-cts-0 13m 371Mi ds-cts-1 5m 361Mi ds-cts-2 6m 365Mi ds-idrepo-0 107m 13628Mi ds-idrepo-1 30m 13650Mi ds-idrepo-2 10m 13708Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 288m 1892Mi idm-65858d8c4c-zvhxh 6m 1439Mi lodemon-97b6d75b7-fknft 5m 65Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 1767m 401Mi 05:33:39 DEBUG --- stderr --- 05:33:39 DEBUG 05:33:40 INFO 05:33:40 INFO [loop_until]: kubectl --namespace=xlou top node 05:33:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:33:40 INFO [loop_until]: OK (rc = 0) 05:33:40 DEBUG --- stdout --- 05:33:40 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1334Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 70m 0% 5412Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 5494Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 4173Mi 7% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 3053Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 127m 0% 2112Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 66m 0% 2693Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1094Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1059Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 63m 0% 14192Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 65m 0% 14253Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 136m 0% 14199Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1857m 11% 1921Mi 3% 05:33:40 DEBUG --- stderr --- 05:33:40 DEBUG 05:34:39 INFO 05:34:39 INFO 
[loop_until]: kubectl --namespace=xlou top pods 05:34:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:34:40 INFO [loop_until]: OK (rc = 0) 05:34:40 DEBUG --- stdout --- 05:34:40 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 51m 3472Mi am-55f77847b7-nv9k2 55m 4402Mi am-55f77847b7-v7x55 45m 4398Mi ds-cts-0 8m 373Mi ds-cts-1 7m 363Mi ds-cts-2 7m 365Mi ds-idrepo-0 1747m 13628Mi ds-idrepo-1 534m 13665Mi ds-idrepo-2 438m 13710Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 2311m 3456Mi idm-65858d8c4c-zvhxh 2082m 3506Mi lodemon-97b6d75b7-fknft 2m 65Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 509m 504Mi 05:34:40 DEBUG --- stderr --- 05:34:40 DEBUG 05:34:40 INFO 05:34:40 INFO [loop_until]: kubectl --namespace=xlou top node 05:34:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:34:40 INFO [loop_until]: OK (rc = 0) 05:34:40 DEBUG --- stdout --- 05:34:40 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1333Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 104m 0% 5417Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 116m 0% 5523Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 110m 0% 4568Mi 7% gke-xlou-cdm-default-pool-f05840a3-bf2g 2345m 14% 4763Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 597m 3% 2145Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2116m 13% 4754Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1094Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1061Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 1806m 11% 14192Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 450m 2% 14244Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 585m 3% 14202Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 556m 3% 2022Mi 3% 05:34:40 DEBUG --- stderr --- 05:34:40 DEBUG 05:35:40 INFO 05:35:40 INFO [loop_until]: kubectl --namespace=xlou top pods 05:35:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:35:40 INFO [loop_until]: OK (rc = 0) 05:35:40 DEBUG --- stdout --- 05:35:40 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 49m 3848Mi am-55f77847b7-nv9k2 39m 4404Mi am-55f77847b7-v7x55 48m 4401Mi ds-cts-0 7m 373Mi ds-cts-1 6m 363Mi ds-cts-2 8m 365Mi ds-idrepo-0 1709m 13629Mi ds-idrepo-1 409m 13675Mi ds-idrepo-2 377m 13711Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1864m 3489Mi idm-65858d8c4c-zvhxh 1669m 3594Mi lodemon-97b6d75b7-fknft 2m 65Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 332m 498Mi 05:35:40 DEBUG --- stderr --- 05:35:40 DEBUG 05:35:40 INFO 05:35:40 INFO [loop_until]: kubectl --namespace=xlou top node 05:35:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:35:40 INFO [loop_until]: OK (rc = 0) 05:35:40 DEBUG --- stdout --- 05:35:40 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1334Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 98m 0% 5421Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 99m 0% 5527Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 101m 0% 4987Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 1959m 12% 4795Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 619m 3% 2144Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1723m 10% 4841Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1095Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1061Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 1802m 11% 14189Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 441m 2% 14244Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 465m 2% 
14208Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 404m 2% 2016Mi 3% 05:35:40 DEBUG --- stderr --- 05:35:40 DEBUG 05:36:40 INFO 05:36:40 INFO [loop_until]: kubectl --namespace=xlou top pods 05:36:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:36:40 INFO [loop_until]: OK (rc = 0) 05:36:40 DEBUG --- stdout --- 05:36:40 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 44m 4304Mi am-55f77847b7-nv9k2 50m 4410Mi am-55f77847b7-v7x55 38m 4405Mi ds-cts-0 6m 373Mi ds-cts-1 8m 364Mi ds-cts-2 6m 366Mi ds-idrepo-0 1867m 13631Mi ds-idrepo-1 714m 13696Mi ds-idrepo-2 594m 13742Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1879m 3496Mi idm-65858d8c4c-zvhxh 1634m 3605Mi lodemon-97b6d75b7-fknft 2m 65Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 316m 498Mi 05:36:40 DEBUG --- stderr --- 05:36:40 DEBUG 05:36:40 INFO 05:36:40 INFO [loop_until]: kubectl --namespace=xlou top node 05:36:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:36:40 INFO [loop_until]: OK (rc = 0) 05:36:40 DEBUG --- stdout --- 05:36:40 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1335Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 99m 0% 5423Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 99m 0% 5530Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 103m 0% 5406Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 1965m 12% 4801Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 601m 3% 2149Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1739m 10% 4853Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1094Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1061Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 1915m 12% 14188Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 615m 3% 14188Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 754m 4% 14228Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 386m 2% 2017Mi 3% 05:36:40 DEBUG --- stderr --- 05:36:40 DEBUG 05:37:40 INFO 05:37:40 INFO [loop_until]: kubectl --namespace=xlou top pods 05:37:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:37:40 INFO [loop_until]: OK (rc = 0) 05:37:40 DEBUG --- stdout --- 05:37:40 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 43m 4714Mi am-55f77847b7-nv9k2 39m 4409Mi am-55f77847b7-v7x55 46m 4408Mi ds-cts-0 7m 373Mi ds-cts-1 8m 364Mi ds-cts-2 7m 365Mi ds-idrepo-0 2602m 13757Mi ds-idrepo-1 756m 13700Mi ds-idrepo-2 700m 13682Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1891m 3511Mi idm-65858d8c4c-zvhxh 1668m 3667Mi lodemon-97b6d75b7-fknft 2m 65Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 317m 497Mi 05:37:40 DEBUG --- stderr --- 05:37:40 DEBUG 05:37:40 INFO 05:37:40 INFO [loop_until]: kubectl --namespace=xlou top node 05:37:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:37:40 INFO [loop_until]: OK (rc = 0) 05:37:40 DEBUG --- stdout --- 05:37:40 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 71m 0% 1334Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 96m 0% 5422Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 106m 0% 5539Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 101m 0% 5827Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 2030m 12% 4816Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 635m 3% 2143Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1810m 11% 4863Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1097Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1061Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2603m 16% 14296Mi 24% 
gke-xlou-cdm-ds-32e4dcb1-b374 739m 4% 14204Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1090Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 843m 5% 14222Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 393m 2% 2019Mi 3% 05:37:40 DEBUG --- stderr --- 05:37:40 DEBUG 05:38:40 INFO 05:38:40 INFO [loop_until]: kubectl --namespace=xlou top pods 05:38:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:38:40 INFO [loop_until]: OK (rc = 0) 05:38:40 DEBUG --- stdout --- 05:38:40 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 40m 5114Mi am-55f77847b7-nv9k2 36m 4572Mi am-55f77847b7-v7x55 46m 4563Mi ds-cts-0 6m 374Mi ds-cts-1 10m 364Mi ds-cts-2 6m 366Mi ds-idrepo-0 1841m 13772Mi ds-idrepo-1 498m 13661Mi ds-idrepo-2 423m 13654Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1774m 3516Mi idm-65858d8c4c-zvhxh 1578m 3620Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 311m 499Mi 05:38:40 DEBUG --- stderr --- 05:38:40 DEBUG 05:38:40 INFO 05:38:40 INFO [loop_until]: kubectl --namespace=xlou top node 05:38:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:38:40 INFO [loop_until]: OK (rc = 0) 05:38:40 DEBUG --- stdout --- 05:38:40 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1334Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 100m 0% 5584Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 109m 0% 5694Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 95m 0% 6244Mi 10% gke-xlou-cdm-default-pool-f05840a3-bf2g 1937m 12% 4835Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 593m 3% 2130Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1694m 10% 4869Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1096Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1060Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 1938m 12% 14324Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 487m 3% 14186Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 534m 3% 14195Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 371m 2% 2019Mi 3% 05:38:40 DEBUG --- stderr --- 05:38:40 DEBUG 05:39:40 INFO 05:39:40 INFO [loop_until]: kubectl --namespace=xlou top pods 05:39:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:39:40 INFO [loop_until]: OK (rc = 0) 05:39:40 DEBUG --- stdout --- 05:39:40 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 40m 5501Mi am-55f77847b7-nv9k2 35m 4573Mi am-55f77847b7-v7x55 36m 4564Mi ds-cts-0 10m 374Mi ds-cts-1 10m 365Mi ds-cts-2 6m 367Mi ds-idrepo-0 1813m 13804Mi ds-idrepo-1 470m 13699Mi ds-idrepo-2 358m 13682Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1801m 3522Mi idm-65858d8c4c-zvhxh 1595m 3625Mi lodemon-97b6d75b7-fknft 2m 65Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 310m 499Mi 05:39:40 DEBUG --- stderr --- 05:39:40 DEBUG 05:39:40 INFO 05:39:40 INFO [loop_until]: kubectl --namespace=xlou top node 05:39:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:39:40 INFO [loop_until]: OK (rc = 0) 05:39:40 DEBUG --- stdout --- 05:39:40 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1334Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 95m 0% 5586Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 99m 0% 5687Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 98m 0% 6664Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1963m 12% 4827Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 612m 3% 2139Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1674m 10% 4869Mi 8% 
gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1093Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1061Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 1885m 11% 14359Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 412m 2% 14214Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 521m 3% 14236Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 383m 2% 2020Mi 3% 05:39:40 DEBUG --- stderr --- 05:39:40 DEBUG 05:40:40 INFO 05:40:40 INFO [loop_until]: kubectl --namespace=xlou top pods 05:40:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:40:40 INFO [loop_until]: OK (rc = 0) 05:40:40 DEBUG --- stdout --- 05:40:40 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 42m 5716Mi am-55f77847b7-nv9k2 40m 4573Mi am-55f77847b7-v7x55 43m 4560Mi ds-cts-0 8m 374Mi ds-cts-1 7m 365Mi ds-cts-2 11m 366Mi ds-idrepo-0 2053m 13823Mi ds-idrepo-1 571m 13700Mi ds-idrepo-2 366m 13681Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1835m 3528Mi idm-65858d8c4c-zvhxh 1630m 3631Mi lodemon-97b6d75b7-fknft 2m 65Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 315m 499Mi 05:40:40 DEBUG --- stderr --- 05:40:40 DEBUG 05:40:41 INFO 05:40:41 INFO [loop_until]: kubectl --namespace=xlou top node 05:40:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:40:41 INFO [loop_until]: OK (rc = 0) 05:40:41 DEBUG --- stdout --- 05:40:41 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1335Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 99m 0% 5588Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 105m 0% 5687Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 94m 0% 6814Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1972m 12% 4834Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 626m 3% 2146Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1672m 10% 4880Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1094Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1061Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 1985m 12% 14380Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 421m 2% 14216Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 635m 3% 14252Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 393m 2% 2019Mi 3% 05:40:41 DEBUG --- stderr --- 05:40:41 DEBUG 05:41:40 INFO 05:41:40 INFO [loop_until]: kubectl --namespace=xlou top pods 05:41:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:41:40 INFO [loop_until]: OK (rc = 0) 05:41:40 DEBUG --- stdout --- 05:41:40 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 32m 5716Mi am-55f77847b7-nv9k2 35m 4573Mi am-55f77847b7-v7x55 32m 4561Mi ds-cts-0 8m 374Mi ds-cts-1 7m 365Mi ds-cts-2 7m 366Mi ds-idrepo-0 1989m 13823Mi ds-idrepo-1 486m 13811Mi ds-idrepo-2 371m 13682Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1780m 3534Mi idm-65858d8c4c-zvhxh 1602m 3636Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 316m 500Mi 05:41:40 DEBUG --- stderr --- 05:41:40 DEBUG 05:41:41 INFO 05:41:41 INFO [loop_until]: kubectl --namespace=xlou top node 05:41:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:41:41 INFO [loop_until]: OK (rc = 0) 05:41:41 DEBUG --- stdout --- 05:41:41 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1336Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 93m 0% 5587Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 93m 0% 5687Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 89m 0% 6815Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 
1959m 12% 4839Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 619m 3% 2155Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1725m 10% 4881Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1096Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1063Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2033m 12% 14382Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 423m 2% 14216Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1091Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 566m 3% 14343Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 383m 2% 2019Mi 3% 05:41:41 DEBUG --- stderr --- 05:41:41 DEBUG 05:42:40 INFO 05:42:40 INFO [loop_until]: kubectl --namespace=xlou top pods 05:42:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:42:40 INFO [loop_until]: OK (rc = 0) 05:42:40 DEBUG --- stdout --- 05:42:40 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 32m 5712Mi am-55f77847b7-nv9k2 36m 4574Mi am-55f77847b7-v7x55 33m 4561Mi ds-cts-0 7m 374Mi ds-cts-1 8m 365Mi ds-cts-2 7m 366Mi ds-idrepo-0 2078m 13823Mi ds-idrepo-1 580m 13811Mi ds-idrepo-2 361m 13683Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1779m 3540Mi idm-65858d8c4c-zvhxh 1601m 3641Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 311m 499Mi 05:42:40 DEBUG --- stderr --- 05:42:40 DEBUG 05:42:41 INFO 05:42:41 INFO [loop_until]: kubectl --namespace=xlou top node 05:42:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:42:41 INFO [loop_until]: OK (rc = 0) 05:42:41 DEBUG --- stdout --- 05:42:41 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1336Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 96m 0% 5588Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 93m 0% 5689Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 89m 0% 6812Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1910m 12% 4846Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 614m 3% 2148Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1726m 10% 4886Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1096Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1063Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2041m 12% 14380Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 421m 2% 14213Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1093Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 654m 4% 14345Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 375m 2% 2019Mi 3% 05:42:41 DEBUG --- stderr --- 05:42:41 DEBUG 05:43:40 INFO 05:43:40 INFO [loop_until]: kubectl --namespace=xlou top pods 05:43:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:43:41 INFO [loop_until]: OK (rc = 0) 05:43:41 DEBUG --- stdout --- 05:43:41 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 36m 5712Mi am-55f77847b7-nv9k2 35m 4574Mi am-55f77847b7-v7x55 41m 4565Mi ds-cts-0 7m 374Mi ds-cts-1 11m 366Mi ds-cts-2 8m 366Mi ds-idrepo-0 2055m 13822Mi ds-idrepo-1 620m 13812Mi ds-idrepo-2 369m 13683Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1879m 3545Mi idm-65858d8c4c-zvhxh 1677m 3647Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 314m 500Mi 05:43:41 DEBUG --- stderr --- 05:43:41 DEBUG 05:43:41 INFO 05:43:41 INFO [loop_until]: kubectl --namespace=xlou top node 05:43:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:43:41 INFO [loop_until]: OK (rc = 0) 05:43:41 DEBUG --- stdout --- 05:43:41 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1336Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 95m 0% 5588Mi 9% 
gke-xlou-cdm-default-pool-f05840a3-976h 105m 0% 5693Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 94m 0% 6812Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1974m 12% 4853Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 625m 3% 2146Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1733m 10% 4892Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1097Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1061Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2088m 13% 14384Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 570m 3% 14286Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1090Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 623m 3% 14356Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 389m 2% 2019Mi 3% 05:43:41 DEBUG --- stderr --- 05:43:41 DEBUG 05:44:41 INFO 05:44:41 INFO [loop_until]: kubectl --namespace=xlou top pods 05:44:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:44:41 INFO [loop_until]: OK (rc = 0) 05:44:41 DEBUG --- stdout --- 05:44:41 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 34m 5735Mi am-55f77847b7-nv9k2 37m 4574Mi am-55f77847b7-v7x55 40m 4567Mi ds-cts-0 6m 374Mi ds-cts-1 9m 373Mi ds-cts-2 6m 366Mi ds-idrepo-0 2454m 13829Mi ds-idrepo-1 982m 13815Mi ds-idrepo-2 820m 13814Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1824m 3551Mi idm-65858d8c4c-zvhxh 1657m 3653Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 315m 500Mi 05:44:41 DEBUG --- stderr --- 05:44:41 DEBUG 05:44:41 INFO 05:44:41 INFO [loop_until]: kubectl --namespace=xlou top node 05:44:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:44:41 INFO [loop_until]: OK (rc = 0) 05:44:41 DEBUG --- stdout --- 05:44:41 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1332Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 97m 0% 5588Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 103m 0% 5694Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 94m 0% 6833Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1966m 12% 4859Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 635m 3% 2148Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1802m 11% 4902Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1060Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2454m 15% 14379Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1042m 6% 14352Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1095Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1141m 7% 14350Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 380m 2% 2019Mi 3% 05:44:41 DEBUG --- stderr --- 05:44:41 DEBUG 05:45:41 INFO 05:45:41 INFO [loop_until]: kubectl --namespace=xlou top pods 05:45:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:45:41 INFO [loop_until]: OK (rc = 0) 05:45:41 DEBUG --- stdout --- 05:45:41 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 35m 5746Mi am-55f77847b7-nv9k2 44m 4575Mi am-55f77847b7-v7x55 37m 4567Mi ds-cts-0 7m 375Mi ds-cts-1 7m 374Mi ds-cts-2 8m 366Mi ds-idrepo-0 2058m 13830Mi ds-idrepo-1 467m 13817Mi ds-idrepo-2 512m 13818Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1799m 3560Mi idm-65858d8c4c-zvhxh 1663m 3659Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 325m 500Mi 05:45:41 DEBUG --- stderr --- 05:45:41 DEBUG 05:45:41 INFO 05:45:41 INFO [loop_until]: kubectl --namespace=xlou top node 05:45:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:45:41 INFO [loop_until]: OK (rc = 0) 05:45:41 DEBUG --- stdout --- 05:45:41 DEBUG NAME CPU(cores) 
CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1330Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 104m 0% 5586Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 96m 0% 5694Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 94m 0% 6846Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1943m 12% 4870Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 621m 3% 2146Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1763m 11% 4907Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1097Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1063Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2112m 13% 14389Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 553m 3% 14353Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 519m 3% 14351Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 395m 2% 2019Mi 3% 05:45:41 DEBUG --- stderr --- 05:45:41 DEBUG 05:46:41 INFO 05:46:41 INFO [loop_until]: kubectl --namespace=xlou top pods 05:46:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:46:41 INFO [loop_until]: OK (rc = 0) 05:46:41 DEBUG --- stdout --- 05:46:41 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 36m 5747Mi am-55f77847b7-nv9k2 44m 4749Mi am-55f77847b7-v7x55 37m 4578Mi ds-cts-0 6m 374Mi ds-cts-1 7m 373Mi ds-cts-2 6m 366Mi ds-idrepo-0 2111m 13821Mi ds-idrepo-1 803m 13795Mi ds-idrepo-2 893m 13793Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1817m 3570Mi idm-65858d8c4c-zvhxh 1591m 3667Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 305m 501Mi 05:46:41 DEBUG --- stderr --- 05:46:41 DEBUG 05:46:41 INFO 05:46:41 INFO [loop_until]: kubectl --namespace=xlou top node 05:46:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:46:41 INFO [loop_until]: OK (rc = 0) 05:46:41 DEBUG --- stdout --- 05:46:41 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1332Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 100m 0% 5709Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 98m 0% 5711Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 98m 0% 6859Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1963m 12% 4878Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 621m 3% 2146Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1682m 10% 4913Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1059Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2244m 14% 14377Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 663m 4% 14318Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1096Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 818m 5% 14331Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 385m 2% 2020Mi 3% 05:46:41 DEBUG --- stderr --- 05:46:41 DEBUG 05:47:41 INFO 05:47:41 INFO [loop_until]: kubectl --namespace=xlou top pods 05:47:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:47:41 INFO [loop_until]: OK (rc = 0) 05:47:41 DEBUG --- stdout --- 05:47:41 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 33m 5749Mi am-55f77847b7-nv9k2 42m 5103Mi am-55f77847b7-v7x55 39m 4991Mi ds-cts-0 6m 374Mi ds-cts-1 6m 373Mi ds-cts-2 7m 366Mi ds-idrepo-0 2019m 13823Mi ds-idrepo-1 604m 13820Mi ds-idrepo-2 337m 13821Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1768m 3575Mi idm-65858d8c4c-zvhxh 1732m 3680Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 307m 500Mi 05:47:41 DEBUG --- stderr --- 05:47:41 DEBUG 05:47:41 INFO 05:47:41 INFO [loop_until]: kubectl --namespace=xlou top node 05:47:41 INFO [loop_until]: 
(max_time=180, interval=5, expected_rc=[0] 05:47:41 INFO [loop_until]: OK (rc = 0) 05:47:41 DEBUG --- stdout --- 05:47:41 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1332Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 104m 0% 6124Mi 10% gke-xlou-cdm-default-pool-f05840a3-976h 103m 0% 6135Mi 10% gke-xlou-cdm-default-pool-f05840a3-9p4b 93m 0% 6846Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1906m 11% 4882Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 625m 3% 2136Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1808m 11% 4923Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1063Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2100m 13% 14384Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 552m 3% 14354Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1097Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 671m 4% 14351Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 384m 2% 2020Mi 3% 05:47:41 DEBUG --- stderr --- 05:47:41 DEBUG 05:48:41 INFO 05:48:41 INFO [loop_until]: kubectl --namespace=xlou top pods 05:48:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:48:41 INFO [loop_until]: OK (rc = 0) 05:48:41 DEBUG --- stdout --- 05:48:41 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 35m 5749Mi am-55f77847b7-nv9k2 42m 5559Mi am-55f77847b7-v7x55 42m 5373Mi ds-cts-0 8m 374Mi ds-cts-1 9m 373Mi ds-cts-2 7m 366Mi ds-idrepo-0 1979m 13823Mi ds-idrepo-1 772m 13827Mi ds-idrepo-2 509m 13823Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1781m 3581Mi idm-65858d8c4c-zvhxh 1629m 3677Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 314m 501Mi 05:48:41 DEBUG --- stderr --- 05:48:41 DEBUG 05:48:42 INFO 05:48:42 INFO [loop_until]: kubectl --namespace=xlou top node 05:48:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:48:42 INFO [loop_until]: OK (rc = 0) 05:48:42 DEBUG --- stdout --- 05:48:42 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1335Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 100m 0% 6540Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 102m 0% 6536Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 94m 0% 6849Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1963m 12% 4888Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 625m 3% 2146Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1706m 10% 4923Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1061Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2088m 13% 14388Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 559m 3% 14358Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1096Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 760m 4% 14360Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 383m 2% 2020Mi 3% 05:48:42 DEBUG --- stderr --- 05:48:42 DEBUG 05:49:41 INFO 05:49:41 INFO [loop_until]: kubectl --namespace=xlou top pods 05:49:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:49:41 INFO [loop_until]: OK (rc = 0) 05:49:41 DEBUG --- stdout --- 05:49:41 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 43m 5752Mi am-55f77847b7-nv9k2 33m 5668Mi am-55f77847b7-v7x55 31m 5669Mi ds-cts-0 7m 375Mi ds-cts-1 6m 373Mi ds-cts-2 7m 366Mi ds-idrepo-0 2144m 13823Mi ds-idrepo-1 606m 13826Mi ds-idrepo-2 450m 13818Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1873m 3639Mi idm-65858d8c4c-zvhxh 1610m 3682Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 311m 501Mi 
05:49:41 DEBUG --- stderr --- 05:49:41 DEBUG 05:49:42 INFO 05:49:42 INFO [loop_until]: kubectl --namespace=xlou top node 05:49:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:49:42 INFO [loop_until]: OK (rc = 0) 05:49:42 DEBUG --- stdout --- 05:49:42 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1334Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 97m 0% 6676Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 99m 0% 6793Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 97m 0% 6852Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1972m 12% 4945Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 615m 3% 2152Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1669m 10% 4932Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1095Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1064Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2101m 13% 14386Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 550m 3% 14355Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 661m 4% 14357Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 384m 2% 2019Mi 3% 05:49:42 DEBUG --- stderr --- 05:49:42 DEBUG 05:50:41 INFO 05:50:41 INFO [loop_until]: kubectl --namespace=xlou top pods 05:50:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:50:41 INFO [loop_until]: OK (rc = 0) 05:50:41 DEBUG --- stdout --- 05:50:41 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 35m 5754Mi am-55f77847b7-nv9k2 36m 5668Mi am-55f77847b7-v7x55 33m 5670Mi ds-cts-0 7m 374Mi ds-cts-1 7m 374Mi ds-cts-2 8m 366Mi ds-idrepo-0 2762m 13820Mi ds-idrepo-1 1031m 13806Mi ds-idrepo-2 936m 13822Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1902m 3601Mi idm-65858d8c4c-zvhxh 1658m 3689Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 328m 501Mi 05:50:41 DEBUG --- stderr --- 05:50:41 DEBUG 05:50:42 INFO 05:50:42 INFO [loop_until]: kubectl --namespace=xlou top node 05:50:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:50:42 INFO [loop_until]: OK (rc = 0) 05:50:42 DEBUG --- stdout --- 05:50:42 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1333Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 96m 0% 6679Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 97m 0% 6796Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 93m 0% 6855Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2063m 12% 4908Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 640m 4% 2147Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1780m 11% 4937Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1065Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 3018m 18% 14396Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 941m 5% 14372Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1095Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1058m 6% 14348Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 392m 2% 2020Mi 3% 05:50:42 DEBUG --- stderr --- 05:50:42 DEBUG 05:51:41 INFO 05:51:41 INFO [loop_until]: kubectl --namespace=xlou top pods 05:51:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:51:41 INFO [loop_until]: OK (rc = 0) 05:51:41 DEBUG --- stdout --- 05:51:41 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 31m 5796Mi am-55f77847b7-nv9k2 37m 5668Mi am-55f77847b7-v7x55 36m 5670Mi ds-cts-0 6m 374Mi ds-cts-1 6m 373Mi ds-cts-2 6m 366Mi ds-idrepo-0 2399m 13841Mi ds-idrepo-1 871m 13748Mi ds-idrepo-2 575m 13830Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1787m 
3607Mi idm-65858d8c4c-zvhxh 1585m 3693Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 329m 501Mi 05:51:41 DEBUG --- stderr --- 05:51:41 DEBUG 05:51:42 INFO 05:51:42 INFO [loop_until]: kubectl --namespace=xlou top node 05:51:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:51:42 INFO [loop_until]: OK (rc = 0) 05:51:42 DEBUG --- stdout --- 05:51:42 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1332Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 98m 0% 6681Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 98m 0% 6797Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 90m 0% 6899Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1957m 12% 4914Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 625m 3% 2150Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1755m 11% 4940Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1063Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2373m 14% 14329Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 748m 4% 14286Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1096Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 901m 5% 14278Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 394m 2% 2021Mi 3% 05:51:42 DEBUG --- stderr --- 05:51:42 DEBUG 05:52:41 INFO 05:52:41 INFO [loop_until]: kubectl --namespace=xlou top pods 05:52:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:52:41 INFO [loop_until]: OK (rc = 0) 05:52:41 DEBUG --- stdout --- 05:52:41 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 31m 5796Mi am-55f77847b7-nv9k2 33m 5668Mi am-55f77847b7-v7x55 35m 5670Mi ds-cts-0 8m 374Mi ds-cts-1 6m 374Mi ds-cts-2 7m 366Mi ds-idrepo-0 2198m 13823Mi ds-idrepo-1 794m 13785Mi ds-idrepo-2 560m 13788Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1854m 3612Mi idm-65858d8c4c-zvhxh 1666m 3699Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 317m 502Mi 05:52:41 DEBUG --- stderr --- 05:52:41 DEBUG 05:52:42 INFO 05:52:42 INFO [loop_until]: kubectl --namespace=xlou top node 05:52:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:52:42 INFO [loop_until]: OK (rc = 0) 05:52:42 DEBUG --- stdout --- 05:52:42 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1332Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 90m 0% 6682Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 96m 0% 6794Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 90m 0% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1980m 12% 4918Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 629m 3% 2138Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1726m 10% 4950Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1067Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2158m 13% 14391Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 584m 3% 14346Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1097Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 734m 4% 14322Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 396m 2% 2020Mi 3% 05:52:42 DEBUG --- stderr --- 05:52:42 DEBUG 05:53:41 INFO 05:53:41 INFO [loop_until]: kubectl --namespace=xlou top pods 05:53:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:53:42 INFO [loop_until]: OK (rc = 0) 05:53:42 DEBUG --- stdout --- 05:53:42 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 32m 5796Mi am-55f77847b7-nv9k2 40m 5668Mi am-55f77847b7-v7x55 34m 5670Mi ds-cts-0 15m 377Mi ds-cts-1 6m 374Mi ds-cts-2 8m 366Mi 
ds-idrepo-0 2115m 13823Mi ds-idrepo-1 534m 13838Mi ds-idrepo-2 530m 13823Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1739m 3617Mi idm-65858d8c4c-zvhxh 1620m 3732Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 325m 502Mi 05:53:42 DEBUG --- stderr --- 05:53:42 DEBUG 05:53:42 INFO 05:53:42 INFO [loop_until]: kubectl --namespace=xlou top node 05:53:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:53:42 INFO [loop_until]: OK (rc = 0) 05:53:42 DEBUG --- stdout --- 05:53:42 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1334Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 101m 0% 6676Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 96m 0% 6791Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 86m 0% 6896Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1834m 11% 4926Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 612m 3% 2147Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1749m 11% 4979Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1062Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2121m 13% 14392Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 579m 3% 14362Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1093Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 607m 3% 14373Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 392m 2% 2020Mi 3% 05:53:42 DEBUG --- stderr --- 05:53:42 DEBUG 05:54:42 INFO 05:54:42 INFO [loop_until]: kubectl --namespace=xlou top pods 05:54:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:54:42 INFO [loop_until]: OK (rc = 0) 05:54:42 DEBUG --- stdout --- 05:54:42 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 31m 5796Mi am-55f77847b7-nv9k2 34m 5668Mi am-55f77847b7-v7x55 34m 5670Mi ds-cts-0 8m 378Mi ds-cts-1 6m 374Mi ds-cts-2 6m 366Mi ds-idrepo-0 2603m 13825Mi ds-idrepo-1 1195m 13836Mi ds-idrepo-2 905m 13824Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1882m 3624Mi idm-65858d8c4c-zvhxh 1598m 3712Mi lodemon-97b6d75b7-fknft 5m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 323m 502Mi 05:54:42 DEBUG --- stderr --- 05:54:42 DEBUG 05:54:42 INFO 05:54:42 INFO [loop_until]: kubectl --namespace=xlou top node 05:54:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:54:42 INFO [loop_until]: OK (rc = 0) 05:54:42 DEBUG --- stdout --- 05:54:42 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1334Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 97m 0% 6678Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 90m 0% 6794Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 90m 0% 6897Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1985m 12% 4931Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 625m 3% 2147Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1737m 10% 4956Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1063Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2476m 15% 14386Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1061m 6% 14358Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1056m 6% 14369Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 394m 2% 2020Mi 3% 05:54:42 DEBUG --- stderr --- 05:54:42 DEBUG 05:55:42 INFO 05:55:42 INFO [loop_until]: kubectl --namespace=xlou top pods 05:55:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:55:42 INFO [loop_until]: OK (rc = 0) 05:55:42 DEBUG --- stdout --- 05:55:42 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi 
am-55f77847b7-778wv 31m 5796Mi am-55f77847b7-nv9k2 35m 5668Mi am-55f77847b7-v7x55 36m 5670Mi ds-cts-0 8m 377Mi ds-cts-1 6m 373Mi ds-cts-2 8m 366Mi ds-idrepo-0 2083m 13837Mi ds-idrepo-1 627m 13843Mi ds-idrepo-2 512m 13840Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1790m 3629Mi idm-65858d8c4c-zvhxh 1608m 3718Mi lodemon-97b6d75b7-fknft 5m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 319m 502Mi 05:55:42 DEBUG --- stderr --- 05:55:42 DEBUG 05:55:42 INFO 05:55:42 INFO [loop_until]: kubectl --namespace=xlou top node 05:55:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:55:42 INFO [loop_until]: OK (rc = 0) 05:55:42 DEBUG --- stdout --- 05:55:42 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1335Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 99m 0% 6678Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 97m 0% 6795Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 90m 0% 6898Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1919m 12% 4937Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 619m 3% 2144Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1701m 10% 4965Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1063Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2282m 14% 14407Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 592m 3% 14376Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 692m 4% 14380Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 382m 2% 2020Mi 3% 05:55:42 DEBUG --- stderr --- 05:55:42 DEBUG 05:56:42 INFO 05:56:42 INFO [loop_until]: kubectl --namespace=xlou top pods 05:56:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:56:42 INFO [loop_until]: OK (rc = 0) 05:56:42 DEBUG --- stdout --- 05:56:42 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 33m 5796Mi am-55f77847b7-nv9k2 34m 5668Mi am-55f77847b7-v7x55 33m 5670Mi ds-cts-0 7m 377Mi ds-cts-1 7m 374Mi ds-cts-2 7m 366Mi ds-idrepo-0 2185m 13841Mi ds-idrepo-1 652m 13836Mi ds-idrepo-2 502m 13839Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1723m 3636Mi idm-65858d8c4c-zvhxh 1676m 3723Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 325m 502Mi 05:56:42 DEBUG --- stderr --- 05:56:42 DEBUG 05:56:42 INFO 05:56:42 INFO [loop_until]: kubectl --namespace=xlou top node 05:56:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:56:43 INFO [loop_until]: OK (rc = 0) 05:56:43 DEBUG --- stdout --- 05:56:43 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 71m 0% 1336Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 93m 0% 6681Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 96m 0% 6793Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 91m 0% 6897Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1838m 11% 4943Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 619m 3% 2130Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1793m 11% 4971Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1097Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1064Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2078m 13% 14410Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 423m 2% 14377Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 713m 4% 14375Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 394m 2% 2020Mi 3% 05:56:43 DEBUG --- stderr --- 05:56:43 DEBUG 05:57:42 INFO 05:57:42 INFO [loop_until]: kubectl --namespace=xlou top pods 05:57:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:57:42 
INFO [loop_until]: OK (rc = 0) 05:57:42 DEBUG --- stdout --- 05:57:42 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 34m 5796Mi am-55f77847b7-nv9k2 35m 5674Mi am-55f77847b7-v7x55 34m 5670Mi ds-cts-0 6m 377Mi ds-cts-1 6m 374Mi ds-cts-2 9m 366Mi ds-idrepo-0 2184m 13824Mi ds-idrepo-1 622m 13830Mi ds-idrepo-2 628m 13834Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1752m 3641Mi idm-65858d8c4c-zvhxh 1617m 3727Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 310m 502Mi 05:57:42 DEBUG --- stderr --- 05:57:42 DEBUG 05:57:43 INFO 05:57:43 INFO [loop_until]: kubectl --namespace=xlou top node 05:57:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:57:43 INFO [loop_until]: OK (rc = 0) 05:57:43 DEBUG --- stdout --- 05:57:43 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1336Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 96m 0% 6683Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 92m 0% 6798Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 91m 0% 6899Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1887m 11% 4942Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 610m 3% 2145Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1749m 11% 4974Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 51m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 67m 0% 1065Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2259m 14% 14386Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 642m 4% 14378Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1095Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 690m 4% 14374Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 377m 2% 2021Mi 3% 05:57:43 DEBUG --- stderr --- 05:57:43 DEBUG 05:58:42 INFO 05:58:42 INFO [loop_until]: kubectl --namespace=xlou top pods 05:58:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:58:42 INFO [loop_until]: OK (rc = 0) 05:58:42 DEBUG --- stdout --- 05:58:42 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 36m 5798Mi am-55f77847b7-nv9k2 37m 5677Mi am-55f77847b7-v7x55 35m 5680Mi ds-cts-0 6m 377Mi ds-cts-1 6m 374Mi ds-cts-2 7m 366Mi ds-idrepo-0 2590m 13806Mi ds-idrepo-1 980m 13825Mi ds-idrepo-2 904m 13822Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1857m 3647Mi idm-65858d8c4c-zvhxh 1676m 3736Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 333m 502Mi 05:58:42 DEBUG --- stderr --- 05:58:42 DEBUG 05:58:43 INFO 05:58:43 INFO [loop_until]: kubectl --namespace=xlou top node 05:58:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:58:43 INFO [loop_until]: OK (rc = 0) 05:58:43 DEBUG --- stdout --- 05:58:43 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1334Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 101m 0% 6691Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 95m 0% 6808Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 93m 0% 6898Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1969m 12% 4952Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 633m 3% 2143Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1809m 11% 4984Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1063Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 3207m 20% 14376Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 974m 6% 14365Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1202m 7% 14361Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 385m 2% 2020Mi 3% 05:58:43 DEBUG --- stderr --- 05:58:43 DEBUG 05:59:42 INFO 
05:59:42 INFO [loop_until]: kubectl --namespace=xlou top pods 05:59:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:59:42 INFO [loop_until]: OK (rc = 0) 05:59:42 DEBUG --- stdout --- 05:59:42 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 36m 5798Mi am-55f77847b7-nv9k2 36m 5679Mi am-55f77847b7-v7x55 35m 5681Mi ds-cts-0 8m 377Mi ds-cts-1 7m 374Mi ds-cts-2 7m 366Mi ds-idrepo-0 2358m 13829Mi ds-idrepo-1 625m 13835Mi ds-idrepo-2 618m 13843Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1835m 3653Mi idm-65858d8c4c-zvhxh 1663m 3742Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 325m 502Mi 05:59:42 DEBUG --- stderr --- 05:59:42 DEBUG 05:59:43 INFO 05:59:43 INFO [loop_until]: kubectl --namespace=xlou top node 05:59:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:59:43 INFO [loop_until]: OK (rc = 0) 05:59:43 DEBUG --- stdout --- 05:59:43 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1336Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 95m 0% 6692Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 94m 0% 6807Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 93m 0% 6901Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1968m 12% 4954Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 622m 3% 2144Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1793m 11% 4991Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1062Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2216m 13% 14404Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 594m 3% 14379Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1097Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 727m 4% 14374Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 395m 2% 2020Mi 3% 05:59:43 DEBUG --- stderr --- 05:59:43 DEBUG 06:00:42 INFO 06:00:42 INFO [loop_until]: kubectl --namespace=xlou top pods 06:00:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:00:42 INFO [loop_until]: OK (rc = 0) 06:00:42 DEBUG --- stdout --- 06:00:42 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 38m 5798Mi am-55f77847b7-nv9k2 38m 5685Mi am-55f77847b7-v7x55 37m 5688Mi ds-cts-0 6m 377Mi ds-cts-1 6m 374Mi ds-cts-2 8m 366Mi ds-idrepo-0 2236m 13835Mi ds-idrepo-1 568m 13833Mi ds-idrepo-2 427m 13842Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1871m 3660Mi idm-65858d8c4c-zvhxh 1645m 3748Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 329m 502Mi 06:00:42 DEBUG --- stderr --- 06:00:42 DEBUG 06:00:43 INFO 06:00:43 INFO [loop_until]: kubectl --namespace=xlou top node 06:00:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:00:43 INFO [loop_until]: OK (rc = 0) 06:00:43 DEBUG --- stdout --- 06:00:43 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1337Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 99m 0% 6699Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 95m 0% 6816Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 94m 0% 6901Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2033m 12% 4964Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 663m 4% 2149Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1774m 11% 4995Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1060Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2313m 14% 14436Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 604m 3% 14385Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1101Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-x4wx 631m 3% 14377Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 398m 2% 2023Mi 3% 06:00:43 DEBUG --- stderr --- 06:00:43 DEBUG 06:01:42 INFO 06:01:42 INFO [loop_until]: kubectl --namespace=xlou top pods 06:01:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:01:42 INFO [loop_until]: OK (rc = 0) 06:01:42 DEBUG --- stdout --- 06:01:42 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 34m 5798Mi am-55f77847b7-nv9k2 34m 5685Mi am-55f77847b7-v7x55 31m 5688Mi ds-cts-0 10m 378Mi ds-cts-1 6m 374Mi ds-cts-2 7m 366Mi ds-idrepo-0 2452m 13820Mi ds-idrepo-1 905m 13824Mi ds-idrepo-2 954m 13824Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1759m 3664Mi idm-65858d8c4c-zvhxh 1636m 3756Mi lodemon-97b6d75b7-fknft 4m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 315m 503Mi 06:01:42 DEBUG --- stderr --- 06:01:42 DEBUG 06:01:43 INFO 06:01:43 INFO [loop_until]: kubectl --namespace=xlou top node 06:01:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:01:43 INFO [loop_until]: OK (rc = 0) 06:01:43 DEBUG --- stdout --- 06:01:43 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1335Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 93m 0% 6696Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 92m 0% 6814Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 92m 0% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1908m 12% 4971Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 597m 3% 2148Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1741m 10% 4999Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1063Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2443m 15% 14397Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 992m 6% 14368Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1087m 6% 14373Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 371m 2% 2024Mi 3% 06:01:43 DEBUG --- stderr --- 06:01:43 DEBUG 06:02:42 INFO 06:02:42 INFO [loop_until]: kubectl --namespace=xlou top pods 06:02:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:02:43 INFO [loop_until]: OK (rc = 0) 06:02:43 DEBUG --- stdout --- 06:02:43 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 32m 5802Mi am-55f77847b7-nv9k2 36m 5685Mi am-55f77847b7-v7x55 32m 5688Mi ds-cts-0 11m 377Mi ds-cts-1 6m 374Mi ds-cts-2 13m 370Mi ds-idrepo-0 2135m 13841Mi ds-idrepo-1 670m 13835Mi ds-idrepo-2 520m 13827Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1853m 3673Mi idm-65858d8c4c-zvhxh 1628m 3763Mi lodemon-97b6d75b7-fknft 3m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 316m 502Mi 06:02:43 DEBUG --- stderr --- 06:02:43 DEBUG 06:02:43 INFO 06:02:43 INFO [loop_until]: kubectl --namespace=xlou top node 06:02:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:02:43 INFO [loop_until]: OK (rc = 0) 06:02:43 DEBUG --- stdout --- 06:02:43 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1334Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 95m 0% 6699Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 90m 0% 6813Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 86m 0% 6904Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1898m 11% 4978Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 623m 3% 2152Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1790m 11% 5012Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 65m 0% 1068Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-8bsn 2286m 14% 14419Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 605m 3% 14380Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 743m 4% 14381Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 390m 2% 2032Mi 3% 06:02:43 DEBUG --- stderr --- 06:02:43 DEBUG 06:03:43 INFO 06:03:43 INFO [loop_until]: kubectl --namespace=xlou top pods 06:03:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:03:43 INFO [loop_until]: OK (rc = 0) 06:03:43 DEBUG --- stdout --- 06:03:43 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 26m 5802Mi am-55f77847b7-nv9k2 28m 5687Mi am-55f77847b7-v7x55 25m 5688Mi ds-cts-0 8m 377Mi ds-cts-1 6m 374Mi ds-cts-2 9m 370Mi ds-idrepo-0 1825m 13847Mi ds-idrepo-1 424m 13835Mi ds-idrepo-2 462m 13842Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1152m 3676Mi idm-65858d8c4c-zvhxh 1124m 3767Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 311m 503Mi 06:03:43 DEBUG --- stderr --- 06:03:43 DEBUG 06:03:43 INFO 06:03:43 INFO [loop_until]: kubectl --namespace=xlou top node 06:03:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:03:43 INFO [loop_until]: OK (rc = 0) 06:03:43 DEBUG --- stdout --- 06:03:43 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1335Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 88m 0% 6699Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 82m 0% 6817Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 81m 0% 6901Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1683m 10% 4983Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 528m 3% 2150Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1261m 7% 5014Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1066Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2009m 12% 14425Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 531m 3% 14387Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 497m 3% 14376Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 320m 2% 2023Mi 3% 06:03:43 DEBUG --- stderr --- 06:03:43 DEBUG 06:04:43 INFO 06:04:43 INFO [loop_until]: kubectl --namespace=xlou top pods 06:04:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:04:43 INFO [loop_until]: OK (rc = 0) 06:04:43 DEBUG --- stdout --- 06:04:43 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 9m 5799Mi am-55f77847b7-nv9k2 8m 5687Mi am-55f77847b7-v7x55 6m 5688Mi ds-cts-0 7m 378Mi ds-cts-1 5m 374Mi ds-cts-2 6m 370Mi ds-idrepo-0 21m 13841Mi ds-idrepo-1 22m 13829Mi ds-idrepo-2 9m 13837Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 7m 3676Mi idm-65858d8c4c-zvhxh 6m 3766Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 1m 103Mi 06:04:43 DEBUG --- stderr --- 06:04:43 DEBUG 06:04:43 INFO 06:04:43 INFO [loop_until]: kubectl --namespace=xlou top node 06:04:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:04:44 INFO [loop_until]: OK (rc = 0) 06:04:44 DEBUG --- stdout --- 06:04:44 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1334Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 6698Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 6816Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 6898Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 4984Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 128m 0% 2149Mi 3% 
gke-xlou-cdm-default-pool-f05840a3-tnc9 66m 0% 5013Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1069Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 68m 0% 14419Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 58m 0% 14382Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 72m 0% 14377Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 65m 0% 1627Mi 2% 06:04:44 DEBUG --- stderr --- 06:04:44 DEBUG 127.0.0.1 - - [12/Aug/2023 06:05:26] "GET /monitoring/average?start_time=23-08-12_04:34:55&stop_time=23-08-12_05:03:26 HTTP/1.1" 200 - 06:05:43 INFO 06:05:43 INFO [loop_until]: kubectl --namespace=xlou top pods 06:05:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:05:43 INFO [loop_until]: OK (rc = 0) 06:05:43 DEBUG --- stdout --- 06:05:43 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 6m 5799Mi am-55f77847b7-nv9k2 7m 5687Mi am-55f77847b7-v7x55 6m 5688Mi ds-cts-0 6m 378Mi ds-cts-1 5m 374Mi ds-cts-2 6m 370Mi ds-idrepo-0 12m 13841Mi ds-idrepo-1 13m 13829Mi ds-idrepo-2 11m 13837Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 6m 3675Mi idm-65858d8c4c-zvhxh 5m 3766Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 1m 103Mi 06:05:43 DEBUG --- stderr --- 06:05:43 DEBUG 06:05:44 INFO 06:05:44 INFO [loop_until]: kubectl --namespace=xlou top node 06:05:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:05:44 INFO [loop_until]: OK (rc = 0) 06:05:44 DEBUG --- stdout --- 06:05:44 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1337Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 6698Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 6812Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 6899Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 69m 0% 4985Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 122m 0% 2153Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 68m 0% 5016Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1066Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 14419Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 57m 0% 14384Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 63m 0% 14377Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 70m 0% 1629Mi 2% 06:05:44 DEBUG --- stderr --- 06:05:44 DEBUG 06:06:43 INFO 06:06:43 INFO [loop_until]: kubectl --namespace=xlou top pods 06:06:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:06:43 INFO [loop_until]: OK (rc = 0) 06:06:43 DEBUG --- stdout --- 06:06:43 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 19m 5798Mi am-55f77847b7-nv9k2 30m 5689Mi am-55f77847b7-v7x55 26m 5688Mi ds-cts-0 7m 378Mi ds-cts-1 9m 374Mi ds-cts-2 8m 370Mi ds-idrepo-0 1039m 13851Mi ds-idrepo-1 418m 13841Mi ds-idrepo-2 169m 13834Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1326m 3687Mi idm-65858d8c4c-zvhxh 787m 3791Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 636m 490Mi 06:06:43 DEBUG --- stderr --- 06:06:43 DEBUG 06:06:44 INFO 06:06:44 INFO [loop_until]: kubectl --namespace=xlou top node 06:06:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:06:44 INFO [loop_until]: OK (rc = 0) 06:06:44 DEBUG --- stdout --- 06:06:44 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1332Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 92m 0% 6700Mi 
11% gke-xlou-cdm-default-pool-f05840a3-976h 86m 0% 6814Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 84m 0% 6902Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1191m 7% 4995Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 403m 2% 2149Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1267m 7% 5040Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1067Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 1094m 6% 14430Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 299m 1% 14384Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 548m 3% 14388Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 714m 4% 2008Mi 3% 06:06:44 DEBUG --- stderr --- 06:06:44 DEBUG 06:07:43 INFO 06:07:43 INFO [loop_until]: kubectl --namespace=xlou top pods 06:07:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:07:43 INFO [loop_until]: OK (rc = 0) 06:07:43 DEBUG --- stdout --- 06:07:43 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 42m 5798Mi am-55f77847b7-nv9k2 36m 5689Mi am-55f77847b7-v7x55 40m 5688Mi ds-cts-0 7m 379Mi ds-cts-1 6m 374Mi ds-cts-2 7m 366Mi ds-idrepo-0 2591m 13829Mi ds-idrepo-1 1067m 13812Mi ds-idrepo-2 800m 13822Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1802m 3697Mi idm-65858d8c4c-zvhxh 1515m 3813Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 380m 508Mi 06:07:43 DEBUG --- stderr --- 06:07:43 DEBUG 06:07:44 INFO 06:07:44 INFO [loop_until]: kubectl --namespace=xlou top node 06:07:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:07:44 INFO [loop_until]: OK (rc = 0) 06:07:44 DEBUG --- stdout --- 06:07:44 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1336Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 96m 0% 6698Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 102m 0% 6815Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 101m 0% 6902Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1969m 12% 5007Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 616m 3% 2154Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1663m 10% 5063Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 65m 0% 1064Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2546m 16% 14401Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1175m 7% 14371Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1162m 7% 14367Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 401m 2% 2028Mi 3% 06:07:44 DEBUG --- stderr --- 06:07:44 DEBUG 06:08:43 INFO 06:08:43 INFO [loop_until]: kubectl --namespace=xlou top pods 06:08:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:08:43 INFO [loop_until]: OK (rc = 0) 06:08:43 DEBUG --- stdout --- 06:08:43 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 36m 5798Mi am-55f77847b7-nv9k2 34m 5689Mi am-55f77847b7-v7x55 37m 5688Mi ds-cts-0 7m 378Mi ds-cts-1 6m 374Mi ds-cts-2 7m 366Mi ds-idrepo-0 2269m 13827Mi ds-idrepo-1 848m 13839Mi ds-idrepo-2 865m 13830Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1633m 3703Mi idm-65858d8c4c-zvhxh 1508m 3819Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 331m 510Mi 06:08:43 DEBUG --- stderr --- 06:08:43 DEBUG 06:08:44 INFO 06:08:44 INFO [loop_until]: kubectl --namespace=xlou top node 06:08:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:08:44 INFO [loop_until]: OK (rc = 0) 06:08:44 DEBUG --- stdout --- 06:08:44 DEBUG NAME 
CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1331Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 98m 0% 6702Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 99m 0% 6811Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 95m 0% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1859m 11% 5027Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 623m 3% 2149Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1564m 9% 5067Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1062Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2345m 14% 14401Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1019m 6% 14374Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 992m 6% 14361Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 411m 2% 2026Mi 3% 06:08:44 DEBUG --- stderr --- 06:08:44 DEBUG 06:09:43 INFO 06:09:43 INFO [loop_until]: kubectl --namespace=xlou top pods 06:09:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:09:43 INFO [loop_until]: OK (rc = 0) 06:09:43 DEBUG --- stdout --- 06:09:43 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 37m 5799Mi am-55f77847b7-nv9k2 36m 5689Mi am-55f77847b7-v7x55 33m 5688Mi ds-cts-0 9m 378Mi ds-cts-1 6m 374Mi ds-cts-2 7m 367Mi ds-idrepo-0 2548m 13823Mi ds-idrepo-1 1106m 13797Mi ds-idrepo-2 1166m 13808Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1674m 3709Mi idm-65858d8c4c-zvhxh 1519m 3825Mi lodemon-97b6d75b7-fknft 6m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 333m 511Mi 06:09:43 DEBUG --- stderr --- 06:09:43 DEBUG 06:09:44 INFO 06:09:44 INFO [loop_until]: kubectl --namespace=xlou top node 06:09:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:09:44 INFO [loop_until]: OK (rc = 0) 06:09:44 DEBUG --- stdout --- 06:09:44 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1333Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 97m 0% 6699Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 96m 0% 6813Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 92m 0% 6903Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1810m 11% 5017Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 612m 3% 2149Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1610m 10% 5072Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2629m 16% 14391Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1291m 8% 14360Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1097Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1196m 7% 14355Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 411m 2% 2028Mi 3% 06:09:44 DEBUG --- stderr --- 06:09:44 DEBUG 06:10:43 INFO 06:10:43 INFO [loop_until]: kubectl --namespace=xlou top pods 06:10:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:10:43 INFO [loop_until]: OK (rc = 0) 06:10:43 DEBUG --- stdout --- 06:10:43 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 35m 5799Mi am-55f77847b7-nv9k2 37m 5689Mi am-55f77847b7-v7x55 36m 5689Mi ds-cts-0 6m 378Mi ds-cts-1 6m 374Mi ds-cts-2 7m 367Mi ds-idrepo-0 2023m 13836Mi ds-idrepo-1 1184m 13811Mi ds-idrepo-2 751m 13823Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1679m 3715Mi idm-65858d8c4c-zvhxh 1496m 3829Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 335m 513Mi 06:10:43 DEBUG --- stderr --- 06:10:43 DEBUG 06:10:44 INFO 06:10:44 INFO [loop_until]: kubectl --namespace=xlou top node 06:10:44 INFO 
[loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:10:45 INFO [loop_until]: OK (rc = 0) 06:10:45 DEBUG --- stdout --- 06:10:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1336Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 98m 0% 6700Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 97m 0% 6813Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 95m 0% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1800m 11% 5024Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 622m 3% 2150Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1607m 10% 5078Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1062Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2072m 13% 14422Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 770m 4% 14379Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1096Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1290m 8% 14366Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 398m 2% 2032Mi 3% 06:10:45 DEBUG --- stderr --- 06:10:45 DEBUG 06:11:43 INFO 06:11:43 INFO [loop_until]: kubectl --namespace=xlou top pods 06:11:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:11:43 INFO [loop_until]: OK (rc = 0) 06:11:43 DEBUG --- stdout --- 06:11:43 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 40m 5799Mi am-55f77847b7-nv9k2 37m 5689Mi am-55f77847b7-v7x55 39m 5689Mi ds-cts-0 7m 378Mi ds-cts-1 6m 374Mi ds-cts-2 8m 367Mi ds-idrepo-0 2040m 13838Mi ds-idrepo-1 827m 13788Mi ds-idrepo-2 493m 13814Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1716m 3724Mi idm-65858d8c4c-zvhxh 1494m 3845Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 325m 514Mi 06:11:43 DEBUG --- stderr --- 06:11:43 DEBUG 06:11:45 INFO 06:11:45 INFO [loop_until]: kubectl --namespace=xlou top node 06:11:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:11:45 INFO [loop_until]: OK (rc = 0) 06:11:45 DEBUG --- stdout --- 06:11:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1333Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 97m 0% 6698Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 101m 0% 6812Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 97m 0% 6903Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1828m 11% 5032Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 614m 3% 2157Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1545m 9% 5089Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1065Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2286m 14% 14417Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 577m 3% 14364Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 725m 4% 14342Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 405m 2% 2033Mi 3% 06:11:45 DEBUG --- stderr --- 06:11:45 DEBUG 06:12:43 INFO 06:12:43 INFO [loop_until]: kubectl --namespace=xlou top pods 06:12:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:12:44 INFO [loop_until]: OK (rc = 0) 06:12:44 DEBUG --- stdout --- 06:12:44 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 37m 5799Mi am-55f77847b7-nv9k2 36m 5689Mi am-55f77847b7-v7x55 34m 5689Mi ds-cts-0 6m 378Mi ds-cts-1 7m 374Mi ds-cts-2 6m 367Mi ds-idrepo-0 2629m 13821Mi ds-idrepo-1 1197m 13811Mi ds-idrepo-2 957m 13806Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1686m 3730Mi idm-65858d8c4c-zvhxh 1443m 3850Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 327m 
515Mi 06:12:44 DEBUG --- stderr --- 06:12:44 DEBUG 06:12:45 INFO 06:12:45 INFO [loop_until]: kubectl --namespace=xlou top node 06:12:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:12:45 INFO [loop_until]: OK (rc = 0) 06:12:45 DEBUG --- stdout --- 06:12:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1336Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 95m 0% 6700Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 91m 0% 6810Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 93m 0% 6902Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1820m 11% 5039Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 620m 3% 2154Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1581m 9% 5098Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1064Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2524m 15% 14402Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1090m 6% 14366Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1367m 8% 14369Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 407m 2% 2032Mi 3% 06:12:45 DEBUG --- stderr --- 06:12:45 DEBUG 06:13:44 INFO 06:13:44 INFO [loop_until]: kubectl --namespace=xlou top pods 06:13:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:13:44 INFO [loop_until]: OK (rc = 0) 06:13:44 DEBUG --- stdout --- 06:13:44 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 33m 5799Mi am-55f77847b7-nv9k2 38m 5689Mi am-55f77847b7-v7x55 34m 5689Mi ds-cts-0 7m 378Mi ds-cts-1 8m 374Mi ds-cts-2 6m 367Mi ds-idrepo-0 2346m 13804Mi ds-idrepo-1 524m 13844Mi ds-idrepo-2 535m 13792Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1632m 3735Mi idm-65858d8c4c-zvhxh 1504m 3856Mi lodemon-97b6d75b7-fknft 6m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 336m 517Mi 06:13:44 DEBUG --- stderr --- 06:13:44 DEBUG 06:13:45 INFO 06:13:45 INFO [loop_until]: kubectl --namespace=xlou top node 06:13:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:13:45 INFO [loop_until]: OK (rc = 0) 06:13:45 DEBUG --- stdout --- 06:13:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1337Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 97m 0% 6706Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 94m 0% 6815Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 93m 0% 6903Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1774m 11% 5046Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 615m 3% 2149Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1603m 10% 5104Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 51m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1064Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2271m 14% 14389Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 747m 4% 14347Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 576m 3% 14394Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 398m 2% 2036Mi 3% 06:13:45 DEBUG --- stderr --- 06:13:45 DEBUG 06:14:44 INFO 06:14:44 INFO [loop_until]: kubectl --namespace=xlou top pods 06:14:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:14:44 INFO [loop_until]: OK (rc = 0) 06:14:44 DEBUG --- stdout --- 06:14:44 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 35m 5799Mi am-55f77847b7-nv9k2 33m 5693Mi am-55f77847b7-v7x55 31m 5691Mi ds-cts-0 7m 379Mi ds-cts-1 13m 374Mi ds-cts-2 10m 367Mi ds-idrepo-0 2383m 13696Mi ds-idrepo-1 687m 13726Mi ds-idrepo-2 862m 13699Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 
1696m 3741Mi idm-65858d8c4c-zvhxh 1470m 3858Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 330m 518Mi 06:14:44 DEBUG --- stderr --- 06:14:44 DEBUG 06:14:45 INFO 06:14:45 INFO [loop_until]: kubectl --namespace=xlou top node 06:14:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:14:45 INFO [loop_until]: OK (rc = 0) 06:14:45 DEBUG --- stdout --- 06:14:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1334Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 98m 0% 6718Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 93m 0% 6817Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 92m 0% 6902Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1849m 11% 5049Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 618m 3% 2146Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1608m 10% 5109Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1060Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2252m 14% 14284Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 769m 4% 14251Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 806m 5% 14287Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 414m 2% 2037Mi 3% 06:14:45 DEBUG --- stderr --- 06:14:45 DEBUG 06:15:44 INFO 06:15:44 INFO [loop_until]: kubectl --namespace=xlou top pods 06:15:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:15:44 INFO [loop_until]: OK (rc = 0) 06:15:44 DEBUG --- stdout --- 06:15:44 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 38m 5801Mi am-55f77847b7-nv9k2 36m 5693Mi am-55f77847b7-v7x55 32m 5691Mi ds-cts-0 8m 377Mi ds-cts-1 10m 375Mi ds-cts-2 7m 367Mi ds-idrepo-0 1979m 13751Mi ds-idrepo-1 615m 13761Mi ds-idrepo-2 679m 13722Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1645m 3748Mi idm-65858d8c4c-zvhxh 1457m 3865Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 347m 519Mi 06:15:44 DEBUG --- stderr --- 06:15:44 DEBUG 06:15:45 INFO 06:15:45 INFO [loop_until]: kubectl --namespace=xlou top node 06:15:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:15:45 INFO [loop_until]: OK (rc = 0) 06:15:45 DEBUG --- stdout --- 06:15:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1333Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 97m 0% 6702Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 91m 0% 6815Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 93m 0% 6904Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1815m 11% 5055Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 628m 3% 2150Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1611m 10% 5113Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1064Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2105m 13% 14340Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 744m 4% 14280Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 703m 4% 14320Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 420m 2% 2038Mi 3% 06:15:45 DEBUG --- stderr --- 06:15:45 DEBUG 06:16:44 INFO 06:16:44 INFO [loop_until]: kubectl --namespace=xlou top pods 06:16:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:16:44 INFO [loop_until]: OK (rc = 0) 06:16:44 DEBUG --- stdout --- 06:16:44 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 32m 5801Mi am-55f77847b7-nv9k2 37m 5693Mi am-55f77847b7-v7x55 32m 5691Mi ds-cts-0 15m 380Mi ds-cts-1 10m 375Mi ds-cts-2 
6m 367Mi ds-idrepo-0 2179m 13843Mi ds-idrepo-1 1174m 13803Mi ds-idrepo-2 1141m 13797Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1614m 3754Mi idm-65858d8c4c-zvhxh 1413m 3870Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 337m 520Mi 06:16:44 DEBUG --- stderr --- 06:16:44 DEBUG 06:16:45 INFO 06:16:45 INFO [loop_until]: kubectl --namespace=xlou top node 06:16:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:16:45 INFO [loop_until]: OK (rc = 0) 06:16:45 DEBUG --- stdout --- 06:16:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1333Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 99m 0% 6704Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 90m 0% 6818Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 86m 0% 6904Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1738m 10% 5060Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 609m 3% 2146Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1574m 9% 5118Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1062Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2315m 14% 14436Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1021m 6% 14362Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1390m 8% 14371Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 406m 2% 2041Mi 3% 06:16:45 DEBUG --- stderr --- 06:16:45 DEBUG 06:17:44 INFO 06:17:44 INFO [loop_until]: kubectl --namespace=xlou top pods 06:17:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:17:44 INFO [loop_until]: OK (rc = 0) 06:17:44 DEBUG --- stdout --- 06:17:44 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 30m 5801Mi am-55f77847b7-nv9k2 36m 5693Mi am-55f77847b7-v7x55 37m 5692Mi ds-cts-0 6m 380Mi ds-cts-1 6m 376Mi ds-cts-2 7m 367Mi ds-idrepo-0 2420m 13822Mi ds-idrepo-1 1007m 13815Mi ds-idrepo-2 921m 13802Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1674m 3760Mi idm-65858d8c4c-zvhxh 1484m 3876Mi lodemon-97b6d75b7-fknft 1m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 334m 521Mi 06:17:44 DEBUG --- stderr --- 06:17:44 DEBUG 06:17:45 INFO 06:17:45 INFO [loop_until]: kubectl --namespace=xlou top node 06:17:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:17:45 INFO [loop_until]: OK (rc = 0) 06:17:45 DEBUG --- stdout --- 06:17:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1334Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 93m 0% 6704Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 98m 0% 6818Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 87m 0% 6903Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1812m 11% 5063Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 635m 3% 2143Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1633m 10% 5123Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 50m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2632m 16% 14425Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 894m 5% 14384Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1115m 7% 14391Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 410m 2% 2040Mi 3% 06:17:45 DEBUG --- stderr --- 06:17:45 DEBUG 06:18:44 INFO 06:18:44 INFO [loop_until]: kubectl --namespace=xlou top pods 06:18:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:18:44 INFO [loop_until]: OK (rc = 0) 06:18:44 DEBUG --- stdout --- 06:18:44 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 
4Mi am-55f77847b7-778wv 32m 5801Mi am-55f77847b7-nv9k2 34m 5694Mi am-55f77847b7-v7x55 31m 5691Mi ds-cts-0 7m 380Mi ds-cts-1 7m 375Mi ds-cts-2 6m 367Mi ds-idrepo-0 2127m 13848Mi ds-idrepo-1 614m 13742Mi ds-idrepo-2 567m 13722Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1667m 3766Mi idm-65858d8c4c-zvhxh 1502m 3882Mi lodemon-97b6d75b7-fknft 6m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 341m 522Mi 06:18:44 DEBUG --- stderr --- 06:18:44 DEBUG 06:18:45 INFO 06:18:45 INFO [loop_until]: kubectl --namespace=xlou top node 06:18:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:18:45 INFO [loop_until]: OK (rc = 0) 06:18:45 DEBUG --- stdout --- 06:18:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1346Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 96m 0% 6702Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 91m 0% 6813Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 88m 0% 6904Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1801m 11% 5070Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 600m 3% 2152Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1601m 10% 5129Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1060Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2124m 13% 14446Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 790m 4% 14308Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 653m 4% 14307Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 408m 2% 2042Mi 3% 06:18:45 DEBUG --- stderr --- 06:18:45 DEBUG 06:19:44 INFO 06:19:44 INFO [loop_until]: kubectl --namespace=xlou top pods 06:19:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:19:44 INFO [loop_until]: OK (rc = 0) 06:19:44 DEBUG --- stdout --- 06:19:44 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 32m 5801Mi am-55f77847b7-nv9k2 35m 5694Mi am-55f77847b7-v7x55 33m 5691Mi ds-cts-0 7m 380Mi ds-cts-1 7m 375Mi ds-cts-2 7m 367Mi ds-idrepo-0 2931m 13822Mi ds-idrepo-1 1746m 13672Mi ds-idrepo-2 1153m 13623Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1637m 3790Mi idm-65858d8c4c-zvhxh 1434m 3888Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 334m 524Mi 06:19:44 DEBUG --- stderr --- 06:19:44 DEBUG 06:19:46 INFO 06:19:46 INFO [loop_until]: kubectl --namespace=xlou top node 06:19:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:19:46 INFO [loop_until]: OK (rc = 0) 06:19:46 DEBUG --- stdout --- 06:19:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1336Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 98m 0% 6705Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 96m 0% 6818Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 90m 0% 6905Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1800m 11% 5096Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 612m 3% 2151Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1584m 9% 5131Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1066Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2869m 18% 14313Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1432m 9% 14223Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1097Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1796m 11% 14248Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 411m 2% 2044Mi 3% 06:19:46 DEBUG --- stderr --- 06:19:46 DEBUG 06:20:44 INFO 06:20:44 INFO [loop_until]: kubectl --namespace=xlou top pods 06:20:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 
06:20:44 INFO [loop_until]: OK (rc = 0) 06:20:44 DEBUG --- stdout --- 06:20:44 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 32m 5801Mi am-55f77847b7-nv9k2 36m 5694Mi am-55f77847b7-v7x55 35m 5691Mi ds-cts-0 7m 380Mi ds-cts-1 6m 375Mi ds-cts-2 6m 367Mi ds-idrepo-0 2920m 13770Mi ds-idrepo-1 1420m 13728Mi ds-idrepo-2 1960m 13701Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1685m 3777Mi idm-65858d8c4c-zvhxh 1548m 3893Mi lodemon-97b6d75b7-fknft 6m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 351m 524Mi 06:20:44 DEBUG --- stderr --- 06:20:44 DEBUG 06:20:46 INFO 06:20:46 INFO [loop_until]: kubectl --namespace=xlou top node 06:20:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:20:46 INFO [loop_until]: OK (rc = 0) 06:20:46 DEBUG --- stdout --- 06:20:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1337Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 97m 0% 6703Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 94m 0% 6817Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 100m 0% 6913Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1759m 11% 5082Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 611m 3% 2153Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1648m 10% 5137Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1066Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 3173m 19% 14396Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1659m 10% 14279Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1097Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1721m 10% 14301Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 419m 2% 2044Mi 3% 06:20:46 DEBUG --- stderr --- 06:20:46 DEBUG 06:21:44 INFO 06:21:44 INFO [loop_until]: kubectl --namespace=xlou top pods 06:21:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:21:45 INFO [loop_until]: OK (rc = 0) 06:21:45 DEBUG --- stdout --- 06:21:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 34m 5796Mi am-55f77847b7-nv9k2 36m 5690Mi am-55f77847b7-v7x55 40m 5691Mi ds-cts-0 7m 381Mi ds-cts-1 6m 376Mi ds-cts-2 6m 369Mi ds-idrepo-0 2323m 13726Mi ds-idrepo-1 1035m 13690Mi ds-idrepo-2 792m 13644Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1556m 3782Mi idm-65858d8c4c-zvhxh 1482m 3899Mi lodemon-97b6d75b7-fknft 6m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 334m 526Mi 06:21:45 DEBUG --- stderr --- 06:21:45 DEBUG 06:21:46 INFO 06:21:46 INFO [loop_until]: kubectl --namespace=xlou top node 06:21:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:21:46 INFO [loop_until]: OK (rc = 0) 06:21:46 DEBUG --- stdout --- 06:21:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1333Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 94m 0% 6703Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 93m 0% 6818Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 91m 0% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1800m 11% 5088Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 613m 3% 2155Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1590m 10% 5157Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1065Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2297m 14% 14334Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 781m 4% 14213Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1078m 6% 14264Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 400m 2% 2047Mi 3% 06:21:46 DEBUG --- stderr --- 06:21:46 DEBUG 
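The repeated "[loop_until]" entries in this log follow a simple poll-until-success pattern: re-run a kubectl command every `interval` seconds until it exits with one of the expected return codes or `max_time` seconds have elapsed, then dump its stdout/stderr at DEBUG level. A minimal Python sketch of that pattern is shown below; it is illustrative only, and the function name, signature, and return shape are assumptions rather than the monitor's actual implementation.

# Hypothetical sketch of the poll-until-success pattern seen in the
# "[loop_until]: (max_time=..., interval=..., expected_rc=[0]" entries above.
# Not the monitoring tool's actual code.
import subprocess
import time

def loop_until(command, max_time=180, interval=5, expected_rc=(0,)):
    """Re-run `command` every `interval` seconds until its return code is in
    `expected_rc` or `max_time` seconds have passed; return the last result."""
    deadline = time.monotonic() + max_time
    while True:
        result = subprocess.run(command, shell=True, capture_output=True, text=True)
        if result.returncode in expected_rc or time.monotonic() >= deadline:
            return result.returncode, result.stdout, result.stderr
        time.sleep(interval)

# Example usage mirroring the once-per-minute sampling in this log:
rc, out, err = loop_until("kubectl --namespace=xlou top pods")
print(out)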
06:22:45 INFO 06:22:45 INFO [loop_until]: kubectl --namespace=xlou top pods 06:22:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:22:45 INFO [loop_until]: OK (rc = 0) 06:22:45 DEBUG --- stdout --- 06:22:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 33m 5796Mi am-55f77847b7-nv9k2 40m 5690Mi am-55f77847b7-v7x55 36m 5691Mi ds-cts-0 7m 380Mi ds-cts-1 6m 375Mi ds-cts-2 7m 367Mi ds-idrepo-0 2888m 13780Mi ds-idrepo-1 723m 13734Mi ds-idrepo-2 574m 13683Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1629m 3788Mi idm-65858d8c4c-zvhxh 1485m 3904Mi lodemon-97b6d75b7-fknft 6m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 344m 527Mi 06:22:45 DEBUG --- stderr --- 06:22:45 DEBUG 06:22:46 INFO 06:22:46 INFO [loop_until]: kubectl --namespace=xlou top node 06:22:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:22:46 INFO [loop_until]: OK (rc = 0) 06:22:46 DEBUG --- stdout --- 06:22:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1336Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 97m 0% 6703Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 95m 0% 6815Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 90m 0% 6902Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1836m 11% 5095Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 605m 3% 2150Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1606m 10% 5150Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1064Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2788m 17% 14395Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1004m 6% 14256Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 822m 5% 14301Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 409m 2% 2047Mi 3% 06:22:46 DEBUG --- stderr --- 06:22:46 DEBUG 06:23:45 INFO 06:23:45 INFO [loop_until]: kubectl --namespace=xlou top pods 06:23:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:23:45 INFO [loop_until]: OK (rc = 0) 06:23:45 DEBUG --- stdout --- 06:23:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 33m 5796Mi am-55f77847b7-nv9k2 35m 5690Mi am-55f77847b7-v7x55 34m 5691Mi ds-cts-0 7m 380Mi ds-cts-1 6m 375Mi ds-cts-2 6m 367Mi ds-idrepo-0 2084m 13818Mi ds-idrepo-1 694m 13749Mi ds-idrepo-2 526m 13704Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1612m 3795Mi idm-65858d8c4c-zvhxh 1451m 3911Mi lodemon-97b6d75b7-fknft 5m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 327m 529Mi 06:23:45 DEBUG --- stderr --- 06:23:45 DEBUG 06:23:46 INFO 06:23:46 INFO [loop_until]: kubectl --namespace=xlou top node 06:23:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:23:46 INFO [loop_until]: OK (rc = 0) 06:23:46 DEBUG --- stdout --- 06:23:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1336Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 95m 0% 6700Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 96m 0% 6827Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 90m 0% 6908Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1718m 10% 5105Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 604m 3% 2150Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1571m 9% 5159Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1063Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2089m 13% 14432Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 589m 3% 14278Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1099Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-x4wx 730m 4% 14329Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 390m 2% 2049Mi 3% 06:23:46 DEBUG --- stderr --- 06:23:46 DEBUG 06:24:45 INFO 06:24:45 INFO [loop_until]: kubectl --namespace=xlou top pods 06:24:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:24:45 INFO [loop_until]: OK (rc = 0) 06:24:45 DEBUG --- stdout --- 06:24:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 35m 5797Mi am-55f77847b7-nv9k2 37m 5691Mi am-55f77847b7-v7x55 41m 5691Mi ds-cts-0 6m 380Mi ds-cts-1 6m 375Mi ds-cts-2 10m 368Mi ds-idrepo-0 2223m 13851Mi ds-idrepo-1 568m 13667Mi ds-idrepo-2 677m 13718Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1663m 3801Mi idm-65858d8c4c-zvhxh 1582m 3916Mi lodemon-97b6d75b7-fknft 6m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 352m 529Mi 06:24:45 DEBUG --- stderr --- 06:24:45 DEBUG 06:24:46 INFO 06:24:46 INFO [loop_until]: kubectl --namespace=xlou top node 06:24:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:24:46 INFO [loop_until]: OK (rc = 0) 06:24:46 DEBUG --- stdout --- 06:24:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1335Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 99m 0% 6703Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 99m 0% 6818Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 95m 0% 6901Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1791m 11% 5112Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 610m 3% 2151Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1604m 10% 5164Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1065Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 1995m 12% 14468Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1015m 6% 14305Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1097Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 732m 4% 14241Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 422m 2% 2047Mi 3% 06:24:46 DEBUG --- stderr --- 06:24:46 DEBUG 06:25:45 INFO 06:25:45 INFO [loop_until]: kubectl --namespace=xlou top pods 06:25:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:25:45 INFO [loop_until]: OK (rc = 0) 06:25:45 DEBUG --- stdout --- 06:25:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 34m 5797Mi am-55f77847b7-nv9k2 32m 5695Mi am-55f77847b7-v7x55 33m 5692Mi ds-cts-0 9m 380Mi ds-cts-1 6m 375Mi ds-cts-2 6m 368Mi ds-idrepo-0 1980m 13874Mi ds-idrepo-1 361m 13682Mi ds-idrepo-2 544m 13739Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1643m 3812Mi idm-65858d8c4c-zvhxh 1500m 3920Mi lodemon-97b6d75b7-fknft 6m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 333m 531Mi 06:25:45 DEBUG --- stderr --- 06:25:45 DEBUG 06:25:46 INFO 06:25:46 INFO [loop_until]: kubectl --namespace=xlou top node 06:25:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:25:46 INFO [loop_until]: OK (rc = 0) 06:25:46 DEBUG --- stdout --- 06:25:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1334Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 91m 0% 6707Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 94m 0% 6820Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 93m 0% 6903Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1780m 11% 5120Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 614m 3% 2152Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1580m 9% 5170Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1063Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-8bsn 2034m 12% 14484Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 594m 3% 14313Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 420m 2% 14262Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 404m 2% 2049Mi 3% 06:25:46 DEBUG --- stderr --- 06:25:46 DEBUG 06:26:45 INFO 06:26:45 INFO [loop_until]: kubectl --namespace=xlou top pods 06:26:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:26:45 INFO [loop_until]: OK (rc = 0) 06:26:45 DEBUG --- stdout --- 06:26:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 33m 5797Mi am-55f77847b7-nv9k2 34m 5695Mi am-55f77847b7-v7x55 31m 5692Mi ds-cts-0 6m 380Mi ds-cts-1 6m 375Mi ds-cts-2 6m 368Mi ds-idrepo-0 2632m 13823Mi ds-idrepo-1 743m 13696Mi ds-idrepo-2 608m 13755Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1662m 3818Mi idm-65858d8c4c-zvhxh 1449m 3926Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 350m 532Mi 06:26:45 DEBUG --- stderr --- 06:26:45 DEBUG 06:26:46 INFO 06:26:46 INFO [loop_until]: kubectl --namespace=xlou top node 06:26:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:26:46 INFO [loop_until]: OK (rc = 0) 06:26:46 DEBUG --- stdout --- 06:26:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1334Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 96m 0% 6710Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 89m 0% 6819Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 91m 0% 6903Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1703m 10% 5127Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 597m 3% 2158Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1592m 10% 5172Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1064Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2822m 17% 14448Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 732m 4% 14328Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 725m 4% 14276Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 416m 2% 2050Mi 3% 06:26:46 DEBUG --- stderr --- 06:26:46 DEBUG 06:27:45 INFO 06:27:45 INFO [loop_until]: kubectl --namespace=xlou top pods 06:27:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:27:45 INFO [loop_until]: OK (rc = 0) 06:27:45 DEBUG --- stdout --- 06:27:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 30m 5798Mi am-55f77847b7-nv9k2 32m 5695Mi am-55f77847b7-v7x55 33m 5692Mi ds-cts-0 7m 380Mi ds-cts-1 12m 375Mi ds-cts-2 7m 368Mi ds-idrepo-0 2275m 13665Mi ds-idrepo-1 1060m 13562Mi ds-idrepo-2 990m 13615Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1667m 3823Mi idm-65858d8c4c-zvhxh 1509m 3931Mi lodemon-97b6d75b7-fknft 5m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 340m 533Mi 06:27:45 DEBUG --- stderr --- 06:27:45 DEBUG 06:27:47 INFO 06:27:47 INFO [loop_until]: kubectl --namespace=xlou top node 06:27:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:27:47 INFO [loop_until]: OK (rc = 0) 06:27:47 DEBUG --- stdout --- 06:27:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1333Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 94m 0% 6707Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 91m 0% 6819Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 87m 0% 6905Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1768m 11% 5130Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 615m 3% 2152Mi 3% 
gke-xlou-cdm-default-pool-f05840a3-tnc9 1550m 9% 5179Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1065Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2343m 14% 14280Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 822m 5% 14167Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1125m 7% 14141Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 410m 2% 2053Mi 3% 06:27:47 DEBUG --- stderr --- 06:27:47 DEBUG 06:28:45 INFO 06:28:45 INFO [loop_until]: kubectl --namespace=xlou top pods 06:28:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:28:45 INFO [loop_until]: OK (rc = 0) 06:28:45 DEBUG --- stdout --- 06:28:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 32m 5798Mi am-55f77847b7-nv9k2 34m 5695Mi am-55f77847b7-v7x55 32m 5692Mi ds-cts-0 7m 380Mi ds-cts-1 5m 375Mi ds-cts-2 6m 369Mi ds-idrepo-0 2713m 13702Mi ds-idrepo-1 727m 13580Mi ds-idrepo-2 623m 13626Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1675m 3828Mi idm-65858d8c4c-zvhxh 1508m 3936Mi lodemon-97b6d75b7-fknft 1m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 336m 534Mi 06:28:45 DEBUG --- stderr --- 06:28:45 DEBUG 06:28:47 INFO 06:28:47 INFO [loop_until]: kubectl --namespace=xlou top node 06:28:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:28:47 INFO [loop_until]: OK (rc = 0) 06:28:47 DEBUG --- stdout --- 06:28:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1332Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 88m 0% 6707Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 91m 0% 6821Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 89m 0% 6902Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1816m 11% 5137Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 627m 3% 2150Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1628m 10% 5185Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1066Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2618m 16% 14326Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1001m 6% 14206Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 51m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 755m 4% 14159Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 413m 2% 2052Mi 3% 06:28:47 DEBUG --- stderr --- 06:28:47 DEBUG 06:29:45 INFO 06:29:45 INFO [loop_until]: kubectl --namespace=xlou top pods 06:29:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:29:45 INFO [loop_until]: OK (rc = 0) 06:29:45 DEBUG --- stdout --- 06:29:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 32m 5798Mi am-55f77847b7-nv9k2 34m 5695Mi am-55f77847b7-v7x55 32m 5692Mi ds-cts-0 8m 380Mi ds-cts-1 6m 375Mi ds-cts-2 7m 369Mi ds-idrepo-0 2301m 13739Mi ds-idrepo-1 874m 13581Mi ds-idrepo-2 520m 13640Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1643m 3833Mi idm-65858d8c4c-zvhxh 1533m 3941Mi lodemon-97b6d75b7-fknft 6m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 344m 535Mi 06:29:45 DEBUG --- stderr --- 06:29:45 DEBUG 06:29:47 INFO 06:29:47 INFO [loop_until]: kubectl --namespace=xlou top node 06:29:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:29:47 INFO [loop_until]: OK (rc = 0) 06:29:47 DEBUG --- stdout --- 06:29:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1330Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 96m 0% 6706Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 90m 0% 6818Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-9p4b 91m 0% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1756m 11% 5141Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 626m 3% 2149Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1656m 10% 5185Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1065Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2136m 13% 14364Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 583m 3% 14222Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 917m 5% 14169Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 409m 2% 2056Mi 3% 06:29:47 DEBUG --- stderr --- 06:29:47 DEBUG 06:30:45 INFO 06:30:45 INFO [loop_until]: kubectl --namespace=xlou top pods 06:30:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:30:45 INFO [loop_until]: OK (rc = 0) 06:30:45 DEBUG --- stdout --- 06:30:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 34m 5797Mi am-55f77847b7-nv9k2 34m 5695Mi am-55f77847b7-v7x55 36m 5692Mi ds-cts-0 17m 380Mi ds-cts-1 6m 376Mi ds-cts-2 6m 369Mi ds-idrepo-0 2716m 13784Mi ds-idrepo-1 907m 13601Mi ds-idrepo-2 913m 13652Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1695m 3839Mi idm-65858d8c4c-zvhxh 1464m 3947Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 327m 537Mi 06:30:45 DEBUG --- stderr --- 06:30:45 DEBUG 06:30:47 INFO 06:30:47 INFO [loop_until]: kubectl --namespace=xlou top node 06:30:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:30:47 INFO [loop_until]: OK (rc = 0) 06:30:47 DEBUG --- stdout --- 06:30:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1333Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 93m 0% 6706Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 92m 0% 6814Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 92m 0% 6901Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1820m 11% 5147Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 624m 3% 2163Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1600m 10% 5193Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1065Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2746m 17% 14415Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 890m 5% 14244Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 616m 3% 14195Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 402m 2% 2056Mi 3% 06:30:47 DEBUG --- stderr --- 06:30:47 DEBUG 06:31:45 INFO 06:31:45 INFO [loop_until]: kubectl --namespace=xlou top pods 06:31:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:31:46 INFO [loop_until]: OK (rc = 0) 06:31:46 DEBUG --- stdout --- 06:31:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 34m 5798Mi am-55f77847b7-nv9k2 34m 5695Mi am-55f77847b7-v7x55 34m 5692Mi ds-cts-0 7m 380Mi ds-cts-1 6m 375Mi ds-cts-2 7m 369Mi ds-idrepo-0 2044m 13825Mi ds-idrepo-1 649m 13619Mi ds-idrepo-2 589m 13679Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1631m 3846Mi idm-65858d8c4c-zvhxh 1506m 3952Mi lodemon-97b6d75b7-fknft 8m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 344m 537Mi 06:31:46 DEBUG --- stderr --- 06:31:46 DEBUG 06:31:47 INFO 06:31:47 INFO [loop_until]: kubectl --namespace=xlou top node 06:31:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:31:47 INFO [loop_until]: OK (rc = 0) 06:31:47 DEBUG --- stdout --- 06:31:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% 
gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1335Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 96m 0% 6702Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 92m 0% 6821Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 91m 0% 6903Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1770m 11% 5152Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 617m 3% 2154Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1613m 10% 5201Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1068Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2115m 13% 14451Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 579m 3% 14266Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 657m 4% 14208Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 406m 2% 2056Mi 3% 06:31:47 DEBUG --- stderr --- 06:31:47 DEBUG 06:32:46 INFO 06:32:46 INFO [loop_until]: kubectl --namespace=xlou top pods 06:32:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:32:46 INFO [loop_until]: OK (rc = 0) 06:32:46 DEBUG --- stdout --- 06:32:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 35m 5798Mi am-55f77847b7-nv9k2 39m 5695Mi am-55f77847b7-v7x55 32m 5692Mi ds-cts-0 6m 380Mi ds-cts-1 6m 375Mi ds-cts-2 8m 369Mi ds-idrepo-0 2817m 13828Mi ds-idrepo-1 750m 13642Mi ds-idrepo-2 1200m 13688Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1631m 3850Mi idm-65858d8c4c-zvhxh 1478m 3957Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 334m 539Mi 06:32:46 DEBUG --- stderr --- 06:32:46 DEBUG 06:32:47 INFO 06:32:47 INFO [loop_until]: kubectl --namespace=xlou top node 06:32:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:32:47 INFO [loop_until]: OK (rc = 0) 06:32:47 DEBUG --- stdout --- 06:32:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1334Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 98m 0% 6705Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 90m 0% 6821Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 93m 0% 6902Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1703m 10% 5161Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 608m 3% 2152Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1600m 10% 5205Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 67m 0% 1068Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2621m 16% 14459Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1090m 6% 14302Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 686m 4% 14236Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 399m 2% 2058Mi 3% 06:32:47 DEBUG --- stderr --- 06:32:47 DEBUG 06:33:46 INFO 06:33:46 INFO [loop_until]: kubectl --namespace=xlou top pods 06:33:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:33:46 INFO [loop_until]: OK (rc = 0) 06:33:46 DEBUG --- stdout --- 06:33:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 38m 5798Mi am-55f77847b7-nv9k2 36m 5695Mi am-55f77847b7-v7x55 34m 5692Mi ds-cts-0 6m 381Mi ds-cts-1 11m 376Mi ds-cts-2 7m 368Mi ds-idrepo-0 2050m 13857Mi ds-idrepo-1 820m 13655Mi ds-idrepo-2 501m 13718Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1688m 3856Mi idm-65858d8c4c-zvhxh 1499m 3963Mi lodemon-97b6d75b7-fknft 6m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 339m 539Mi 06:33:46 DEBUG --- stderr --- 06:33:46 DEBUG 06:33:47 INFO 06:33:47 INFO [loop_until]: kubectl --namespace=xlou top node 06:33:47 INFO [loop_until]: (max_time=180, interval=5, 
expected_rc=[0] 06:33:47 INFO [loop_until]: OK (rc = 0) 06:33:47 DEBUG --- stdout --- 06:33:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1336Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 95m 0% 6706Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 92m 0% 6819Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 94m 0% 6899Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1789m 11% 5165Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 621m 3% 2153Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1636m 10% 5206Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1066Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2102m 13% 14481Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 603m 3% 14302Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 854m 5% 14254Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 412m 2% 2070Mi 3% 06:33:47 DEBUG --- stderr --- 06:33:47 DEBUG 06:34:46 INFO 06:34:46 INFO [loop_until]: kubectl --namespace=xlou top pods 06:34:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:34:46 INFO [loop_until]: OK (rc = 0) 06:34:46 DEBUG --- stdout --- 06:34:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 33m 5798Mi am-55f77847b7-nv9k2 42m 5696Mi am-55f77847b7-v7x55 37m 5692Mi ds-cts-0 6m 381Mi ds-cts-1 6m 376Mi ds-cts-2 6m 368Mi ds-idrepo-0 2957m 13822Mi ds-idrepo-1 1209m 13668Mi ds-idrepo-2 790m 13743Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1609m 3863Mi idm-65858d8c4c-zvhxh 1480m 3968Mi lodemon-97b6d75b7-fknft 6m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 326m 541Mi 06:34:46 DEBUG --- stderr --- 06:34:46 DEBUG 06:34:47 INFO 06:34:47 INFO [loop_until]: kubectl --namespace=xlou top node 06:34:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:34:47 INFO [loop_until]: OK (rc = 0) 06:34:47 DEBUG --- stdout --- 06:34:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 83m 0% 1333Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 102m 0% 6708Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 93m 0% 6817Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 91m 0% 6899Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1787m 11% 5172Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 609m 3% 2149Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1563m 9% 5212Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1065Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 3162m 19% 14460Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 595m 3% 14321Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1093m 6% 14266Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 402m 2% 2059Mi 3% 06:34:47 DEBUG --- stderr --- 06:34:47 DEBUG 06:35:46 INFO 06:35:46 INFO [loop_until]: kubectl --namespace=xlou top pods 06:35:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:35:46 INFO [loop_until]: OK (rc = 0) 06:35:46 DEBUG --- stdout --- 06:35:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 33m 5798Mi am-55f77847b7-nv9k2 37m 5696Mi am-55f77847b7-v7x55 33m 5692Mi ds-cts-0 6m 382Mi ds-cts-1 6m 377Mi ds-cts-2 6m 368Mi ds-idrepo-0 2554m 13824Mi ds-idrepo-1 597m 13692Mi ds-idrepo-2 538m 13757Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 1674m 3868Mi idm-65858d8c4c-zvhxh 1508m 3975Mi lodemon-97b6d75b7-fknft 4m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 342m 543Mi 06:35:46 DEBUG --- stderr --- 
06:35:46 DEBUG 06:35:48 INFO 06:35:48 INFO [loop_until]: kubectl --namespace=xlou top node 06:35:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:35:48 INFO [loop_until]: OK (rc = 0) 06:35:48 DEBUG --- stdout --- 06:35:48 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1335Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 91m 0% 6707Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 91m 0% 6818Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 91m 0% 6901Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1735m 10% 5175Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 622m 3% 2155Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1626m 10% 5225Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1062Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2549m 16% 14452Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 599m 3% 14345Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 662m 4% 14290Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 412m 2% 2062Mi 3% 06:35:48 DEBUG --- stderr --- 06:35:48 DEBUG 06:36:46 INFO 06:36:46 INFO [loop_until]: kubectl --namespace=xlou top pods 06:36:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:36:46 INFO [loop_until]: OK (rc = 0) 06:36:46 DEBUG --- stdout --- 06:36:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 9m 5798Mi am-55f77847b7-nv9k2 7m 5696Mi am-55f77847b7-v7x55 9m 5692Mi ds-cts-0 7m 382Mi ds-cts-1 6m 376Mi ds-cts-2 9m 369Mi ds-idrepo-0 203m 13792Mi ds-idrepo-1 10m 13697Mi ds-idrepo-2 438m 13770Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 6m 3870Mi idm-65858d8c4c-zvhxh 5m 4017Mi lodemon-97b6d75b7-fknft 4m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 73m 136Mi 06:36:46 DEBUG --- stderr --- 06:36:46 DEBUG 06:36:48 INFO 06:36:48 INFO [loop_until]: kubectl --namespace=xlou top node 06:36:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:36:48 INFO [loop_until]: OK (rc = 0) 06:36:48 DEBUG --- stdout --- 06:36:48 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1335Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 6707Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 6816Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 5176Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 128m 0% 2152Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 66m 0% 5265Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1063Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 154m 0% 14428Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 274m 1% 14364Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 61m 0% 14294Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 133m 0% 1663Mi 2% 06:36:48 DEBUG --- stderr --- 06:36:48 DEBUG 06:37:46 INFO 06:37:46 INFO [loop_until]: kubectl --namespace=xlou top pods 06:37:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:37:46 INFO [loop_until]: OK (rc = 0) 06:37:46 DEBUG --- stdout --- 06:37:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 12m 5798Mi am-55f77847b7-nv9k2 6m 5696Mi am-55f77847b7-v7x55 9m 5692Mi ds-cts-0 6m 382Mi ds-cts-1 5m 376Mi ds-cts-2 6m 368Mi ds-idrepo-0 12m 13791Mi ds-idrepo-1 15m 13696Mi ds-idrepo-2 9m 13759Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 6m 3870Mi idm-65858d8c4c-zvhxh 5m 3979Mi lodemon-97b6d75b7-fknft 5m 
66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 1m 135Mi 06:37:46 DEBUG --- stderr --- 06:37:46 DEBUG 06:37:48 INFO 06:37:48 INFO [loop_until]: kubectl --namespace=xlou top node 06:37:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:37:48 INFO [loop_until]: OK (rc = 0) 06:37:48 DEBUG --- stdout --- 06:37:48 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1344Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 6705Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 6819Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 6901Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 70m 0% 5176Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 129m 0% 2155Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 65m 0% 5229Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1065Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 14427Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 58m 0% 14362Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 73m 0% 14298Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 68m 0% 1664Mi 2% 06:37:48 DEBUG --- stderr --- 06:37:48 DEBUG 127.0.0.1 - - [12/Aug/2023 06:37:57] "GET /monitoring/average?start_time=23-08-12_05:07:26&stop_time=23-08-12_05:35:57 HTTP/1.1" 200 - 06:38:46 INFO 06:38:46 INFO [loop_until]: kubectl --namespace=xlou top pods 06:38:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:38:46 INFO [loop_until]: OK (rc = 0) 06:38:46 DEBUG --- stdout --- 06:38:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 9m 5798Mi am-55f77847b7-nv9k2 7m 5696Mi am-55f77847b7-v7x55 9m 5692Mi ds-cts-0 8m 382Mi ds-cts-1 5m 376Mi ds-cts-2 7m 368Mi ds-idrepo-0 12m 13792Mi ds-idrepo-1 12m 13696Mi ds-idrepo-2 9m 13781Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 5m 3870Mi idm-65858d8c4c-zvhxh 5m 3979Mi lodemon-97b6d75b7-fknft 3m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 1266m 430Mi 06:38:46 DEBUG --- stderr --- 06:38:46 DEBUG 06:38:48 INFO 06:38:48 INFO [loop_until]: kubectl --namespace=xlou top node 06:38:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:38:48 INFO [loop_until]: OK (rc = 0) 06:38:48 DEBUG --- stdout --- 06:38:48 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1337Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 6706Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 6817Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 6899Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 80m 0% 5185Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 129m 0% 2149Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 67m 0% 5230Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1063Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 14430Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 58m 0% 14384Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 65m 0% 14296Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1840m 11% 1954Mi 3% 06:38:48 DEBUG --- stderr --- 06:38:48 DEBUG 06:39:46 INFO 06:39:46 INFO [loop_until]: kubectl --namespace=xlou top pods 06:39:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:39:46 INFO [loop_until]: OK (rc = 0) 06:39:46 DEBUG --- stdout --- 06:39:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 42m 5798Mi am-55f77847b7-nv9k2 45m 5696Mi am-55f77847b7-v7x55 38m 5687Mi ds-cts-0 7m 383Mi 
ds-cts-1 6m 376Mi ds-cts-2 6m 369Mi ds-idrepo-0 2321m 13823Mi ds-idrepo-1 2128m 13828Mi ds-idrepo-2 1116m 13839Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 977m 3896Mi idm-65858d8c4c-zvhxh 874m 4004Mi lodemon-97b6d75b7-fknft 5m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 449m 530Mi 06:39:46 DEBUG --- stderr --- 06:39:46 DEBUG 06:39:48 INFO 06:39:48 INFO [loop_until]: kubectl --namespace=xlou top node 06:39:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:39:48 INFO [loop_until]: OK (rc = 0) 06:39:48 DEBUG --- stdout --- 06:39:48 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1333Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 100m 0% 6707Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 97m 0% 6808Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 98m 0% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1089m 6% 5201Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 451m 2% 2148Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1006m 6% 5251Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1064Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2173m 13% 14473Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1587m 9% 14453Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1097Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2596m 16% 14429Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 511m 3% 2054Mi 3% 06:39:48 DEBUG --- stderr --- 06:39:48 DEBUG 06:40:47 INFO 06:40:47 INFO [loop_until]: kubectl --namespace=xlou top pods 06:40:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:40:47 INFO [loop_until]: OK (rc = 0) 06:40:47 DEBUG --- stdout --- 06:40:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 38m 5799Mi am-55f77847b7-nv9k2 39m 5696Mi am-55f77847b7-v7x55 37m 5687Mi ds-cts-0 5m 382Mi ds-cts-1 6m 376Mi ds-cts-2 6m 368Mi ds-idrepo-0 2223m 13820Mi ds-idrepo-1 1249m 13815Mi ds-idrepo-2 1430m 13801Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 996m 3931Mi idm-65858d8c4c-zvhxh 901m 4031Mi lodemon-97b6d75b7-fknft 6m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 245m 547Mi 06:40:47 DEBUG --- stderr --- 06:40:47 DEBUG 06:40:48 INFO 06:40:48 INFO [loop_until]: kubectl --namespace=xlou top node 06:40:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:40:48 INFO [loop_until]: OK (rc = 0) 06:40:48 DEBUG --- stdout --- 06:40:48 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1337Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 99m 0% 6707Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 98m 0% 6812Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 95m 0% 6903Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1128m 7% 5230Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 474m 2% 2152Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1013m 6% 5282Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1064Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2099m 13% 14460Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1395m 8% 14464Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1335m 8% 14433Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 305m 1% 2066Mi 3% 06:40:48 DEBUG --- stderr --- 06:40:48 DEBUG 06:41:47 INFO 06:41:47 INFO [loop_until]: kubectl --namespace=xlou top pods 06:41:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:41:47 INFO [loop_until]: OK (rc = 0) 06:41:47 DEBUG --- stdout --- 06:41:47 DEBUG NAME CPU(cores) MEMORY(bytes) 
admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 38m 5799Mi am-55f77847b7-nv9k2 42m 5696Mi am-55f77847b7-v7x55 40m 5687Mi ds-cts-0 6m 382Mi ds-cts-1 7m 376Mi ds-cts-2 7m 369Mi ds-idrepo-0 2176m 13834Mi ds-idrepo-1 1355m 13823Mi ds-idrepo-2 1924m 13816Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 954m 3929Mi idm-65858d8c4c-zvhxh 838m 4040Mi lodemon-97b6d75b7-fknft 6m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 239m 547Mi 06:41:47 DEBUG --- stderr --- 06:41:47 DEBUG 06:41:48 INFO 06:41:48 INFO [loop_until]: kubectl --namespace=xlou top node 06:41:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:41:48 INFO [loop_until]: OK (rc = 0) 06:41:48 DEBUG --- stdout --- 06:41:48 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1336Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 102m 0% 6708Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 97m 0% 6814Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 97m 0% 6904Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1074m 6% 5235Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 464m 2% 2154Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 958m 6% 5289Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1063Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2142m 13% 14463Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2104m 13% 14421Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1331m 8% 14445Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 299m 1% 2069Mi 3% 06:41:48 DEBUG --- stderr --- 06:41:48 DEBUG 06:42:47 INFO 06:42:47 INFO [loop_until]: kubectl --namespace=xlou top pods 06:42:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:42:47 INFO [loop_until]: OK (rc = 0) 06:42:47 DEBUG --- stdout --- 06:42:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 39m 5799Mi am-55f77847b7-nv9k2 42m 5696Mi am-55f77847b7-v7x55 41m 5688Mi ds-cts-0 5m 383Mi ds-cts-1 6m 376Mi ds-cts-2 6m 369Mi ds-idrepo-0 3240m 13807Mi ds-idrepo-1 1134m 13846Mi ds-idrepo-2 1264m 13793Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 957m 3935Mi idm-65858d8c4c-zvhxh 866m 4046Mi lodemon-97b6d75b7-fknft 6m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 230m 548Mi 06:42:47 DEBUG --- stderr --- 06:42:47 DEBUG 06:42:49 INFO 06:42:49 INFO [loop_until]: kubectl --namespace=xlou top node 06:42:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:42:49 INFO [loop_until]: OK (rc = 0) 06:42:49 DEBUG --- stdout --- 06:42:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1333Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 95m 0% 6708Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 97m 0% 6813Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 94m 0% 6902Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1058m 6% 5243Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 475m 2% 2147Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 953m 5% 5291Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 51m 0% 1064Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 3239m 20% 14455Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1399m 8% 14467Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1113m 7% 14447Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 306m 1% 2070Mi 3% 06:42:49 DEBUG --- stderr --- 06:42:49 DEBUG 06:43:47 INFO 06:43:47 INFO [loop_until]: kubectl --namespace=xlou top pods 06:43:47 INFO [loop_until]: (max_time=180, 
interval=5, expected_rc=[0] 06:43:47 INFO [loop_until]: OK (rc = 0) 06:43:47 DEBUG --- stdout --- 06:43:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 39m 5799Mi am-55f77847b7-nv9k2 42m 5696Mi am-55f77847b7-v7x55 46m 5691Mi ds-cts-0 7m 382Mi ds-cts-1 5m 377Mi ds-cts-2 6m 369Mi ds-idrepo-0 1709m 13819Mi ds-idrepo-1 1126m 13847Mi ds-idrepo-2 1132m 13805Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 905m 3940Mi idm-65858d8c4c-zvhxh 838m 4050Mi lodemon-97b6d75b7-fknft 6m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 234m 548Mi 06:43:47 DEBUG --- stderr --- 06:43:47 DEBUG 06:43:49 INFO 06:43:49 INFO [loop_until]: kubectl --namespace=xlou top node 06:43:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:43:49 INFO [loop_until]: OK (rc = 0) 06:43:49 DEBUG --- stdout --- 06:43:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1337Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 101m 0% 6708Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 107m 0% 6818Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 98m 0% 6902Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1037m 6% 5247Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 455m 2% 2154Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 948m 5% 5298Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1065Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 1674m 10% 14466Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1180m 7% 14423Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1173m 7% 14454Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 304m 1% 2068Mi 3% 06:43:49 DEBUG --- stderr --- 06:43:49 DEBUG 06:44:47 INFO 06:44:47 INFO [loop_until]: kubectl --namespace=xlou top pods 06:44:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:44:47 INFO [loop_until]: OK (rc = 0) 06:44:47 DEBUG --- stdout --- 06:44:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 39m 5799Mi am-55f77847b7-nv9k2 42m 5696Mi am-55f77847b7-v7x55 39m 5691Mi ds-cts-0 6m 382Mi ds-cts-1 6m 376Mi ds-cts-2 7m 369Mi ds-idrepo-0 2410m 13782Mi ds-idrepo-1 1123m 13820Mi ds-idrepo-2 1459m 13642Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 919m 3945Mi idm-65858d8c4c-zvhxh 821m 4059Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 237m 548Mi 06:44:47 DEBUG --- stderr --- 06:44:47 DEBUG 06:44:49 INFO 06:44:49 INFO [loop_until]: kubectl --namespace=xlou top node 06:44:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:44:49 INFO [loop_until]: OK (rc = 0) 06:44:49 DEBUG --- stdout --- 06:44:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1336Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 102m 0% 6707Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 102m 0% 6819Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 98m 0% 6902Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1038m 6% 5252Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 469m 2% 2144Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 931m 5% 5307Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1067Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2321m 14% 14431Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1650m 10% 14470Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1199m 7% 14458Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 296m 1% 2067Mi 3% 06:44:49 DEBUG --- 
stderr --- 06:44:49 DEBUG 06:45:47 INFO 06:45:47 INFO [loop_until]: kubectl --namespace=xlou top pods 06:45:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:45:47 INFO [loop_until]: OK (rc = 0) 06:45:47 DEBUG --- stdout --- 06:45:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 42m 5799Mi am-55f77847b7-nv9k2 44m 5696Mi am-55f77847b7-v7x55 40m 5691Mi ds-cts-0 6m 382Mi ds-cts-1 6m 376Mi ds-cts-2 6m 368Mi ds-idrepo-0 2480m 13681Mi ds-idrepo-1 1215m 13800Mi ds-idrepo-2 1021m 13784Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 955m 3951Mi idm-65858d8c4c-zvhxh 823m 4064Mi lodemon-97b6d75b7-fknft 7m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 241m 549Mi 06:45:47 DEBUG --- stderr --- 06:45:47 DEBUG 06:45:49 INFO 06:45:49 INFO [loop_until]: kubectl --namespace=xlou top node 06:45:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:45:49 INFO [loop_until]: OK (rc = 0) 06:45:49 DEBUG --- stdout --- 06:45:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1338Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 103m 0% 6708Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 101m 0% 6816Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 96m 0% 6901Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1029m 6% 5259Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 475m 2% 2157Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 944m 5% 5313Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1066Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2439m 15% 14321Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1186m 7% 14465Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1585m 9% 14494Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 311m 1% 2065Mi 3% 06:45:49 DEBUG --- stderr --- 06:45:49 DEBUG 06:46:47 INFO 06:46:47 INFO [loop_until]: kubectl --namespace=xlou top pods 06:46:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:46:47 INFO [loop_until]: OK (rc = 0) 06:46:47 DEBUG --- stdout --- 06:46:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 39m 5799Mi am-55f77847b7-nv9k2 43m 5696Mi am-55f77847b7-v7x55 41m 5691Mi ds-cts-0 8m 383Mi ds-cts-1 6m 376Mi ds-cts-2 6m 369Mi ds-idrepo-0 1688m 13809Mi ds-idrepo-1 1360m 13818Mi ds-idrepo-2 1434m 13796Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 892m 3955Mi idm-65858d8c4c-zvhxh 863m 4069Mi lodemon-97b6d75b7-fknft 7m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 227m 549Mi 06:46:47 DEBUG --- stderr --- 06:46:47 DEBUG 06:46:49 INFO 06:46:49 INFO [loop_until]: kubectl --namespace=xlou top node 06:46:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:46:49 INFO [loop_until]: OK (rc = 0) 06:46:49 DEBUG --- stdout --- 06:46:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 71m 0% 1335Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 99m 0% 6709Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 101m 0% 6815Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 98m 0% 6903Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1034m 6% 5265Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 476m 2% 2153Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 948m 5% 5319Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1067Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 1921m 12% 14482Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1249m 7% 14426Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 
55m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1182m 7% 14458Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 296m 1% 2068Mi 3% 06:46:49 DEBUG --- stderr --- 06:46:49 DEBUG 06:47:47 INFO 06:47:47 INFO [loop_until]: kubectl --namespace=xlou top pods 06:47:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:47:47 INFO [loop_until]: OK (rc = 0) 06:47:47 DEBUG --- stdout --- 06:47:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 41m 5799Mi am-55f77847b7-nv9k2 44m 5697Mi am-55f77847b7-v7x55 45m 5691Mi ds-cts-0 11m 382Mi ds-cts-1 6m 376Mi ds-cts-2 6m 369Mi ds-idrepo-0 1860m 13853Mi ds-idrepo-1 1699m 13791Mi ds-idrepo-2 1100m 13781Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 918m 3960Mi idm-65858d8c4c-zvhxh 816m 4073Mi lodemon-97b6d75b7-fknft 6m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 228m 549Mi 06:47:47 DEBUG --- stderr --- 06:47:47 DEBUG 06:47:49 INFO 06:47:49 INFO [loop_until]: kubectl --namespace=xlou top node 06:47:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:47:49 INFO [loop_until]: OK (rc = 0) 06:47:49 DEBUG --- stdout --- 06:47:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1336Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 103m 0% 6710Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 106m 0% 6816Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 97m 0% 6902Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1043m 6% 5269Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 464m 2% 2148Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 906m 5% 5325Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1066Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 1828m 11% 14470Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1218m 7% 14414Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1701m 10% 14454Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 293m 1% 2067Mi 3% 06:47:49 DEBUG --- stderr --- 06:47:49 DEBUG 06:48:47 INFO 06:48:47 INFO [loop_until]: kubectl --namespace=xlou top pods 06:48:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:48:47 INFO [loop_until]: OK (rc = 0) 06:48:47 DEBUG --- stdout --- 06:48:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 41m 5801Mi am-55f77847b7-nv9k2 40m 5697Mi am-55f77847b7-v7x55 39m 5692Mi ds-cts-0 6m 382Mi ds-cts-1 6m 376Mi ds-cts-2 7m 369Mi ds-idrepo-0 1907m 13823Mi ds-idrepo-1 992m 13807Mi ds-idrepo-2 1899m 13791Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 948m 3967Mi idm-65858d8c4c-zvhxh 865m 4079Mi lodemon-97b6d75b7-fknft 7m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 236m 549Mi 06:48:47 DEBUG --- stderr --- 06:48:47 DEBUG 06:48:49 INFO 06:48:49 INFO [loop_until]: kubectl --namespace=xlou top node 06:48:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:48:49 INFO [loop_until]: OK (rc = 0) 06:48:49 DEBUG --- stdout --- 06:48:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1335Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 99m 0% 6708Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 99m 0% 6816Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 99m 0% 6907Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1024m 6% 5265Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 475m 2% 2152Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 960m 6% 5332Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 
1064Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2021m 12% 14465Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2270m 14% 14418Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1097Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1081m 6% 14434Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 308m 1% 2069Mi 3% 06:48:49 DEBUG --- stderr --- 06:48:49 DEBUG 06:49:47 INFO 06:49:47 INFO [loop_until]: kubectl --namespace=xlou top pods 06:49:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:49:47 INFO [loop_until]: OK (rc = 0) 06:49:47 DEBUG --- stdout --- 06:49:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 39m 5801Mi am-55f77847b7-nv9k2 42m 5697Mi am-55f77847b7-v7x55 40m 5692Mi ds-cts-0 8m 382Mi ds-cts-1 6m 376Mi ds-cts-2 8m 369Mi ds-idrepo-0 2482m 13784Mi ds-idrepo-1 1689m 13789Mi ds-idrepo-2 1400m 13790Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 969m 3965Mi idm-65858d8c4c-zvhxh 872m 4083Mi lodemon-97b6d75b7-fknft 7m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 253m 549Mi 06:49:47 DEBUG --- stderr --- 06:49:47 DEBUG 06:49:49 INFO 06:49:49 INFO [loop_until]: kubectl --namespace=xlou top node 06:49:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:49:49 INFO [loop_until]: OK (rc = 0) 06:49:49 DEBUG --- stdout --- 06:49:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1348Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 100m 0% 6709Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 103m 0% 6819Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 92m 0% 6907Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1078m 6% 5273Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 483m 3% 2157Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 970m 6% 5333Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1069Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2466m 15% 14457Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1587m 9% 14431Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1826m 11% 14434Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 306m 1% 2070Mi 3% 06:49:49 DEBUG --- stderr --- 06:49:49 DEBUG 06:50:48 INFO 06:50:48 INFO [loop_until]: kubectl --namespace=xlou top pods 06:50:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:50:48 INFO [loop_until]: OK (rc = 0) 06:50:48 DEBUG --- stdout --- 06:50:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 40m 5801Mi am-55f77847b7-nv9k2 42m 5697Mi am-55f77847b7-v7x55 42m 5692Mi ds-cts-0 7m 383Mi ds-cts-1 6m 376Mi ds-cts-2 6m 371Mi ds-idrepo-0 1881m 13846Mi ds-idrepo-1 2152m 13824Mi ds-idrepo-2 1147m 13853Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 919m 3970Mi idm-65858d8c4c-zvhxh 848m 4087Mi lodemon-97b6d75b7-fknft 6m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 237m 549Mi 06:50:48 DEBUG --- stderr --- 06:50:48 DEBUG 06:50:50 INFO 06:50:50 INFO [loop_until]: kubectl --namespace=xlou top node 06:50:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:50:50 INFO [loop_until]: OK (rc = 0) 06:50:50 DEBUG --- stdout --- 06:50:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1339Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 104m 0% 6721Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 100m 0% 6820Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 98m 0% 6907Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1090m 6% 5276Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 481m 3% 2156Mi 3% 
gke-xlou-cdm-default-pool-f05840a3-tnc9 925m 5% 5339Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1070Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 1896m 11% 14483Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1011m 6% 14444Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1773m 11% 14456Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 302m 1% 2068Mi 3% 06:50:50 DEBUG --- stderr --- 06:50:50 DEBUG 06:51:48 INFO 06:51:48 INFO [loop_until]: kubectl --namespace=xlou top pods 06:51:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:51:48 INFO [loop_until]: OK (rc = 0) 06:51:48 DEBUG --- stdout --- 06:51:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 41m 5801Mi am-55f77847b7-nv9k2 41m 5697Mi am-55f77847b7-v7x55 42m 5692Mi ds-cts-0 6m 383Mi ds-cts-1 6m 376Mi ds-cts-2 10m 371Mi ds-idrepo-0 2720m 13844Mi ds-idrepo-1 1154m 13826Mi ds-idrepo-2 1045m 13804Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 940m 3973Mi idm-65858d8c4c-zvhxh 839m 4091Mi lodemon-97b6d75b7-fknft 6m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 237m 549Mi 06:51:48 DEBUG --- stderr --- 06:51:48 DEBUG 06:51:50 INFO 06:51:50 INFO [loop_until]: kubectl --namespace=xlou top node 06:51:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:51:50 INFO [loop_until]: OK (rc = 0) 06:51:50 DEBUG --- stdout --- 06:51:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1338Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 97m 0% 6712Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 102m 0% 6820Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 99m 0% 6905Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1016m 6% 5283Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 472m 2% 2155Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 957m 6% 5341Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1068Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2545m 16% 14460Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 935m 5% 14456Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1307m 8% 14496Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 307m 1% 2069Mi 3% 06:51:50 DEBUG --- stderr --- 06:51:50 DEBUG 06:52:48 INFO 06:52:48 INFO [loop_until]: kubectl --namespace=xlou top pods 06:52:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:52:48 INFO [loop_until]: OK (rc = 0) 06:52:48 DEBUG --- stdout --- 06:52:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 43m 5801Mi am-55f77847b7-nv9k2 43m 5697Mi am-55f77847b7-v7x55 39m 5692Mi ds-cts-0 6m 383Mi ds-cts-1 6m 376Mi ds-cts-2 10m 371Mi ds-idrepo-0 1902m 13804Mi ds-idrepo-1 1884m 13648Mi ds-idrepo-2 1106m 13849Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 934m 3979Mi idm-65858d8c4c-zvhxh 854m 4097Mi lodemon-97b6d75b7-fknft 5m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 230m 549Mi 06:52:48 DEBUG --- stderr --- 06:52:48 DEBUG 06:52:50 INFO 06:52:50 INFO [loop_until]: kubectl --namespace=xlou top node 06:52:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:52:50 INFO [loop_until]: OK (rc = 0) 06:52:50 DEBUG --- stdout --- 06:52:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 71m 0% 1336Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 100m 0% 6711Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 98m 0% 6823Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-9p4b 99m 0% 6905Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1045m 6% 5288Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 476m 2% 2153Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 978m 6% 5348Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1068Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2087m 13% 14471Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1573m 9% 14345Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1620m 10% 14466Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 297m 1% 2069Mi 3% 06:52:50 DEBUG --- stderr --- 06:52:50 DEBUG 06:53:48 INFO 06:53:48 INFO [loop_until]: kubectl --namespace=xlou top pods 06:53:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:53:48 INFO [loop_until]: OK (rc = 0) 06:53:48 DEBUG --- stdout --- 06:53:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 40m 5801Mi am-55f77847b7-nv9k2 44m 5697Mi am-55f77847b7-v7x55 42m 5692Mi ds-cts-0 5m 383Mi ds-cts-1 7m 378Mi ds-cts-2 9m 371Mi ds-idrepo-0 1863m 13824Mi ds-idrepo-1 1337m 13725Mi ds-idrepo-2 1413m 13702Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 923m 3983Mi idm-65858d8c4c-zvhxh 882m 4101Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 241m 549Mi 06:53:48 DEBUG --- stderr --- 06:53:48 DEBUG 06:53:50 INFO 06:53:50 INFO [loop_until]: kubectl --namespace=xlou top node 06:53:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:53:50 INFO [loop_until]: OK (rc = 0) 06:53:50 DEBUG --- stdout --- 06:53:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1338Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 104m 0% 6707Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 100m 0% 6821Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 96m 0% 6905Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1042m 6% 5296Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 475m 2% 2154Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 941m 5% 5352Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1067Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 1927m 12% 14448Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1772m 11% 14339Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1464m 9% 14358Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 308m 1% 2069Mi 3% 06:53:50 DEBUG --- stderr --- 06:53:50 DEBUG 06:54:48 INFO 06:54:48 INFO [loop_until]: kubectl --namespace=xlou top pods 06:54:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:54:48 INFO [loop_until]: OK (rc = 0) 06:54:48 DEBUG --- stdout --- 06:54:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 40m 5801Mi am-55f77847b7-nv9k2 52m 5697Mi am-55f77847b7-v7x55 41m 5692Mi ds-cts-0 6m 383Mi ds-cts-1 6m 377Mi ds-cts-2 6m 371Mi ds-idrepo-0 2518m 13823Mi ds-idrepo-1 1279m 13823Mi ds-idrepo-2 672m 13787Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 906m 3987Mi idm-65858d8c4c-zvhxh 799m 4101Mi lodemon-97b6d75b7-fknft 6m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 223m 550Mi 06:54:48 DEBUG --- stderr --- 06:54:48 DEBUG 06:54:50 INFO 06:54:50 INFO [loop_until]: kubectl --namespace=xlou top node 06:54:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:54:50 INFO [loop_until]: OK (rc = 0) 06:54:50 DEBUG --- stdout --- 06:54:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% 
gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1333Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 113m 0% 6709Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 102m 0% 6817Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 96m 0% 6905Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1010m 6% 5299Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 454m 2% 2153Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 931m 5% 5353Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 52m 0% 1067Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2464m 15% 14471Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1181m 7% 14427Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1143m 7% 14461Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 295m 1% 2070Mi 3% 06:54:50 DEBUG --- stderr --- 06:54:50 DEBUG 06:55:48 INFO 06:55:48 INFO [loop_until]: kubectl --namespace=xlou top pods 06:55:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:55:48 INFO [loop_until]: OK (rc = 0) 06:55:48 DEBUG --- stdout --- 06:55:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 42m 5801Mi am-55f77847b7-nv9k2 45m 5697Mi am-55f77847b7-v7x55 48m 5694Mi ds-cts-0 6m 383Mi ds-cts-1 6m 376Mi ds-cts-2 7m 371Mi ds-idrepo-0 1568m 13835Mi ds-idrepo-1 1659m 13788Mi ds-idrepo-2 1532m 13805Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 908m 3993Mi idm-65858d8c4c-zvhxh 818m 4105Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 228m 550Mi 06:55:48 DEBUG --- stderr --- 06:55:48 DEBUG 06:55:50 INFO 06:55:50 INFO [loop_until]: kubectl --namespace=xlou top node 06:55:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:55:50 INFO [loop_until]: OK (rc = 0) 06:55:50 DEBUG --- stdout --- 06:55:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1336Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 101m 0% 6710Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 111m 0% 6819Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 101m 0% 6905Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1049m 6% 5299Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 471m 2% 2154Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 925m 5% 5355Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1066Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 1552m 9% 14490Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1853m 11% 14448Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1765m 11% 14429Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 289m 1% 2070Mi 3% 06:55:50 DEBUG --- stderr --- 06:55:50 DEBUG 06:56:48 INFO 06:56:48 INFO [loop_until]: kubectl --namespace=xlou top pods 06:56:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:56:48 INFO [loop_until]: OK (rc = 0) 06:56:48 DEBUG --- stdout --- 06:56:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 40m 5801Mi am-55f77847b7-nv9k2 43m 5698Mi am-55f77847b7-v7x55 45m 5694Mi ds-cts-0 6m 383Mi ds-cts-1 6m 377Mi ds-cts-2 5m 371Mi ds-idrepo-0 1771m 13820Mi ds-idrepo-1 1918m 13811Mi ds-idrepo-2 912m 13824Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 890m 3997Mi idm-65858d8c4c-zvhxh 819m 4110Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 221m 550Mi 06:56:48 DEBUG --- stderr --- 06:56:48 DEBUG 06:56:50 INFO 06:56:50 INFO [loop_until]: kubectl --namespace=xlou top node 06:56:50 INFO [loop_until]: (max_time=180, interval=5, 
expected_rc=[0] 06:56:50 INFO [loop_until]: OK (rc = 0) 06:56:50 DEBUG --- stdout --- 06:56:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1332Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 107m 0% 6708Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 99m 0% 6823Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 96m 0% 6907Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1002m 6% 5305Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 450m 2% 2151Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 894m 5% 5360Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1067Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 1883m 11% 14479Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1113m 7% 14468Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1758m 11% 14502Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 299m 1% 2070Mi 3% 06:56:50 DEBUG --- stderr --- 06:56:50 DEBUG 06:57:48 INFO 06:57:48 INFO [loop_until]: kubectl --namespace=xlou top pods 06:57:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:57:48 INFO [loop_until]: OK (rc = 0) 06:57:48 DEBUG --- stdout --- 06:57:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 39m 5802Mi am-55f77847b7-nv9k2 42m 5698Mi am-55f77847b7-v7x55 40m 5694Mi ds-cts-0 5m 383Mi ds-cts-1 6m 376Mi ds-cts-2 7m 371Mi ds-idrepo-0 2264m 13788Mi ds-idrepo-1 1346m 13830Mi ds-idrepo-2 1799m 13798Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 931m 4001Mi idm-65858d8c4c-zvhxh 832m 4114Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 232m 551Mi 06:57:48 DEBUG --- stderr --- 06:57:48 DEBUG 06:57:50 INFO 06:57:50 INFO [loop_until]: kubectl --namespace=xlou top node 06:57:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:57:50 INFO [loop_until]: OK (rc = 0) 06:57:50 DEBUG --- stdout --- 06:57:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1336Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 97m 0% 6709Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 101m 0% 6822Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 96m 0% 6908Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1030m 6% 5307Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 471m 2% 2149Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 944m 5% 5366Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1067Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2044m 12% 14438Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1948m 12% 14438Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1526m 9% 14502Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 307m 1% 2071Mi 3% 06:57:50 DEBUG --- stderr --- 06:57:50 DEBUG 06:58:48 INFO 06:58:48 INFO [loop_until]: kubectl --namespace=xlou top pods 06:58:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:58:48 INFO [loop_until]: OK (rc = 0) 06:58:48 DEBUG --- stdout --- 06:58:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 39m 5802Mi am-55f77847b7-nv9k2 44m 5698Mi am-55f77847b7-v7x55 42m 5694Mi ds-cts-0 7m 383Mi ds-cts-1 6m 377Mi ds-cts-2 7m 371Mi ds-idrepo-0 1823m 13864Mi ds-idrepo-1 1547m 13779Mi ds-idrepo-2 828m 13745Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 938m 4006Mi idm-65858d8c4c-zvhxh 826m 4119Mi lodemon-97b6d75b7-fknft 8m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 237m 550Mi 06:58:48 DEBUG --- stderr --- 
06:58:48 DEBUG 06:58:50 INFO 06:58:50 INFO [loop_until]: kubectl --namespace=xlou top node 06:58:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:58:50 INFO [loop_until]: OK (rc = 0) 06:58:50 DEBUG --- stdout --- 06:58:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1335Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 104m 0% 6710Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 100m 0% 6822Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 95m 0% 6907Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1025m 6% 5312Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 456m 2% 2148Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 949m 5% 5369Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1069Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 1936m 12% 14467Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1088m 6% 14412Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1397m 8% 14467Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 308m 1% 2067Mi 3% 06:58:50 DEBUG --- stderr --- 06:58:50 DEBUG 06:59:48 INFO 06:59:48 INFO [loop_until]: kubectl --namespace=xlou top pods 06:59:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:59:49 INFO [loop_until]: OK (rc = 0) 06:59:49 DEBUG --- stdout --- 06:59:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 40m 5802Mi am-55f77847b7-nv9k2 42m 5698Mi am-55f77847b7-v7x55 39m 5694Mi ds-cts-0 6m 383Mi ds-cts-1 6m 376Mi ds-cts-2 5m 371Mi ds-idrepo-0 2547m 13745Mi ds-idrepo-1 1401m 13759Mi ds-idrepo-2 1000m 13823Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 920m 4010Mi idm-65858d8c4c-zvhxh 824m 4122Mi lodemon-97b6d75b7-fknft 6m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 231m 551Mi 06:59:49 DEBUG --- stderr --- 06:59:49 DEBUG 06:59:50 INFO 06:59:50 INFO [loop_until]: kubectl --namespace=xlou top node 06:59:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:59:51 INFO [loop_until]: OK (rc = 0) 06:59:51 DEBUG --- stdout --- 06:59:51 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1337Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 102m 0% 6712Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 101m 0% 6832Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 96m 0% 6906Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1034m 6% 5314Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 470m 2% 2150Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 949m 5% 5376Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 52m 0% 1070Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2691m 16% 14405Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 979m 6% 14473Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1671m 10% 14447Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 298m 1% 2069Mi 3% 06:59:51 DEBUG --- stderr --- 06:59:51 DEBUG 07:00:49 INFO 07:00:49 INFO [loop_until]: kubectl --namespace=xlou top pods 07:00:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:00:49 INFO [loop_until]: OK (rc = 0) 07:00:49 DEBUG --- stdout --- 07:00:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 41m 5802Mi am-55f77847b7-nv9k2 43m 5698Mi am-55f77847b7-v7x55 40m 5694Mi ds-cts-0 6m 383Mi ds-cts-1 6m 376Mi ds-cts-2 6m 371Mi ds-idrepo-0 1783m 13826Mi ds-idrepo-1 1047m 13823Mi ds-idrepo-2 1021m 13857Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 938m 4016Mi idm-65858d8c4c-zvhxh 857m 
4129Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 240m 551Mi 07:00:49 DEBUG --- stderr --- 07:00:49 DEBUG 07:00:51 INFO 07:00:51 INFO [loop_until]: kubectl --namespace=xlou top node 07:00:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:00:51 INFO [loop_until]: OK (rc = 0) 07:00:51 DEBUG --- stdout --- 07:00:51 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1335Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 103m 0% 6712Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 102m 0% 6821Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 98m 0% 6906Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1013m 6% 5323Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 464m 2% 2153Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 950m 5% 5381Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1070Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 1834m 11% 14478Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1043m 6% 14458Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1043m 6% 14465Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 298m 1% 2068Mi 3% 07:00:51 DEBUG --- stderr --- 07:00:51 DEBUG 07:01:49 INFO 07:01:49 INFO [loop_until]: kubectl --namespace=xlou top pods 07:01:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:01:49 INFO [loop_until]: OK (rc = 0) 07:01:49 DEBUG --- stdout --- 07:01:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 41m 5802Mi am-55f77847b7-nv9k2 51m 5699Mi am-55f77847b7-v7x55 40m 5694Mi ds-cts-0 5m 383Mi ds-cts-1 6m 377Mi ds-cts-2 6m 371Mi ds-idrepo-0 2209m 13739Mi ds-idrepo-1 1120m 13819Mi ds-idrepo-2 2072m 13753Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 951m 4020Mi idm-65858d8c4c-zvhxh 857m 4138Mi lodemon-97b6d75b7-fknft 8m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 227m 551Mi 07:01:49 DEBUG --- stderr --- 07:01:49 DEBUG 07:01:51 INFO 07:01:51 INFO [loop_until]: kubectl --namespace=xlou top node 07:01:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:01:51 INFO [loop_until]: OK (rc = 0) 07:01:51 DEBUG --- stdout --- 07:01:51 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 71m 0% 1337Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 107m 0% 6711Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 98m 0% 6822Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 103m 0% 6907Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1042m 6% 5330Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 473m 2% 2152Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 960m 6% 5389Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 51m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1069Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2576m 16% 14384Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1317m 8% 14401Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1235m 7% 14452Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 298m 1% 2070Mi 3% 07:01:51 DEBUG --- stderr --- 07:01:51 DEBUG 07:02:49 INFO 07:02:49 INFO [loop_until]: kubectl --namespace=xlou top pods 07:02:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:02:49 INFO [loop_until]: OK (rc = 0) 07:02:49 DEBUG --- stdout --- 07:02:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 40m 5802Mi am-55f77847b7-nv9k2 42m 5698Mi am-55f77847b7-v7x55 41m 5694Mi ds-cts-0 7m 383Mi ds-cts-1 7m 376Mi ds-cts-2 5m 371Mi ds-idrepo-0 4569m 13805Mi 
ds-idrepo-1 3285m 13830Mi ds-idrepo-2 2955m 13842Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 977m 4029Mi idm-65858d8c4c-zvhxh 799m 4144Mi lodemon-97b6d75b7-fknft 5m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 223m 551Mi 07:02:49 DEBUG --- stderr --- 07:02:49 DEBUG 07:02:51 INFO 07:02:51 INFO [loop_until]: kubectl --namespace=xlou top node 07:02:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:02:51 INFO [loop_until]: OK (rc = 0) 07:02:51 DEBUG --- stdout --- 07:02:51 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1336Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 100m 0% 6713Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 100m 0% 6819Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 99m 0% 6907Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1071m 6% 5337Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 454m 2% 2149Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 902m 5% 5392Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 52m 0% 1069Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 3505m 22% 14480Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2883m 18% 14497Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2879m 18% 14468Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 293m 1% 2071Mi 3% 07:02:51 DEBUG --- stderr --- 07:02:51 DEBUG 07:03:49 INFO 07:03:49 INFO [loop_until]: kubectl --namespace=xlou top pods 07:03:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:03:49 INFO [loop_until]: OK (rc = 0) 07:03:49 DEBUG --- stdout --- 07:03:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 49m 5802Mi am-55f77847b7-nv9k2 44m 5698Mi am-55f77847b7-v7x55 40m 5694Mi ds-cts-0 6m 383Mi ds-cts-1 7m 376Mi ds-cts-2 7m 371Mi ds-idrepo-0 2708m 13827Mi ds-idrepo-1 994m 13835Mi ds-idrepo-2 1128m 13823Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 911m 4034Mi idm-65858d8c4c-zvhxh 831m 4148Mi lodemon-97b6d75b7-fknft 7m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 225m 551Mi 07:03:49 DEBUG --- stderr --- 07:03:49 DEBUG 07:03:51 INFO 07:03:51 INFO [loop_until]: kubectl --namespace=xlou top node 07:03:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:03:51 INFO [loop_until]: OK (rc = 0) 07:03:51 DEBUG --- stdout --- 07:03:51 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1337Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 102m 0% 6710Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 101m 0% 6822Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 105m 0% 6905Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1058m 6% 5342Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 486m 3% 2148Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 921m 5% 5397Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1071Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 3126m 19% 14471Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1000m 6% 14470Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1071m 6% 14475Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 291m 1% 2069Mi 3% 07:03:51 DEBUG --- stderr --- 07:03:51 DEBUG 07:04:49 INFO 07:04:49 INFO [loop_until]: kubectl --namespace=xlou top pods 07:04:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:04:49 INFO [loop_until]: OK (rc = 0) 07:04:49 DEBUG --- stdout --- 07:04:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 44m 5802Mi 
am-55f77847b7-nv9k2 44m 5698Mi am-55f77847b7-v7x55 43m 5695Mi ds-cts-0 5m 383Mi ds-cts-1 6m 376Mi ds-cts-2 5m 371Mi ds-idrepo-0 1780m 13821Mi ds-idrepo-1 1167m 13836Mi ds-idrepo-2 1047m 13825Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 897m 4039Mi idm-65858d8c4c-zvhxh 848m 4153Mi lodemon-97b6d75b7-fknft 7m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 226m 552Mi 07:04:49 DEBUG --- stderr --- 07:04:49 DEBUG 07:04:51 INFO 07:04:51 INFO [loop_until]: kubectl --namespace=xlou top node 07:04:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:04:51 INFO [loop_until]: OK (rc = 0) 07:04:51 DEBUG --- stdout --- 07:04:51 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1336Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 104m 0% 6709Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 99m 0% 6821Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 98m 0% 6904Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1067m 6% 5348Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 478m 3% 2149Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 939m 5% 5400Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1072Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 1870m 11% 14491Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 964m 6% 14480Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1479m 9% 14494Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 302m 1% 2081Mi 3% 07:04:51 DEBUG --- stderr --- 07:04:51 DEBUG 07:05:49 INFO 07:05:49 INFO [loop_until]: kubectl --namespace=xlou top pods 07:05:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:05:49 INFO [loop_until]: OK (rc = 0) 07:05:49 DEBUG --- stdout --- 07:05:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 41m 5802Mi am-55f77847b7-nv9k2 38m 5699Mi am-55f77847b7-v7x55 39m 5695Mi ds-cts-0 5m 383Mi ds-cts-1 6m 376Mi ds-cts-2 6m 371Mi ds-idrepo-0 1668m 13825Mi ds-idrepo-1 1119m 13835Mi ds-idrepo-2 1752m 13792Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 938m 4043Mi idm-65858d8c4c-zvhxh 811m 4157Mi lodemon-97b6d75b7-fknft 5m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 237m 552Mi 07:05:49 DEBUG --- stderr --- 07:05:49 DEBUG 07:05:51 INFO 07:05:51 INFO [loop_until]: kubectl --namespace=xlou top node 07:05:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:05:51 INFO [loop_until]: OK (rc = 0) 07:05:51 DEBUG --- stdout --- 07:05:51 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1332Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 96m 0% 6709Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 99m 0% 6823Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 100m 0% 6906Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1053m 6% 5350Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 471m 2% 2142Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 913m 5% 5407Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1070Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 1885m 11% 14482Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1548m 9% 14452Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1022m 6% 14479Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 303m 1% 2070Mi 3% 07:05:51 DEBUG --- stderr --- 07:05:51 DEBUG 07:06:49 INFO 07:06:49 INFO [loop_until]: kubectl --namespace=xlou top pods 07:06:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:06:49 INFO [loop_until]: OK (rc = 0) 
07:06:49 DEBUG --- stdout --- 07:06:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 38m 5802Mi am-55f77847b7-nv9k2 41m 5699Mi am-55f77847b7-v7x55 37m 5695Mi ds-cts-0 7m 383Mi ds-cts-1 7m 376Mi ds-cts-2 7m 371Mi ds-idrepo-0 2607m 13826Mi ds-idrepo-1 1486m 13835Mi ds-idrepo-2 2733m 13815Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 945m 4070Mi idm-65858d8c4c-zvhxh 859m 4161Mi lodemon-97b6d75b7-fknft 5m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 229m 552Mi 07:06:49 DEBUG --- stderr --- 07:06:49 DEBUG 07:06:51 INFO 07:06:51 INFO [loop_until]: kubectl --namespace=xlou top node 07:06:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:06:51 INFO [loop_until]: OK (rc = 0) 07:06:51 DEBUG --- stdout --- 07:06:51 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1334Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 103m 0% 6710Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 95m 0% 6822Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 95m 0% 6904Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1059m 6% 5354Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 472m 2% 2141Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 898m 5% 5414Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1070Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2231m 14% 14490Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2743m 17% 14466Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2100m 13% 14405Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 288m 1% 2069Mi 3% 07:06:51 DEBUG --- stderr --- 07:06:51 DEBUG 07:07:49 INFO 07:07:49 INFO [loop_until]: kubectl --namespace=xlou top pods 07:07:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:07:49 INFO [loop_until]: OK (rc = 0) 07:07:49 DEBUG --- stdout --- 07:07:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 39m 5802Mi am-55f77847b7-nv9k2 44m 5699Mi am-55f77847b7-v7x55 38m 5695Mi ds-cts-0 6m 383Mi ds-cts-1 7m 376Mi ds-cts-2 6m 371Mi ds-idrepo-0 2057m 13833Mi ds-idrepo-1 1414m 13859Mi ds-idrepo-2 962m 13838Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 907m 4053Mi idm-65858d8c4c-zvhxh 832m 4166Mi lodemon-97b6d75b7-fknft 4m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 227m 552Mi 07:07:49 DEBUG --- stderr --- 07:07:49 DEBUG 07:07:51 INFO 07:07:51 INFO [loop_until]: kubectl --namespace=xlou top node 07:07:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:07:52 INFO [loop_until]: OK (rc = 0) 07:07:52 DEBUG --- stdout --- 07:07:52 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1334Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 105m 0% 6709Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 98m 0% 6818Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 97m 0% 6902Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1058m 6% 5360Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 472m 2% 2142Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 941m 5% 5416Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1076Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2102m 13% 14502Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 968m 6% 14493Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1503m 9% 14508Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 301m 1% 2070Mi 3% 07:07:52 DEBUG --- stderr --- 07:07:52 DEBUG 07:08:49 INFO 07:08:49 INFO [loop_until]: kubectl 
--namespace=xlou top pods 07:08:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:08:49 INFO [loop_until]: OK (rc = 0) 07:08:49 DEBUG --- stdout --- 07:08:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 35m 5802Mi am-55f77847b7-nv9k2 33m 5699Mi am-55f77847b7-v7x55 29m 5695Mi ds-cts-0 7m 383Mi ds-cts-1 6m 378Mi ds-cts-2 7m 371Mi ds-idrepo-0 1427m 13859Mi ds-idrepo-1 861m 13815Mi ds-idrepo-2 1205m 13786Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 904m 4056Mi idm-65858d8c4c-zvhxh 691m 4170Mi lodemon-97b6d75b7-fknft 4m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 230m 552Mi 07:08:49 DEBUG --- stderr --- 07:08:49 DEBUG 07:08:52 INFO 07:08:52 INFO [loop_until]: kubectl --namespace=xlou top node 07:08:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:08:52 INFO [loop_until]: OK (rc = 0) 07:08:52 DEBUG --- stdout --- 07:08:52 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1334Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 97m 0% 6710Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 90m 0% 6819Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 95m 0% 6905Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 894m 5% 5360Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 434m 2% 2142Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 747m 4% 5422Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1068Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 1247m 7% 14524Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1251m 7% 14440Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 883m 5% 14463Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 267m 1% 2070Mi 3% 07:08:52 DEBUG --- stderr --- 07:08:52 DEBUG 07:09:50 INFO 07:09:50 INFO [loop_until]: kubectl --namespace=xlou top pods 07:09:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:09:50 INFO [loop_until]: OK (rc = 0) 07:09:50 DEBUG --- stdout --- 07:09:50 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 7m 5802Mi am-55f77847b7-nv9k2 8m 5699Mi am-55f77847b7-v7x55 7m 5695Mi ds-cts-0 7m 383Mi ds-cts-1 5m 377Mi ds-cts-2 6m 371Mi ds-idrepo-0 115m 13811Mi ds-idrepo-1 11m 13711Mi ds-idrepo-2 131m 13755Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 8m 4056Mi idm-65858d8c4c-zvhxh 7m 4170Mi lodemon-97b6d75b7-fknft 5m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 1m 138Mi 07:09:50 DEBUG --- stderr --- 07:09:50 DEBUG 07:09:52 INFO 07:09:52 INFO [loop_until]: kubectl --namespace=xlou top node 07:09:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:09:52 INFO [loop_until]: OK (rc = 0) 07:09:52 DEBUG --- stdout --- 07:09:52 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1336Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 6713Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 6821Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 6907Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 78m 0% 5362Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 127m 0% 2152Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 66m 0% 5421Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1068Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 192m 1% 14478Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 181m 1% 14416Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 63m 0% 14353Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 
66m 0% 1666Mi 2% 07:09:52 DEBUG --- stderr --- 07:09:52 DEBUG 127.0.0.1 - - [12/Aug/2023 07:10:28] "GET /monitoring/average?start_time=23-08-12_05:39:57&stop_time=23-08-12_06:08:27 HTTP/1.1" 200 - 07:10:50 INFO 07:10:50 INFO [loop_until]: kubectl --namespace=xlou top pods 07:10:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:10:50 INFO [loop_until]: OK (rc = 0) 07:10:50 DEBUG --- stdout --- 07:10:50 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 7m 5802Mi am-55f77847b7-nv9k2 8m 5699Mi am-55f77847b7-v7x55 6m 5695Mi ds-cts-0 9m 384Mi ds-cts-1 5m 378Mi ds-cts-2 6m 371Mi ds-idrepo-0 12m 13811Mi ds-idrepo-1 12m 13711Mi ds-idrepo-2 10m 13755Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 8m 4056Mi idm-65858d8c4c-zvhxh 7m 4170Mi lodemon-97b6d75b7-fknft 8m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 1m 138Mi 07:10:50 DEBUG --- stderr --- 07:10:50 DEBUG 07:10:52 INFO 07:10:52 INFO [loop_until]: kubectl --namespace=xlou top node 07:10:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:10:52 INFO [loop_until]: OK (rc = 0) 07:10:52 DEBUG --- stdout --- 07:10:52 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1329Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 6709Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 6818Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 72m 0% 6908Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 5362Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2151Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 5419Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1069Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 14476Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 56m 0% 14415Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 65m 0% 14351Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 69m 0% 1662Mi 2% 07:10:52 DEBUG --- stderr --- 07:10:52 DEBUG 07:11:50 INFO 07:11:50 INFO [loop_until]: kubectl --namespace=xlou top pods 07:11:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:11:50 INFO [loop_until]: OK (rc = 0) 07:11:50 DEBUG --- stdout --- 07:11:50 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 113m 5807Mi am-55f77847b7-nv9k2 118m 5705Mi am-55f77847b7-v7x55 110m 5690Mi ds-cts-0 7m 383Mi ds-cts-1 6m 377Mi ds-cts-2 8m 372Mi ds-idrepo-0 1053m 13867Mi ds-idrepo-1 250m 13713Mi ds-idrepo-2 491m 13799Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 475m 4066Mi idm-65858d8c4c-zvhxh 304m 4166Mi lodemon-97b6d75b7-fknft 6m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 566m 561Mi 07:11:50 DEBUG --- stderr --- 07:11:50 DEBUG 07:11:52 INFO 07:11:52 INFO [loop_until]: kubectl --namespace=xlou top node 07:11:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:11:52 INFO [loop_until]: OK (rc = 0) 07:11:52 DEBUG --- stdout --- 07:11:52 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1332Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 205m 1% 6717Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 202m 1% 6814Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 177m 1% 6910Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 514m 3% 5374Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 282m 1% 2150Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 535m 3% 5417Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 
1070Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 1315m 8% 14534Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 559m 3% 14471Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 422m 2% 14370Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 738m 4% 2080Mi 3% 07:11:52 DEBUG --- stderr --- 07:11:52 DEBUG 07:12:50 INFO 07:12:50 INFO [loop_until]: kubectl --namespace=xlou top pods 07:12:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:12:50 INFO [loop_until]: OK (rc = 0) 07:12:50 DEBUG --- stdout --- 07:12:50 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 92m 5804Mi am-55f77847b7-nv9k2 103m 5705Mi am-55f77847b7-v7x55 84m 5695Mi ds-cts-0 6m 383Mi ds-cts-1 6m 377Mi ds-cts-2 7m 371Mi ds-idrepo-0 2006m 13842Mi ds-idrepo-1 883m 13832Mi ds-idrepo-2 763m 13863Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 662m 4077Mi idm-65858d8c4c-zvhxh 621m 4173Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 471m 571Mi 07:12:50 DEBUG --- stderr --- 07:12:50 DEBUG 07:12:52 INFO 07:12:52 INFO [loop_until]: kubectl --namespace=xlou top node 07:12:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:12:52 INFO [loop_until]: OK (rc = 0) 07:12:52 DEBUG --- stdout --- 07:12:52 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1334Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 155m 0% 6718Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 153m 0% 6820Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 148m 0% 6909Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 758m 4% 5384Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 450m 2% 2151Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 694m 4% 5426Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1074Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 1944m 12% 14517Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 804m 5% 14532Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 51m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 889m 5% 14486Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 488m 3% 2092Mi 3% 07:12:52 DEBUG --- stderr --- 07:12:52 DEBUG 07:13:50 INFO 07:13:50 INFO [loop_until]: kubectl --namespace=xlou top pods 07:13:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:13:50 INFO [loop_until]: OK (rc = 0) 07:13:50 DEBUG --- stdout --- 07:13:50 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 84m 5804Mi am-55f77847b7-nv9k2 89m 5705Mi am-55f77847b7-v7x55 81m 5690Mi ds-cts-0 6m 383Mi ds-cts-1 6m 377Mi ds-cts-2 6m 371Mi ds-idrepo-0 1829m 13818Mi ds-idrepo-1 2073m 13819Mi ds-idrepo-2 736m 13807Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 610m 4117Mi idm-65858d8c4c-zvhxh 601m 4190Mi lodemon-97b6d75b7-fknft 5m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 218m 572Mi 07:13:50 DEBUG --- stderr --- 07:13:50 DEBUG 07:13:52 INFO 07:13:52 INFO [loop_until]: kubectl --namespace=xlou top node 07:13:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:13:52 INFO [loop_until]: OK (rc = 0) 07:13:52 DEBUG --- stdout --- 07:13:52 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1334Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 152m 0% 6716Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 143m 0% 6814Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 144m 0% 6910Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 733m 4% 5426Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 442m 2% 2166Mi 3% 
gke-xlou-cdm-default-pool-f05840a3-tnc9 654m 4% 5439Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1070Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 1933m 12% 14497Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 808m 5% 14536Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2134m 13% 14479Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 289m 1% 2091Mi 3% 07:13:52 DEBUG --- stderr --- 07:13:52 DEBUG 07:14:50 INFO 07:14:50 INFO [loop_until]: kubectl --namespace=xlou top pods 07:14:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:14:50 INFO [loop_until]: OK (rc = 0) 07:14:50 DEBUG --- stdout --- 07:14:50 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 85m 5804Mi am-55f77847b7-nv9k2 91m 5705Mi am-55f77847b7-v7x55 87m 5690Mi ds-cts-0 6m 383Mi ds-cts-1 6m 377Mi ds-cts-2 6m 371Mi ds-idrepo-0 1835m 13824Mi ds-idrepo-1 1050m 13866Mi ds-idrepo-2 882m 13861Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 598m 4119Mi idm-65858d8c4c-zvhxh 555m 4193Mi lodemon-97b6d75b7-fknft 8m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 213m 572Mi 07:14:50 DEBUG --- stderr --- 07:14:50 DEBUG 07:14:52 INFO 07:14:52 INFO [loop_until]: kubectl --namespace=xlou top node 07:14:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:14:52 INFO [loop_until]: OK (rc = 0) 07:14:52 DEBUG --- stdout --- 07:14:52 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 86m 0% 1346Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 151m 0% 6717Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 145m 0% 6815Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 143m 0% 6907Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 724m 4% 5427Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 443m 2% 2153Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 636m 4% 5442Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1068Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 1789m 11% 14503Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 637m 4% 14498Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1192m 7% 14525Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 286m 1% 2091Mi 3% 07:14:52 DEBUG --- stderr --- 07:14:52 DEBUG 07:15:50 INFO 07:15:50 INFO [loop_until]: kubectl --namespace=xlou top pods 07:15:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:15:50 INFO [loop_until]: OK (rc = 0) 07:15:50 DEBUG --- stdout --- 07:15:50 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 86m 5804Mi am-55f77847b7-nv9k2 92m 5705Mi am-55f77847b7-v7x55 84m 5690Mi ds-cts-0 7m 383Mi ds-cts-1 6m 377Mi ds-cts-2 7m 371Mi ds-idrepo-0 2796m 13797Mi ds-idrepo-1 1207m 13865Mi ds-idrepo-2 763m 13824Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 601m 4122Mi idm-65858d8c4c-zvhxh 562m 4191Mi lodemon-97b6d75b7-fknft 5m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 233m 573Mi 07:15:50 DEBUG --- stderr --- 07:15:50 DEBUG 07:15:53 INFO 07:15:53 INFO [loop_until]: kubectl --namespace=xlou top node 07:15:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:15:53 INFO [loop_until]: OK (rc = 0) 07:15:53 DEBUG --- stdout --- 07:15:53 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1348Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 153m 0% 6715Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 142m 0% 6814Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-9p4b 144m 0% 6909Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 708m 4% 5431Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 427m 2% 2149Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 652m 4% 5442Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1069Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2376m 14% 14480Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 797m 5% 14497Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1217m 7% 14521Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 297m 1% 2093Mi 3% 07:15:53 DEBUG --- stderr --- 07:15:53 DEBUG 07:16:50 INFO 07:16:50 INFO [loop_until]: kubectl --namespace=xlou top pods 07:16:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:16:50 INFO [loop_until]: OK (rc = 0) 07:16:50 DEBUG --- stdout --- 07:16:50 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 89m 5804Mi am-55f77847b7-nv9k2 94m 5705Mi am-55f77847b7-v7x55 88m 5690Mi ds-cts-0 7m 383Mi ds-cts-1 6m 378Mi ds-cts-2 5m 371Mi ds-idrepo-0 2034m 13823Mi ds-idrepo-1 1220m 13840Mi ds-idrepo-2 2403m 13772Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 598m 4125Mi idm-65858d8c4c-zvhxh 571m 4204Mi lodemon-97b6d75b7-fknft 7m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 230m 575Mi 07:16:50 DEBUG --- stderr --- 07:16:50 DEBUG 07:16:53 INFO 07:16:53 INFO [loop_until]: kubectl --namespace=xlou top node 07:16:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:16:53 INFO [loop_until]: OK (rc = 0) 07:16:53 DEBUG --- stdout --- 07:16:53 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1346Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 179m 1% 6728Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 148m 0% 6814Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 148m 0% 6909Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 711m 4% 5431Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 458m 2% 2153Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 678m 4% 5453Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 51m 0% 1070Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 3313m 20% 14246Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1917m 12% 14436Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1302m 8% 14499Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 309m 1% 2095Mi 3% 07:16:53 DEBUG --- stderr --- 07:16:53 DEBUG 07:17:50 INFO 07:17:50 INFO [loop_until]: kubectl --namespace=xlou top pods 07:17:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:17:50 INFO [loop_until]: OK (rc = 0) 07:17:50 DEBUG --- stdout --- 07:17:50 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 117m 5807Mi am-55f77847b7-nv9k2 86m 5717Mi am-55f77847b7-v7x55 83m 5703Mi ds-cts-0 7m 384Mi ds-cts-1 6m 378Mi ds-cts-2 7m 371Mi ds-idrepo-0 2191m 13713Mi ds-idrepo-1 1206m 13677Mi ds-idrepo-2 750m 13829Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 617m 4126Mi idm-65858d8c4c-zvhxh 565m 4197Mi lodemon-97b6d75b7-fknft 6m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 228m 573Mi 07:17:50 DEBUG --- stderr --- 07:17:50 DEBUG 07:17:53 INFO 07:17:53 INFO [loop_until]: kubectl --namespace=xlou top node 07:17:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:17:53 INFO [loop_until]: OK (rc = 0) 07:17:53 DEBUG --- stdout --- 07:17:53 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% 
gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1344Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 147m 0% 6728Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 138m 0% 6829Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 176m 1% 6911Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 730m 4% 5433Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 437m 2% 2147Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 661m 4% 5447Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1069Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2247m 14% 14400Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 775m 4% 14510Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1259m 7% 14342Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 299m 1% 2091Mi 3% 07:17:53 DEBUG --- stderr --- 07:17:53 DEBUG 07:18:50 INFO 07:18:50 INFO [loop_until]: kubectl --namespace=xlou top pods 07:18:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:18:51 INFO [loop_until]: OK (rc = 0) 07:18:51 DEBUG --- stdout --- 07:18:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 90m 5808Mi am-55f77847b7-nv9k2 89m 5717Mi am-55f77847b7-v7x55 85m 5702Mi ds-cts-0 7m 383Mi ds-cts-1 6m 377Mi ds-cts-2 6m 371Mi ds-idrepo-0 1864m 13844Mi ds-idrepo-1 729m 13740Mi ds-idrepo-2 1064m 13727Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 598m 4129Mi idm-65858d8c4c-zvhxh 567m 4198Mi lodemon-97b6d75b7-fknft 7m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 229m 574Mi 07:18:51 DEBUG --- stderr --- 07:18:51 DEBUG 07:18:53 INFO 07:18:53 INFO [loop_until]: kubectl --namespace=xlou top node 07:18:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:18:53 INFO [loop_until]: OK (rc = 0) 07:18:53 DEBUG --- stdout --- 07:18:53 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1347Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 146m 0% 6727Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 145m 0% 6825Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 140m 0% 6912Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 741m 4% 5434Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 451m 2% 2155Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 650m 4% 5452Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1068Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 1863m 11% 14511Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1119m 7% 14408Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 778m 4% 14421Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 298m 1% 2093Mi 3% 07:18:53 DEBUG --- stderr --- 07:18:53 DEBUG 07:19:51 INFO 07:19:51 INFO [loop_until]: kubectl --namespace=xlou top pods 07:19:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:19:51 INFO [loop_until]: OK (rc = 0) 07:19:51 DEBUG --- stdout --- 07:19:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 89m 5808Mi am-55f77847b7-nv9k2 88m 5717Mi am-55f77847b7-v7x55 83m 5702Mi ds-cts-0 12m 383Mi ds-cts-1 8m 377Mi ds-cts-2 5m 371Mi ds-idrepo-0 2891m 13686Mi ds-idrepo-1 532m 13811Mi ds-idrepo-2 741m 13820Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 653m 4132Mi idm-65858d8c4c-zvhxh 563m 4201Mi lodemon-97b6d75b7-fknft 7m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 233m 575Mi 07:19:51 DEBUG --- stderr --- 07:19:51 DEBUG 07:19:53 INFO 07:19:53 INFO [loop_until]: kubectl --namespace=xlou top node 07:19:53 INFO [loop_until]: (max_time=180, interval=5, 
expected_rc=[0] 07:19:53 INFO [loop_until]: OK (rc = 0) 07:19:53 DEBUG --- stdout --- 07:19:53 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1346Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 145m 0% 6730Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 147m 0% 6829Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 145m 0% 6912Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 745m 4% 5432Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 453m 2% 2156Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 667m 4% 5448Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1069Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2492m 15% 14409Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 805m 5% 14501Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 895m 5% 14522Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 292m 1% 2095Mi 3% 07:19:53 DEBUG --- stderr --- 07:19:53 DEBUG 07:20:51 INFO 07:20:51 INFO [loop_until]: kubectl --namespace=xlou top pods 07:20:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:20:51 INFO [loop_until]: OK (rc = 0) 07:20:51 DEBUG --- stdout --- 07:20:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 82m 5807Mi am-55f77847b7-nv9k2 90m 5717Mi am-55f77847b7-v7x55 86m 5703Mi ds-cts-0 6m 383Mi ds-cts-1 6m 377Mi ds-cts-2 13m 379Mi ds-idrepo-0 2135m 13836Mi ds-idrepo-1 565m 13796Mi ds-idrepo-2 1579m 13863Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 628m 4134Mi idm-65858d8c4c-zvhxh 561m 4203Mi lodemon-97b6d75b7-fknft 7m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 234m 576Mi 07:20:51 DEBUG --- stderr --- 07:20:51 DEBUG 07:20:53 INFO 07:20:53 INFO [loop_until]: kubectl --namespace=xlou top node 07:20:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:20:53 INFO [loop_until]: OK (rc = 0) 07:20:53 DEBUG --- stdout --- 07:20:53 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1346Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 148m 0% 6732Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 145m 0% 6826Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 144m 0% 6910Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 733m 4% 5436Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 463m 2% 2153Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 667m 4% 5449Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1077Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2385m 15% 14527Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1661m 10% 14547Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 923m 5% 14463Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 304m 1% 2095Mi 3% 07:20:53 DEBUG --- stderr --- 07:20:53 DEBUG 07:21:51 INFO 07:21:51 INFO [loop_until]: kubectl --namespace=xlou top pods 07:21:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:21:51 INFO [loop_until]: OK (rc = 0) 07:21:51 DEBUG --- stdout --- 07:21:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 87m 5807Mi am-55f77847b7-nv9k2 89m 5718Mi am-55f77847b7-v7x55 84m 5702Mi ds-cts-0 6m 384Mi ds-cts-1 6m 377Mi ds-cts-2 6m 372Mi ds-idrepo-0 1814m 13827Mi ds-idrepo-1 1501m 13824Mi ds-idrepo-2 681m 13864Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 656m 4136Mi idm-65858d8c4c-zvhxh 529m 4205Mi lodemon-97b6d75b7-fknft 7m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 224m 576Mi 07:21:51 DEBUG --- stderr --- 07:21:51 
DEBUG 07:21:53 INFO 07:21:53 INFO [loop_until]: kubectl --namespace=xlou top node 07:21:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:21:53 INFO [loop_until]: OK (rc = 0) 07:21:53 DEBUG --- stdout --- 07:21:53 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 86m 0% 1359Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 148m 0% 6730Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 145m 0% 6830Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 147m 0% 6908Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 741m 4% 5442Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 442m 2% 2156Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 637m 4% 5454Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1067Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 1878m 11% 14524Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 775m 4% 14557Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1479m 9% 14493Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 295m 1% 2094Mi 3% 07:21:53 DEBUG --- stderr --- 07:21:53 DEBUG 07:22:51 INFO 07:22:51 INFO [loop_until]: kubectl --namespace=xlou top pods 07:22:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:22:51 INFO [loop_until]: OK (rc = 0) 07:22:51 DEBUG --- stdout --- 07:22:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 84m 5807Mi am-55f77847b7-nv9k2 89m 5717Mi am-55f77847b7-v7x55 84m 5703Mi ds-cts-0 5m 383Mi ds-cts-1 6m 377Mi ds-cts-2 5m 371Mi ds-idrepo-0 2388m 13657Mi ds-idrepo-1 840m 13868Mi ds-idrepo-2 779m 13824Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 605m 4143Mi idm-65858d8c4c-zvhxh 552m 4211Mi lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 218m 576Mi 07:22:51 DEBUG --- stderr --- 07:22:51 DEBUG 07:22:53 INFO 07:22:53 INFO [loop_until]: kubectl --namespace=xlou top node 07:22:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:22:53 INFO [loop_until]: OK (rc = 0) 07:22:53 DEBUG --- stdout --- 07:22:53 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1344Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 148m 0% 6731Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 142m 0% 6826Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 142m 0% 6908Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 714m 4% 5448Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 446m 2% 2148Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 654m 4% 5463Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 52m 0% 1070Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2053m 12% 14339Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 786m 4% 14516Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 926m 5% 14496Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 293m 1% 2096Mi 3% 07:22:53 DEBUG --- stderr --- 07:22:53 DEBUG 07:23:51 INFO 07:23:51 INFO [loop_until]: kubectl --namespace=xlou top pods 07:23:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:23:51 INFO [loop_until]: OK (rc = 0) 07:23:51 DEBUG --- stdout --- 07:23:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 86m 5807Mi am-55f77847b7-nv9k2 88m 5717Mi am-55f77847b7-v7x55 82m 5703Mi ds-cts-0 6m 383Mi ds-cts-1 6m 377Mi ds-cts-2 11m 374Mi ds-idrepo-0 3046m 13758Mi ds-idrepo-1 735m 13860Mi ds-idrepo-2 1669m 13461Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 614m 4146Mi idm-65858d8c4c-zvhxh 555m 4215Mi 
lodemon-97b6d75b7-fknft 2m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 224m 577Mi 07:23:51 DEBUG --- stderr --- 07:23:51 DEBUG 07:23:54 INFO 07:23:54 INFO [loop_until]: kubectl --namespace=xlou top node 07:23:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:23:54 INFO [loop_until]: OK (rc = 0) 07:23:54 DEBUG --- stdout --- 07:23:54 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1346Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 146m 0% 6731Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 145m 0% 6830Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 146m 0% 6909Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 706m 4% 5454Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 440m 2% 2154Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 660m 4% 5460Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1071Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2398m 15% 14469Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1801m 11% 14160Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 864m 5% 14492Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 291m 1% 2094Mi 3% 07:23:54 DEBUG --- stderr --- 07:23:54 DEBUG 07:24:51 INFO 07:24:51 INFO [loop_until]: kubectl --namespace=xlou top pods 07:24:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:24:51 INFO [loop_until]: OK (rc = 0) 07:24:51 DEBUG --- stdout --- 07:24:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 90m 5807Mi am-55f77847b7-nv9k2 90m 5717Mi am-55f77847b7-v7x55 84m 5702Mi ds-cts-0 6m 384Mi ds-cts-1 6m 378Mi ds-cts-2 5m 374Mi ds-idrepo-0 1637m 13823Mi ds-idrepo-1 941m 13865Mi ds-idrepo-2 639m 13616Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 611m 4147Mi idm-65858d8c4c-zvhxh 546m 4217Mi lodemon-97b6d75b7-fknft 8m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 234m 578Mi 07:24:51 DEBUG --- stderr --- 07:24:51 DEBUG 07:24:54 INFO 07:24:54 INFO [loop_until]: kubectl --namespace=xlou top node 07:24:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:24:54 INFO [loop_until]: OK (rc = 0) 07:24:54 DEBUG --- stdout --- 07:24:54 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1347Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 150m 0% 6730Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 142m 0% 6825Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 146m 0% 6910Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 743m 4% 5454Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 450m 2% 2152Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 634m 3% 5464Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1070Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 1608m 10% 14528Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 787m 4% 14274Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 850m 5% 14537Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 303m 1% 2098Mi 3% 07:24:54 DEBUG --- stderr --- 07:24:54 DEBUG 07:25:51 INFO 07:25:51 INFO [loop_until]: kubectl --namespace=xlou top pods 07:25:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:25:51 INFO [loop_until]: OK (rc = 0) 07:25:51 DEBUG --- stdout --- 07:25:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 87m 5808Mi am-55f77847b7-nv9k2 134m 5726Mi am-55f77847b7-v7x55 85m 5702Mi ds-cts-0 8m 383Mi ds-cts-1 6m 377Mi ds-cts-2 6m 374Mi ds-idrepo-0 1538m 13656Mi ds-idrepo-1 3344m 
13315Mi ds-idrepo-2 2234m 13441Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 589m 4149Mi idm-65858d8c4c-zvhxh 550m 4219Mi lodemon-97b6d75b7-fknft 6m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 227m 578Mi 07:25:51 DEBUG --- stderr --- 07:25:51 DEBUG 07:25:54 INFO 07:25:54 INFO [loop_until]: kubectl --namespace=xlou top node 07:25:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:25:54 INFO [loop_until]: OK (rc = 0) 07:25:54 DEBUG --- stdout --- 07:25:54 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1344Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 192m 1% 6739Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 142m 0% 6831Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 147m 0% 6909Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 716m 4% 5454Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 466m 2% 2152Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 664m 4% 5471Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 52m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 1592m 10% 14364Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2158m 13% 14130Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3581m 22% 14025Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 296m 1% 2097Mi 3% 07:25:54 DEBUG --- stderr --- 07:25:54 DEBUG 07:26:51 INFO 07:26:51 INFO [loop_until]: kubectl --namespace=xlou top pods 07:26:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:26:51 INFO [loop_until]: OK (rc = 0) 07:26:51 DEBUG --- stdout --- 07:26:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 86m 5808Mi am-55f77847b7-nv9k2 88m 5726Mi am-55f77847b7-v7x55 117m 5711Mi ds-cts-0 7m 383Mi ds-cts-1 6m 377Mi ds-cts-2 7m 374Mi ds-idrepo-0 2279m 13773Mi ds-idrepo-1 809m 13422Mi ds-idrepo-2 748m 13514Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 591m 4152Mi idm-65858d8c4c-zvhxh 561m 4221Mi lodemon-97b6d75b7-fknft 6m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 218m 578Mi 07:26:51 DEBUG --- stderr --- 07:26:51 DEBUG 07:26:54 INFO 07:26:54 INFO [loop_until]: kubectl --namespace=xlou top node 07:26:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:26:54 INFO [loop_until]: OK (rc = 0) 07:26:54 DEBUG --- stdout --- 07:26:54 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1346Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 147m 0% 6738Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 147m 0% 6835Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 184m 1% 6915Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 725m 4% 5461Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 433m 2% 2153Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 639m 4% 5482Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2477m 15% 14478Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1032m 6% 14224Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 988m 6% 14099Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 291m 1% 2096Mi 3% 07:26:54 DEBUG --- stderr --- 07:26:54 DEBUG 07:27:51 INFO 07:27:51 INFO [loop_until]: kubectl --namespace=xlou top pods 07:27:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:27:51 INFO [loop_until]: OK (rc = 0) 07:27:51 DEBUG --- stdout --- 07:27:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 82m 5811Mi am-55f77847b7-nv9k2 
84m 5726Mi am-55f77847b7-v7x55 91m 5711Mi ds-cts-0 7m 383Mi ds-cts-1 6m 377Mi ds-cts-2 5m 374Mi ds-idrepo-0 2620m 13811Mi ds-idrepo-1 1830m 13564Mi ds-idrepo-2 1798m 13578Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 630m 4154Mi idm-65858d8c4c-zvhxh 545m 4223Mi lodemon-97b6d75b7-fknft 4m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 219m 579Mi 07:27:51 DEBUG --- stderr --- 07:27:51 DEBUG 07:27:54 INFO 07:27:54 INFO [loop_until]: kubectl --namespace=xlou top node 07:27:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:27:54 INFO [loop_until]: OK (rc = 0) 07:27:54 DEBUG --- stdout --- 07:27:54 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1347Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 148m 0% 6749Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 145m 0% 6837Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 139m 0% 6916Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 718m 4% 5463Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 450m 2% 2155Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 666m 4% 5476Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1070Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 3190m 20% 14384Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1663m 10% 14256Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2091m 13% 14228Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 295m 1% 2097Mi 3% 07:27:54 DEBUG --- stderr --- 07:27:54 DEBUG 07:28:51 INFO 07:28:51 INFO [loop_until]: kubectl --namespace=xlou top pods 07:28:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:28:51 INFO [loop_until]: OK (rc = 0) 07:28:51 DEBUG --- stdout --- 07:28:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 86m 5811Mi am-55f77847b7-nv9k2 90m 5726Mi am-55f77847b7-v7x55 81m 5711Mi ds-cts-0 6m 383Mi ds-cts-1 6m 377Mi ds-cts-2 6m 374Mi ds-idrepo-0 1847m 13499Mi ds-idrepo-1 909m 13695Mi ds-idrepo-2 733m 13660Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 610m 4158Mi idm-65858d8c4c-zvhxh 549m 4226Mi lodemon-97b6d75b7-fknft 5m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 223m 579Mi 07:28:51 DEBUG --- stderr --- 07:28:51 DEBUG 07:28:54 INFO 07:28:54 INFO [loop_until]: kubectl --namespace=xlou top node 07:28:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:28:54 INFO [loop_until]: OK (rc = 0) 07:28:54 DEBUG --- stdout --- 07:28:54 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1349Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 148m 0% 6738Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 141m 0% 6840Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 146m 0% 6916Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 730m 4% 5466Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 434m 2% 2155Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 650m 4% 5478Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1071Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 1633m 10% 14204Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 793m 4% 14362Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 51m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 926m 5% 14372Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 295m 1% 2098Mi 3% 07:28:54 DEBUG --- stderr --- 07:28:54 DEBUG 07:29:51 INFO 07:29:51 INFO [loop_until]: kubectl --namespace=xlou top pods 07:29:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:29:52 INFO [loop_until]: OK (rc = 0) 07:29:52 DEBUG --- 
stdout --- 07:29:52 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 84m 5811Mi am-55f77847b7-nv9k2 88m 5726Mi am-55f77847b7-v7x55 85m 5711Mi ds-cts-0 7m 384Mi ds-cts-1 6m 378Mi ds-cts-2 6m 374Mi ds-idrepo-0 2368m 13298Mi ds-idrepo-1 745m 13825Mi ds-idrepo-2 866m 13788Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 611m 4160Mi idm-65858d8c4c-zvhxh 537m 4228Mi lodemon-97b6d75b7-fknft 6m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 227m 579Mi 07:29:52 DEBUG --- stderr --- 07:29:52 DEBUG 07:29:54 INFO 07:29:54 INFO [loop_until]: kubectl --namespace=xlou top node 07:29:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:29:54 INFO [loop_until]: OK (rc = 0) 07:29:54 DEBUG --- stdout --- 07:29:54 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1346Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 149m 0% 6740Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 146m 0% 6838Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 134m 0% 6916Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 723m 4% 5465Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 444m 2% 2155Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 630m 3% 5481Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1070Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2519m 15% 14005Mi 23% gke-xlou-cdm-ds-32e4dcb1-b374 765m 4% 14502Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 539m 3% 14439Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 292m 1% 2099Mi 3% 07:29:54 DEBUG --- stderr --- 07:29:54 DEBUG 07:30:52 INFO 07:30:52 INFO [loop_until]: kubectl --namespace=xlou top pods 07:30:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:30:52 INFO [loop_until]: OK (rc = 0) 07:30:52 DEBUG --- stdout --- 07:30:52 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 88m 5811Mi am-55f77847b7-nv9k2 91m 5726Mi am-55f77847b7-v7x55 87m 5711Mi ds-cts-0 5m 381Mi ds-cts-1 6m 377Mi ds-cts-2 5m 372Mi ds-idrepo-0 2309m 13249Mi ds-idrepo-1 988m 13644Mi ds-idrepo-2 731m 13813Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 632m 4160Mi idm-65858d8c4c-zvhxh 565m 4230Mi lodemon-97b6d75b7-fknft 5m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 225m 580Mi 07:30:52 DEBUG --- stderr --- 07:30:52 DEBUG 07:30:54 INFO 07:30:54 INFO [loop_until]: kubectl --namespace=xlou top node 07:30:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:30:54 INFO [loop_until]: OK (rc = 0) 07:30:54 DEBUG --- stdout --- 07:30:54 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1344Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 143m 0% 6745Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 144m 0% 6836Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 146m 0% 6913Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 723m 4% 5464Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 434m 2% 2155Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 679m 4% 5481Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1067Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2446m 15% 13980Mi 23% gke-xlou-cdm-ds-32e4dcb1-b374 938m 5% 14547Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 994m 6% 14331Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 295m 1% 2101Mi 3% 07:30:54 DEBUG --- stderr --- 07:30:54 DEBUG 07:31:52 INFO 07:31:52 INFO [loop_until]: kubectl --namespace=xlou top 
pods 07:31:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:31:52 INFO [loop_until]: OK (rc = 0) 07:31:52 DEBUG --- stdout --- 07:31:52 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 90m 5811Mi am-55f77847b7-nv9k2 89m 5726Mi am-55f77847b7-v7x55 86m 5711Mi ds-cts-0 7m 381Mi ds-cts-1 6m 377Mi ds-cts-2 6m 372Mi ds-idrepo-0 1744m 13362Mi ds-idrepo-1 1516m 13517Mi ds-idrepo-2 935m 13851Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 636m 4162Mi idm-65858d8c4c-zvhxh 563m 4232Mi lodemon-97b6d75b7-fknft 7m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 226m 580Mi 07:31:52 DEBUG --- stderr --- 07:31:52 DEBUG 07:31:54 INFO 07:31:54 INFO [loop_until]: kubectl --namespace=xlou top node 07:31:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:31:55 INFO [loop_until]: OK (rc = 0) 07:31:55 DEBUG --- stdout --- 07:31:55 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1344Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 146m 0% 6741Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 140m 0% 6839Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 146m 0% 6914Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 727m 4% 5468Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 450m 2% 2154Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 658m 4% 5480Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1069Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 1906m 11% 14051Mi 23% gke-xlou-cdm-ds-32e4dcb1-b374 946m 5% 14547Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1627m 10% 14209Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 291m 1% 2098Mi 3% 07:31:55 DEBUG --- stderr --- 07:31:55 DEBUG 07:32:52 INFO 07:32:52 INFO [loop_until]: kubectl --namespace=xlou top pods 07:32:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:32:52 INFO [loop_until]: OK (rc = 0) 07:32:52 DEBUG --- stdout --- 07:32:52 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 86m 5811Mi am-55f77847b7-nv9k2 85m 5726Mi am-55f77847b7-v7x55 83m 5711Mi ds-cts-0 6m 382Mi ds-cts-1 6m 377Mi ds-cts-2 6m 372Mi ds-idrepo-0 1932m 13428Mi ds-idrepo-1 859m 13633Mi ds-idrepo-2 1418m 13652Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 579m 4165Mi idm-65858d8c4c-zvhxh 552m 4234Mi lodemon-97b6d75b7-fknft 5m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 227m 581Mi 07:32:52 DEBUG --- stderr --- 07:32:52 DEBUG 07:32:55 INFO 07:32:55 INFO [loop_until]: kubectl --namespace=xlou top node 07:32:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:32:55 INFO [loop_until]: OK (rc = 0) 07:32:55 DEBUG --- stdout --- 07:32:55 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1348Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 148m 0% 6741Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 144m 0% 6840Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 143m 0% 6914Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 699m 4% 5469Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 431m 2% 2154Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 653m 4% 5481Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1071Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2382m 14% 14159Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 829m 5% 14332Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1193m 7% 14343Mi 24% 
gke-xlou-cdm-frontend-a8771548-k40m 294m 1% 2099Mi 3% 07:32:55 DEBUG --- stderr --- 07:32:55 DEBUG 07:33:52 INFO 07:33:52 INFO [loop_until]: kubectl --namespace=xlou top pods 07:33:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:33:52 INFO [loop_until]: OK (rc = 0) 07:33:52 DEBUG --- stdout --- 07:33:52 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 87m 5811Mi am-55f77847b7-nv9k2 90m 5726Mi am-55f77847b7-v7x55 85m 5711Mi ds-cts-0 6m 382Mi ds-cts-1 6m 377Mi ds-cts-2 7m 373Mi ds-idrepo-0 1521m 13388Mi ds-idrepo-1 1256m 13818Mi ds-idrepo-2 759m 13734Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 658m 4168Mi idm-65858d8c4c-zvhxh 562m 4237Mi lodemon-97b6d75b7-fknft 6m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 224m 581Mi 07:33:52 DEBUG --- stderr --- 07:33:52 DEBUG 07:33:55 INFO 07:33:55 INFO [loop_until]: kubectl --namespace=xlou top node 07:33:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:33:55 INFO [loop_until]: OK (rc = 0) 07:33:55 DEBUG --- stdout --- 07:33:55 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1346Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 149m 0% 6736Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 146m 0% 6841Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 148m 0% 6913Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 746m 4% 5476Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 454m 2% 2150Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 645m 4% 5485Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1069Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 1719m 10% 14099Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 787m 4% 14436Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1428m 8% 14510Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 301m 1% 2100Mi 3% 07:33:55 DEBUG --- stderr --- 07:33:55 DEBUG 07:34:52 INFO 07:34:52 INFO [loop_until]: kubectl --namespace=xlou top pods 07:34:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:34:52 INFO [loop_until]: OK (rc = 0) 07:34:52 DEBUG --- stdout --- 07:34:52 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 85m 5811Mi am-55f77847b7-nv9k2 124m 5728Mi am-55f77847b7-v7x55 82m 5711Mi ds-cts-0 5m 381Mi ds-cts-1 6m 377Mi ds-cts-2 7m 373Mi ds-idrepo-0 2573m 13526Mi ds-idrepo-1 885m 13848Mi ds-idrepo-2 2791m 13398Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 579m 4186Mi idm-65858d8c4c-zvhxh 573m 4239Mi lodemon-97b6d75b7-fknft 7m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 217m 580Mi 07:34:52 DEBUG --- stderr --- 07:34:52 DEBUG 07:34:55 INFO 07:34:55 INFO [loop_until]: kubectl --namespace=xlou top node 07:34:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:34:55 INFO [loop_until]: OK (rc = 0) 07:34:55 DEBUG --- stdout --- 07:34:55 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1346Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 149m 0% 6740Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 142m 0% 6839Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 144m 0% 6916Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 668m 4% 5495Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 442m 2% 2161Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 691m 4% 5488Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1068Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2875m 18% 14283Mi 24% 
gke-xlou-cdm-ds-32e4dcb1-b374 2678m 16% 14051Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 950m 5% 14541Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 289m 1% 2099Mi 3% 07:34:55 DEBUG --- stderr --- 07:34:55 DEBUG 07:35:52 INFO 07:35:52 INFO [loop_until]: kubectl --namespace=xlou top pods 07:35:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:35:52 INFO [loop_until]: OK (rc = 0) 07:35:52 DEBUG --- stdout --- 07:35:52 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 137m 5813Mi am-55f77847b7-nv9k2 85m 5728Mi am-55f77847b7-v7x55 119m 5713Mi ds-cts-0 6m 381Mi ds-cts-1 6m 377Mi ds-cts-2 5m 373Mi ds-idrepo-0 2694m 13654Mi ds-idrepo-1 966m 13838Mi ds-idrepo-2 1428m 13238Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 603m 4175Mi idm-65858d8c4c-zvhxh 534m 4243Mi lodemon-97b6d75b7-fknft 7m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 213m 580Mi 07:35:52 DEBUG --- stderr --- 07:35:52 DEBUG 07:35:55 INFO 07:35:55 INFO [loop_until]: kubectl --namespace=xlou top node 07:35:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:35:55 INFO [loop_until]: OK (rc = 0) 07:35:55 DEBUG --- stdout --- 07:35:55 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1343Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 143m 0% 6739Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 142m 0% 6852Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 210m 1% 6919Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 720m 4% 5481Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 442m 2% 2153Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 636m 4% 5492Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1070Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2774m 17% 14369Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1706m 10% 13957Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1093m 6% 14533Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 278m 1% 2101Mi 3% 07:35:55 DEBUG --- stderr --- 07:35:55 DEBUG 07:36:52 INFO 07:36:52 INFO [loop_until]: kubectl --namespace=xlou top pods 07:36:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:36:52 INFO [loop_until]: OK (rc = 0) 07:36:52 DEBUG --- stdout --- 07:36:52 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 84m 5813Mi am-55f77847b7-nv9k2 87m 5728Mi am-55f77847b7-v7x55 84m 5713Mi ds-cts-0 5m 381Mi ds-cts-1 6m 377Mi ds-cts-2 6m 373Mi ds-idrepo-0 1766m 13365Mi ds-idrepo-1 2159m 13629Mi ds-idrepo-2 2248m 13053Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 609m 4177Mi idm-65858d8c4c-zvhxh 569m 4245Mi lodemon-97b6d75b7-fknft 6m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 228m 581Mi 07:36:52 DEBUG --- stderr --- 07:36:52 DEBUG 07:36:55 INFO 07:36:55 INFO [loop_until]: kubectl --namespace=xlou top node 07:36:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:36:55 INFO [loop_until]: OK (rc = 0) 07:36:55 DEBUG --- stdout --- 07:36:55 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1344Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 145m 0% 6742Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 143m 0% 6838Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 146m 0% 6917Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 708m 4% 5484Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 449m 2% 2155Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 669m 4% 5494Mi 9% 
gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1070Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 1785m 11% 14070Mi 23% gke-xlou-cdm-ds-32e4dcb1-b374 1817m 11% 13764Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2179m 13% 14318Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 303m 1% 2112Mi 3% 07:36:55 DEBUG --- stderr --- 07:36:55 DEBUG 07:37:52 INFO 07:37:52 INFO [loop_until]: kubectl --namespace=xlou top pods 07:37:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:37:52 INFO [loop_until]: OK (rc = 0) 07:37:52 DEBUG --- stdout --- 07:37:52 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 83m 5813Mi am-55f77847b7-nv9k2 85m 5728Mi am-55f77847b7-v7x55 80m 5714Mi ds-cts-0 5m 382Mi ds-cts-1 5m 379Mi ds-cts-2 6m 373Mi ds-idrepo-0 2692m 13487Mi ds-idrepo-1 830m 13785Mi ds-idrepo-2 707m 13196Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 606m 4179Mi idm-65858d8c4c-zvhxh 574m 4247Mi lodemon-97b6d75b7-fknft 7m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 228m 581Mi 07:37:52 DEBUG --- stderr --- 07:37:52 DEBUG 07:37:55 INFO 07:37:55 INFO [loop_until]: kubectl --namespace=xlou top node 07:37:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:37:55 INFO [loop_until]: OK (rc = 0) 07:37:55 DEBUG --- stdout --- 07:37:55 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1347Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 142m 0% 6737Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 144m 0% 6839Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 147m 0% 6932Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 695m 4% 5487Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 444m 2% 2156Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 663m 4% 5494Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1070Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2694m 16% 14215Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 795m 5% 13896Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 668m 4% 14414Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 303m 1% 2102Mi 3% 07:37:55 DEBUG --- stderr --- 07:37:55 DEBUG 07:38:52 INFO 07:38:52 INFO [loop_until]: kubectl --namespace=xlou top pods 07:38:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:38:52 INFO [loop_until]: OK (rc = 0) 07:38:52 DEBUG --- stdout --- 07:38:52 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 85m 5813Mi am-55f77847b7-nv9k2 87m 5728Mi am-55f77847b7-v7x55 87m 5713Mi ds-cts-0 7m 382Mi ds-cts-1 5m 378Mi ds-cts-2 6m 373Mi ds-idrepo-0 2799m 13468Mi ds-idrepo-1 1092m 13801Mi ds-idrepo-2 744m 13286Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 629m 4182Mi idm-65858d8c4c-zvhxh 574m 4250Mi lodemon-97b6d75b7-fknft 6m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 231m 581Mi 07:38:52 DEBUG --- stderr --- 07:38:52 DEBUG 07:38:55 INFO 07:38:55 INFO [loop_until]: kubectl --namespace=xlou top node 07:38:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:38:55 INFO [loop_until]: OK (rc = 0) 07:38:55 DEBUG --- stdout --- 07:38:55 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1346Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 144m 0% 6740Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 141m 0% 6843Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 140m 0% 6917Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-bf2g 729m 4% 5494Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 427m 2% 2152Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 636m 4% 5504Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1071Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 1997m 12% 14097Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 931m 5% 14086Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1250m 7% 14520Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 303m 1% 2101Mi 3% 07:38:55 DEBUG --- stderr --- 07:38:55 DEBUG 07:39:53 INFO 07:39:53 INFO [loop_until]: kubectl --namespace=xlou top pods 07:39:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:39:53 INFO [loop_until]: OK (rc = 0) 07:39:53 DEBUG --- stdout --- 07:39:53 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 84m 5813Mi am-55f77847b7-nv9k2 87m 5728Mi am-55f77847b7-v7x55 81m 5713Mi ds-cts-0 6m 382Mi ds-cts-1 5m 378Mi ds-cts-2 6m 373Mi ds-idrepo-0 1397m 13414Mi ds-idrepo-1 1986m 13508Mi ds-idrepo-2 729m 13393Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 603m 4184Mi idm-65858d8c4c-zvhxh 553m 4252Mi lodemon-97b6d75b7-fknft 5m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 234m 581Mi 07:39:53 DEBUG --- stderr --- 07:39:53 DEBUG 07:39:55 INFO 07:39:55 INFO [loop_until]: kubectl --namespace=xlou top node 07:39:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:39:55 INFO [loop_until]: OK (rc = 0) 07:39:55 DEBUG --- stdout --- 07:39:55 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1343Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 142m 0% 6740Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 140m 0% 6838Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 139m 0% 6919Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 719m 4% 5501Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 441m 2% 2145Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 657m 4% 5503Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1068Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 1577m 9% 14187Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 964m 6% 14218Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 577m 3% 14210Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 305m 1% 2100Mi 3% 07:39:55 DEBUG --- stderr --- 07:39:55 DEBUG 07:40:53 INFO 07:40:53 INFO [loop_until]: kubectl --namespace=xlou top pods 07:40:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:40:53 INFO [loop_until]: OK (rc = 0) 07:40:53 DEBUG --- stdout --- 07:40:53 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 88m 5813Mi am-55f77847b7-nv9k2 87m 5728Mi am-55f77847b7-v7x55 86m 5713Mi ds-cts-0 5m 382Mi ds-cts-1 6m 378Mi ds-cts-2 7m 373Mi ds-idrepo-0 1824m 13497Mi ds-idrepo-1 2149m 13520Mi ds-idrepo-2 2168m 13438Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 611m 4194Mi idm-65858d8c4c-zvhxh 566m 4254Mi lodemon-97b6d75b7-fknft 7m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 228m 582Mi 07:40:53 DEBUG --- stderr --- 07:40:53 DEBUG 07:40:56 INFO 07:40:56 INFO [loop_until]: kubectl --namespace=xlou top node 07:40:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:40:56 INFO [loop_until]: OK (rc = 0) 07:40:56 DEBUG --- stdout --- 07:40:56 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1347Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 
133m 0% 6739Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 121m 0% 6839Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 137m 0% 6916Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 648m 4% 5494Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 355m 2% 2150Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 536m 3% 5505Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1072Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2337m 14% 14289Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 625m 3% 14163Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 916m 5% 14222Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 265m 1% 2105Mi 3% 07:40:56 DEBUG --- stderr --- 07:40:56 DEBUG 07:41:53 INFO 07:41:53 INFO [loop_until]: kubectl --namespace=xlou top pods 07:41:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:41:53 INFO [loop_until]: OK (rc = 0) 07:41:53 DEBUG --- stdout --- 07:41:53 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 8m 5813Mi am-55f77847b7-nv9k2 14m 5728Mi am-55f77847b7-v7x55 8m 5713Mi ds-cts-0 7m 382Mi ds-cts-1 7m 378Mi ds-cts-2 6m 373Mi ds-idrepo-0 372m 13426Mi ds-idrepo-1 910m 13259Mi ds-idrepo-2 10m 13451Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 5m 4188Mi idm-65858d8c4c-zvhxh 5m 4255Mi lodemon-97b6d75b7-fknft 7m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 89m 140Mi 07:41:53 DEBUG --- stderr --- 07:41:53 DEBUG 07:41:56 INFO 07:41:56 INFO [loop_until]: kubectl --namespace=xlou top node 07:41:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:41:56 INFO [loop_until]: OK (rc = 0) 07:41:56 DEBUG --- stdout --- 07:41:56 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1346Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 61m 0% 6741Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 6841Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 6918Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 69m 0% 5493Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 122m 0% 2147Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 5507Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1071Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 65m 0% 14155Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 62m 0% 14164Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 212m 1% 13951Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 67m 0% 1670Mi 2% 07:41:56 DEBUG --- stderr --- 07:41:56 DEBUG 07:42:53 INFO 07:42:53 INFO [loop_until]: kubectl --namespace=xlou top pods 07:42:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:42:53 INFO [loop_until]: OK (rc = 0) 07:42:53 DEBUG --- stdout --- 07:42:53 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 8m 5813Mi am-55f77847b7-nv9k2 8m 5728Mi am-55f77847b7-v7x55 9m 5713Mi ds-cts-0 6m 382Mi ds-cts-1 5m 378Mi ds-cts-2 7m 373Mi ds-idrepo-0 12m 13426Mi ds-idrepo-1 13m 13253Mi ds-idrepo-2 9m 13452Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 5m 4187Mi idm-65858d8c4c-zvhxh 6m 4255Mi lodemon-97b6d75b7-fknft 6m 66Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 1m 140Mi 07:42:53 DEBUG --- stderr --- 07:42:53 DEBUG 07:42:56 INFO 07:42:56 INFO [loop_until]: kubectl --namespace=xlou top node 07:42:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:42:56 INFO [loop_until]: OK (rc = 0) 07:42:56 DEBUG --- stdout --- 07:42:56 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) 
MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1344Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 6740Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 6838Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 6919Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 70m 0% 5493Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 118m 0% 2152Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 5506Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1072Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 63m 0% 14151Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 61m 0% 14161Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 69m 0% 13951Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 71m 0% 1667Mi 2% 07:42:56 DEBUG --- stderr --- 07:42:56 DEBUG 127.0.0.1 - - [12/Aug/2023 07:42:59] "GET /monitoring/average?start_time=23-08-12_06:12:28&stop_time=23-08-12_06:40:58 HTTP/1.1" 200 - 07:43:53 INFO 07:43:53 INFO [loop_until]: kubectl --namespace=xlou top pods 07:43:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:43:53 INFO [loop_until]: OK (rc = 0) 07:43:53 DEBUG --- stdout --- 07:43:53 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 4Mi am-55f77847b7-778wv 8m 5813Mi am-55f77847b7-nv9k2 8m 5728Mi am-55f77847b7-v7x55 7m 5713Mi ds-cts-0 6m 383Mi ds-cts-1 5m 378Mi ds-cts-2 6m 373Mi ds-idrepo-0 12m 13426Mi ds-idrepo-1 14m 13253Mi ds-idrepo-2 15m 13451Mi end-user-ui-6845bc78c7-xjx5c 1m 4Mi idm-65858d8c4c-5tvr8 6m 4187Mi idm-65858d8c4c-zvhxh 5m 4255Mi lodemon-97b6d75b7-fknft 8m 67Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 2m 140Mi 07:43:53 DEBUG --- stderr --- 07:43:53 DEBUG 07:43:56 INFO 07:43:56 INFO [loop_until]: kubectl --namespace=xlou top node 07:43:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:43:56 INFO [loop_until]: OK (rc = 0) 07:43:56 DEBUG --- stdout --- 07:43:56 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 89m 0% 1350Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 6740Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 74m 0% 6844Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 6923Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 5493Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 132m 0% 2150Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 5507Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 138m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 134m 0% 1073Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 506m 3% 14153Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 425m 2% 14168Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 124m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 455m 2% 13955Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1332m 8% 2033Mi 3% 07:43:56 DEBUG --- stderr --- 07:43:56 DEBUG 07:44:53 INFO 07:44:53 INFO [loop_until]: kubectl --namespace=xlou top pods 07:44:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:44:53 INFO [loop_until]: OK (rc = 0) 07:44:53 DEBUG --- stdout --- 07:44:53 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 5Mi am-55f77847b7-778wv 8m 5813Mi am-55f77847b7-nv9k2 8m 5728Mi am-55f77847b7-v7x55 7m 5716Mi ds-cts-0 6m 382Mi ds-cts-1 5m 379Mi ds-cts-2 5m 373Mi ds-idrepo-0 302m 13426Mi ds-idrepo-1 88m 13253Mi ds-idrepo-2 256m 13451Mi end-user-ui-6845bc78c7-xjx5c 1m 5Mi idm-65858d8c4c-5tvr8 5m 4187Mi idm-65858d8c4c-zvhxh 5m 4255Mi lodemon-97b6d75b7-fknft 5m 67Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 764m 505Mi 07:44:53 DEBUG --- stderr --- 07:44:53 DEBUG 07:44:56 INFO 07:44:56 INFO 
[loop_until]: kubectl --namespace=xlou top node 07:44:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:44:56 INFO [loop_until]: OK (rc = 0) 07:44:56 DEBUG --- stdout --- 07:44:56 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1347Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 6740Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 6844Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 6919Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 70m 0% 5496Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2142Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 68m 0% 5507Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1069Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 65m 0% 14151Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 61m 0% 14164Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 66m 0% 13955Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 552m 3% 1810Mi 3% 07:44:56 DEBUG --- stderr --- 07:44:56 DEBUG 07:45:53 INFO 07:45:53 INFO [loop_until]: kubectl --namespace=xlou top pods 07:45:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:45:53 INFO [loop_until]: OK (rc = 0) 07:45:53 DEBUG --- stdout --- 07:45:53 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-6q777 1m 5Mi am-55f77847b7-778wv 8m 5813Mi am-55f77847b7-nv9k2 8m 5728Mi am-55f77847b7-v7x55 8m 5716Mi ds-cts-0 5m 382Mi ds-cts-1 5m 379Mi ds-cts-2 6m 373Mi ds-idrepo-0 13m 13426Mi ds-idrepo-1 15m 13254Mi ds-idrepo-2 9m 13452Mi end-user-ui-6845bc78c7-xjx5c 1m 5Mi idm-65858d8c4c-5tvr8 5m 4187Mi idm-65858d8c4c-zvhxh 4m 4254Mi lodemon-97b6d75b7-fknft 8m 67Mi login-ui-74d6fb46c-5prtd 1m 3Mi overseer-0-55c4b4c77c-l4rgv 809m 335Mi 07:45:53 DEBUG --- stderr --- 07:45:53 DEBUG 07:45:56 INFO 07:45:56 INFO [loop_until]: kubectl --namespace=xlou top node 07:45:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:45:56 INFO [loop_until]: OK (rc = 0) 07:45:56 DEBUG --- stdout --- 07:45:56 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1352Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 6740Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 62m 0% 6841Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 6918Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 5495Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 120m 0% 2146Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 68m 0% 5504Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1068Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 66m 0% 14155Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 62m 0% 14165Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 65m 0% 13956Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 918m 5% 1911Mi 3% 07:45:56 DEBUG --- stderr --- 07:45:56 DEBUG 07:46:14 INFO Finished: True 07:46:14 INFO Waiting for threads to register finish flag 07:46:56 INFO Done. Have a nice day! :) 127.0.0.1 - - [12/Aug/2023 07:46:56] "GET /monitoring/stop HTTP/1.1" 200 - 07:46:59 INFO File /tmp/lodemon_data-23-08-12_05:09:08/Cpu_cores_used_per_pod.json does not exist. Skipping... 07:46:59 INFO File /tmp/lodemon_data-23-08-12_05:09:08/Memory_usage_per_pod.json does not exist. Skipping... 07:46:59 INFO File /tmp/lodemon_data-23-08-12_05:09:08/Disk_tps_read_per_pod.json does not exist. Skipping... 07:46:59 INFO File /tmp/lodemon_data-23-08-12_05:09:08/Disk_tps_writes_per_pod.json does not exist. Skipping... 
07:46:59 INFO File /tmp/lodemon_data-23-08-12_05:09:08/Cpu_cores_used_per_node.json does not exist. Skipping...
07:46:59 INFO File /tmp/lodemon_data-23-08-12_05:09:08/Memory_usage_used_per_node.json does not exist. Skipping...
07:46:59 INFO File /tmp/lodemon_data-23-08-12_05:09:08/Cpu_iowait_per_node.json does not exist. Skipping...
07:46:59 INFO File /tmp/lodemon_data-23-08-12_05:09:08/Network_receive_per_node.json does not exist. Skipping...
07:46:59 INFO File /tmp/lodemon_data-23-08-12_05:09:08/Network_transmit_per_node.json does not exist. Skipping...
07:46:59 INFO File /tmp/lodemon_data-23-08-12_05:09:08/am_cts_task_count_token_session.json does not exist. Skipping...
07:46:59 INFO File /tmp/lodemon_data-23-08-12_05:09:08/am_authentication_rate.json does not exist. Skipping...
07:46:59 INFO File /tmp/lodemon_data-23-08-12_05:09:08/ds_db_cache_misses_internal_nodes(backend=amCts).json does not exist. Skipping...
07:46:59 INFO File /tmp/lodemon_data-23-08-12_05:09:08/ds_db_cache_misses_internal_nodes(backend=amIdentityStore).json does not exist. Skipping...
07:46:59 INFO File /tmp/lodemon_data-23-08-12_05:09:08/ds_db_cache_misses_internal_nodes(backend=cfgStore).json does not exist. Skipping...
07:46:59 INFO File /tmp/lodemon_data-23-08-12_05:09:08/ds_db_cache_misses_internal_nodes(backend=idmRepo).json does not exist. Skipping...
07:46:59 INFO File /tmp/lodemon_data-23-08-12_05:09:08/am_authentication_count_per_pod.json does not exist. Skipping...
07:46:59 INFO File /tmp/lodemon_data-23-08-12_05:09:08/Cts_reaper_Deletion_count.json does not exist. Skipping...
07:46:59 INFO File /tmp/lodemon_data-23-08-12_05:09:08/AM_oauth2_authorization_codes.json does not exist. Skipping...
07:46:59 INFO File /tmp/lodemon_data-23-08-12_05:09:08/ds_backend_entries_deleted_amCts.json does not exist. Skipping...
07:46:59 INFO File /tmp/lodemon_data-23-08-12_05:09:08/ds_pods_replication_delay.json does not exist. Skipping...
07:46:59 INFO File /tmp/lodemon_data-23-08-12_05:09:08/am_cts_reaper_cache_size.json does not exist. Skipping...
07:46:59 INFO File /tmp/lodemon_data-23-08-12_05:09:08/am_cts_reaper_search_seconds_total.json does not exist. Skipping...
07:46:59 INFO File /tmp/lodemon_data-23-08-12_05:09:08/ds_replication_replica_replayed_updates_conflicts_resolved.json does not exist. Skipping...
07:46:59 INFO File /tmp/lodemon_data-23-08-12_05:09:08/node_disk_read_bytes_total.json does not exist. Skipping...
07:46:59 INFO File /tmp/lodemon_data-23-08-12_05:09:08/node_disk_written_bytes_total.json does not exist. Skipping...
07:46:59 INFO File /tmp/lodemon_data-23-08-12_05:09:08/ds_backend_entry_count.json does not exist. Skipping...
07:46:59 INFO File /tmp/lodemon_data-23-08-12_05:09:08/node_disk_io_time_seconds_total.json does not exist. Skipping...
127.0.0.1 - - [12/Aug/2023 07:47:01] "GET /monitoring/process HTTP/1.1" 200 -
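Note: every [loop_until] entry in the log above follows the same pattern: run a kubectl command, compare its return code against expected_rc, and retry every interval seconds until max_time elapses. The helper's real implementation is not part of this log; the Python below is only a minimal sketch of that pattern, with the parameter names (max_time, interval, expected_rc) taken from the log output and everything else assumed.

    import subprocess
    import time

    def loop_until(command, max_time=180, interval=5, expected_rc=(0,)):
        """Re-run a shell command until its return code is in expected_rc,
        retrying every `interval` seconds for at most `max_time` seconds."""
        deadline = time.monotonic() + max_time
        while True:
            result = subprocess.run(command, shell=True,
                                    capture_output=True, text=True)
            if result.returncode in expected_rc:
                return result  # caller inspects stdout/stderr as in the log
            if time.monotonic() >= deadline:
                raise TimeoutError(f"rc={result.returncode} after {max_time}s: {command}")
            time.sleep(interval)

    # Example: poll pod usage the way the monitor does, then split the
    # "NAME  CPU(cores)  MEMORY(bytes)" table into per-pod tuples.
    top = loop_until("kubectl --namespace=xlou top pods")
    rows = [line.split() for line in top.stdout.splitlines()[1:] if line.strip()]
    for name, cpu, mem in rows:
        print(name, cpu, mem)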