====================================================================================================
========================================= Pod describe =========================================
====================================================================================================
Name:             lodemon-66684b7694-c5c6m
Namespace:        xlou
Priority:         0
Node:             gke-xlou-cdm-default-pool-f05840a3-jqvg/10.142.0.123
Start Time:       Wed, 16 Aug 2023 01:24:39 +0000
Labels:           app=lodemon
                  app.kubernetes.io/name=lodemon
                  pod-template-hash=66684b7694
                  skaffold.dev/run-id=8bcb5db4-6f24-4d94-94d8-15f5ad8c5adc
Annotations:      <none>
Status:           Running
IP:               10.106.44.7
IPs:
  IP:             10.106.44.7
Controlled By:    ReplicaSet/lodemon-66684b7694
Containers:
  lodemon:
    Container ID:  containerd://2e28839accf76a10ea7af9dc736aeaf2503f9e09fca5a473628f418dcd58e856
    Image:         gcr.io/engineeringpit/lodestar-images/lodestarbox:4d5fd958b0039996098fd26521beb194042b0f91
    Image ID:      gcr.io/engineeringpit/lodestar-images/lodestarbox@sha256:2106b17c28f2c595f23eef11231373fe5474ef49737c343028866b96d844ef4b
    Port:          8080/TCP
    Host Port:     0/TCP
    Command:
      python3
    Args:
      /lodestar/scripts/lodemon_run.py
      -W
      default
    State:          Running
      Started:      Wed, 16 Aug 2023 01:25:24 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  2Gi
    Requests:
      cpu:     1
      memory:  1Gi
    Liveness:   exec [cat /tmp/lodemon_alive] delay=20s timeout=1s period=10s #success=1 #failure=3
    Readiness:  exec [cat /tmp/lodemon_alive] delay=20s timeout=1s period=10s #success=1 #failure=3
    Environment:
      SKAFFOLD_PROFILE:  medium
    Mounts:
      /lodestar/config/config.yaml from config (rw,path="config.yaml")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ncfqc (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      lodemon-config
    Optional:  false
  kube-api-access-ncfqc:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
====================================================================================================
=========================================== Pod logs ===========================================
====================================================================================================
02:25:25 INFO
02:25:25 INFO --------------------- Get expected number of pods ---------------------
02:25:25 INFO
02:25:25 INFO [loop_until]: kubectl --namespace=xlou get deployments --selector app=am --output jsonpath={.items[*].spec.replicas}
02:25:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
02:25:25 INFO [loop_until]: OK (rc = 0)
02:25:25 DEBUG --- stdout ---
02:25:25 DEBUG 3
02:25:25 DEBUG --- stderr ---
02:25:25 DEBUG
02:25:25 INFO
02:25:25 INFO ---------------------------- Get pod list ----------------------------
02:25:25 INFO
02:25:25 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=am --output jsonpath={.items[*].metadata.name}
02:25:25 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0])
02:25:25 INFO [loop_until]: OK (rc = 0)
02:25:25 DEBUG --- stdout ---
02:25:25 DEBUG am-869fdb5db9-5j69v am-869fdb5db9-8dg94 am-869fdb5db9-wt7sg
02:25:25 DEBUG --- stderr ---
02:25:25 DEBUG
02:25:25 INFO
02:25:25 INFO -------------- Check pod am-869fdb5db9-5j69v is running --------------
02:25:25 INFO
02:25:25 INFO [loop_until]: kubectl --namespace=xlou get pods am-869fdb5db9-5j69v -o=jsonpath={.status.phase} | grep "Running"
02:25:25 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
02:25:25 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
02:25:25 INFO [loop_until]: OK (rc = 0)
02:25:25 DEBUG --- stdout ---
02:25:25 DEBUG Running
02:25:25 DEBUG --- stderr ---
02:25:25 DEBUG
02:25:25 INFO
02:25:25 INFO [loop_until]: kubectl --namespace=xlou get pods am-869fdb5db9-5j69v -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
02:25:25 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
02:25:25 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
02:25:25 INFO [loop_until]: OK (rc = 0)
02:25:25 DEBUG --- stdout ---
02:25:25 DEBUG true
02:25:25 DEBUG --- stderr ---
02:25:25 DEBUG
02:25:25 INFO
02:25:25 INFO [loop_until]: kubectl --namespace=xlou get pod am-869fdb5db9-5j69v --output jsonpath={.status.startTime}
02:25:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
02:25:25 INFO [loop_until]: OK (rc = 0)
02:25:25 DEBUG --- stdout ---
02:25:25 DEBUG 2023-08-16T01:14:34Z
02:25:25 DEBUG --- stderr ---
02:25:25 DEBUG
02:25:25 INFO
02:25:25 INFO ------- Check pod am-869fdb5db9-5j69v filesystem is accessible -------
02:25:25 INFO
02:25:25 INFO [loop_until]: kubectl --namespace=xlou exec am-869fdb5db9-5j69v --container openam -- ls / | grep "bin"
02:25:25 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
02:25:26 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
02:25:26 INFO [loop_until]: OK (rc = 0)
02:25:26 DEBUG --- stdout ---
02:25:26 DEBUG bin boot dev entrypoint.sh etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var
02:25:26 DEBUG --- stderr ---
02:25:26 DEBUG
02:25:26 INFO
02:25:26 INFO ------------- Check pod am-869fdb5db9-5j69v restart count -------------
02:25:26 INFO
02:25:26 INFO [loop_until]: kubectl --namespace=xlou get pod am-869fdb5db9-5j69v --output jsonpath={.status.containerStatuses[*].restartCount}
02:25:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
02:25:26 INFO [loop_until]: OK (rc = 0)
02:25:26 DEBUG --- stdout ---
02:25:26 DEBUG 0
02:25:26 DEBUG --- stderr ---
02:25:26 DEBUG
02:25:26 INFO Pod am-869fdb5db9-5j69v has been restarted 0 times.
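The `[loop_until]` entries above come from a polling wrapper that reruns a shell command until its return code matches `expected_rc` or `max_time` seconds pass, sleeping `interval` seconds between attempts. A minimal sketch of that pattern (the name `loop_until` and its parameters are taken from the log; this implementation is an assumption, not lodemon's actual code):

```python
import subprocess
import time

def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,)):
    """Rerun shell command `cmd` until its return code is in `expected_rc`
    or `max_time` seconds elapse; sleep `interval` seconds between tries."""
    deadline = time.monotonic() + max_time
    while True:
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        if result.returncode in expected_rc:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"command did not succeed within {max_time}s: {cmd}")
        time.sleep(interval)

# A pipeline in the same shape as the log's checks, but one that
# succeeds immediately without needing a cluster:
r = loop_until('echo Running | grep "Running"', max_time=10, interval=1)
```

The per-check `max_time`/`interval` values in the log (e.g. 360s/5s for phase checks, 180s/5s for one-shot reads) map directly onto these arguments.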
02:25:26 INFO
02:25:26 INFO -------------- Check pod am-869fdb5db9-8dg94 is running --------------
02:25:26 INFO
02:25:26 INFO [loop_until]: kubectl --namespace=xlou get pods am-869fdb5db9-8dg94 -o=jsonpath={.status.phase} | grep "Running"
02:25:26 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
02:25:26 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
02:25:26 INFO [loop_until]: OK (rc = 0)
02:25:26 DEBUG --- stdout ---
02:25:26 DEBUG Running
02:25:26 DEBUG --- stderr ---
02:25:26 DEBUG
02:25:26 INFO
02:25:26 INFO [loop_until]: kubectl --namespace=xlou get pods am-869fdb5db9-8dg94 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
02:25:26 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
02:25:26 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
02:25:26 INFO [loop_until]: OK (rc = 0)
02:25:26 DEBUG --- stdout ---
02:25:26 DEBUG true
02:25:26 DEBUG --- stderr ---
02:25:26 DEBUG
02:25:26 INFO
02:25:26 INFO [loop_until]: kubectl --namespace=xlou get pod am-869fdb5db9-8dg94 --output jsonpath={.status.startTime}
02:25:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
02:25:26 INFO [loop_until]: OK (rc = 0)
02:25:26 DEBUG --- stdout ---
02:25:26 DEBUG 2023-08-16T01:14:34Z
02:25:26 DEBUG --- stderr ---
02:25:26 DEBUG
02:25:26 INFO
02:25:26 INFO ------- Check pod am-869fdb5db9-8dg94 filesystem is accessible -------
02:25:26 INFO
02:25:26 INFO [loop_until]: kubectl --namespace=xlou exec am-869fdb5db9-8dg94 --container openam -- ls / | grep "bin"
02:25:26 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
02:25:26 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
02:25:26 INFO [loop_until]: OK (rc = 0)
02:25:26 DEBUG --- stdout ---
02:25:26 DEBUG bin boot dev entrypoint.sh etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var
02:25:26 DEBUG --- stderr ---
02:25:26 DEBUG
02:25:26 INFO
02:25:26 INFO ------------- Check pod am-869fdb5db9-8dg94 restart count -------------
02:25:26 INFO
02:25:26 INFO [loop_until]: kubectl --namespace=xlou get pod am-869fdb5db9-8dg94 --output jsonpath={.status.containerStatuses[*].restartCount}
02:25:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
02:25:26 INFO [loop_until]: OK (rc = 0)
02:25:26 DEBUG --- stdout ---
02:25:26 DEBUG 0
02:25:26 DEBUG --- stderr ---
02:25:26 DEBUG
02:25:26 INFO Pod am-869fdb5db9-8dg94 has been restarted 0 times.
02:25:26 INFO
02:25:26 INFO -------------- Check pod am-869fdb5db9-wt7sg is running --------------
02:25:26 INFO
02:25:26 INFO [loop_until]: kubectl --namespace=xlou get pods am-869fdb5db9-wt7sg -o=jsonpath={.status.phase} | grep "Running"
02:25:26 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
02:25:26 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
02:25:26 INFO [loop_until]: OK (rc = 0)
02:25:26 DEBUG --- stdout ---
02:25:26 DEBUG Running
02:25:26 DEBUG --- stderr ---
02:25:26 DEBUG
02:25:26 INFO
02:25:26 INFO [loop_until]: kubectl --namespace=xlou get pods am-869fdb5db9-wt7sg -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
02:25:26 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
02:25:26 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
02:25:26 INFO [loop_until]: OK (rc = 0)
02:25:26 DEBUG --- stdout ---
02:25:26 DEBUG true
02:25:26 DEBUG --- stderr ---
02:25:26 DEBUG
02:25:26 INFO
02:25:26 INFO [loop_until]: kubectl --namespace=xlou get pod am-869fdb5db9-wt7sg --output jsonpath={.status.startTime}
02:25:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
02:25:26 INFO [loop_until]: OK (rc = 0)
02:25:26 DEBUG --- stdout ---
02:25:26 DEBUG 2023-08-16T01:14:34Z
02:25:26 DEBUG --- stderr ---
02:25:26 DEBUG
02:25:26 INFO
02:25:26 INFO ------- Check pod am-869fdb5db9-wt7sg filesystem is accessible -------
02:25:26 INFO
02:25:26 INFO [loop_until]: kubectl --namespace=xlou exec am-869fdb5db9-wt7sg --container openam -- ls / | grep "bin"
02:25:26 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
02:25:26 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
02:25:26 INFO [loop_until]: OK (rc = 0)
02:25:26 DEBUG --- stdout ---
02:25:26 DEBUG bin boot dev entrypoint.sh etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var
02:25:26 DEBUG --- stderr ---
02:25:26 DEBUG
02:25:26 INFO
02:25:26 INFO ------------- Check pod am-869fdb5db9-wt7sg restart count -------------
02:25:26 INFO
02:25:26 INFO [loop_until]: kubectl --namespace=xlou get pod am-869fdb5db9-wt7sg --output jsonpath={.status.containerStatuses[*].restartCount}
02:25:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
02:25:26 INFO [loop_until]: OK (rc = 0)
02:25:26 DEBUG --- stdout ---
02:25:26 DEBUG 0
02:25:26 DEBUG --- stderr ---
02:25:26 DEBUG
02:25:26 INFO Pod am-869fdb5db9-wt7sg has been restarted 0 times.
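The checks above read single fields via `-o jsonpath=...` and pipe them through `grep`. The same fields can be pulled from the full JSON that `kubectl get pod <name> -o json` returns, which avoids the grep step; a sketch (the sample document below is a hypothetical stub containing only the fields these checks use):

```python
import json

# Stub in the shape of `kubectl get pod <name> -o json` output
# (hypothetical values mirroring the am pods in the log).
pod_json = json.loads("""
{
  "status": {
    "phase": "Running",
    "startTime": "2023-08-16T01:14:34Z",
    "containerStatuses": [
      {"name": "openam", "ready": true, "restartCount": 0}
    ]
  }
}
""")

status = pod_json["status"]
phase = status["phase"]                                          # {.status.phase}
ready = all(c["ready"] for c in status["containerStatuses"])     # {.status.containerStatuses[*].ready}
restarts = sum(c["restartCount"] for c in status["containerStatuses"])
```

`all(...)` matters for multi-container pods, where the jsonpath `[*].ready` query prints one value per container.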
02:25:26 INFO
02:25:26 INFO --------------------- Get expected number of pods ---------------------
02:25:26 INFO
02:25:26 INFO [loop_until]: kubectl --namespace=xlou get deployment --selector app=idm --output jsonpath={.items[*].spec.replicas}
02:25:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
02:25:26 INFO [loop_until]: OK (rc = 0)
02:25:26 DEBUG --- stdout ---
02:25:26 DEBUG 2
02:25:26 DEBUG --- stderr ---
02:25:26 DEBUG
02:25:26 INFO
02:25:26 INFO ---------------------------- Get pod list ----------------------------
02:25:26 INFO
02:25:26 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=idm --output jsonpath={.items[*].metadata.name}
02:25:26 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0])
02:25:26 INFO [loop_until]: OK (rc = 0)
02:25:26 DEBUG --- stdout ---
02:25:26 DEBUG idm-65858d8c4c-d6c9h idm-65858d8c4c-pt5s9
02:25:26 DEBUG --- stderr ---
02:25:26 DEBUG
02:25:26 INFO
02:25:26 INFO -------------- Check pod idm-65858d8c4c-d6c9h is running --------------
02:25:26 INFO
02:25:26 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-d6c9h -o=jsonpath={.status.phase} | grep "Running"
02:25:26 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
02:25:26 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
02:25:26 INFO [loop_until]: OK (rc = 0)
02:25:26 DEBUG --- stdout ---
02:25:26 DEBUG Running
02:25:26 DEBUG --- stderr ---
02:25:26 DEBUG
02:25:26 INFO
02:25:26 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-d6c9h -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
02:25:26 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
02:25:26 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
02:25:26 INFO [loop_until]: OK (rc = 0)
02:25:26 DEBUG --- stdout ---
02:25:26 DEBUG true
02:25:26 DEBUG --- stderr ---
02:25:26 DEBUG
02:25:26 INFO
02:25:26 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-d6c9h --output jsonpath={.status.startTime}
02:25:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
02:25:26 INFO [loop_until]: OK (rc = 0)
02:25:26 DEBUG --- stdout ---
02:25:26 DEBUG 2023-08-16T01:14:35Z
02:25:26 DEBUG --- stderr ---
02:25:26 DEBUG
02:25:26 INFO
02:25:26 INFO ------- Check pod idm-65858d8c4c-d6c9h filesystem is accessible -------
02:25:26 INFO
02:25:26 INFO [loop_until]: kubectl --namespace=xlou exec idm-65858d8c4c-d6c9h --container openidm -- ls / | grep "bin"
02:25:26 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
02:25:27 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
02:25:27 INFO [loop_until]: OK (rc = 0)
02:25:27 DEBUG --- stdout ---
02:25:27 DEBUG Dockerfile.java-11 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
02:25:27 DEBUG --- stderr ---
02:25:27 DEBUG
02:25:27 INFO
02:25:27 INFO ------------ Check pod idm-65858d8c4c-d6c9h restart count ------------
02:25:27 INFO
02:25:27 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-d6c9h --output jsonpath={.status.containerStatuses[*].restartCount}
02:25:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
02:25:27 INFO [loop_until]: OK (rc = 0)
02:25:27 DEBUG --- stdout ---
02:25:27 DEBUG 0
02:25:27 DEBUG --- stderr ---
02:25:27 DEBUG
02:25:27 INFO Pod idm-65858d8c4c-d6c9h has been restarted 0 times.
02:25:27 INFO
02:25:27 INFO -------------- Check pod idm-65858d8c4c-pt5s9 is running --------------
02:25:27 INFO
02:25:27 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-pt5s9 -o=jsonpath={.status.phase} | grep "Running"
02:25:27 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
02:25:27 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
02:25:27 INFO [loop_until]: OK (rc = 0)
02:25:27 DEBUG --- stdout ---
02:25:27 DEBUG Running
02:25:27 DEBUG --- stderr ---
02:25:27 DEBUG
02:25:27 INFO
02:25:27 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-pt5s9 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
02:25:27 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
02:25:27 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
02:25:27 INFO [loop_until]: OK (rc = 0)
02:25:27 DEBUG --- stdout ---
02:25:27 DEBUG true
02:25:27 DEBUG --- stderr ---
02:25:27 DEBUG
02:25:27 INFO
02:25:27 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-pt5s9 --output jsonpath={.status.startTime}
02:25:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
02:25:27 INFO [loop_until]: OK (rc = 0)
02:25:27 DEBUG --- stdout ---
02:25:27 DEBUG 2023-08-16T01:14:35Z
02:25:27 DEBUG --- stderr ---
02:25:27 DEBUG
02:25:27 INFO
02:25:27 INFO ------- Check pod idm-65858d8c4c-pt5s9 filesystem is accessible -------
02:25:27 INFO
02:25:27 INFO [loop_until]: kubectl --namespace=xlou exec idm-65858d8c4c-pt5s9 --container openidm -- ls / | grep "bin"
02:25:27 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
02:25:27 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
02:25:27 INFO [loop_until]: OK (rc = 0)
02:25:27 DEBUG --- stdout ---
02:25:27 DEBUG Dockerfile.java-11 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
02:25:27 DEBUG --- stderr ---
02:25:27 DEBUG
02:25:27 INFO
02:25:27 INFO ------------ Check pod idm-65858d8c4c-pt5s9 restart count ------------
02:25:27 INFO
02:25:27 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-pt5s9 --output jsonpath={.status.containerStatuses[*].restartCount}
02:25:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
02:25:27 INFO [loop_until]: OK (rc = 0)
02:25:27 DEBUG --- stdout ---
02:25:27 DEBUG 0
02:25:27 DEBUG --- stderr ---
02:25:27 DEBUG
02:25:27 INFO Pod idm-65858d8c4c-pt5s9 has been restarted 0 times.
02:25:27 INFO
02:25:27 INFO --------------------- Get expected number of pods ---------------------
02:25:27 INFO
02:25:27 INFO [loop_until]: kubectl --namespace=xlou get statefulsets --selector app=ds-idrepo --output jsonpath={.items[*].spec.replicas}
02:25:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
02:25:27 INFO [loop_until]: OK (rc = 0)
02:25:27 DEBUG --- stdout ---
02:25:27 DEBUG 3
02:25:27 DEBUG --- stderr ---
02:25:27 DEBUG
02:25:27 INFO
02:25:27 INFO ---------------------------- Get pod list ----------------------------
02:25:27 INFO
02:25:27 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=ds-idrepo --output jsonpath={.items[*].metadata.name}
02:25:27 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0])
02:25:27 INFO [loop_until]: OK (rc = 0)
02:25:27 DEBUG --- stdout ---
02:25:27 DEBUG ds-idrepo-0 ds-idrepo-1 ds-idrepo-2
02:25:27 DEBUG --- stderr ---
02:25:27 DEBUG
02:25:27 INFO
02:25:27 INFO ------------------ Check pod ds-idrepo-0 is running ------------------
02:25:27 INFO
02:25:27 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.phase} | grep "Running"
02:25:27 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
02:25:27 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
02:25:27 INFO [loop_until]: OK (rc = 0)
02:25:27 DEBUG --- stdout ---
02:25:27 DEBUG Running
02:25:27 DEBUG --- stderr ---
02:25:27 DEBUG
02:25:27 INFO
02:25:27 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
02:25:27 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
02:25:27 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
02:25:27 INFO [loop_until]: OK (rc = 0)
02:25:27 DEBUG --- stdout ---
02:25:27 DEBUG true
02:25:27 DEBUG --- stderr ---
02:25:27 DEBUG
02:25:27 INFO
02:25:27 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 --output jsonpath={.status.startTime}
02:25:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
02:25:27 INFO [loop_until]: OK (rc = 0)
02:25:27 DEBUG --- stdout ---
02:25:27 DEBUG 2023-08-16T00:40:17Z
02:25:27 DEBUG --- stderr ---
02:25:27 DEBUG
02:25:27 INFO
02:25:27 INFO ----------- Check pod ds-idrepo-0 filesystem is accessible -----------
02:25:27 INFO
02:25:27 INFO [loop_until]: kubectl --namespace=xlou exec ds-idrepo-0 --container ds -- ls / | grep "bin"
02:25:27 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
02:25:27 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
02:25:27 INFO [loop_until]: OK (rc = 0)
02:25:27 DEBUG --- stdout ---
02:25:27 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
02:25:27 DEBUG --- stderr ---
02:25:27 DEBUG
02:25:27 INFO
02:25:27 INFO ----------------- Check pod ds-idrepo-0 restart count -----------------
02:25:27 INFO
02:25:27 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 --output jsonpath={.status.containerStatuses[*].restartCount}
02:25:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
02:25:27 INFO [loop_until]: OK (rc = 0)
02:25:27 DEBUG --- stdout ---
02:25:27 DEBUG 0
02:25:27 DEBUG --- stderr ---
02:25:27 DEBUG
02:25:27 INFO Pod ds-idrepo-0 has been restarted 0 times.
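Each section pairs a "Get expected number of pods" read of the controller's `.spec.replicas` with a "Get pod list" query on the same selector, then checks each listed pod. The comparison step can be sketched as follows (the helper name is an illustration, not lodemon's actual code; pod names are taken from the ds-idrepo section of the log):

```python
def check_replica_count(expected_replicas: int, pod_names: list) -> bool:
    """Return True when the number of pods found by the label selector
    matches the controller's .spec.replicas value."""
    return len(pod_names) == expected_replicas

# Values from the ds-idrepo StatefulSet section above:
ok = check_replica_count(3, ["ds-idrepo-0", "ds-idrepo-1", "ds-idrepo-2"])
```

Polling this comparison (rather than checking once) is what makes the sequence robust while a rollout is still scaling up.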
02:25:27 INFO
02:25:27 INFO ------------------ Check pod ds-idrepo-1 is running ------------------
02:25:27 INFO
02:25:27 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.phase} | grep "Running"
02:25:27 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
02:25:27 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
02:25:27 INFO [loop_until]: OK (rc = 0)
02:25:27 DEBUG --- stdout ---
02:25:27 DEBUG Running
02:25:27 DEBUG --- stderr ---
02:25:27 DEBUG
02:25:27 INFO
02:25:27 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
02:25:27 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
02:25:27 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
02:25:27 INFO [loop_until]: OK (rc = 0)
02:25:27 DEBUG --- stdout ---
02:25:27 DEBUG true
02:25:27 DEBUG --- stderr ---
02:25:27 DEBUG
02:25:27 INFO
02:25:27 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 --output jsonpath={.status.startTime}
02:25:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
02:25:27 INFO [loop_until]: OK (rc = 0)
02:25:27 DEBUG --- stdout ---
02:25:27 DEBUG 2023-08-16T00:52:39Z
02:25:27 DEBUG --- stderr ---
02:25:27 DEBUG
02:25:27 INFO
02:25:27 INFO ----------- Check pod ds-idrepo-1 filesystem is accessible -----------
02:25:27 INFO
02:25:27 INFO [loop_until]: kubectl --namespace=xlou exec ds-idrepo-1 --container ds -- ls / | grep "bin"
02:25:27 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
02:25:28 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
02:25:28 INFO [loop_until]: OK (rc = 0)
02:25:28 DEBUG --- stdout ---
02:25:28 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
02:25:28 DEBUG --- stderr ---
02:25:28 DEBUG
02:25:28 INFO
02:25:28 INFO ----------------- Check pod ds-idrepo-1 restart count -----------------
02:25:28 INFO
02:25:28 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 --output jsonpath={.status.containerStatuses[*].restartCount}
02:25:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
02:25:28 INFO [loop_until]: OK (rc = 0)
02:25:28 DEBUG --- stdout ---
02:25:28 DEBUG 0
02:25:28 DEBUG --- stderr ---
02:25:28 DEBUG
02:25:28 INFO Pod ds-idrepo-1 has been restarted 0 times.
02:25:28 INFO
02:25:28 INFO ------------------ Check pod ds-idrepo-2 is running ------------------
02:25:28 INFO
02:25:28 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.phase} | grep "Running"
02:25:28 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
02:25:28 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
02:25:28 INFO [loop_until]: OK (rc = 0)
02:25:28 DEBUG --- stdout ---
02:25:28 DEBUG Running
02:25:28 DEBUG --- stderr ---
02:25:28 DEBUG
02:25:28 INFO
02:25:28 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
02:25:28 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
02:25:28 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
02:25:28 INFO [loop_until]: OK (rc = 0)
02:25:28 DEBUG --- stdout ---
02:25:28 DEBUG true
02:25:28 DEBUG --- stderr ---
02:25:28 DEBUG
02:25:28 INFO
02:25:28 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 --output jsonpath={.status.startTime}
02:25:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
02:25:28 INFO [loop_until]: OK (rc = 0)
02:25:28 DEBUG --- stdout ---
02:25:28 DEBUG 2023-08-16T01:03:35Z
02:25:28 DEBUG --- stderr ---
02:25:28 DEBUG
02:25:28 INFO
02:25:28 INFO ----------- Check pod ds-idrepo-2 filesystem is accessible -----------
02:25:28 INFO
02:25:28 INFO [loop_until]: kubectl --namespace=xlou exec ds-idrepo-2 --container ds -- ls / | grep "bin"
02:25:28 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
02:25:28 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
02:25:28 INFO [loop_until]: OK (rc = 0)
02:25:28 DEBUG --- stdout ---
02:25:28 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
02:25:28 DEBUG --- stderr ---
02:25:28 DEBUG
02:25:28 INFO
02:25:28 INFO ----------------- Check pod ds-idrepo-2 restart count -----------------
02:25:28 INFO
02:25:28 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 --output jsonpath={.status.containerStatuses[*].restartCount}
02:25:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
02:25:28 INFO [loop_until]: OK (rc = 0)
02:25:28 DEBUG --- stdout ---
02:25:28 DEBUG 0
02:25:28 DEBUG --- stderr ---
02:25:28 DEBUG
02:25:28 INFO Pod ds-idrepo-2 has been restarted 0 times.
02:25:28 INFO
02:25:28 INFO --------------------- Get expected number of pods ---------------------
02:25:28 INFO
02:25:28 INFO [loop_until]: kubectl --namespace=xlou get statefulsets --selector app=ds-cts --output jsonpath={.items[*].spec.replicas}
02:25:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
02:25:28 INFO [loop_until]: OK (rc = 0)
02:25:28 DEBUG --- stdout ---
02:25:28 DEBUG 3
02:25:28 DEBUG --- stderr ---
02:25:28 DEBUG
02:25:28 INFO
02:25:28 INFO ---------------------------- Get pod list ----------------------------
02:25:28 INFO
02:25:28 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=ds-cts --output jsonpath={.items[*].metadata.name}
02:25:28 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0])
02:25:28 INFO [loop_until]: OK (rc = 0)
02:25:28 DEBUG --- stdout ---
02:25:28 DEBUG ds-cts-0 ds-cts-1 ds-cts-2
02:25:28 DEBUG --- stderr ---
02:25:28 DEBUG
02:25:28 INFO
02:25:28 INFO -------------------- Check pod ds-cts-0 is running --------------------
02:25:28 INFO
02:25:28 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.phase} | grep "Running"
02:25:28 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
02:25:28 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
02:25:28 INFO [loop_until]: OK (rc = 0)
02:25:28 DEBUG --- stdout ---
02:25:28 DEBUG Running
02:25:28 DEBUG --- stderr ---
02:25:28 DEBUG
02:25:28 INFO
02:25:28 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
02:25:28 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
02:25:28 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
02:25:28 INFO [loop_until]: OK (rc = 0)
02:25:28 DEBUG --- stdout ---
02:25:28 DEBUG true
02:25:28 DEBUG --- stderr ---
02:25:28 DEBUG
02:25:28 INFO
02:25:28 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-0 --output jsonpath={.status.startTime}
02:25:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
02:25:28 INFO [loop_until]: OK (rc = 0)
02:25:28 DEBUG --- stdout ---
02:25:28 DEBUG 2023-08-16T00:40:17Z
02:25:28 DEBUG --- stderr ---
02:25:28 DEBUG
02:25:28 INFO
02:25:28 INFO ------------- Check pod ds-cts-0 filesystem is accessible -------------
02:25:28 INFO
02:25:28 INFO [loop_until]: kubectl --namespace=xlou exec ds-cts-0 --container ds -- ls / | grep "bin"
02:25:28 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
02:25:28 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
02:25:28 INFO [loop_until]: OK (rc = 0)
02:25:28 DEBUG --- stdout ---
02:25:28 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
02:25:28 DEBUG --- stderr ---
02:25:28 DEBUG
02:25:28 INFO
02:25:28 INFO ------------------ Check pod ds-cts-0 restart count ------------------
02:25:28 INFO
02:25:28 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-0 --output jsonpath={.status.containerStatuses[*].restartCount}
02:25:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
02:25:28 INFO [loop_until]: OK (rc = 0)
02:25:28 DEBUG --- stdout ---
02:25:28 DEBUG 0
02:25:28 DEBUG --- stderr ---
02:25:28 DEBUG
02:25:28 INFO Pod ds-cts-0 has been restarted 0 times.
02:25:28 INFO
02:25:28 INFO -------------------- Check pod ds-cts-1 is running --------------------
02:25:28 INFO
02:25:28 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.phase} | grep "Running"
02:25:28 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
02:25:28 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
02:25:28 INFO [loop_until]: OK (rc = 0)
02:25:28 DEBUG --- stdout ---
02:25:28 DEBUG Running
02:25:28 DEBUG --- stderr ---
02:25:28 DEBUG
02:25:28 INFO
02:25:28 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
02:25:28 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
02:25:28 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
02:25:28 INFO [loop_until]: OK (rc = 0)
02:25:28 DEBUG --- stdout ---
02:25:28 DEBUG true
02:25:28 DEBUG --- stderr ---
02:25:28 DEBUG
02:25:28 INFO
02:25:28 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-1 --output jsonpath={.status.startTime}
02:25:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
02:25:29 INFO [loop_until]: OK (rc = 0)
02:25:29 DEBUG --- stdout ---
02:25:29 DEBUG 2023-08-16T00:40:43Z
02:25:29 DEBUG --- stderr ---
02:25:29 DEBUG
02:25:29 INFO
02:25:29 INFO ------------- Check pod ds-cts-1 filesystem is accessible -------------
02:25:29 INFO
02:25:29 INFO [loop_until]: kubectl --namespace=xlou exec ds-cts-1 --container ds -- ls / | grep "bin"
02:25:29 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
02:25:29 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
02:25:29 INFO [loop_until]: OK (rc = 0)
02:25:29 DEBUG --- stdout ---
02:25:29 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
02:25:29 DEBUG --- stderr ---
02:25:29 DEBUG
02:25:29 INFO
02:25:29 INFO ------------------ Check pod ds-cts-1 restart count ------------------
02:25:29 INFO
02:25:29 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-1 --output jsonpath={.status.containerStatuses[*].restartCount}
02:25:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
02:25:29 INFO [loop_until]: OK (rc = 0)
02:25:29 DEBUG --- stdout ---
02:25:29 DEBUG 0
02:25:29 DEBUG --- stderr ---
02:25:29 DEBUG
02:25:29 INFO Pod ds-cts-1 has been restarted 0 times.
02:25:29 INFO
02:25:29 INFO -------------------- Check pod ds-cts-2 is running --------------------
02:25:29 INFO
02:25:29 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.phase} | grep "Running"
02:25:29 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
02:25:29 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
02:25:29 INFO [loop_until]: OK (rc = 0)
02:25:29 DEBUG --- stdout ---
02:25:29 DEBUG Running
02:25:29 DEBUG --- stderr ---
02:25:29 DEBUG
02:25:29 INFO
02:25:29 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true"
02:25:29 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
02:25:29 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
02:25:29 INFO [loop_until]: OK (rc = 0)
02:25:29 DEBUG --- stdout ---
02:25:29 DEBUG true
02:25:29 DEBUG --- stderr ---
02:25:29 DEBUG
02:25:29 INFO
02:25:29 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-2 --output jsonpath={.status.startTime}
02:25:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
02:25:29 INFO [loop_until]: OK (rc = 0)
02:25:29 DEBUG --- stdout ---
02:25:29 DEBUG 2023-08-16T00:41:09Z
02:25:29 DEBUG --- stderr ---
02:25:29 DEBUG
02:25:29 INFO
02:25:29 INFO ------------- Check pod ds-cts-2 filesystem is accessible -------------
02:25:29 INFO
02:25:29 INFO [loop_until]: kubectl --namespace=xlou exec ds-cts-2 --container ds -- ls / | grep "bin"
02:25:29 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0])
02:25:29 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found
02:25:29 INFO [loop_until]: OK (rc = 0)
02:25:29 DEBUG --- stdout ---
02:25:29 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
02:25:29 DEBUG --- stderr ---
02:25:29 DEBUG
02:25:29 INFO
02:25:29 INFO ------------------ Check pod ds-cts-2 restart count ------------------
02:25:29 INFO
02:25:29 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-2 --output jsonpath={.status.containerStatuses[*].restartCount}
02:25:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
02:25:29 INFO [loop_until]: OK (rc = 0)
02:25:29 DEBUG --- stdout ---
02:25:29 DEBUG 0
02:25:29 DEBUG --- stderr ---
02:25:29 DEBUG
02:25:29 INFO Pod ds-cts-2 has been restarted 0 times.
 * Serving Flask app 'lodemon_run'
 * Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:8080
 * Running on http://10.106.44.7:8080
Press CTRL+C to quit
02:25:56 INFO
02:25:56 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces
02:25:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
02:25:56 INFO [loop_until]: OK (rc = 0)
02:25:56 DEBUG --- stdout ---
02:25:56 DEBUG {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}}
02:25:56 DEBUG --- stderr ---
02:25:56 DEBUG
02:25:57 INFO
02:25:57 INFO [loop_until]: kubectl get services
-o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 02:25:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:25:57 INFO [loop_until]: OK (rc = 0) 02:25:57 DEBUG --- stdout --- 02:25:57 DEBUG {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-oper
ator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 02:25:57 DEBUG --- stderr --- 02:25:57 DEBUG 02:25:58 INFO 02:25:58 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 02:25:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:25:58 INFO [loop_until]: OK (rc = 0) 02:25:58 DEBUG --- stdout --- 02:25:58 DEBUG {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","
ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 02:25:58 DEBUG --- stderr --- 02:25:58 DEBUG 02:25:58 INFO 02:25:58 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 02:25:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:25:58 INFO [loop_until]: OK (rc = 0) 02:25:58 DEBUG --- stdout --- 02:25:58 DEBUG {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-
prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 02:25:58 DEBUG --- stderr --- 02:25:58 DEBUG 02:25:58 INFO 02:25:58 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 02:25:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:25:58 INFO [loop_until]: OK (rc = 0) 02:25:58 DEBUG --- stdout --- 02:25:58 DEBUG {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{
},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 02:25:58 DEBUG --- stderr --- 02:25:58 DEBUG 02:25:58 INFO 02:25:58 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 02:25:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:25:58 INFO [loop_until]: OK (rc = 0) 02:25:58 DEBUG --- stdout --- 02:25:58 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 02:25:58 DEBUG --- stderr --- 02:25:58 DEBUG 02:25:58 INFO 02:25:58 INFO [loop_until]: kubectl get services 
-o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 02:25:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:25:58 INFO [loop_until]: OK (rc = 0) 02:25:58 DEBUG --- stdout --- 02:25:58 DEBUG {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-oper
ator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 02:25:58 DEBUG --- stderr --- 02:25:58 DEBUG 02:25:58 INFO 02:25:58 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 02:25:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:25:58 INFO [loop_until]: OK (rc = 0) 02:25:58 DEBUG --- stdout --- 02:25:58 DEBUG {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","
ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 02:25:58 DEBUG --- stderr --- 02:25:58 DEBUG 02:25:58 INFO 02:25:58 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 02:25:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:25:58 INFO [loop_until]: OK (rc = 0) 02:25:58 DEBUG --- stdout --- 02:25:58 DEBUG {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-
prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 02:25:58 DEBUG --- stderr --- 02:25:58 DEBUG 02:25:58 INFO 02:25:58 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 02:25:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:25:59 INFO [loop_until]: OK (rc = 0) 02:25:59 DEBUG --- stdout --- 02:25:59 DEBUG {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{
},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 02:25:59 DEBUG --- stderr --- 02:25:59 DEBUG 02:25:59 INFO 02:25:59 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 02:25:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:25:59 INFO [loop_until]: OK (rc = 0) 02:25:59 DEBUG --- stdout --- 02:25:59 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 02:25:59 DEBUG --- stderr --- 02:25:59 DEBUG 02:25:59 INFO 02:25:59 INFO [loop_until]: kubectl get services 
-o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 02:25:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:25:59 INFO [loop_until]: OK (rc = 0) 02:25:59 DEBUG --- stdout --- 02:25:59 DEBUG {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-oper
ator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 02:25:59 DEBUG --- stderr --- 02:25:59 DEBUG 02:25:59 INFO 02:25:59 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 02:25:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:25:59 INFO [loop_until]: OK (rc = 0) 02:25:59 DEBUG --- stdout --- 02:25:59 DEBUG {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","
ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 02:25:59 DEBUG --- stderr --- 02:25:59 DEBUG 02:25:59 INFO 02:25:59 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 02:25:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:25:59 INFO [loop_until]: OK (rc = 0) 02:25:59 DEBUG --- stdout --- 02:25:59 DEBUG {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-
prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 02:25:59 DEBUG --- stderr --- 02:25:59 DEBUG 02:25:59 INFO Initializing monitoring instance threads 02:25:59 DEBUG Monitoring instance thread list: [, , , , , , , , , , , , , , , , , , , , , , , , , , , , ] 02:25:59 INFO Starting instance threads 02:25:59 INFO 02:25:59 INFO Thread started 02:25:59 INFO [loop_until]: kubectl --namespace=xlou top node 02:25:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:25:59 INFO 02:25:59 INFO Thread started 02:25:59 INFO [loop_until]: kubectl --namespace=xlou top pods 02:25:59 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159" 02:25:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:25:59 INFO Thread started 02:25:59 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159" 02:25:59 INFO Thread started 02:25:59 INFO [http_cmd]: curl --insecure -L --request GET 
"http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159" 02:25:59 INFO Thread started 02:25:59 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159" 02:25:59 INFO Thread started 02:25:59 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1692149159" 02:25:59 INFO Thread started 02:25:59 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1692149159" 02:25:59 INFO Thread started 02:25:59 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1692149159" 02:25:59 INFO Thread started 02:25:59 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1692149159" 02:25:59 INFO Thread started 02:25:59 INFO [http_cmd]: curl --insecure -L --request GET 
"http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1692149159" 02:25:59 INFO Thread started 02:25:59 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159" 02:25:59 INFO Thread started 02:25:59 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159" 02:25:59 INFO Thread started 02:25:59 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159" 02:25:59 INFO Thread started 02:25:59 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159" 02:25:59 INFO Thread started 02:25:59 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159" 02:25:59 INFO Thread 
started 02:25:59 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159" 02:25:59 INFO Thread started 02:25:59 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159" 02:25:59 INFO Thread started 02:25:59 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159" 02:25:59 INFO Thread started 02:25:59 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159" 02:25:59 INFO Thread started 02:25:59 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159" 02:25:59 INFO Thread started 02:25:59 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159" 02:25:59 INFO Thread started Exception in thread 
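Each `[http_cmd]` entry above is an instant query against the Prometheus HTTP API, with the PromQL expression percent-encoded into the `query` parameter and a fixed evaluation timestamp in `time`. A minimal sketch of how such a URL can be built (the helper name `build_query_url` is hypothetical, not part of the lodemon code):

```python
from urllib.parse import urlencode

# Base URL taken from the log; the helper below is an illustrative
# reconstruction, not the actual lodestar [http_cmd] implementation.
PROM_BASE = "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090"

def build_query_url(promql: str, ts: int) -> str:
    """Return a /api/v1/query URL evaluating `promql` at unix time `ts`."""
    params = urlencode({"query": promql, "time": ts})  # percent-encodes the PromQL
    return f"{PROM_BASE}/api/v1/query?{params}"

url = build_query_url(
    "sum(rate(am_authentication_count{namespace='xlou'}[60s]))by(pod)",
    1692149159,
)
```

The resulting URL matches the shape seen in the log, e.g. `(` becomes `%28` and `{` becomes `%7B`.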
Thread-23:
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run
    if self.prom_data['functions']:
KeyError: 'functions'
02:25:59 INFO Thread started
02:25:59 INFO Thread started
02:25:59 INFO Thread started
02:25:59 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1692149159"
02:25:59 INFO Thread started
02:25:59 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1692149159"
02:25:59 INFO Thread started
02:25:59 INFO Thread started
02:25:59 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1692149159"
02:25:59 INFO Thread started
02:25:59 INFO All threads has been started
Exception in thread Thread-24:
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run
    if self.prom_data['functions']:
KeyError: 'functions'
Exception in thread Thread-25:
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run
    if self.prom_data['functions']:
KeyError: 'functions'
Exception in thread Thread-28:
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run
    if self.prom_data['functions']:
KeyError: 'functions'
127.0.0.1 - - [16/Aug/2023 02:25:59] "GET /monitoring/start HTTP/1.1" 200 -
02:25:59 INFO [loop_until]: OK (rc = 0)
02:25:59 DEBUG --- stdout ---
02:25:59 DEBUG NAME                           CPU(cores)   MEMORY(bytes)
               admin-ui-587fc66dd5-w2rbv      1m           4Mi
               am-869fdb5db9-5j69v            22m          2599Mi
               am-869fdb5db9-8dg94            12m          4409Mi
               am-869fdb5db9-wt7sg            8m           2716Mi
               ds-cts-0                       9m           380Mi
               ds-cts-1                       8m           358Mi
               ds-cts-2                       8m           358Mi
               ds-idrepo-0                    17m          10316Mi
               ds-idrepo-1                    22m          10275Mi
               ds-idrepo-2                    28m          10300Mi
               end-user-ui-6845bc78c7-sqnhx   1m           4Mi
               idm-65858d8c4c-d6c9h           10m          1215Mi
               idm-65858d8c4c-pt5s9           9m           3417Mi
               lodemon-66684b7694-c5c6m       611m         61Mi
               login-ui-74d6fb46c-qcg59       1m           3Mi
               overseer-0-788b4494cc-bdwtm    1m           15Mi
02:25:59 DEBUG --- stderr ---
02:25:59 DEBUG
02:25:59 INFO [loop_until]: OK (rc = 0)
02:25:59 DEBUG --- stdout ---
02:25:59 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
               gke-xlou-cdm-default-pool-f05840a3-98b5   75m          0%     3376Mi          5%
               gke-xlou-cdm-default-pool-f05840a3-jnx6   91m          0%     1884Mi          3%
               gke-xlou-cdm-default-pool-f05840a3-jqvg   1740m        10%    968Mi           1%
               gke-xlou-cdm-default-pool-f05840a3-rt14   68m          0%     3569Mi          6%
               gke-xlou-cdm-default-pool-f05840a3-tnc9   104m         0%     2517Mi          4%
               gke-xlou-cdm-default-pool-f05840a3-vslq   74m          0%     5205Mi          8%
               gke-xlou-cdm-default-pool-f05840a3-zj9v   75m          0%     4080Mi          6%
               gke-xlou-cdm-ds-32e4dcb1-02kn             64m          0%     922Mi           1%
               gke-xlou-cdm-ds-32e4dcb1-7x9g             70m          0%     10820Mi         18%
               gke-xlou-cdm-ds-32e4dcb1-hbvk             83m          0%     10800Mi         18%
               gke-xlou-cdm-ds-32e4dcb1-l2t2             60m          0%     925Mi           1%
               gke-xlou-cdm-ds-32e4dcb1-mt7t             67m          0%     943Mi           1%
               gke-xlou-cdm-ds-32e4dcb1-zmqj             77m          0%     10779Mi         18%
               gke-xlou-cdm-frontend-a8771548-k40m       65m          0%     1893Mi          3%
02:25:59 DEBUG --- stderr ---
02:25:59 DEBUG
02:26:00 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:26:00 WARNING Response is NONE
02:26:00 DEBUG Exception is preset. Setting retry_loop to true
02:26:00 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
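The repeated `KeyError: 'functions'` traceback above comes from `monitoring.py` line 285 indexing a dictionary key that is not always present (`if self.prom_data['functions']:`). A defensive sketch of that check — hypothetical, since the real `prom_data` schema is not visible in this log — reads the key with `dict.get()` so a missing entry is simply falsy:

```python
# Hypothetical fix sketch: avoid KeyError when 'functions' is absent.
# In the real code the equivalent would be:
#     if self.prom_data.get('functions'):
def has_functions(prom_data: dict) -> bool:
    """True when the monitoring config declares post-processing functions."""
    return bool(prom_data.get("functions"))

assert has_functions({"functions": ["rate"]}) is True
assert has_functions({}) is False  # missing key no longer raises KeyError
```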
02:26:02 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:26:02 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:26:02 WARNING Response is NONE 02:26:02 WARNING Response is NONE 02:26:02 DEBUG Exception is preset. Setting retry_loop to true 02:26:02 DEBUG Exception is preset. Setting retry_loop to true 02:26:02 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:26:02 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:26:06 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 02:26:06 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:26:06 WARNING Response is NONE 02:26:06 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:26:06 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 02:26:06 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:26:06 WARNING Response is NONE 02:26:06 DEBUG Exception is preset. Setting retry_loop to true 02:26:06 WARNING Response is NONE 02:26:06 WARNING Response is NONE 02:26:06 WARNING Response is NONE 02:26:06 DEBUG Exception is preset. Setting retry_loop to true 02:26:06 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:26:06 DEBUG Exception is preset. Setting retry_loop to true 02:26:06 DEBUG Exception is preset. Setting retry_loop to true 02:26:06 DEBUG Exception is preset. Setting retry_loop to true 02:26:06 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:26:06 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:26:06 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:26:06 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
02:26:11 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:26:11 WARNING Response is NONE 02:26:11 DEBUG Exception is preset. Setting retry_loop to true 02:26:11 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:26:13 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:26:13 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:26:13 WARNING Response is NONE 02:26:13 WARNING Response is NONE 02:26:13 DEBUG Exception is preset. Setting retry_loop to true 02:26:13 DEBUG Exception is preset. Setting retry_loop to true 02:26:13 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:26:13 WARNING We received known exception. 
Trying to recover, sleeping for 10 secs before retry... 02:26:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:26:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:26:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:26:15 WARNING Response is NONE 02:26:15 WARNING Response is NONE 02:26:15 WARNING Response is NONE 02:26:15 DEBUG Exception is preset. Setting retry_loop to true 02:26:15 DEBUG Exception is preset. Setting retry_loop to true 02:26:15 DEBUG Exception is preset. Setting retry_loop to true 02:26:15 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:26:15 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
02:26:15 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:26:17 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:26:17 WARNING Response is NONE 02:26:17 DEBUG Exception is preset. Setting retry_loop to true 02:26:17 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:26:19 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:26:19 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 02:26:19 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:26:19 WARNING Response is NONE 02:26:19 WARNING Response is NONE 02:26:19 WARNING Response is NONE 02:26:19 DEBUG Exception is preset. Setting retry_loop to true 02:26:19 DEBUG Exception is preset. Setting retry_loop to true 02:26:19 DEBUG Exception is preset. Setting retry_loop to true 02:26:19 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:26:19 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:26:19 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:26:22 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:26:22 WARNING Response is NONE 02:26:22 DEBUG Exception is preset. Setting retry_loop to true 02:26:22 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
02:26:24 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:26:24 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:26:24 WARNING Response is NONE 02:26:24 WARNING Response is NONE 02:26:24 DEBUG Exception is preset. Setting retry_loop to true 02:26:24 DEBUG Exception is preset. Setting retry_loop to true 02:26:24 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:26:24 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:26:26 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:26:26 WARNING Response is NONE 02:26:26 DEBUG Exception is preset. Setting retry_loop to true 02:26:26 WARNING We received known exception. 
Trying to recover, sleeping for 10 secs before retry... 02:26:28 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:26:28 WARNING Response is NONE 02:26:28 DEBUG Exception is preset. Setting retry_loop to true 02:26:28 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:26:28 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:26:28 WARNING Response is NONE 02:26:28 DEBUG Exception is preset. Setting retry_loop to true 02:26:28 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:26:30 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 02:26:30 WARNING Response is NONE 02:26:30 DEBUG Exception is preset. Setting retry_loop to true 02:26:30 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:26:31 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:26:31 WARNING Response is NONE 02:26:31 DEBUG Exception is preset. Setting retry_loop to true 02:26:31 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:26:33 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:26:33 WARNING Response is NONE 02:26:33 DEBUG Exception is preset. Setting retry_loop to true 02:26:33 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
02:26:35 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:26:35 WARNING Response is NONE 02:26:35 DEBUG Exception is preset. Setting retry_loop to true 02:26:35 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:26:36 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:26:36 WARNING Response is NONE 02:26:36 DEBUG Exception is preset. Setting retry_loop to true 02:26:36 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:26:37 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:26:37 WARNING Response is NONE 02:26:37 DEBUG Exception is preset. Setting retry_loop to true 02:26:37 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
02:26:39 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:26:39 WARNING Response is NONE 02:26:39 DEBUG Exception is preset. Setting retry_loop to true 02:26:39 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:26:40 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:26:40 WARNING Response is NONE 02:26:40 DEBUG Exception is preset. Setting retry_loop to true 02:26:40 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:26:41 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:26:41 WARNING Response is NONE 02:26:41 DEBUG Exception is preset. 
Setting retry_loop to true 02:26:41 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:26:43 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:26:43 WARNING Response is NONE 02:26:43 DEBUG Exception is preset. Setting retry_loop to true 02:26:43 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:26:44 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:26:44 WARNING Response is NONE 02:26:44 DEBUG Exception is preset. Setting retry_loop to true 02:26:44 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:26:46 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 02:26:46 WARNING Response is NONE 02:26:46 DEBUG Exception is preset. Setting retry_loop to true 02:26:46 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:26:49 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:26:49 WARNING Response is NONE 02:26:49 DEBUG Exception is preset. Setting retry_loop to true 02:26:49 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:26:51 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:26:51 WARNING Response is NONE 02:26:51 DEBUG Exception is preset. Setting retry_loop to true 02:26:51 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
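The recovery behaviour repeated through these warnings is consistent: on a connection error the worker sleeps 10 seconds and retries, and later in the log it gives up after the fifth attempt ("Hit retry pattern for a 5 time. Proceeding to check response anyway."). A minimal sketch of such a policy (names are illustrative, not the actual `HttpCmd` implementation):

```python
import time

MAX_RETRIES = 5   # matches the "5 time" retry pattern in the log
SLEEP_SECS = 10   # matches "sleeping for 10 secs before retry"

def fetch_with_retries(fetch, max_retries=MAX_RETRIES, sleep=time.sleep):
    """Call fetch() until it succeeds or max_retries attempts are spent.

    Returns the response, or None after the final failed attempt, so the
    caller can "check the response anyway" as the log describes.
    """
    for attempt in range(1, max_retries + 1):
        try:
            return fetch()
        except ConnectionError:
            if attempt < max_retries:
                sleep(SLEEP_SECS)
    return None
```

The `sleep` parameter is injected so the policy can be tested without real delays.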
02:26:53 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:26:53 WARNING Response is NONE
02:26:53 DEBUG Exception is preset. Setting retry_loop to true
02:26:53 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:26:54 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:26:54 WARNING Response is NONE
02:26:54 DEBUG Exception is preset. Setting retry_loop to true
02:26:54 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:26:55 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:26:55 WARNING Response is NONE
02:26:55 DEBUG Exception is preset. Setting retry_loop to true
02:26:55 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-13:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
02:26:57 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:26:57 WARNING Response is NONE
02:26:57 DEBUG Exception is preset. Setting retry_loop to true
02:26:57 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
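The root cause of the crashing monitoring threads is at the bottom of each traceback: `monitoring.py` line 315 invokes `self.logger(...)` as if it were a function, but `LodestarLogger` evidently defines no `__call__`. A minimal sketch reproducing the error; the `LodestarLogger` stub below is a hypothetical stand-in (only its lack of `__call__` matters), and `.error(...)` is one plausible intended call, not confirmed from the source.

```python
import logging

class LodestarLogger:
    """Hypothetical stand-in for the wrapper named in the traceback:
    it exposes level methods but defines no __call__."""
    def __init__(self, name):
        self._log = logging.getLogger(name)

    def error(self, msg):
        self._log.error(msg)

logger = LodestarLogger("lodemon")

try:
    # What monitoring.py line 315 does: treats the logger object as a function.
    logger("Query failed")
except TypeError as exc:
    print(exc)  # 'LodestarLogger' object is not callable

logger.error("Query failed")  # a likely intended call
```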
Exception in thread Thread-3:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
02:26:59 INFO
02:26:59 INFO [loop_until]: kubectl --namespace=xlou top node
02:26:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:26:59 INFO
02:26:59 INFO [loop_until]: kubectl --namespace=xlou top pods
02:26:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:26:59 INFO [loop_until]: OK (rc = 0)
02:26:59 DEBUG --- stdout ---
NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5   71m          0%     3387Mi          5%
gke-xlou-cdm-default-pool-f05840a3-jnx6   85m          0%     1884Mi          3%
gke-xlou-cdm-default-pool-f05840a3-jqvg   83m          0%     973Mi           1%
gke-xlou-cdm-default-pool-f05840a3-rt14   60m          0%     3563Mi          6%
gke-xlou-cdm-default-pool-f05840a3-tnc9   105m         0%     2517Mi          4%
gke-xlou-cdm-default-pool-f05840a3-vslq   73m          0%     5212Mi          8%
gke-xlou-cdm-default-pool-f05840a3-zj9v   73m          0%     4080Mi          6%
gke-xlou-cdm-ds-32e4dcb1-02kn             84m          0%     925Mi           1%
gke-xlou-cdm-ds-32e4dcb1-7x9g             926m         5%     10835Mi         18%
gke-xlou-cdm-ds-32e4dcb1-hbvk             90m          0%     10805Mi         18%
gke-xlou-cdm-ds-32e4dcb1-l2t2             270m         1%     929Mi           1%
gke-xlou-cdm-ds-32e4dcb1-mt7t             66m          0%     958Mi           1%
gke-xlou-cdm-ds-32e4dcb1-zmqj             163m         1%     10784Mi         18%
gke-xlou-cdm-frontend-a8771548-k40m       259m         1%     1996Mi          3%
02:26:59 DEBUG --- stderr ---
02:26:59 DEBUG
02:26:59 INFO [loop_until]: OK (rc = 0)
02:26:59 DEBUG --- stdout ---
NAME                           CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv      1m           5Mi
am-869fdb5db9-5j69v            11m          2609Mi
am-869fdb5db9-8dg94            20m          4413Mi
am-869fdb5db9-wt7sg            5m           2716Mi
ds-cts-0                       80m          392Mi
ds-cts-1                       111m         362Mi
ds-cts-2                       78m          361Mi
ds-idrepo-0                    595m         10324Mi
ds-idrepo-1                    170m         10280Mi
ds-idrepo-2                    150m         10306Mi
end-user-ui-6845bc78c7-sqnhx   1m           4Mi
idm-65858d8c4c-d6c9h           5m           1214Mi
idm-65858d8c4c-pt5s9           7m           3417Mi
lodemon-66684b7694-c5c6m       4m           66Mi
login-ui-74d6fb46c-qcg59       1m           3Mi
overseer-0-788b4494cc-bdwtm    177m         140Mi
02:26:59 DEBUG --- stderr ---
02:26:59 DEBUG
02:27:00 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:27:00 WARNING Response is NONE
02:27:00 DEBUG Exception is preset. Setting retry_loop to true
02:27:00 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-4:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
02:27:01 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:27:01 WARNING Response is NONE
02:27:01 DEBUG Exception is preset. Setting retry_loop to true
02:27:01 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:27:02 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:27:02 WARNING Response is NONE
02:27:02 DEBUG Exception is preset. Setting retry_loop to true
02:27:02 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:27:03 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:27:03 WARNING Response is NONE
02:27:03 DEBUG Exception is preset. Setting retry_loop to true
02:27:03 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:27:05 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:27:05 WARNING Response is NONE
02:27:05 DEBUG Exception is preset. Setting retry_loop to true
02:27:05 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
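The percent-encoded `query` parameters in these URLs are ordinary PromQL. Decoding one of them with only the standard library shows what the monitor was actually asking Prometheus; the `url` literal below is copied from the log above.

```python
from urllib.parse import urlsplit, parse_qs

# One of the failing query URLs from the log (host + path reassembled).
url = ("http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090"
       "/api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total"
       "%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159")

# parse_qs percent-decodes each parameter value.
params = parse_qs(urlsplit(url).query)
print(params["query"][0])  # sum(rate(container_cpu_usage_seconds_total{namespace='xlou'}[60s]))by(pod)
print(params["time"][0])   # 1692149159
```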
02:27:06 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:27:06 WARNING Response is NONE
02:27:06 DEBUG Exception is preset. Setting retry_loop to true
02:27:06 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-6:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
02:27:07 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:27:07 WARNING Response is NONE
02:27:07 DEBUG Exception is preset. Setting retry_loop to true
02:27:07 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:27:07 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:27:07 WARNING Response is NONE
02:27:07 DEBUG Exception is preset. Setting retry_loop to true
02:27:07 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:27:09 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:27:09 WARNING Response is NONE
02:27:09 DEBUG Exception is preset. Setting retry_loop to true
02:27:09 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:27:12 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:27:12 WARNING Response is NONE
02:27:12 DEBUG Exception is preset. Setting retry_loop to true
02:27:12 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:27:13 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:27:13 WARNING Response is NONE
02:27:13 DEBUG Exception is preset. Setting retry_loop to true
02:27:13 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:27:14 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:27:14 WARNING Response is NONE
02:27:14 DEBUG Exception is preset. Setting retry_loop to true
02:27:14 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:27:16 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:27:16 WARNING Response is NONE
02:27:16 DEBUG Exception is preset. Setting retry_loop to true
02:27:16 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-7:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
02:27:18 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:27:18 WARNING Response is NONE
02:27:18 DEBUG Exception is preset. Setting retry_loop to true
02:27:18 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:27:20 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:27:20 WARNING Response is NONE
02:27:20 DEBUG Exception is preset. Setting retry_loop to true
02:27:20 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:27:23 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:27:23 WARNING Response is NONE
02:27:23 DEBUG Exception is preset. Setting retry_loop to true
02:27:23 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:27:25 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:27:25 WARNING Response is NONE
02:27:25 DEBUG Exception is preset. Setting retry_loop to true
02:27:25 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:27:26 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:27:26 WARNING Response is NONE
02:27:26 DEBUG Exception is preset. Setting retry_loop to true
02:27:26 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-18:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...
During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
02:27:29 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:27:29 WARNING Response is NONE
02:27:29 DEBUG Exception is preset. Setting retry_loop to true
02:27:29 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:27:31 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:27:31 WARNING Response is NONE
02:27:31 DEBUG Exception is preset. Setting retry_loop to true
02:27:31 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:27:32 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:27:32 WARNING Response is NONE
02:27:32 DEBUG Exception is preset. Setting retry_loop to true
02:27:32 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-5:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...
During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
02:27:34 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:27:34 WARNING Response is NONE
02:27:34 DEBUG Exception is preset. Setting retry_loop to true
02:27:34 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-10:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...
During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
02:27:36 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:27:36 WARNING Response is NONE
02:27:36 DEBUG Exception is preset. Setting retry_loop to true
02:27:36 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:27:41 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:27:41 WARNING Response is NONE
02:27:41 DEBUG Exception is preset. Setting retry_loop to true
02:27:41 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-16:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
02:27:43 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:27:43 WARNING Response is NONE
02:27:43 DEBUG Exception is preset. Setting retry_loop to true
02:27:43 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-15:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
02:27:47 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:27:47 WARNING Response is NONE
02:27:47 DEBUG Exception is preset. Setting retry_loop to true
02:27:47 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
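The recurring `TypeError: 'LodestarLogger' object is not callable` in the tracebacks above comes from `monitoring.py:315`, where the logger object is invoked directly (`self.logger(...)`) instead of through a logging method. A minimal sketch of the bug and the likely fix, using a hypothetical stand-in for `LodestarLogger` (the real class's method names are assumptions):

```python
# Stand-in for LodestarLogger: exposes a warning() method but, like the
# real class apparently does, defines no __call__, so calling the
# instance itself raises TypeError.
class LodestarLogger:
    def __init__(self):
        self.records = []

    def warning(self, msg):
        self.records.append(msg)


logger = LodestarLogger()
query = "sum(rate(am_cts_task_count[60s]))by(pod)"

# Buggy pattern from monitoring.py:315 -- the object is not callable:
try:
    logger(f"Query: {query} failed with: connection refused")
except TypeError as e:
    print(e)  # 'LodestarLogger' object is not callable

# Likely fix: call a logging method instead (method name is an assumption):
logger.warning(f"Query: {query} failed with: connection refused")
```

Note that because this `TypeError` escapes `run()`, each affected monitoring thread (Thread-15, Thread-16, ...) dies and stops collecting its metric, which is the secondary failure visible throughout this log.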
02:27:50 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:27:50 WARNING Response is NONE
02:27:50 DEBUG Exception is preset. Setting retry_loop to true
02:27:50 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:27:59 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:27:59 WARNING Response is NONE
02:27:59 DEBUG Exception is preset. Setting retry_loop to true
02:27:59 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-11:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
02:27:59 INFO
02:27:59 INFO [loop_until]: kubectl --namespace=xlou top pods
02:27:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:27:59 INFO
02:27:59 INFO [loop_until]: kubectl --namespace=xlou top node
02:27:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:27:59 INFO [loop_until]: OK (rc = 0)
02:27:59 DEBUG --- stdout ---
02:27:59 DEBUG
NAME                           CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv      1m           4Mi
am-869fdb5db9-5j69v            13m          2619Mi
am-869fdb5db9-8dg94            16m          4414Mi
am-869fdb5db9-wt7sg            12m          2717Mi
ds-cts-0                       11m          392Mi
ds-cts-1                       11m          363Mi
ds-cts-2                       8m           362Mi
ds-idrepo-0                    19m          10324Mi
ds-idrepo-1                    27m          10280Mi
ds-idrepo-2                    30m          10307Mi
end-user-ui-6845bc78c7-sqnhx   1m           4Mi
idm-65858d8c4c-d6c9h           6m           1215Mi
idm-65858d8c4c-pt5s9           7m           3417Mi
lodemon-66684b7694-c5c6m       3m           66Mi
login-ui-74d6fb46c-qcg59       1m           3Mi
overseer-0-788b4494cc-bdwtm    1m           48Mi
02:27:59 DEBUG --- stderr ---
02:27:59 DEBUG
02:27:59 INFO [loop_until]: OK (rc = 0)
02:27:59 DEBUG --- stdout ---
02:27:59 DEBUG
NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5   75m          0%     3398Mi          5%
gke-xlou-cdm-default-pool-f05840a3-jnx6   85m          0%     1883Mi          3%
gke-xlou-cdm-default-pool-f05840a3-jqvg   77m          0%     977Mi           1%
gke-xlou-cdm-default-pool-f05840a3-rt14   66m          0%     3568Mi          6%
gke-xlou-cdm-default-pool-f05840a3-tnc9   105m         0%     2519Mi          4%
gke-xlou-cdm-default-pool-f05840a3-vslq   77m          0%     5213Mi          8%
gke-xlou-cdm-default-pool-f05840a3-zj9v   73m          0%     4084Mi          6%
gke-xlou-cdm-ds-32e4dcb1-02kn             59m          0%     926Mi           1%
gke-xlou-cdm-ds-32e4dcb1-7x9g             67m          0%     10832Mi         18%
gke-xlou-cdm-ds-32e4dcb1-hbvk             102m         0%     10811Mi         18%
gke-xlou-cdm-ds-32e4dcb1-l2t2             63m          0%     928Mi           1%
gke-xlou-cdm-ds-32e4dcb1-mt7t             62m          0%     956Mi           1%
gke-xlou-cdm-ds-32e4dcb1-zmqj             85m          0%     10795Mi         18%
gke-xlou-cdm-frontend-a8771548-k40m       68m          0%     1892Mi          3%
02:27:59 DEBUG --- stderr ---
02:27:59 DEBUG
02:28:01 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:28:01 WARNING Response is NONE
02:28:01 DEBUG Exception is preset. Setting retry_loop to true
02:28:01 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:28:08 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one
02:28:08 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one
02:28:08 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one
02:28:08 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one
02:28:08 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one
02:28:08 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one
02:28:08 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:28:08 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one
02:28:08 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one
02:28:08 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one
02:28:08 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one
02:28:08 WARNING Response is NONE   (x11, one per query thread above)
02:28:08 DEBUG Exception is preset. Setting retry_loop to true   (x11)
02:28:08 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...   (x11)
02:28:12 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:28:12 WARNING Response is NONE
02:28:12 DEBUG Exception is preset. Setting retry_loop to true
02:28:12 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
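The failing request URLs above carry percent-encoded PromQL. To read which query a given warning corresponds to, the standard library can decode them; a small sketch using one of the URLs from this log:

```python
# Decode a percent-encoded Prometheus /api/v1/query URL from the log
# into readable PromQL using only the standard library.
from urllib.parse import urlsplit, parse_qs

# One of the failing request URLs recorded above.
url = ("/api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27"
       "%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159")

params = parse_qs(urlsplit(url).query)  # parse_qs unquotes each value
print(params["query"][0])
# sum(rate(am_cts_task_count{token_type='session',namespace='xlou'}[60s]))by(pod)
print(params["time"][0])
# 1692149159
```

Note that every warning in this burst carries the same `time=1692149159` evaluation timestamp: the threads are all retrying the same scheduled scrape against an unreachable Prometheus endpoint.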
Exception in thread Thread-12:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
02:28:19 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:28:19 WARNING Response is NONE
02:28:19 DEBUG Exception is preset. Setting retry_loop to true
02:28:19 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:28:21 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:28:21 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:28:21 WARNING Response is NONE   (x2)
02:28:21 DEBUG Exception is preset. Setting retry_loop to true   (x2)
02:28:21 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...   (x2)
02:28:25 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:28:25 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:28:25 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:28:25 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:28:25 WARNING Response is NONE   (x4)
02:28:25 DEBUG Exception is preset. Setting retry_loop to true   (x4)
02:28:25 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...   (x4)
02:28:30 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:28:30 WARNING Response is NONE
02:28:30 DEBUG Exception is preset. Setting retry_loop to true
02:28:30 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:28:32 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:28:32 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:28:32 WARNING Response is NONE   (x2)
02:28:32 DEBUG Exception is preset. Setting retry_loop to true   (x2)
02:28:32 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...   (x2)
02:28:34 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:28:34 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:28:34 WARNING Response is NONE   (x2)
02:28:34 DEBUG Exception is preset. Setting retry_loop to true   (x2)
02:28:34 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...   (x2)
02:28:36 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:28:36 WARNING Response is NONE
02:28:36 DEBUG Exception is preset. Setting retry_loop to true
02:28:36 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:28:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:28:38 WARNING Response is NONE
02:28:38 DEBUG Exception is preset. Setting retry_loop to true
02:28:38 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:28:41 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:28:41 WARNING Response is NONE
02:28:41 DEBUG Exception is preset. Setting retry_loop to true
02:28:41 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:28:43 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:28:43 WARNING Response is NONE
02:28:43 DEBUG Exception is preset. Setting retry_loop to true
02:28:43 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:28:45 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:28:45 WARNING Response is NONE
02:28:45 DEBUG Exception is preset. Setting retry_loop to true
02:28:45 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:28:46 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:28:46 WARNING Response is NONE
02:28:46 DEBUG Exception is preset. Setting retry_loop to true
02:28:46 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:28:48 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:28:48 WARNING Response is NONE
02:28:48 DEBUG Exception is preset. Setting retry_loop to true
02:28:48 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
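The recover-and-retry cycle that dominates this log (classify the error as a known transient one, sleep 10 s, retry, and after the 5th attempt "proceed to check response anyway") can be sketched as follows. This is a hypothetical reconstruction, not the actual `HttpCmd.request_cmd` implementation; the constants mirror `retries=5` and the 10-second sleep seen in the log:

```python
import time

MAX_RETRIES = 5        # matches retries=5 in the tracebacks (assumption)
RETRY_SLEEP_SECS = 10  # "sleeping for 10 secs before retry..."


def get_with_retries(fetch, retries=MAX_RETRIES, sleep=time.sleep):
    """Sketch of the retry loop visible in the log: on a known transient
    error, sleep and retry; after the final attempt, stop retrying and
    return whatever response exists (possibly None) for the caller to
    check -- which is why 'Response is NONE' can still surface."""
    response = None
    for attempt in range(1, retries + 1):
        try:
            return fetch()
        except ConnectionError:
            response = None
            if attempt == retries:
                # "Hit retry pattern for a 5 time. Proceeding to check response anyway."
                break
            # "We received known exception. Trying to recover, sleeping..."
            sleep(RETRY_SLEEP_SECS)
    return response


# Usage: a fetch that always fails exhausts all attempts and yields None.
calls = []
def always_refused():
    calls.append(1)
    raise ConnectionError("[Errno 111] Connection refused")

result = get_with_retries(always_refused, sleep=lambda s: None)
print(result, len(calls))  # None 5
```

When the retries are exhausted and the response is still `None`, the caller raises `FailException`, whose handler then trips over the `LodestarLogger` TypeError shown in the tracebacks.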
02:28:50 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:28:50 WARNING Response is NONE
02:28:50 DEBUG Exception is preset. Setting retry_loop to true
02:28:50 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:28:51 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:28:51 WARNING Response is NONE
02:28:51 DEBUG Exception is preset. Setting retry_loop to true
02:28:51 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:28:53 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:28:53 WARNING Response is NONE
02:28:53 DEBUG Exception is preset. Setting retry_loop to true
02:28:53 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:28:55 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:28:55 WARNING Response is NONE
02:28:55 DEBUG Exception is preset. Setting retry_loop to true
02:28:55 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:28:57 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:28:57 WARNING Response is NONE
02:28:57 DEBUG Exception is preset. Setting retry_loop to true
02:28:57 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:28:59 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:28:59 WARNING Response is NONE
02:28:59 DEBUG Exception is preset. Setting retry_loop to true
02:28:59 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:28:59 INFO
02:28:59 INFO [loop_until]: kubectl --namespace=xlou top pods
02:28:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:28:59 INFO
02:28:59 INFO [loop_until]: kubectl --namespace=xlou top node
02:28:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:28:59 INFO [loop_until]: OK (rc = 0)
02:28:59 DEBUG --- stdout ---
02:28:59 DEBUG NAME                           CPU(cores)   MEMORY(bytes)
02:28:59 DEBUG admin-ui-587fc66dd5-w2rbv      1m           4Mi
02:28:59 DEBUG am-869fdb5db9-5j69v            11m          2634Mi
02:28:59 DEBUG am-869fdb5db9-8dg94            10m          4415Mi
02:28:59 DEBUG am-869fdb5db9-wt7sg            11m          2717Mi
02:28:59 DEBUG ds-cts-0                       7m           392Mi
02:28:59 DEBUG ds-cts-1                       8m           362Mi
02:28:59 DEBUG ds-cts-2                       8m           362Mi
02:28:59 DEBUG ds-idrepo-0                    19m          10326Mi
02:28:59 DEBUG ds-idrepo-1                    22m          10275Mi
02:28:59 DEBUG ds-idrepo-2                    26m          10304Mi
02:28:59 DEBUG end-user-ui-6845bc78c7-sqnhx   1m           4Mi
02:28:59 DEBUG idm-65858d8c4c-d6c9h           5m           1215Mi
02:28:59 DEBUG idm-65858d8c4c-pt5s9           7m           3418Mi
02:28:59 DEBUG lodemon-66684b7694-c5c6m       3m           66Mi
02:28:59 DEBUG login-ui-74d6fb46c-qcg59       1m           3Mi
02:28:59 DEBUG overseer-0-788b4494cc-bdwtm    226m         199Mi
02:28:59 DEBUG --- stderr ---
02:28:59 DEBUG
02:28:59 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:28:59 WARNING Response is NONE
02:28:59 DEBUG Exception is preset. Setting retry_loop to true
02:28:59 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:28:59 INFO [loop_until]: OK (rc = 0)
02:28:59 DEBUG --- stdout ---
02:28:59 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
02:28:59 DEBUG gke-xlou-cdm-default-pool-f05840a3-98b5   68m          0%     3408Mi          5%
02:28:59 DEBUG gke-xlou-cdm-default-pool-f05840a3-jnx6   86m          0%     1880Mi          3%
02:28:59 DEBUG gke-xlou-cdm-default-pool-f05840a3-jqvg   79m          0%     975Mi           1%
02:28:59 DEBUG gke-xlou-cdm-default-pool-f05840a3-rt14   64m          0%     3568Mi          6%
02:28:59 DEBUG gke-xlou-cdm-default-pool-f05840a3-tnc9   100m         0%     2520Mi          4%
02:28:59 DEBUG gke-xlou-cdm-default-pool-f05840a3-vslq   70m          0%     5214Mi          8%
02:28:59 DEBUG gke-xlou-cdm-default-pool-f05840a3-zj9v   71m          0%     4082Mi          6%
02:28:59 DEBUG gke-xlou-cdm-ds-32e4dcb1-02kn             70m          0%     928Mi           1%
02:28:59 DEBUG gke-xlou-cdm-ds-32e4dcb1-7x9g             75m          0%     10832Mi         18%
02:28:59 DEBUG gke-xlou-cdm-ds-32e4dcb1-hbvk             81m          0%     10803Mi         18%
02:28:59 DEBUG gke-xlou-cdm-ds-32e4dcb1-l2t2             66m          0%     926Mi           1%
02:28:59 DEBUG gke-xlou-cdm-ds-32e4dcb1-mt7t             64m          0%     959Mi           1%
02:28:59 DEBUG gke-xlou-cdm-ds-32e4dcb1-zmqj             82m          0%     10783Mi         18%
02:28:59 DEBUG gke-xlou-cdm-frontend-a8771548-k40m       203m         1%     1996Mi          3%
02:28:59 DEBUG --- stderr ---
02:28:59 DEBUG
02:29:01 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:29:01 WARNING Response is NONE
02:29:01 DEBUG Exception is preset. Setting retry_loop to true
02:29:01 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
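The failing query URLs in these warnings are percent-encoded, which makes the underlying PromQL hard to read. The query text can be recovered with the standard library; the URL below is one of the queries copied from the log, with the host prefixed for illustration:

```python
from urllib.parse import urlparse, parse_qs

url = ("http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090"
       "/api/v1/query?query=sum%28rate%28node_cpu_seconds_total"
       "%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1692149159")

# parse_qs splits the query string and decodes the percent-escapes
# in each parameter value.
params = parse_qs(urlparse(url).query)
print(params["query"][0])
# sum(rate(node_cpu_seconds_total{mode='iowait'}[60s]))by(instance)
print(params["time"][0])
# 1692149159
```

Decoded this way, each warning maps to one monitored metric (OAuth2 grants, disk throughput, DS replication delay, cache misses, and so on), which makes it clear that every monitoring thread is failing against the same unreachable Prometheus endpoint rather than one query being malformed.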
02:29:02 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:29:02 WARNING Response is NONE
02:29:02 DEBUG Exception is preset. Setting retry_loop to true
02:29:02 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:29:04 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:29:04 WARNING Response is NONE
02:29:04 DEBUG Exception is preset. Setting retry_loop to true
02:29:04 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-22:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
02:29:06 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:29:06 WARNING Response is NONE
02:29:06 DEBUG Exception is preset. Setting retry_loop to true
02:29:06 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-17:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
02:29:08 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:29:08 WARNING Response is NONE
02:29:08 DEBUG Exception is preset. Setting retry_loop to true
02:29:08 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-14:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
02:29:10 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:29:10 WARNING Response is NONE
02:29:10 DEBUG Exception is preset. Setting retry_loop to true
02:29:10 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:29:12 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:29:12 WARNING Response is NONE
02:29:12 DEBUG Exception is preset. Setting retry_loop to true
02:29:12 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-20:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
02:29:13 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:29:13 WARNING Response is NONE
02:29:13 DEBUG Exception is preset. Setting retry_loop to true
02:29:13 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
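The recurring `TypeError: 'LodestarLogger' object is not callable` is a secondary bug in the error handler itself: monitoring.py line 315 invokes the logger object directly (`self.logger(...)`) instead of calling one of its level methods, so the original `FailException` is never logged and the monitoring thread dies. The real `LodestarLogger` class is not shown in this log; the stub below only assumes it lacks a `__call__` method, which is what the error message implies.

```python
class LodestarLogger:
    """Hypothetical stand-in for the real class: it exposes named
    level methods but defines no __call__, so calling the instance
    like a function raises TypeError."""
    def __init__(self):
        self.records = []
    def warning(self, msg):
        self.records.append(('WARNING', msg))

logger = LodestarLogger()

# What monitoring.py line 315 does: treat the logger as a function.
try:
    logger('Query: up failed with: boom')
except TypeError as e:
    print(e)  # 'LodestarLogger' object is not callable

# The likely fix: call a level method instead of the object itself.
logger.warning('Query: up failed with: boom')
```

With that one-line change the handler would log the failed query and the thread could continue (or exit cleanly) instead of crashing with an unrelated `TypeError`.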
02:29:16 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:29:16 WARNING Response is NONE
02:29:16 DEBUG Exception is preset. Setting retry_loop to true
02:29:16 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-27:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
02:29:22 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:29:22 WARNING Response is NONE
02:29:22 DEBUG Exception is preset. Setting retry_loop to true
02:29:22 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:29:22 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:29:22 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:29:22 WARNING Response is NONE
02:29:22 WARNING Response is NONE
02:29:22 DEBUG Exception is preset. Setting retry_loop to true
02:29:22 DEBUG Exception is preset. Setting retry_loop to true
02:29:22 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:29:22 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:29:24 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:29:24 WARNING Response is NONE
02:29:24 DEBUG Exception is preset. Setting retry_loop to true
02:29:24 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-26:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
02:29:27 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:29:27 WARNING Response is NONE
02:29:27 DEBUG Exception is preset. Setting retry_loop to true
02:29:27 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:29:33 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:29:33 WARNING Response is NONE
02:29:33 DEBUG Exception is preset. Setting retry_loop to true
02:29:33 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-9:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
02:29:34 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:29:34 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:29:34 WARNING Response is NONE
02:29:34 WARNING Response is NONE
02:29:34 DEBUG Exception is preset. Setting retry_loop to true
02:29:34 DEBUG Exception is preset. Setting retry_loop to true
02:29:34 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:29:34 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:29:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:29:38 WARNING Response is NONE
02:29:38 DEBUG Exception is preset. Setting retry_loop to true
02:29:38 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:29:39 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:29:39 WARNING Response is NONE
02:29:39 DEBUG Exception is preset. Setting retry_loop to true
02:29:39 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:29:45 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:29:45 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:29:45 WARNING Response is NONE
02:29:45 WARNING Response is NONE
02:29:45 DEBUG Exception is preset. Setting retry_loop to true
02:29:45 DEBUG Exception is preset. Setting retry_loop to true
02:29:45 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:29:45 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:29:49 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:29:49 WARNING Response is NONE
02:29:49 DEBUG Exception is preset. Setting retry_loop to true
02:29:49 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-19:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
02:29:50 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:29:50 WARNING Response is NONE
02:29:50 DEBUG Exception is preset. Setting retry_loop to true
02:29:50 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:29:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:29:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:29:56 WARNING Response is NONE
02:29:56 WARNING Response is NONE
02:29:56 DEBUG Exception is preset. Setting retry_loop to true
02:29:56 DEBUG Exception is preset. Setting retry_loop to true
02:29:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:29:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
02:29:59 INFO 02:29:59 INFO [loop_until]: kubectl --namespace=xlou top pods 02:29:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:30:00 INFO [loop_until]: OK (rc = 0) 02:30:00 DEBUG --- stdout --- 02:30:00 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 5Mi am-869fdb5db9-5j69v 14m 2645Mi am-869fdb5db9-8dg94 10m 4415Mi am-869fdb5db9-wt7sg 5m 2717Mi ds-cts-0 8m 392Mi ds-cts-1 10m 362Mi ds-cts-2 8m 362Mi ds-idrepo-0 18m 10325Mi ds-idrepo-1 22m 10280Mi ds-idrepo-2 22m 10306Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 8m 1216Mi idm-65858d8c4c-pt5s9 6m 3418Mi lodemon-66684b7694-c5c6m 3m 66Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 1m 98Mi 02:30:00 DEBUG --- stderr --- 02:30:00 DEBUG 02:30:00 INFO 02:30:00 INFO [loop_until]: kubectl --namespace=xlou top node 02:30:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:30:00 INFO [loop_until]: OK (rc = 0) 02:30:00 DEBUG --- stdout --- 02:30:00 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 74m 0% 3419Mi 5% gke-xlou-cdm-default-pool-f05840a3-jnx6 79m 0% 1881Mi 3% gke-xlou-cdm-default-pool-f05840a3-jqvg 80m 0% 975Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 61m 0% 3573Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 103m 0% 2519Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 76m 0% 5210Mi 8% gke-xlou-cdm-default-pool-f05840a3-zj9v 73m 0% 4083Mi 6% gke-xlou-cdm-ds-32e4dcb1-02kn 62m 0% 927Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 68m 0% 10836Mi 18% gke-xlou-cdm-ds-32e4dcb1-hbvk 74m 0% 10803Mi 18% gke-xlou-cdm-ds-32e4dcb1-l2t2 63m 0% 929Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 61m 0% 957Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 77m 0% 10788Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 66m 0% 1888Mi 3% 02:30:00 DEBUG --- stderr --- 02:30:00 DEBUG 02:30:01 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: 
/api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:30:01 WARNING Response is NONE 02:30:01 DEBUG Exception is preset. Setting retry_loop to true 02:30:01 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 02:30:07 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:30:07 WARNING Response is NONE 02:30:07 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 02:30:07 DEBUG Exception is preset. Setting retry_loop to true 02:30:07 WARNING Response is NONE 02:30:07 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-8: Traceback (most recent call last): 02:30:07 DEBUG Exception is preset. Setting retry_loop to true 02:30:07 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
Exception in thread Thread-21:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
02:30:13 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1692149159 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
02:30:13 WARNING Response is NONE
02:30:13 DEBUG Exception is preset. Setting retry_loop to true
02:30:13 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
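Every one of these worker-thread tracebacks dies the same way: while handling the `FailException` from the failed Prometheus query, `monitoring.py` line 315 calls `self.logger(...)` as if it were a function, but `self.logger` is a `LodestarLogger` instance, so the error handler itself raises `TypeError` and kills the thread. A minimal reproduction and the likely shape of the fix; the real `LodestarLogger` API is not shown in this log, so the `warning` method name is an assumption:

```python
class LodestarLogger:
    """Stand-in for the real LodestarLogger (hypothetical minimal shape):
    it exposes level methods such as warning(), but is not itself callable."""
    def __init__(self):
        self.messages = []

    def warning(self, msg):
        self.messages.append(msg)

class Monitor:
    """Minimal reproduction of the pattern at monitoring.py line 315."""
    def __init__(self):
        self.logger = LodestarLogger()

    def log_failure_broken(self, query, e):
        # As in the traceback: the logger instance is called like a function,
        # which raises TypeError and masks the original FailException.
        self.logger(f'Query: {query} failed with: {e}')

    def log_failure_fixed(self, query, e):
        # Likely fix: invoke a level method on the logger instead.
        self.logger.warning(f'Query: {query} failed with: {e}')
```

With the fix, the query failure would be logged as a warning and the monitoring thread could keep looping instead of dying on the first Prometheus outage.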
Exception in thread Thread-29:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
02:31:00 INFO
02:31:00 INFO [loop_until]: kubectl --namespace=xlou top pods
02:31:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:31:00 INFO [loop_until]: OK (rc = 0)
02:31:00 DEBUG --- stdout ---
02:31:00 DEBUG NAME   CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv   1m   4Mi
am-869fdb5db9-5j69v   50m   2727Mi
am-869fdb5db9-8dg94   67m   4449Mi
am-869fdb5db9-wt7sg   12m   2718Mi
ds-cts-0   57m   394Mi
ds-cts-1   8m   362Mi
ds-cts-2   7m   362Mi
ds-idrepo-0   43m   10331Mi
ds-idrepo-1   21m   10281Mi
ds-idrepo-2   23m   10307Mi
end-user-ui-6845bc78c7-sqnhx   1m   4Mi
idm-65858d8c4c-d6c9h   37m   1255Mi
idm-65858d8c4c-pt5s9   20m   3437Mi
lodemon-66684b7694-c5c6m   2m   66Mi
login-ui-74d6fb46c-qcg59   1m   3Mi
overseer-0-788b4494cc-bdwtm   2m   98Mi
02:31:00 DEBUG --- stderr ---
02:31:00 DEBUG
02:31:00 INFO
02:31:00 INFO [loop_until]: kubectl --namespace=xlou top node
02:31:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:31:00 INFO [loop_until]: OK (rc = 0)
02:31:00 DEBUG --- stdout ---
02:31:00 DEBUG NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5   70m   0%   3433Mi   5%
gke-xlou-cdm-default-pool-f05840a3-jnx6   90m   0%   1889Mi   3%
gke-xlou-cdm-default-pool-f05840a3-jqvg   72m   0%   974Mi   1%
gke-xlou-cdm-default-pool-f05840a3-rt14   63m   0%   3571Mi   6%
gke-xlou-cdm-default-pool-f05840a3-tnc9   104m   0%   2517Mi   4%
gke-xlou-cdm-default-pool-f05840a3-vslq   80m   0%   5216Mi   8%
gke-xlou-cdm-default-pool-f05840a3-zj9v   72m   0%   4084Mi   6%
gke-xlou-cdm-ds-32e4dcb1-02kn   62m   0%   927Mi   1%
gke-xlou-cdm-ds-32e4dcb1-7x9g   80m   0%   10843Mi   18%
gke-xlou-cdm-ds-32e4dcb1-hbvk   68m   0%   10807Mi   18%
gke-xlou-cdm-ds-32e4dcb1-l2t2   61m   0%   927Mi   1%
gke-xlou-cdm-ds-32e4dcb1-mt7t   137m   0%   960Mi   1%
gke-xlou-cdm-ds-32e4dcb1-zmqj   74m   0%   10786Mi   18%
gke-xlou-cdm-frontend-a8771548-k40m   73m   0%   1890Mi   3%
02:31:00 DEBUG --- stderr ---
02:31:00 DEBUG
02:32:00 INFO
02:32:00 INFO [loop_until]: kubectl --namespace=xlou top pods
02:32:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:32:00 INFO [loop_until]: OK (rc = 0)
02:32:00 DEBUG --- stdout ---
02:32:00 DEBUG NAME   CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv   1m   4Mi
am-869fdb5db9-5j69v   9m   2728Mi
am-869fdb5db9-8dg94   18m   4432Mi
am-869fdb5db9-wt7sg   14m   2739Mi
ds-cts-0   403m   395Mi
ds-cts-1   159m   374Mi
ds-cts-2   195m   364Mi
ds-idrepo-0   3013m   13270Mi
ds-idrepo-1   224m   10285Mi
ds-idrepo-2   257m   10328Mi
end-user-ui-6845bc78c7-sqnhx   1m   4Mi
idm-65858d8c4c-d6c9h   9m   1272Mi
idm-65858d8c4c-pt5s9   12m   3422Mi
lodemon-66684b7694-c5c6m   5m   66Mi
login-ui-74d6fb46c-qcg59   1m   3Mi
overseer-0-788b4494cc-bdwtm   953m   366Mi
02:32:00 DEBUG --- stderr ---
02:32:00 DEBUG
02:32:00 INFO
02:32:00 INFO [loop_until]: kubectl --namespace=xlou top node
02:32:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:32:00 INFO [loop_until]: OK (rc = 0)
02:32:00 DEBUG --- stdout ---
02:32:00 DEBUG NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5   69m   0%   3505Mi   5%
gke-xlou-cdm-default-pool-f05840a3-jnx6   87m   0%   1941Mi   3%
gke-xlou-cdm-default-pool-f05840a3-jqvg   73m   0%   977Mi   1%
gke-xlou-cdm-default-pool-f05840a3-rt14   70m   0%   3590Mi   6%
gke-xlou-cdm-default-pool-f05840a3-tnc9   108m   0%   2520Mi   4%
gke-xlou-cdm-default-pool-f05840a3-vslq   77m   0%   5232Mi   8%
gke-xlou-cdm-default-pool-f05840a3-zj9v   79m   0%   4085Mi   6%
gke-xlou-cdm-ds-32e4dcb1-02kn   277m   1%   931Mi   1%
gke-xlou-cdm-ds-32e4dcb1-7x9g   3156m   19%   13702Mi   23%
gke-xlou-cdm-ds-32e4dcb1-hbvk   352m   2%   10810Mi   18%
gke-xlou-cdm-ds-32e4dcb1-l2t2   235m   1%   929Mi   1%
gke-xlou-cdm-ds-32e4dcb1-mt7t   427m   2%   962Mi   1%
gke-xlou-cdm-ds-32e4dcb1-zmqj   292m   1%   10793Mi   18%
gke-xlou-cdm-frontend-a8771548-k40m   1064m   6%   2153Mi   3%
02:32:00 DEBUG --- stderr ---
02:32:00 DEBUG
02:33:00 INFO
02:33:00 INFO [loop_until]: kubectl --namespace=xlou top pods
02:33:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:33:00 INFO [loop_until]: OK (rc = 0)
02:33:00 DEBUG --- stdout ---
02:33:00 DEBUG NAME   CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv   1m   4Mi
am-869fdb5db9-5j69v   11m   2737Mi
am-869fdb5db9-8dg94   14m   4435Mi
am-869fdb5db9-wt7sg   8m   2739Mi
ds-cts-0   7m   391Mi
ds-cts-1   8m   364Mi
ds-cts-2   7m   364Mi
ds-idrepo-0   2676m   13390Mi
ds-idrepo-1   31m   10292Mi
ds-idrepo-2   29m   10311Mi
end-user-ui-6845bc78c7-sqnhx   1m   4Mi
idm-65858d8c4c-d6c9h   7m   1283Mi
idm-65858d8c4c-pt5s9   7m   3423Mi
lodemon-66684b7694-c5c6m   2m   66Mi
login-ui-74d6fb46c-qcg59   1m   3Mi
overseer-0-788b4494cc-bdwtm   1095m   366Mi
02:33:00 DEBUG --- stderr ---
02:33:00 DEBUG
02:33:00 INFO
02:33:00 INFO [loop_until]: kubectl --namespace=xlou top node
02:33:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:33:00 INFO [loop_until]: OK (rc = 0)
02:33:00 DEBUG --- stdout ---
02:33:00 DEBUG NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5   70m   0%   3517Mi   5%
gke-xlou-cdm-default-pool-f05840a3-jnx6   88m   0%   1952Mi   3%
gke-xlou-cdm-default-pool-f05840a3-jqvg   70m   0%   976Mi   1%
gke-xlou-cdm-default-pool-f05840a3-rt14   63m   0%   3592Mi   6%
gke-xlou-cdm-default-pool-f05840a3-tnc9   100m   0%   2521Mi   4%
gke-xlou-cdm-default-pool-f05840a3-vslq   76m   0%   5235Mi   8%
gke-xlou-cdm-default-pool-f05840a3-zj9v   71m   0%   4088Mi   6%
gke-xlou-cdm-ds-32e4dcb1-02kn   60m   0%   928Mi   1%
gke-xlou-cdm-ds-32e4dcb1-7x9g   2827m   17%   13821Mi   23%
gke-xlou-cdm-ds-32e4dcb1-hbvk   79m   0%   10810Mi   18%
gke-xlou-cdm-ds-32e4dcb1-l2t2   60m   0%   931Mi   1%
gke-xlou-cdm-ds-32e4dcb1-mt7t   62m   0%   959Mi   1%
gke-xlou-cdm-ds-32e4dcb1-zmqj   81m   0%   10801Mi   18%
gke-xlou-cdm-frontend-a8771548-k40m   1135m   7%   2155Mi   3%
02:33:00 DEBUG --- stderr ---
02:33:00 DEBUG
02:34:00 INFO
02:34:00 INFO [loop_until]: kubectl --namespace=xlou top pods
02:34:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:34:00 INFO [loop_until]: OK (rc = 0)
02:34:00 DEBUG --- stdout ---
02:34:00 DEBUG NAME   CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv   1m   5Mi
am-869fdb5db9-5j69v   9m   2752Mi
am-869fdb5db9-8dg94   12m   4435Mi
am-869fdb5db9-wt7sg   13m   2740Mi
ds-cts-0   7m   392Mi
ds-cts-1   8m   364Mi
ds-cts-2   7m   364Mi
ds-idrepo-0   2794m   13441Mi
ds-idrepo-1   21m   10292Mi
ds-idrepo-2   26m   10312Mi
end-user-ui-6845bc78c7-sqnhx   1m   4Mi
idm-65858d8c4c-d6c9h   5m   1291Mi
idm-65858d8c4c-pt5s9   6m   3424Mi
lodemon-66684b7694-c5c6m   2m   66Mi
login-ui-74d6fb46c-qcg59   1m   3Mi
overseer-0-788b4494cc-bdwtm   1120m   367Mi
02:34:00 DEBUG --- stderr ---
02:34:00 DEBUG
02:34:00 INFO
02:34:00 INFO [loop_until]: kubectl --namespace=xlou top node
02:34:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:34:00 INFO [loop_until]: OK (rc = 0)
02:34:00 DEBUG --- stdout ---
02:34:00 DEBUG NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5   67m   0%   3528Mi   6%
gke-xlou-cdm-default-pool-f05840a3-jnx6   88m   0%   1961Mi   3%
gke-xlou-cdm-default-pool-f05840a3-jqvg   70m   0%   973Mi   1%
gke-xlou-cdm-default-pool-f05840a3-rt14   66m   0%   3595Mi   6%
gke-xlou-cdm-default-pool-f05840a3-tnc9   97m   0%   2535Mi   4%
gke-xlou-cdm-default-pool-f05840a3-vslq   73m   0%   5235Mi   8%
gke-xlou-cdm-default-pool-f05840a3-zj9v   72m   0%   4088Mi   6%
gke-xlou-cdm-ds-32e4dcb1-02kn   63m   0%   930Mi   1%
gke-xlou-cdm-ds-32e4dcb1-7x9g   2954m   18%   13926Mi   23%
gke-xlou-cdm-ds-32e4dcb1-hbvk   82m   0%   10810Mi   18%
gke-xlou-cdm-ds-32e4dcb1-l2t2   61m   0%   932Mi   1%
gke-xlou-cdm-ds-32e4dcb1-mt7t   63m   0%   962Mi   1%
gke-xlou-cdm-ds-32e4dcb1-zmqj   77m   0%   10802Mi   18%
gke-xlou-cdm-frontend-a8771548-k40m   1188m   7%   2158Mi   3%
02:34:00 DEBUG --- stderr ---
02:34:00 DEBUG
02:35:00 INFO
02:35:00 INFO [loop_until]: kubectl --namespace=xlou top pods
02:35:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:35:00 INFO [loop_until]: OK (rc = 0)
02:35:00 DEBUG --- stdout ---
02:35:00 DEBUG NAME   CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv   1m   4Mi
am-869fdb5db9-5j69v   9m   2761Mi
am-869fdb5db9-8dg94   10m   4435Mi
am-869fdb5db9-wt7sg   13m   2740Mi
ds-cts-0   7m   394Mi
ds-cts-1   9m   364Mi
ds-cts-2   7m   364Mi
ds-idrepo-0   2904m   13542Mi
ds-idrepo-1   26m   10294Mi
ds-idrepo-2   28m   10314Mi
end-user-ui-6845bc78c7-sqnhx   1m   4Mi
idm-65858d8c4c-d6c9h   10m   1304Mi
idm-65858d8c4c-pt5s9   6m   3423Mi
lodemon-66684b7694-c5c6m   2m   66Mi
login-ui-74d6fb46c-qcg59   1m   3Mi
overseer-0-788b4494cc-bdwtm   1154m   367Mi
02:35:00 DEBUG --- stderr ---
02:35:00 DEBUG
02:35:00 INFO
02:35:00 INFO [loop_until]: kubectl --namespace=xlou top node
02:35:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:35:00 INFO [loop_until]: OK (rc = 0)
02:35:00 DEBUG --- stdout ---
02:35:00 DEBUG NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5   73m   0%   3539Mi   6%
gke-xlou-cdm-default-pool-f05840a3-jnx6   90m   0%   1976Mi   3%
gke-xlou-cdm-default-pool-f05840a3-jqvg   69m   0%   972Mi   1%
gke-xlou-cdm-default-pool-f05840a3-rt14   68m   0%   3596Mi   6%
gke-xlou-cdm-default-pool-f05840a3-tnc9   100m   0%   2523Mi   4%
gke-xlou-cdm-default-pool-f05840a3-vslq   68m   0%   5235Mi   8%
gke-xlou-cdm-default-pool-f05840a3-zj9v   73m   0%   4090Mi   6%
gke-xlou-cdm-ds-32e4dcb1-02kn   59m   0%   932Mi   1%
gke-xlou-cdm-ds-32e4dcb1-7x9g   2998m   18%   13961Mi   23%
gke-xlou-cdm-ds-32e4dcb1-hbvk   78m   0%   10812Mi   18%
gke-xlou-cdm-ds-32e4dcb1-l2t2   60m   0%   932Mi   1%
gke-xlou-cdm-ds-32e4dcb1-mt7t   60m   0%   961Mi   1%
gke-xlou-cdm-ds-32e4dcb1-zmqj   79m   0%   10802Mi   18%
gke-xlou-cdm-frontend-a8771548-k40m   1236m   7%   2157Mi   3%
02:35:00 DEBUG --- stderr ---
02:35:00 DEBUG
02:36:00 INFO
02:36:00 INFO [loop_until]: kubectl --namespace=xlou top pods
02:36:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:36:00 INFO [loop_until]: OK (rc = 0)
02:36:00 DEBUG --- stdout ---
02:36:00 DEBUG NAME   CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv   1m   4Mi
am-869fdb5db9-5j69v   8m   2773Mi
am-869fdb5db9-8dg94   13m   4436Mi
am-869fdb5db9-wt7sg   12m   2741Mi
ds-cts-0   10m   393Mi
ds-cts-1   10m   364Mi
ds-cts-2   9m   364Mi
ds-idrepo-0   3034m   13612Mi
ds-idrepo-1   17m   10294Mi
ds-idrepo-2   24m   10314Mi
end-user-ui-6845bc78c7-sqnhx   1m   4Mi
idm-65858d8c4c-d6c9h   13m   1317Mi
idm-65858d8c4c-pt5s9   10m   3424Mi
lodemon-66684b7694-c5c6m   2m   66Mi
login-ui-74d6fb46c-qcg59   1m   3Mi
overseer-0-788b4494cc-bdwtm   1266m   367Mi
02:36:00 DEBUG --- stderr ---
02:36:00 DEBUG
02:36:00 INFO
02:36:00 INFO [loop_until]: kubectl --namespace=xlou top node
02:36:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:36:00 INFO [loop_until]: OK (rc = 0)
02:36:00 DEBUG --- stdout ---
02:36:00 DEBUG NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5   66m   0%   3548Mi   6%
gke-xlou-cdm-default-pool-f05840a3-jnx6   95m   0%   1989Mi   3%
gke-xlou-cdm-default-pool-f05840a3-jqvg   77m   0%   986Mi   1%
gke-xlou-cdm-default-pool-f05840a3-rt14   68m   0%   3595Mi   6%
gke-xlou-cdm-default-pool-f05840a3-tnc9   102m   0%   2529Mi   4%
gke-xlou-cdm-default-pool-f05840a3-vslq   73m   0%   5236Mi   8%
gke-xlou-cdm-default-pool-f05840a3-zj9v   75m   0%   4091Mi   6%
gke-xlou-cdm-ds-32e4dcb1-02kn   62m   0%   931Mi   1%
gke-xlou-cdm-ds-32e4dcb1-7x9g   3071m   19%   14026Mi   23%
gke-xlou-cdm-ds-32e4dcb1-hbvk   71m   0%   10818Mi   18%
gke-xlou-cdm-ds-32e4dcb1-l2t2   62m   0%   932Mi   1%
gke-xlou-cdm-ds-32e4dcb1-mt7t   64m   0%   963Mi   1%
gke-xlou-cdm-ds-32e4dcb1-zmqj   72m   0%   10802Mi   18%
gke-xlou-cdm-frontend-a8771548-k40m   1364m   8%   2160Mi   3%
02:36:00 DEBUG --- stderr ---
02:36:00 DEBUG
02:37:00 INFO
02:37:00 INFO [loop_until]: kubectl --namespace=xlou top pods
02:37:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:37:00 INFO [loop_until]: OK (rc = 0)
02:37:00 DEBUG --- stdout ---
02:37:00 DEBUG NAME   CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv   1m   4Mi
am-869fdb5db9-5j69v   15m   2833Mi
am-869fdb5db9-8dg94   10m   4436Mi
am-869fdb5db9-wt7sg   9m   2743Mi
ds-cts-0   9m   393Mi
ds-cts-1   8m   365Mi
ds-cts-2   7m   364Mi
ds-idrepo-0   15m   13612Mi
ds-idrepo-1   18m   10294Mi
ds-idrepo-2   20m   10318Mi
end-user-ui-6845bc78c7-sqnhx   1m   4Mi
idm-65858d8c4c-d6c9h   6m   1327Mi
idm-65858d8c4c-pt5s9   8m   3421Mi
lodemon-66684b7694-c5c6m   2m   66Mi
login-ui-74d6fb46c-qcg59   1m   3Mi
overseer-0-788b4494cc-bdwtm   1m   99Mi
02:37:00 DEBUG --- stderr ---
02:37:00 DEBUG
02:37:00 INFO
02:37:00 INFO [loop_until]: kubectl --namespace=xlou top node
02:37:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:37:00 INFO [loop_until]: OK (rc = 0)
02:37:00 DEBUG --- stdout ---
02:37:00 DEBUG NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5   70m   0%   3608Mi   6%
gke-xlou-cdm-default-pool-f05840a3-jnx6   88m   0%   1999Mi   3%
gke-xlou-cdm-default-pool-f05840a3-jqvg   72m   0%   977Mi   1%
gke-xlou-cdm-default-pool-f05840a3-rt14   67m   0%   3597Mi   6%
gke-xlou-cdm-default-pool-f05840a3-tnc9   99m   0%   2529Mi   4%
gke-xlou-cdm-default-pool-f05840a3-vslq   69m   0%   5236Mi   8%
gke-xlou-cdm-default-pool-f05840a3-zj9v   74m   0%   4090Mi   6%
gke-xlou-cdm-ds-32e4dcb1-02kn   62m   0%   932Mi   1%
gke-xlou-cdm-ds-32e4dcb1-7x9g   66m   0%   14028Mi   23%
gke-xlou-cdm-ds-32e4dcb1-hbvk   72m   0%   10821Mi   18%
gke-xlou-cdm-ds-32e4dcb1-l2t2   61m   0%   933Mi   1%
gke-xlou-cdm-ds-32e4dcb1-mt7t   64m   0%   962Mi   1%
gke-xlou-cdm-ds-32e4dcb1-zmqj   74m   0%   10806Mi   18%
gke-xlou-cdm-frontend-a8771548-k40m   65m   0%   1897Mi   3%
02:37:00 DEBUG --- stderr ---
02:37:00 DEBUG
02:38:00 INFO
02:38:00 INFO [loop_until]: kubectl --namespace=xlou top pods
02:38:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:38:00 INFO [loop_until]: OK (rc = 0)
02:38:00 DEBUG --- stdout ---
02:38:00 DEBUG NAME   CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv   1m   5Mi
am-869fdb5db9-5j69v   7m   2832Mi
am-869fdb5db9-8dg94   9m   4437Mi
am-869fdb5db9-wt7sg   17m   2744Mi
ds-cts-0   8m   393Mi
ds-cts-1   14m   365Mi
ds-cts-2   14m   365Mi
ds-idrepo-0   11m   13612Mi
ds-idrepo-1   2769m   12829Mi
ds-idrepo-2   20m   10319Mi
end-user-ui-6845bc78c7-sqnhx   1m   4Mi
idm-65858d8c4c-d6c9h   8m   1337Mi
idm-65858d8c4c-pt5s9   7m   3421Mi
lodemon-66684b7694-c5c6m   2m   66Mi
login-ui-74d6fb46c-qcg59   1m   3Mi
overseer-0-788b4494cc-bdwtm   893m   377Mi
02:38:00 DEBUG --- stderr ---
02:38:00 DEBUG
02:38:01 INFO
02:38:01 INFO [loop_until]: kubectl --namespace=xlou top node
02:38:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:38:01 INFO [loop_until]: OK (rc = 0)
02:38:01 DEBUG --- stdout ---
02:38:01 DEBUG NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5   65m   0%   3610Mi   6%
gke-xlou-cdm-default-pool-f05840a3-jnx6   89m   0%   2007Mi   3%
gke-xlou-cdm-default-pool-f05840a3-jqvg   73m   0%   978Mi   1%
gke-xlou-cdm-default-pool-f05840a3-rt14   68m   0%   3600Mi   6%
gke-xlou-cdm-default-pool-f05840a3-tnc9   103m   0%   2529Mi   4%
gke-xlou-cdm-default-pool-f05840a3-vslq   68m   0%   5238Mi   8%
gke-xlou-cdm-default-pool-f05840a3-zj9v   70m   0%   4084Mi   6%
gke-xlou-cdm-ds-32e4dcb1-02kn   66m   0%   933Mi   1%
gke-xlou-cdm-ds-32e4dcb1-7x9g   62m   0%   14029Mi   23%
gke-xlou-cdm-ds-32e4dcb1-hbvk   70m   0%   10823Mi   18%
gke-xlou-cdm-ds-32e4dcb1-l2t2   67m   0%   931Mi   1%
gke-xlou-cdm-ds-32e4dcb1-mt7t   62m   0%   962Mi   1%
gke-xlou-cdm-ds-32e4dcb1-zmqj   2824m   17%   13265Mi   22%
gke-xlou-cdm-frontend-a8771548-k40m   1070m   6%   2184Mi   3%
02:38:01 DEBUG --- stderr ---
02:38:01 DEBUG
02:39:00 INFO
02:39:00 INFO [loop_until]: kubectl --namespace=xlou top pods
02:39:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:39:00 INFO [loop_until]: OK (rc = 0)
02:39:00 DEBUG --- stdout ---
02:39:00 DEBUG NAME   CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv   1m   4Mi
am-869fdb5db9-5j69v   13m   2833Mi
am-869fdb5db9-8dg94   10m   4438Mi
am-869fdb5db9-wt7sg   8m   2744Mi
ds-cts-0   6m   393Mi
ds-cts-1   19m   365Mi
ds-cts-2   7m   365Mi
ds-idrepo-0   14m   13612Mi
ds-idrepo-1   2777m   13363Mi
ds-idrepo-2   16m   10320Mi
end-user-ui-6845bc78c7-sqnhx   1m   4Mi
idm-65858d8c4c-d6c9h   9m   1347Mi
idm-65858d8c4c-pt5s9   6m   3421Mi
lodemon-66684b7694-c5c6m   6m   66Mi
login-ui-74d6fb46c-qcg59   1m   3Mi
overseer-0-788b4494cc-bdwtm   1048m   377Mi
02:39:00 DEBUG --- stderr ---
02:39:00 DEBUG
02:39:01 INFO
02:39:01 INFO [loop_until]: kubectl --namespace=xlou top node
02:39:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:39:01 INFO [loop_until]: OK (rc = 0)
02:39:01 DEBUG --- stdout ---
02:39:01 DEBUG NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5   67m   0%   3612Mi   6%
gke-xlou-cdm-default-pool-f05840a3-jnx6   94m   0%   2025Mi   3%
gke-xlou-cdm-default-pool-f05840a3-jqvg   75m   0%   977Mi   1%
gke-xlou-cdm-default-pool-f05840a3-rt14   64m   0%   3598Mi   6%
gke-xlou-cdm-default-pool-f05840a3-tnc9   100m   0%   2534Mi   4%
gke-xlou-cdm-default-pool-f05840a3-vslq   71m   0%   5234Mi   8%
gke-xlou-cdm-default-pool-f05840a3-zj9v   72m   0%   4086Mi   6%
gke-xlou-cdm-ds-32e4dcb1-02kn   61m   0%   933Mi   1%
gke-xlou-cdm-ds-32e4dcb1-7x9g   65m   0%   14026Mi   23%
gke-xlou-cdm-ds-32e4dcb1-hbvk   68m   0%   10821Mi   18%
gke-xlou-cdm-ds-32e4dcb1-l2t2   72m   0%   930Mi   1%
gke-xlou-cdm-ds-32e4dcb1-mt7t   65m   0%   959Mi   1%
gke-xlou-cdm-ds-32e4dcb1-zmqj   2829m   17%   13792Mi   23%
gke-xlou-cdm-frontend-a8771548-k40m   1160m   7%   2173Mi   3%
02:39:01 DEBUG --- stderr ---
02:39:01 DEBUG
02:40:00 INFO
02:40:00 INFO [loop_until]: kubectl --namespace=xlou top pods
02:40:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:40:01 INFO [loop_until]: OK (rc = 0)
02:40:01 DEBUG --- stdout ---
02:40:01 DEBUG NAME   CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv   1m   5Mi
am-869fdb5db9-5j69v   10m   2833Mi
am-869fdb5db9-8dg94   10m   4437Mi
am-869fdb5db9-wt7sg   7m   2744Mi
ds-cts-0   8m   393Mi
ds-cts-1   8m   365Mi
ds-cts-2   6m   365Mi
ds-idrepo-0   24m   13612Mi
ds-idrepo-1   2749m   13353Mi
ds-idrepo-2   17m   10326Mi
end-user-ui-6845bc78c7-sqnhx   1m   4Mi
idm-65858d8c4c-d6c9h   8m   1364Mi
idm-65858d8c4c-pt5s9   9m   3421Mi
lodemon-66684b7694-c5c6m   1m   66Mi
login-ui-74d6fb46c-qcg59   1m   3Mi
overseer-0-788b4494cc-bdwtm   1146m   378Mi
02:40:01 DEBUG --- stderr ---
02:40:01 DEBUG
02:40:01 INFO
02:40:01 INFO [loop_until]: kubectl --namespace=xlou top node
02:40:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:40:01 INFO [loop_until]: OK (rc = 0)
02:40:01 DEBUG --- stdout ---
02:40:01 DEBUG NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5   71m   0%   3608Mi   6%
gke-xlou-cdm-default-pool-f05840a3-jnx6   92m   0%   2038Mi   3%
gke-xlou-cdm-default-pool-f05840a3-jqvg   77m   0%   983Mi   1%
gke-xlou-cdm-default-pool-f05840a3-rt14   68m   0%   3598Mi   6%
gke-xlou-cdm-default-pool-f05840a3-tnc9   99m   0%   2535Mi   4%
gke-xlou-cdm-default-pool-f05840a3-vslq   73m   0%   5235Mi   8%
gke-xlou-cdm-default-pool-f05840a3-zj9v   83m   0%   4089Mi   6%
gke-xlou-cdm-ds-32e4dcb1-02kn   61m   0%   933Mi   1%
gke-xlou-cdm-ds-32e4dcb1-7x9g   75m   0%   14027Mi   23%
gke-xlou-cdm-ds-32e4dcb1-hbvk   76m   0%   10829Mi   18%
gke-xlou-cdm-ds-32e4dcb1-l2t2   59m   0%   931Mi   1%
gke-xlou-cdm-ds-32e4dcb1-mt7t   64m   0%   962Mi   1%
gke-xlou-cdm-ds-32e4dcb1-zmqj   2753m   17%   13779Mi   23%
gke-xlou-cdm-frontend-a8771548-k40m   1207m   7%   2172Mi   3%
02:40:01 DEBUG --- stderr ---
02:40:01 DEBUG
02:41:01 INFO
02:41:01 INFO [loop_until]: kubectl --namespace=xlou top pods
02:41:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:41:01 INFO [loop_until]: OK (rc = 0)
02:41:01 DEBUG --- stdout ---
02:41:01 DEBUG NAME   CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv   1m   4Mi
am-869fdb5db9-5j69v   8m   2833Mi
am-869fdb5db9-8dg94   10m   4436Mi
am-869fdb5db9-wt7sg   12m   2745Mi
ds-cts-0   7m   393Mi
ds-cts-1   8m   365Mi
ds-cts-2   9m   365Mi
ds-idrepo-0   11m   13612Mi
ds-idrepo-1   2805m   13503Mi
ds-idrepo-2   26m   10326Mi
end-user-ui-6845bc78c7-sqnhx   1m   4Mi
idm-65858d8c4c-d6c9h   6m   1375Mi
idm-65858d8c4c-pt5s9   5m   3422Mi
lodemon-66684b7694-c5c6m   2m   66Mi
login-ui-74d6fb46c-qcg59   1m   3Mi
overseer-0-788b4494cc-bdwtm   1224m   378Mi
02:41:01 DEBUG --- stderr ---
02:41:01 DEBUG
02:41:01 INFO
02:41:01 INFO [loop_until]: kubectl --namespace=xlou top node
02:41:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:41:01 INFO [loop_until]: OK (rc = 0)
02:41:01 DEBUG --- stdout ---
02:41:01 DEBUG NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5   68m   0%   3611Mi   6%
gke-xlou-cdm-default-pool-f05840a3-jnx6   87m   0%   2058Mi   3%
gke-xlou-cdm-default-pool-f05840a3-jqvg   77m   0%   972Mi   1%
gke-xlou-cdm-default-pool-f05840a3-rt14   68m   0%   3598Mi   6%
gke-xlou-cdm-default-pool-f05840a3-tnc9   102m   0%   2536Mi   4%
gke-xlou-cdm-default-pool-f05840a3-vslq   70m   0%   5237Mi   8%
gke-xlou-cdm-default-pool-f05840a3-zj9v   73m   0%   4087Mi   6%
gke-xlou-cdm-ds-32e4dcb1-02kn   60m   0%   932Mi   1%
gke-xlou-cdm-ds-32e4dcb1-7x9g   62m   0%   14028Mi   23%
gke-xlou-cdm-ds-32e4dcb1-hbvk   79m   0%   10829Mi   18%
gke-xlou-cdm-ds-32e4dcb1-l2t2   61m   0%   931Mi   1%
gke-xlou-cdm-ds-32e4dcb1-mt7t   62m   0%   959Mi   1%
gke-xlou-cdm-ds-32e4dcb1-zmqj   2867m   18%   13922Mi   23%
gke-xlou-cdm-frontend-a8771548-k40m   1280m   8%   2172Mi   3%
02:41:01 DEBUG --- stderr ---
02:41:01 DEBUG
02:42:01 INFO
02:42:01 INFO [loop_until]: kubectl --namespace=xlou top pods
02:42:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:42:01 INFO [loop_until]: OK (rc = 0)
02:42:01 DEBUG --- stdout ---
02:42:01 DEBUG NAME   CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv   1m   4Mi
am-869fdb5db9-5j69v   7m   2833Mi
am-869fdb5db9-8dg94   10m   4438Mi
am-869fdb5db9-wt7sg   8m   2746Mi
ds-cts-0   8m   393Mi
ds-cts-1   8m   365Mi
ds-cts-2   7m   365Mi
ds-idrepo-0   11m   13613Mi
ds-idrepo-1   3125m   13665Mi
ds-idrepo-2   17m   10326Mi
end-user-ui-6845bc78c7-sqnhx   1m   4Mi
idm-65858d8c4c-d6c9h   8m   1397Mi
idm-65858d8c4c-pt5s9   7m   3422Mi
lodemon-66684b7694-c5c6m   2m   66Mi
login-ui-74d6fb46c-qcg59   1m   3Mi
overseer-0-788b4494cc-bdwtm   1275m   378Mi
02:42:01 DEBUG --- stderr ---
02:42:01 DEBUG
02:42:01 INFO
02:42:01 INFO [loop_until]: kubectl --namespace=xlou top node
02:42:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:42:01 INFO [loop_until]: OK (rc = 0)
02:42:01 DEBUG --- stdout ---
02:42:01 DEBUG NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5   66m   0%   3611Mi   6%
gke-xlou-cdm-default-pool-f05840a3-jnx6   90m   0%   2065Mi   3%
gke-xlou-cdm-default-pool-f05840a3-jqvg   77m   0%   975Mi   1%
gke-xlou-cdm-default-pool-f05840a3-rt14   63m   0%   3596Mi   6%
gke-xlou-cdm-default-pool-f05840a3-tnc9   108m   0%   2535Mi   4%
gke-xlou-cdm-default-pool-f05840a3-vslq   72m   0%   5238Mi   8%
gke-xlou-cdm-default-pool-f05840a3-zj9v   74m   0%   4089Mi   6%
gke-xlou-cdm-ds-32e4dcb1-02kn   60m   0%   933Mi   1%
gke-xlou-cdm-ds-32e4dcb1-7x9g   62m   0%   14028Mi   23%
gke-xlou-cdm-ds-32e4dcb1-hbvk   68m   0%   10827Mi   18%
gke-xlou-cdm-ds-32e4dcb1-l2t2   61m   0%   929Mi   1%
gke-xlou-cdm-ds-32e4dcb1-mt7t   64m   0%   960Mi   1%
gke-xlou-cdm-ds-32e4dcb1-zmqj   3227m   20%   14084Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m   1356m   8%   2170Mi   3%
02:42:01 DEBUG --- stderr ---
02:42:01 DEBUG
02:43:01 INFO
02:43:01 INFO [loop_until]: kubectl --namespace=xlou top pods
02:43:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:43:01 INFO [loop_until]: OK (rc = 0)
02:43:01 DEBUG --- stdout ---
02:43:01 DEBUG NAME   CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv   1m   4Mi
am-869fdb5db9-5j69v   19m   2829Mi
am-869fdb5db9-8dg94   11m   4438Mi
am-869fdb5db9-wt7sg   6m   2746Mi
ds-cts-0   7m   393Mi
ds-cts-1   8m   365Mi
ds-cts-2   7m   365Mi
ds-idrepo-0   12m   13613Mi
ds-idrepo-1   21m   13672Mi
ds-idrepo-2   18m   10327Mi
end-user-ui-6845bc78c7-sqnhx   1m   4Mi
idm-65858d8c4c-d6c9h   5m   1396Mi
idm-65858d8c4c-pt5s9   6m   3422Mi
lodemon-66684b7694-c5c6m   2m   66Mi
login-ui-74d6fb46c-qcg59   1m   3Mi
overseer-0-788b4494cc-bdwtm   1m   99Mi
02:43:01 DEBUG --- stderr ---
02:43:01 DEBUG
02:43:01 INFO
02:43:01 INFO [loop_until]: kubectl --namespace=xlou top node
02:43:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:43:01 INFO [loop_until]: OK (rc = 0)
02:43:01 DEBUG --- stdout ---
02:43:01 DEBUG NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5   78m   0%   3604Mi   6%
gke-xlou-cdm-default-pool-f05840a3-jnx6   87m   0%   2068Mi   3%
gke-xlou-cdm-default-pool-f05840a3-jqvg   76m   0%   978Mi   1%
gke-xlou-cdm-default-pool-f05840a3-rt14   65m   0%   3598Mi   6%
gke-xlou-cdm-default-pool-f05840a3-tnc9   108m   0%   2531Mi   4%
gke-xlou-cdm-default-pool-f05840a3-vslq   71m   0%   5236Mi   8%
gke-xlou-cdm-default-pool-f05840a3-zj9v   72m   0%   4090Mi   6%
gke-xlou-cdm-ds-32e4dcb1-02kn   60m   0%   934Mi   1%
gke-xlou-cdm-ds-32e4dcb1-7x9g   65m   0%   14029Mi   23%
gke-xlou-cdm-ds-32e4dcb1-hbvk   68m   0%   10831Mi   18%
gke-xlou-cdm-ds-32e4dcb1-l2t2   61m   0%   931Mi   1%
gke-xlou-cdm-ds-32e4dcb1-mt7t   59m   0%   962Mi   1%
gke-xlou-cdm-ds-32e4dcb1-zmqj   77m   0%   14088Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m   69m   0%   1896Mi   3%
02:43:01 DEBUG --- stderr ---
02:43:01 DEBUG
02:44:01 INFO
02:44:01 INFO [loop_until]: kubectl --namespace=xlou top pods
02:44:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:44:01 INFO [loop_until]: OK (rc = 0)
02:44:01 DEBUG --- stdout ---
02:44:01 DEBUG NAME   CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv   1m   4Mi
am-869fdb5db9-5j69v   8m   2833Mi
am-869fdb5db9-8dg94   7m   4438Mi
am-869fdb5db9-wt7sg   8m   2746Mi
ds-cts-0   8m   394Mi
ds-cts-1   11m   366Mi
ds-cts-2   9m   365Mi
ds-idrepo-0   16m   13613Mi
ds-idrepo-1   14m   13672Mi
ds-idrepo-2   1782m   11443Mi
end-user-ui-6845bc78c7-sqnhx   1m   4Mi
idm-65858d8c4c-d6c9h   6m   1396Mi
idm-65858d8c4c-pt5s9   6m   3423Mi
lodemon-66684b7694-c5c6m   7m   66Mi
login-ui-74d6fb46c-qcg59   1m   3Mi
overseer-0-788b4494cc-bdwtm   918m   370Mi
02:44:01 DEBUG --- stderr ---
02:44:01 DEBUG
02:44:01 INFO
02:44:01 INFO [loop_until]: kubectl --namespace=xlou top node
02:44:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:44:01 INFO [loop_until]: OK (rc = 0)
02:44:01 DEBUG --- stdout ---
02:44:01 DEBUG
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 68m 0% 3612Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 87m 0% 2070Mi 3% gke-xlou-cdm-default-pool-f05840a3-jqvg 75m 0% 981Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 64m 0% 3599Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 105m 0% 2534Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 68m 0% 5240Mi 8% gke-xlou-cdm-default-pool-f05840a3-zj9v 78m 0% 4091Mi 6% gke-xlou-cdm-ds-32e4dcb1-02kn 65m 0% 931Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 68m 0% 14027Mi 23% gke-xlou-cdm-ds-32e4dcb1-hbvk 2306m 14% 12112Mi 20% gke-xlou-cdm-ds-32e4dcb1-l2t2 64m 0% 930Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 64m 0% 963Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 71m 0% 14086Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1178m 7% 2163Mi 3% 02:44:01 DEBUG --- stderr --- 02:44:01 DEBUG 02:45:01 INFO 02:45:01 INFO [loop_until]: kubectl --namespace=xlou top pods 02:45:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:45:01 INFO [loop_until]: OK (rc = 0) 02:45:01 DEBUG --- stdout --- 02:45:01 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 7m 2834Mi am-869fdb5db9-8dg94 8m 4439Mi am-869fdb5db9-wt7sg 13m 2746Mi ds-cts-0 7m 394Mi ds-cts-1 8m 366Mi ds-cts-2 6m 365Mi ds-idrepo-0 11m 13613Mi ds-idrepo-1 22m 13672Mi ds-idrepo-2 2641m 13368Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 10m 1396Mi idm-65858d8c4c-pt5s9 6m 3423Mi lodemon-66684b7694-c5c6m 7m 66Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 1073m 370Mi 02:45:01 DEBUG --- stderr --- 02:45:01 DEBUG 02:45:01 INFO 02:45:01 INFO [loop_until]: kubectl --namespace=xlou top node 02:45:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:45:01 INFO [loop_until]: OK (rc = 0) 02:45:01 DEBUG --- stdout --- 02:45:01 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 68m 0% 3611Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 107m 0% 2070Mi 3% 
gke-xlou-cdm-default-pool-f05840a3-jqvg 75m 0% 984Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 71m 0% 3599Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 112m 0% 2533Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 72m 0% 5239Mi 8% gke-xlou-cdm-default-pool-f05840a3-zj9v 74m 0% 4092Mi 6% gke-xlou-cdm-ds-32e4dcb1-02kn 59m 0% 934Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 63m 0% 14028Mi 23% gke-xlou-cdm-ds-32e4dcb1-hbvk 2754m 17% 13786Mi 23% gke-xlou-cdm-ds-32e4dcb1-l2t2 60m 0% 933Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 61m 0% 960Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 75m 0% 14088Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1154m 7% 2165Mi 3% 02:45:01 DEBUG --- stderr --- 02:45:01 DEBUG 02:46:01 INFO 02:46:01 INFO [loop_until]: kubectl --namespace=xlou top pods 02:46:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:46:01 INFO [loop_until]: OK (rc = 0) 02:46:01 DEBUG --- stdout --- 02:46:01 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 10m 2838Mi am-869fdb5db9-8dg94 10m 4440Mi am-869fdb5db9-wt7sg 9m 2746Mi ds-cts-0 7m 394Mi ds-cts-1 9m 367Mi ds-cts-2 7m 365Mi ds-idrepo-0 11m 13612Mi ds-idrepo-1 14m 13672Mi ds-idrepo-2 2802m 13358Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 10m 1396Mi idm-65858d8c4c-pt5s9 7m 3423Mi lodemon-66684b7694-c5c6m 2m 66Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 1142m 370Mi 02:46:01 DEBUG --- stderr --- 02:46:01 DEBUG 02:46:01 INFO 02:46:01 INFO [loop_until]: kubectl --namespace=xlou top node 02:46:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:46:02 INFO [loop_until]: OK (rc = 0) 02:46:02 DEBUG --- stdout --- 02:46:02 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 69m 0% 3617Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 89m 0% 2068Mi 3% gke-xlou-cdm-default-pool-f05840a3-jqvg 73m 0% 978Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 64m 0% 3600Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 106m 0% 2532Mi 4% 
gke-xlou-cdm-default-pool-f05840a3-vslq 67m 0% 5237Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 72m 0% 4092Mi 6%
gke-xlou-cdm-ds-32e4dcb1-02kn 59m 0% 931Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 62m 0% 14030Mi 23%
gke-xlou-cdm-ds-32e4dcb1-hbvk 2870m 18% 13778Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 61m 0% 930Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 62m 0% 958Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 68m 0% 14083Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 1219m 7% 2163Mi 3%
02:46:02 DEBUG --- stderr ---
02:46:02 DEBUG
02:47:01 INFO
02:47:01 INFO [loop_until]: kubectl --namespace=xlou top pods
02:47:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:47:01 INFO [loop_until]: OK (rc = 0)
02:47:01 DEBUG --- stdout ---
02:47:01 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 10m 2838Mi
am-869fdb5db9-8dg94 11m 4441Mi
am-869fdb5db9-wt7sg 7m 2747Mi
ds-cts-0 7m 394Mi
ds-cts-1 8m 367Mi
ds-cts-2 6m 366Mi
ds-idrepo-0 11m 13613Mi
ds-idrepo-1 21m 13672Mi
ds-idrepo-2 2665m 13393Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 6m 1397Mi
idm-65858d8c4c-pt5s9 7m 3428Mi
lodemon-66684b7694-c5c6m 6m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 1247m 370Mi
02:47:01 DEBUG --- stderr ---
02:47:01 DEBUG
02:47:02 INFO
02:47:02 INFO [loop_until]: kubectl --namespace=xlou top node
02:47:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:47:02 INFO [loop_until]: OK (rc = 0)
02:47:02 DEBUG --- stdout ---
02:47:02 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 67m 0% 3614Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 87m 0% 2056Mi 3%
gke-xlou-cdm-default-pool-f05840a3-jqvg 72m 0% 977Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 67m 0% 3613Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 105m 0% 2531Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 70m 0% 5240Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 72m 0% 4097Mi 6%
gke-xlou-cdm-ds-32e4dcb1-02kn 60m 0% 933Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 62m 0% 14031Mi 23%
gke-xlou-cdm-ds-32e4dcb1-hbvk 2796m 17% 13810Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 934Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 60m 0% 960Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 72m 0% 14088Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 1286m 8% 2165Mi 3%
02:47:02 DEBUG --- stderr ---
02:47:02 DEBUG
02:48:01 INFO
02:48:01 INFO [loop_until]: kubectl --namespace=xlou top pods
02:48:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:48:01 INFO [loop_until]: OK (rc = 0)
02:48:01 DEBUG --- stdout ---
02:48:01 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 8m 2839Mi
am-869fdb5db9-8dg94 11m 4452Mi
am-869fdb5db9-wt7sg 7m 2748Mi
ds-cts-0 6m 394Mi
ds-cts-1 8m 367Mi
ds-cts-2 7m 366Mi
ds-idrepo-0 12m 13612Mi
ds-idrepo-1 18m 13672Mi
ds-idrepo-2 2854m 13594Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 6m 1398Mi
idm-65858d8c4c-pt5s9 7m 3429Mi
lodemon-66684b7694-c5c6m 6m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 1282m 370Mi
02:48:01 DEBUG --- stderr ---
02:48:01 DEBUG
02:48:02 INFO
02:48:02 INFO [loop_until]: kubectl --namespace=xlou top node
02:48:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:48:02 INFO [loop_until]: OK (rc = 0)
02:48:02 DEBUG --- stdout ---
02:48:02 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 65m 0% 3614Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 80m 0% 2066Mi 3%
gke-xlou-cdm-default-pool-f05840a3-jqvg 72m 0% 980Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 63m 0% 3602Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 106m 0% 2530Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 69m 0% 5253Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 74m 0% 4094Mi 6%
gke-xlou-cdm-ds-32e4dcb1-02kn 58m 0% 936Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 61m 0% 14031Mi 23%
gke-xlou-cdm-ds-32e4dcb1-hbvk 2886m 18% 14002Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 935Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 60m 0% 957Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 73m 0% 14087Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 1333m 8% 2167Mi 3%
02:48:02 DEBUG --- stderr ---
02:48:02 DEBUG
02:49:01 INFO
02:49:01 INFO [loop_until]: kubectl --namespace=xlou top pods
02:49:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:49:01 INFO [loop_until]: OK (rc = 0)
02:49:01 DEBUG --- stdout ---
02:49:01 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 21m 2839Mi
am-869fdb5db9-8dg94 9m 4453Mi
am-869fdb5db9-wt7sg 7m 2754Mi
ds-cts-0 8m 394Mi
ds-cts-1 7m 367Mi
ds-cts-2 7m 365Mi
ds-idrepo-0 14m 13613Mi
ds-idrepo-1 13m 13672Mi
ds-idrepo-2 13m 13642Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 8m 1397Mi
idm-65858d8c4c-pt5s9 10m 3429Mi
lodemon-66684b7694-c5c6m 6m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 39m 99Mi
02:49:01 DEBUG --- stderr ---
02:49:01 DEBUG
02:49:02 INFO
02:49:02 INFO [loop_until]: kubectl --namespace=xlou top node
02:49:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:49:02 INFO [loop_until]: OK (rc = 0)
02:49:02 DEBUG --- stdout ---
02:49:02 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 77m 0% 3614Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 83m 0% 2066Mi 3%
gke-xlou-cdm-default-pool-f05840a3-jqvg 71m 0% 981Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 59m 0% 3612Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 106m 0% 2531Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 65m 0% 5253Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 76m 0% 4095Mi 6%
gke-xlou-cdm-ds-32e4dcb1-02kn 62m 0% 935Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 62m 0% 14032Mi 23%
gke-xlou-cdm-ds-32e4dcb1-hbvk 386m 2% 14052Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 62m 0% 933Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 66m 0% 962Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 68m 0% 14097Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 426m 2% 1896Mi 3%
02:49:02 DEBUG --- stderr ---
02:49:02 DEBUG
02:50:01 INFO
02:50:01 INFO [loop_until]: kubectl --namespace=xlou top pods
02:50:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:50:02 INFO [loop_until]: OK (rc = 0)
02:50:02 DEBUG --- stdout ---
02:50:02 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 6m 2839Mi
am-869fdb5db9-8dg94 18m 4455Mi
am-869fdb5db9-wt7sg 14m 2752Mi
ds-cts-0 7m 394Mi
ds-cts-1 7m 367Mi
ds-cts-2 14m 366Mi
ds-idrepo-0 11m 13612Mi
ds-idrepo-1 14m 13671Mi
ds-idrepo-2 13m 13642Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 10m 1398Mi
idm-65858d8c4c-pt5s9 7m 3429Mi
lodemon-66684b7694-c5c6m 5m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 1911m 479Mi
02:50:02 DEBUG --- stderr ---
02:50:02 DEBUG
02:50:02 INFO
02:50:02 INFO [loop_until]: kubectl --namespace=xlou top node
02:50:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:50:02 INFO [loop_until]: OK (rc = 0)
02:50:02 DEBUG --- stdout ---
02:50:02 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 67m 0% 3616Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 93m 0% 2074Mi 3%
gke-xlou-cdm-default-pool-f05840a3-jqvg 73m 0% 977Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 73m 0% 3610Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 106m 0% 2531Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 72m 0% 5254Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 72m 0% 4099Mi 6%
gke-xlou-cdm-ds-32e4dcb1-02kn 70m 0% 947Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 63m 0% 14031Mi 23%
gke-xlou-cdm-ds-32e4dcb1-hbvk 64m 0% 14051Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 935Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 66m 0% 975Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 69m 0% 14087Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 1773m 11% 2184Mi 3%
02:50:02 DEBUG --- stderr ---
02:50:02 DEBUG
02:51:02 INFO
02:51:02 INFO [loop_until]: kubectl --namespace=xlou top pods
02:51:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:51:02 INFO [loop_until]: OK (rc = 0)
02:51:02 DEBUG --- stdout ---
02:51:02 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 29m 2848Mi
am-869fdb5db9-8dg94 20m 4458Mi
am-869fdb5db9-wt7sg 7m 2752Mi
ds-cts-0 7m 395Mi
ds-cts-1 9m 368Mi
ds-cts-2 7m 366Mi
ds-idrepo-0 614m 13612Mi
ds-idrepo-1 16m 13671Mi
ds-idrepo-2 19m 13642Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 608m 3312Mi
idm-65858d8c4c-pt5s9 505m 3463Mi
lodemon-66684b7694-c5c6m 1m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 158m 491Mi
02:51:02 DEBUG --- stderr ---
02:51:02 DEBUG
02:51:02 INFO
02:51:02 INFO [loop_until]: kubectl --namespace=xlou top node
02:51:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:51:02 INFO [loop_until]: OK (rc = 0)
02:51:02 DEBUG --- stdout ---
02:51:02 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 79m 0% 3627Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 641m 4% 3989Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 76m 0% 980Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 62m 0% 3610Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 147m 0% 2547Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 80m 0% 5259Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 652m 4% 4133Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 59m 0% 937Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 675m 4% 14030Mi 23%
gke-xlou-cdm-ds-32e4dcb1-hbvk 71m 0% 14051Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 61m 0% 932Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 65m 0% 961Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 67m 0% 14087Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 212m 1% 2284Mi 3%
02:51:02 DEBUG --- stderr ---
02:51:02 DEBUG
02:52:02 INFO
02:52:02 INFO [loop_until]: kubectl --namespace=xlou top pods
02:52:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:52:02 INFO [loop_until]: OK (rc = 0)
02:52:02 DEBUG --- stdout ---
02:52:02 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 10m 2849Mi
am-869fdb5db9-8dg94 10m 4459Mi
am-869fdb5db9-wt7sg 8m 2752Mi
ds-cts-0 6m 395Mi
ds-cts-1 8m 368Mi
ds-cts-2 7m 367Mi
ds-idrepo-0 506m 13612Mi
ds-idrepo-1 19m 13671Mi
ds-idrepo-2 13m 13642Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 558m 3320Mi
idm-65858d8c4c-pt5s9 402m 3468Mi
lodemon-66684b7694-c5c6m 6m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 115m 495Mi
02:52:02 DEBUG --- stderr ---
02:52:02 DEBUG
02:52:02 INFO
02:52:02 INFO [loop_until]: kubectl --namespace=xlou top node
02:52:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:52:02 INFO [loop_until]: OK (rc = 0)
02:52:02 DEBUG --- stdout ---
02:52:02 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 65m 0% 3626Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 665m 4% 3987Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 77m 0% 982Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 61m 0% 3610Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 141m 0% 2556Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 68m 0% 5260Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 530m 3% 4136Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 59m 0% 935Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 547m 3% 14028Mi 23%
gke-xlou-cdm-ds-32e4dcb1-hbvk 69m 0% 14063Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 61m 0% 935Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 60m 0% 963Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 72m 0% 14087Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 180m 1% 2288Mi 3%
02:52:02 DEBUG --- stderr ---
02:52:02 DEBUG
02:53:02 INFO
02:53:02 INFO [loop_until]: kubectl --namespace=xlou top pods
02:53:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:53:02 INFO [loop_until]: OK (rc = 0)
02:53:02 DEBUG --- stdout ---
02:53:02 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 10m 2849Mi
am-869fdb5db9-8dg94 8m 4459Mi
am-869fdb5db9-wt7sg 10m 2757Mi
ds-cts-0 7m 395Mi
ds-cts-1 13m 368Mi
ds-cts-2 7m 366Mi
ds-idrepo-0 504m 13613Mi
ds-idrepo-1 13m 13671Mi
ds-idrepo-2 14m 13642Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 518m 3321Mi
idm-65858d8c4c-pt5s9 403m 3469Mi
lodemon-66684b7694-c5c6m 4m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 67m 506Mi
02:53:02 DEBUG --- stderr ---
02:53:02 DEBUG
02:53:02 INFO
02:53:02 INFO [loop_until]: kubectl --namespace=xlou top node
02:53:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:53:02 INFO [loop_until]: OK (rc = 0)
02:53:02 DEBUG --- stdout ---
02:53:02 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 65m 0% 3630Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 601m 3% 3989Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 74m 0% 982Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 61m 0% 3607Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 138m 0% 2557Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 69m 0% 5262Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 466m 2% 4138Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 59m 0% 934Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 558m 3% 14032Mi 23%
gke-xlou-cdm-ds-32e4dcb1-hbvk 65m 0% 14051Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 65m 0% 934Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 62m 0% 964Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 65m 0% 14089Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 143m 0% 2293Mi 3%
02:53:02 DEBUG --- stderr ---
02:53:02 DEBUG
02:54:02 INFO
02:54:02 INFO [loop_until]: kubectl --namespace=xlou top pods
02:54:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:54:02 INFO [loop_until]: OK (rc = 0)
02:54:02 DEBUG --- stdout ---
02:54:02 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 8m 2849Mi
am-869fdb5db9-8dg94 15m 4463Mi
am-869fdb5db9-wt7sg 12m 2757Mi
ds-cts-0 7m 395Mi
ds-cts-1 8m 369Mi
ds-cts-2 7m 366Mi
ds-idrepo-0 509m 13613Mi
ds-idrepo-1 13m 13671Mi
ds-idrepo-2 13m 13642Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 556m 3329Mi
idm-65858d8c4c-pt5s9 444m 3467Mi
lodemon-66684b7694-c5c6m 2m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 62m 509Mi
02:54:02 DEBUG --- stderr ---
02:54:02 DEBUG
02:54:02 INFO
02:54:02 INFO [loop_until]: kubectl --namespace=xlou top node
02:54:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:54:03 INFO [loop_until]: OK (rc = 0)
02:54:03 DEBUG --- stdout ---
02:54:03 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 66m 0% 3629Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 647m 4% 3996Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 75m 0% 982Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 67m 0% 3614Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 139m 0% 2557Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 82m 0% 5262Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 484m 3% 4137Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 61m 0% 931Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 606m 3% 14030Mi 23%
gke-xlou-cdm-ds-32e4dcb1-hbvk 66m 0% 14051Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 61m 0% 936Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 64m 0% 961Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 65m 0% 14088Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 129m 0% 2302Mi 3%
02:54:03 DEBUG --- stderr ---
02:54:03 DEBUG
02:55:02 INFO
02:55:02 INFO [loop_until]: kubectl --namespace=xlou top pods
02:55:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:55:02 INFO [loop_until]: OK (rc = 0)
02:55:02 DEBUG --- stdout ---
02:55:02 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 6m 2849Mi
am-869fdb5db9-8dg94 8m 4457Mi
am-869fdb5db9-wt7sg 7m 2758Mi
ds-cts-0 7m 395Mi
ds-cts-1 16m 368Mi
ds-cts-2 6m 366Mi
ds-idrepo-0 438m 13613Mi
ds-idrepo-1 15m 13672Mi
ds-idrepo-2 13m 13642Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 460m 3338Mi
idm-65858d8c4c-pt5s9 322m 3466Mi
lodemon-66684b7694-c5c6m 2m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 67m 510Mi
02:55:02 DEBUG --- stderr ---
02:55:02 DEBUG
02:55:03 INFO
02:55:03 INFO [loop_until]: kubectl --namespace=xlou top node
02:55:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:55:03 INFO [loop_until]: OK (rc = 0)
02:55:03 DEBUG --- stdout ---
02:55:03 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 65m 0% 3630Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 551m 3% 4004Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 74m 0% 980Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 62m 0% 3614Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 138m 0% 2561Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 67m 0% 5259Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 413m 2% 4135Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 59m 0% 933Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 498m 3% 14044Mi 23%
gke-xlou-cdm-ds-32e4dcb1-hbvk 65m 0% 14051Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 69m 0% 932Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 62m 0% 963Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 70m 0% 14091Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 135m 0% 2300Mi 3%
02:55:03 DEBUG --- stderr ---
02:55:03 DEBUG
02:56:02 INFO
02:56:02 INFO [loop_until]: kubectl --namespace=xlou top pods
02:56:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:56:02 INFO [loop_until]: OK (rc = 0)
02:56:02 DEBUG --- stdout ---
02:56:02 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 7m 2850Mi
am-869fdb5db9-8dg94 11m 4457Mi
am-869fdb5db9-wt7sg 6m 2758Mi
ds-cts-0 7m 395Mi
ds-cts-1 9m 368Mi
ds-cts-2 6m 366Mi
ds-idrepo-0 646m 13615Mi
ds-idrepo-1 16m 13671Mi
ds-idrepo-2 18m 13643Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 638m 3329Mi
idm-65858d8c4c-pt5s9 428m 3486Mi
lodemon-66684b7694-c5c6m 2m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 74m 511Mi
02:56:02 DEBUG --- stderr ---
02:56:02 DEBUG
02:56:03 INFO
02:56:03 INFO [loop_until]: kubectl --namespace=xlou top node
02:56:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:56:03 INFO [loop_until]: OK (rc = 0)
02:56:03 DEBUG --- stdout ---
02:56:03 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 65m 0% 3629Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 708m 4% 3998Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 77m 0% 978Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 60m 0% 3614Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 140m 0% 2557Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 76m 0% 5260Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 513m 3% 4156Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 61m 0% 934Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 795m 5% 14121Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 71m 0% 14053Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 60m 0% 932Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 64m 0% 964Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 68m 0% 14088Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 140m 0% 2304Mi 3%
02:56:03 DEBUG --- stderr ---
02:56:03 DEBUG
02:57:02 INFO
02:57:02 INFO [loop_until]: kubectl --namespace=xlou top pods
02:57:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:57:02 INFO [loop_until]: OK (rc = 0)
02:57:02 DEBUG --- stdout ---
02:57:02 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 8m 2850Mi
am-869fdb5db9-8dg94 7m 4460Mi
am-869fdb5db9-wt7sg 6m 2758Mi
ds-cts-0 6m 395Mi
ds-cts-1 7m 368Mi
ds-cts-2 6m 366Mi
ds-idrepo-0 638m 13772Mi
ds-idrepo-1 13m 13671Mi
ds-idrepo-2 13m 13643Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 546m 3333Mi
idm-65858d8c4c-pt5s9 435m 3475Mi
lodemon-66684b7694-c5c6m 2m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 61m 512Mi
02:57:02 DEBUG --- stderr ---
02:57:02 DEBUG
02:57:03 INFO
02:57:03 INFO [loop_until]: kubectl --namespace=xlou top node
02:57:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:57:03 INFO [loop_until]: OK (rc = 0)
02:57:03 DEBUG --- stdout ---
02:57:03 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 67m 0% 3631Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 623m 3% 4002Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 77m 0% 983Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 60m 0% 3615Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 141m 0% 2558Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 67m 0% 5259Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 542m 3% 4144Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 60m 0% 932Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 701m 4% 14179Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 65m 0% 14051Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 60m 0% 936Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 62m 0% 963Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 66m 0% 14088Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 128m 0% 2305Mi 3%
02:57:03 DEBUG --- stderr ---
02:57:03 DEBUG
02:58:02 INFO
02:58:02 INFO [loop_until]: kubectl --namespace=xlou top pods
02:58:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:58:02 INFO [loop_until]: OK (rc = 0)
02:58:02 DEBUG --- stdout ---
02:58:02 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 8m 2850Mi
am-869fdb5db9-8dg94 8m 4460Mi
am-869fdb5db9-wt7sg 8m 2761Mi
ds-cts-0 6m 396Mi
ds-cts-1 12m 369Mi
ds-cts-2 6m 367Mi
ds-idrepo-0 497m 13771Mi
ds-idrepo-1 13m 13671Mi
ds-idrepo-2 16m 13643Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 536m 3335Mi
idm-65858d8c4c-pt5s9 379m 3475Mi
lodemon-66684b7694-c5c6m 2m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 58m 512Mi
02:58:02 DEBUG --- stderr ---
02:58:02 DEBUG
02:58:03 INFO
02:58:03 INFO [loop_until]: kubectl --namespace=xlou top node
02:58:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:58:03 INFO [loop_until]: OK (rc = 0)
02:58:03 DEBUG --- stdout ---
02:58:03 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 68m 0% 3624Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 616m 3% 4004Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 77m 0% 980Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 63m 0% 3618Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 139m 0% 2564Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 69m 0% 5255Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 458m 2% 4143Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 62m 0% 933Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 581m 3% 14186Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 67m 0% 14054Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 61m 0% 936Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 60m 0% 963Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 66m 0% 14092Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 120m 0% 2305Mi 3%
02:58:03 DEBUG --- stderr ---
02:58:03 DEBUG
02:59:02 INFO
02:59:02 INFO [loop_until]: kubectl --namespace=xlou top pods
02:59:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:59:03 INFO [loop_until]: OK (rc = 0)
02:59:03 DEBUG --- stdout ---
02:59:03 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 19m 2853Mi
am-869fdb5db9-8dg94 7m 4460Mi
am-869fdb5db9-wt7sg 7m 2761Mi
ds-cts-0 7m 396Mi
ds-cts-1 8m 369Mi
ds-cts-2 7m 367Mi
ds-idrepo-0 414m 13772Mi
ds-idrepo-1 13m 13671Mi
ds-idrepo-2 15m 13643Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 437m 3336Mi
idm-65858d8c4c-pt5s9 329m 3469Mi
lodemon-66684b7694-c5c6m 6m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 76m 512Mi
02:59:03 DEBUG --- stderr ---
02:59:03 DEBUG
02:59:03 INFO
02:59:03 INFO [loop_until]: kubectl --namespace=xlou top node
02:59:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
02:59:03 INFO [loop_until]: OK (rc = 0)
02:59:03 DEBUG --- stdout ---
02:59:03 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 77m 0% 3630Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 499m 3% 4005Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 78m 0% 981Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 61m 0% 3618Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 138m 0% 2576Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 65m 0% 5261Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 418m 2% 4141Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 64m 0% 933Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 506m 3% 14184Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 71m 0% 14055Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 60m 0% 936Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 65m 0% 963Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 70m 0% 14089Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 143m 0% 2306Mi 3%
02:59:03 DEBUG --- stderr ---
02:59:03 DEBUG
03:00:03 INFO
03:00:03 INFO [loop_until]: kubectl --namespace=xlou top pods
03:00:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:00:03 INFO [loop_until]: OK (rc = 0)
03:00:03 DEBUG --- stdout ---
03:00:03 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 5Mi
am-869fdb5db9-5j69v 6m 2853Mi
am-869fdb5db9-8dg94 7m 4460Mi
am-869fdb5db9-wt7sg 6m 2762Mi
ds-cts-0 6m 396Mi
ds-cts-1 11m 369Mi
ds-cts-2 6m 366Mi
ds-idrepo-0 576m 13773Mi
ds-idrepo-1 13m 13671Mi
ds-idrepo-2 14m 13643Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 402m 3334Mi
idm-65858d8c4c-pt5s9 423m 3471Mi
lodemon-66684b7694-c5c6m 2m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 56m 512Mi
03:00:03 DEBUG --- stderr ---
03:00:03 DEBUG
03:00:03 INFO
03:00:03 INFO [loop_until]: kubectl --namespace=xlou top node
03:00:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:00:03 INFO [loop_until]: OK (rc = 0)
03:00:03 DEBUG --- stdout ---
03:00:03 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 65m 0% 3635Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 606m 3% 4001Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 76m 0% 983Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 59m 0% 3620Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 141m 0% 2559Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 69m 0% 5261Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 549m 3% 4144Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 61m 0% 936Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 745m 4% 14184Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 67m 0% 14054Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 61m 0% 937Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 61m 0% 964Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 65m 0% 14093Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 120m 0% 2303Mi 3%
03:00:03 DEBUG --- stderr ---
03:00:03 DEBUG
03:01:03 INFO
03:01:03 INFO [loop_until]: kubectl --namespace=xlou top pods
03:01:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:01:03 INFO [loop_until]: OK (rc = 0)
03:01:03 DEBUG --- stdout ---
03:01:03 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 5Mi
am-869fdb5db9-5j69v 6m 2853Mi
am-869fdb5db9-8dg94 7m 4457Mi
am-869fdb5db9-wt7sg 8m 2762Mi
ds-cts-0 8m 397Mi
ds-cts-1 9m 369Mi
ds-cts-2 7m 367Mi
ds-idrepo-0 540m 13810Mi
ds-idrepo-1 14m 13671Mi
ds-idrepo-2 20m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 485m 3335Mi
idm-65858d8c4c-pt5s9 404m 3472Mi
lodemon-66684b7694-c5c6m 2m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 54m 513Mi
03:01:03 DEBUG --- stderr ---
03:01:03 DEBUG
03:01:03 INFO
03:01:03 INFO [loop_until]: kubectl --namespace=xlou top node
03:01:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:01:03 INFO [loop_until]: OK (rc = 0)
03:01:03 DEBUG --- stdout ---
03:01:03 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 65m 0% 3634Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 571m 3% 3995Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 77m 0% 977Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 63m 0% 3616Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 131m 0% 2562Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 66m 0% 5256Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 427m 2% 4141Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 63m 0% 937Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 630m 3% 14221Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 65m 0% 14053Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 61m 0% 936Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 60m 0% 965Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 68m 0% 14095Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 121m 0% 2305Mi 3%
03:01:03 DEBUG --- stderr ---
03:01:03 DEBUG
03:02:03 INFO
03:02:03 INFO [loop_until]: kubectl --namespace=xlou top pods
03:02:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:02:03 INFO [loop_until]: OK (rc = 0)
03:02:03 DEBUG --- stdout ---
03:02:03 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 7m 2855Mi
am-869fdb5db9-8dg94 13m 4459Mi
am-869fdb5db9-wt7sg 8m 2762Mi
ds-cts-0 8m 397Mi
ds-cts-1 7m 370Mi
ds-cts-2 7m 368Mi
ds-idrepo-0 596m 13811Mi
ds-idrepo-1 18m 13671Mi
ds-idrepo-2 13m 13643Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 513m 3336Mi
idm-65858d8c4c-pt5s9 406m 3473Mi
lodemon-66684b7694-c5c6m 2m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 58m 512Mi
03:02:03 DEBUG --- stderr ---
03:02:03 DEBUG
03:02:03 INFO
03:02:03 INFO [loop_until]: kubectl --namespace=xlou top node
03:02:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:02:03 INFO [loop_until]: OK (rc = 0)
03:02:03 DEBUG --- stdout ---
03:02:03 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 62m 0% 3634Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 600m 3% 3999Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 74m 0% 980Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 64m 0% 3617Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 137m 0% 2560Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 70m 0% 5257Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 455m 2% 4145Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 60m 0% 936Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 548m 3% 14226Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 64m 0% 14054Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 60m 0% 938Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 61m 0% 961Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 71m 0% 14094Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 123m 0% 2305Mi 3%
03:02:03 DEBUG --- stderr ---
03:02:03 DEBUG
03:03:03 INFO
03:03:03 INFO [loop_until]: kubectl --namespace=xlou top pods
03:03:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:03:03 INFO [loop_until]: OK (rc = 0)
03:03:03 DEBUG --- stdout ---
03:03:03 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 6m 2855Mi
am-869fdb5db9-8dg94 8m 4460Mi
am-869fdb5db9-wt7sg 7m 2757Mi
ds-cts-0 7m 398Mi
ds-cts-1 9m 369Mi
ds-cts-2 6m 367Mi
ds-idrepo-0 525m 13814Mi
ds-idrepo-1 19m 13671Mi
ds-idrepo-2 13m 13643Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 476m 3341Mi
idm-65858d8c4c-pt5s9 416m 3474Mi
lodemon-66684b7694-c5c6m 1m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 63m 515Mi
03:03:03 DEBUG --- stderr ---
03:03:03 DEBUG
03:03:04 INFO
03:03:04 INFO [loop_until]: kubectl --namespace=xlou top node
03:03:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:03:04 INFO [loop_until]: OK (rc = 0)
03:03:04 DEBUG --- stdout ---
03:03:04 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 64m 0% 3635Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 557m 3% 4006Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 70m 0% 982Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 61m 0% 3612Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 135m 0% 2561Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 67m 0% 5259Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 472m 2% 4142Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 60m 0% 937Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 644m 4% 14225Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 66m 0% 14053Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 58m 0% 939Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 61m 0% 962Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 73m 0% 14095Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 131m 0% 2305Mi 3%
03:03:04 DEBUG --- stderr ---
03:03:04 DEBUG
03:04:03 INFO
03:04:03 INFO [loop_until]: kubectl --namespace=xlou top pods
03:04:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:04:03 INFO [loop_until]: OK (rc = 0)
03:04:03 DEBUG --- stdout ---
03:04:03 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 9m 2855Mi
am-869fdb5db9-8dg94 9m 4462Mi
am-869fdb5db9-wt7sg 6m 2757Mi
ds-cts-0 6m 397Mi
ds-cts-1 7m 369Mi
ds-cts-2 7m 368Mi
ds-idrepo-0 616m 13814Mi
ds-idrepo-1 13m 13671Mi
ds-idrepo-2 14m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 499m 3341Mi
idm-65858d8c4c-pt5s9 387m 3475Mi
lodemon-66684b7694-c5c6m 2m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 51m 515Mi
03:04:03 DEBUG --- stderr ---
03:04:03 DEBUG
03:04:04 INFO
03:04:04 INFO [loop_until]: kubectl --namespace=xlou top node
03:04:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:04:04 INFO [loop_until]: OK (rc = 0)
03:04:04 DEBUG --- stdout ---
03:04:04 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 67m 0% 3635Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 580m 3% 4011Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 71m 0% 982Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 58m 0% 3612Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 140m 0% 2561Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 68m 0% 5262Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 408m 2% 4144Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 61m 0% 937Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 608m 3% 14225Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 68m 0% 14055Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 61m 0% 939Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 60m 0% 963Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 68m 0% 14096Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 113m 0% 2307Mi 3%
03:04:04 DEBUG --- stderr ---
03:04:04 DEBUG
03:05:03 INFO
03:05:03 INFO [loop_until]: kubectl --namespace=xlou top pods
03:05:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:05:03 INFO [loop_until]: OK (rc = 0)
03:05:03 DEBUG --- stdout ---
03:05:03 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 6m 2855Mi
am-869fdb5db9-8dg94 8m 4462Mi
am-869fdb5db9-wt7sg 7m 2757Mi
ds-cts-0 7m 397Mi
ds-cts-1 7m 370Mi
ds-cts-2 6m 367Mi
ds-idrepo-0 590m 13814Mi
ds-idrepo-1 15m 13672Mi
ds-idrepo-2 13m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 440m 3341Mi idm-65858d8c4c-pt5s9 355m 3478Mi lodemon-66684b7694-c5c6m 2m 66Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 54m 516Mi 03:05:03 DEBUG --- stderr --- 03:05:03 DEBUG 03:05:04 INFO 03:05:04 INFO [loop_until]: kubectl --namespace=xlou top node 03:05:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:05:04 INFO [loop_until]: OK (rc = 0) 03:05:04 DEBUG --- stdout --- 03:05:04 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 66m 0% 3634Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 524m 3% 4008Mi 6% gke-xlou-cdm-default-pool-f05840a3-jqvg 72m 0% 981Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 63m 0% 3616Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 132m 0% 2562Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 68m 0% 5263Mi 8% gke-xlou-cdm-default-pool-f05840a3-zj9v 452m 2% 4148Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 61m 0% 935Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 621m 3% 14230Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 66m 0% 14055Mi 23% gke-xlou-cdm-ds-32e4dcb1-l2t2 61m 0% 940Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 60m 0% 967Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 68m 0% 14095Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 116m 0% 2306Mi 3% 03:05:04 DEBUG --- stderr --- 03:05:04 DEBUG 03:06:03 INFO 03:06:03 INFO [loop_until]: kubectl --namespace=xlou top pods 03:06:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:06:03 INFO [loop_until]: OK (rc = 0) 03:06:03 DEBUG --- stdout --- 03:06:03 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 7m 2855Mi am-869fdb5db9-8dg94 7m 4462Mi am-869fdb5db9-wt7sg 7m 2757Mi ds-cts-0 10m 397Mi ds-cts-1 9m 369Mi ds-cts-2 8m 368Mi ds-idrepo-0 640m 13817Mi ds-idrepo-1 18m 13672Mi ds-idrepo-2 13m 13643Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 522m 3348Mi idm-65858d8c4c-pt5s9 355m 3478Mi lodemon-66684b7694-c5c6m 2m 66Mi login-ui-74d6fb46c-qcg59 1m 3Mi 
overseer-0-788b4494cc-bdwtm 51m 515Mi 03:06:03 DEBUG --- stderr --- 03:06:03 DEBUG 03:06:04 INFO 03:06:04 INFO [loop_until]: kubectl --namespace=xlou top node 03:06:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:06:04 INFO [loop_until]: OK (rc = 0) 03:06:04 DEBUG --- stdout --- 03:06:04 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 65m 0% 3636Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 659m 4% 4012Mi 6% gke-xlou-cdm-default-pool-f05840a3-jqvg 70m 0% 978Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 61m 0% 3612Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 134m 0% 2562Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 66m 0% 5264Mi 8% gke-xlou-cdm-default-pool-f05840a3-zj9v 450m 2% 4147Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 63m 0% 933Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 669m 4% 14231Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 66m 0% 14053Mi 23% gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 938Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 62m 0% 966Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 73m 0% 14095Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 118m 0% 2307Mi 3% 03:06:04 DEBUG --- stderr --- 03:06:04 DEBUG 03:07:03 INFO 03:07:03 INFO [loop_until]: kubectl --namespace=xlou top pods 03:07:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:07:03 INFO [loop_until]: OK (rc = 0) 03:07:03 DEBUG --- stdout --- 03:07:03 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 8m 2855Mi am-869fdb5db9-8dg94 10m 4462Mi am-869fdb5db9-wt7sg 6m 2757Mi ds-cts-0 16m 398Mi ds-cts-1 8m 369Mi ds-cts-2 7m 367Mi ds-idrepo-0 591m 13812Mi ds-idrepo-1 18m 13672Mi ds-idrepo-2 13m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 446m 3343Mi idm-65858d8c4c-pt5s9 453m 3479Mi lodemon-66684b7694-c5c6m 1m 66Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 53m 515Mi 03:07:03 DEBUG --- stderr --- 03:07:03 DEBUG 03:07:04 INFO 03:07:04 INFO [loop_until]: kubectl --namespace=xlou top node 03:07:04 INFO 
[loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:07:04 INFO [loop_until]: OK (rc = 0) 03:07:04 DEBUG --- stdout --- 03:07:04 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 60m 0% 3638Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 550m 3% 4014Mi 6% gke-xlou-cdm-default-pool-f05840a3-jqvg 71m 0% 983Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 61m 0% 3612Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 149m 0% 2561Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 67m 0% 5265Mi 8% gke-xlou-cdm-default-pool-f05840a3-zj9v 503m 3% 4149Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 58m 0% 936Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 586m 3% 14226Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 66m 0% 14057Mi 23% gke-xlou-cdm-ds-32e4dcb1-l2t2 58m 0% 934Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 62m 0% 964Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 73m 0% 14097Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 115m 0% 2309Mi 3% 03:07:04 DEBUG --- stderr --- 03:07:04 DEBUG 03:08:03 INFO 03:08:03 INFO [loop_until]: kubectl --namespace=xlou top pods 03:08:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:08:04 INFO [loop_until]: OK (rc = 0) 03:08:04 DEBUG --- stdout --- 03:08:04 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 6m 2855Mi am-869fdb5db9-8dg94 27m 4471Mi am-869fdb5db9-wt7sg 17m 2757Mi ds-cts-0 6m 398Mi ds-cts-1 8m 369Mi ds-cts-2 8m 368Mi ds-idrepo-0 559m 13814Mi ds-idrepo-1 14m 13672Mi ds-idrepo-2 14m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 431m 3345Mi idm-65858d8c4c-pt5s9 352m 3480Mi lodemon-66684b7694-c5c6m 2m 66Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 51m 516Mi 03:08:04 DEBUG --- stderr --- 03:08:04 DEBUG 03:08:04 INFO 03:08:04 INFO [loop_until]: kubectl --namespace=xlou top node 03:08:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:08:04 INFO [loop_until]: OK (rc = 0) 03:08:04 DEBUG --- stdout --- 03:08:04 DEBUG NAME CPU(cores) CPU% 
MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 62m 0% 3638Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 546m 3% 4012Mi 6% gke-xlou-cdm-default-pool-f05840a3-jqvg 82m 0% 981Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 71m 0% 3617Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 146m 0% 2561Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 79m 0% 5267Mi 8% gke-xlou-cdm-default-pool-f05840a3-zj9v 427m 2% 4150Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 63m 0% 936Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 602m 3% 14229Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 65m 0% 14058Mi 23% gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 938Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 61m 0% 968Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 69m 0% 14099Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 119m 0% 2310Mi 3% 03:08:04 DEBUG --- stderr --- 03:08:04 DEBUG 03:09:04 INFO 03:09:04 INFO [loop_until]: kubectl --namespace=xlou top pods 03:09:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:09:04 INFO [loop_until]: OK (rc = 0) 03:09:04 DEBUG --- stdout --- 03:09:04 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 6m 2855Mi am-869fdb5db9-8dg94 6m 4466Mi am-869fdb5db9-wt7sg 9m 2757Mi ds-cts-0 7m 398Mi ds-cts-1 9m 369Mi ds-cts-2 7m 367Mi ds-idrepo-0 531m 13820Mi ds-idrepo-1 22m 13672Mi ds-idrepo-2 13m 13645Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 421m 3348Mi idm-65858d8c4c-pt5s9 392m 3485Mi lodemon-66684b7694-c5c6m 5m 66Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 43m 516Mi 03:09:04 DEBUG --- stderr --- 03:09:04 DEBUG 03:09:04 INFO 03:09:04 INFO [loop_until]: kubectl --namespace=xlou top node 03:09:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:09:04 INFO [loop_until]: OK (rc = 0) 03:09:04 DEBUG --- stdout --- 03:09:04 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 63m 0% 3638Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 469m 2% 4018Mi 6% gke-xlou-cdm-default-pool-f05840a3-jqvg 
73m 0% 983Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 61m 0% 3611Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 145m 0% 2566Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 65m 0% 5265Mi 8% gke-xlou-cdm-default-pool-f05840a3-zj9v 448m 2% 4156Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 66m 0% 937Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 583m 3% 14218Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 65m 0% 14056Mi 23% gke-xlou-cdm-ds-32e4dcb1-l2t2 62m 0% 938Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 61m 0% 968Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 75m 0% 14097Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 110m 0% 2310Mi 3% 03:09:04 DEBUG --- stderr --- 03:09:04 DEBUG 03:10:04 INFO 03:10:04 INFO [loop_until]: kubectl --namespace=xlou top pods 03:10:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:10:04 INFO [loop_until]: OK (rc = 0) 03:10:04 DEBUG --- stdout --- 03:10:04 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 7m 2855Mi am-869fdb5db9-8dg94 8m 4467Mi am-869fdb5db9-wt7sg 21m 2764Mi ds-cts-0 6m 398Mi ds-cts-1 9m 372Mi ds-cts-2 6m 367Mi ds-idrepo-0 551m 13811Mi ds-idrepo-1 12m 13672Mi ds-idrepo-2 14m 13645Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 422m 3350Mi idm-65858d8c4c-pt5s9 415m 3481Mi lodemon-66684b7694-c5c6m 2m 66Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 49m 516Mi 03:10:04 DEBUG --- stderr --- 03:10:04 DEBUG 03:10:04 INFO 03:10:04 INFO [loop_until]: kubectl --namespace=xlou top node 03:10:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:10:04 INFO [loop_until]: OK (rc = 0) 03:10:04 DEBUG --- stdout --- 03:10:04 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 65m 0% 3635Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 507m 3% 4020Mi 6% gke-xlou-cdm-default-pool-f05840a3-jqvg 76m 0% 983Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 74m 0% 3622Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 150m 0% 2564Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 68m 
0% 5266Mi 8% gke-xlou-cdm-default-pool-f05840a3-zj9v 539m 3% 4156Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 58m 0% 936Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 735m 4% 14236Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 65m 0% 14055Mi 23% gke-xlou-cdm-ds-32e4dcb1-l2t2 61m 0% 937Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 63m 0% 970Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 67m 0% 14098Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 116m 0% 2311Mi 3% 03:10:04 DEBUG --- stderr --- 03:10:04 DEBUG 03:11:04 INFO 03:11:04 INFO [loop_until]: kubectl --namespace=xlou top pods 03:11:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:11:04 INFO [loop_until]: OK (rc = 0) 03:11:04 DEBUG --- stdout --- 03:11:04 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 6m 2855Mi am-869fdb5db9-8dg94 6m 4467Mi am-869fdb5db9-wt7sg 7m 2765Mi ds-cts-0 10m 398Mi ds-cts-1 7m 372Mi ds-cts-2 14m 373Mi ds-idrepo-0 587m 13804Mi ds-idrepo-1 14m 13673Mi ds-idrepo-2 13m 13645Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 454m 3350Mi idm-65858d8c4c-pt5s9 344m 3483Mi lodemon-66684b7694-c5c6m 1m 66Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 47m 517Mi 03:11:04 DEBUG --- stderr --- 03:11:04 DEBUG 03:11:04 INFO 03:11:04 INFO [loop_until]: kubectl --namespace=xlou top node 03:11:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:11:04 INFO [loop_until]: OK (rc = 0) 03:11:04 DEBUG --- stdout --- 03:11:04 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 64m 0% 3637Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 492m 3% 4022Mi 6% gke-xlou-cdm-default-pool-f05840a3-jqvg 76m 0% 980Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 59m 0% 3621Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 145m 0% 2564Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 66m 0% 5266Mi 8% gke-xlou-cdm-default-pool-f05840a3-zj9v 446m 2% 4155Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 63m 0% 943Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 594m 3% 14219Mi 24% 
gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14058Mi 23% gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 941Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 64m 0% 968Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 69m 0% 14095Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 113m 0% 2307Mi 3% 03:11:04 DEBUG --- stderr --- 03:11:04 DEBUG 03:12:04 INFO 03:12:04 INFO [loop_until]: kubectl --namespace=xlou top pods 03:12:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:12:04 INFO [loop_until]: OK (rc = 0) 03:12:04 DEBUG --- stdout --- 03:12:04 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 6m 2855Mi am-869fdb5db9-8dg94 7m 4467Mi am-869fdb5db9-wt7sg 9m 2765Mi ds-cts-0 7m 398Mi ds-cts-1 8m 372Mi ds-cts-2 10m 373Mi ds-idrepo-0 604m 13823Mi ds-idrepo-1 12m 13672Mi ds-idrepo-2 12m 13645Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 526m 3351Mi idm-65858d8c4c-pt5s9 340m 3484Mi lodemon-66684b7694-c5c6m 6m 66Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 49m 517Mi 03:12:04 DEBUG --- stderr --- 03:12:04 DEBUG 03:12:05 INFO 03:12:05 INFO [loop_until]: kubectl --namespace=xlou top node 03:12:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:12:05 INFO [loop_until]: OK (rc = 0) 03:12:05 DEBUG --- stdout --- 03:12:05 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 66m 0% 3648Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 618m 3% 4016Mi 6% gke-xlou-cdm-default-pool-f05840a3-jqvg 73m 0% 979Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 63m 0% 3622Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 147m 0% 2563Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 67m 0% 5265Mi 8% gke-xlou-cdm-default-pool-f05840a3-zj9v 344m 2% 4157Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 65m 0% 943Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 692m 4% 14238Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 64m 0% 14059Mi 23% gke-xlou-cdm-ds-32e4dcb1-l2t2 61m 0% 938Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 62m 0% 966Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 66m 
0% 14096Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 110m 0% 2309Mi 3% 03:12:05 DEBUG --- stderr --- 03:12:05 DEBUG 03:13:04 INFO 03:13:04 INFO [loop_until]: kubectl --namespace=xlou top pods 03:13:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:13:04 INFO [loop_until]: OK (rc = 0) 03:13:04 DEBUG --- stdout --- 03:13:04 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 8m 2855Mi am-869fdb5db9-8dg94 7m 4467Mi am-869fdb5db9-wt7sg 6m 2765Mi ds-cts-0 9m 398Mi ds-cts-1 7m 372Mi ds-cts-2 8m 373Mi ds-idrepo-0 538m 13799Mi ds-idrepo-1 13m 13672Mi ds-idrepo-2 17m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 474m 3353Mi idm-65858d8c4c-pt5s9 377m 3484Mi lodemon-66684b7694-c5c6m 6m 66Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 52m 517Mi 03:13:04 DEBUG --- stderr --- 03:13:04 DEBUG 03:13:05 INFO 03:13:05 INFO [loop_until]: kubectl --namespace=xlou top node 03:13:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:13:05 INFO [loop_until]: OK (rc = 0) 03:13:05 DEBUG --- stdout --- 03:13:05 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 66m 0% 3635Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 569m 3% 4017Mi 6% gke-xlou-cdm-default-pool-f05840a3-jqvg 76m 0% 980Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 63m 0% 3623Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 144m 0% 2562Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 66m 0% 5266Mi 8% gke-xlou-cdm-default-pool-f05840a3-zj9v 502m 3% 4156Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 61m 0% 942Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 649m 4% 14216Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 67m 0% 14058Mi 23% gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 939Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 64m 0% 966Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 64m 0% 14100Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 123m 0% 2308Mi 3% 03:13:05 DEBUG --- stderr --- 03:13:05 DEBUG 03:14:04 INFO 03:14:04 INFO [loop_until]: kubectl 
--namespace=xlou top pods 03:14:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:14:04 INFO [loop_until]: OK (rc = 0) 03:14:04 DEBUG --- stdout --- 03:14:04 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 6m 2856Mi am-869fdb5db9-8dg94 8m 4467Mi am-869fdb5db9-wt7sg 7m 2765Mi ds-cts-0 10m 398Mi ds-cts-1 8m 372Mi ds-cts-2 7m 374Mi ds-idrepo-0 589m 13794Mi ds-idrepo-1 13m 13672Mi ds-idrepo-2 17m 13645Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 429m 3356Mi idm-65858d8c4c-pt5s9 416m 3485Mi lodemon-66684b7694-c5c6m 2m 66Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 52m 517Mi 03:14:04 DEBUG --- stderr --- 03:14:04 DEBUG 03:14:05 INFO 03:14:05 INFO [loop_until]: kubectl --namespace=xlou top node 03:14:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:14:05 INFO [loop_until]: OK (rc = 0) 03:14:05 DEBUG --- stdout --- 03:14:05 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 61m 0% 3639Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 470m 2% 4022Mi 6% gke-xlou-cdm-default-pool-f05840a3-jqvg 74m 0% 983Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 62m 0% 3622Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 142m 0% 2565Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 68m 0% 5267Mi 8% gke-xlou-cdm-default-pool-f05840a3-zj9v 450m 2% 4158Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 62m 0% 942Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 613m 3% 14213Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 68m 0% 14058Mi 23% gke-xlou-cdm-ds-32e4dcb1-l2t2 62m 0% 940Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 65m 0% 965Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 69m 0% 14099Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 116m 0% 2312Mi 3% 03:14:05 DEBUG --- stderr --- 03:14:05 DEBUG 03:15:04 INFO 03:15:04 INFO [loop_until]: kubectl --namespace=xlou top pods 03:15:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:15:04 INFO [loop_until]: OK (rc = 0) 03:15:04 DEBUG --- stdout --- 03:15:04 
DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 5Mi am-869fdb5db9-5j69v 7m 2856Mi am-869fdb5db9-8dg94 7m 4467Mi am-869fdb5db9-wt7sg 6m 2765Mi ds-cts-0 9m 398Mi ds-cts-1 7m 371Mi ds-cts-2 11m 374Mi ds-idrepo-0 486m 13823Mi ds-idrepo-1 13m 13672Mi ds-idrepo-2 17m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 443m 3349Mi idm-65858d8c4c-pt5s9 293m 3485Mi lodemon-66684b7694-c5c6m 2m 66Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 46m 517Mi 03:15:04 DEBUG --- stderr --- 03:15:04 DEBUG 03:15:05 INFO 03:15:05 INFO [loop_until]: kubectl --namespace=xlou top node 03:15:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:15:05 INFO [loop_until]: OK (rc = 0) 03:15:05 DEBUG --- stdout --- 03:15:05 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 66m 0% 3634Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 496m 3% 4014Mi 6% gke-xlou-cdm-default-pool-f05840a3-jqvg 74m 0% 988Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 61m 0% 3624Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 145m 0% 2566Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 69m 0% 5267Mi 8% gke-xlou-cdm-default-pool-f05840a3-zj9v 367m 2% 4156Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 63m 0% 943Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 537m 3% 14242Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 69m 0% 14058Mi 23% gke-xlou-cdm-ds-32e4dcb1-l2t2 57m 0% 942Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 63m 0% 968Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 67m 0% 14100Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 109m 0% 2312Mi 3% 03:15:05 DEBUG --- stderr --- 03:15:05 DEBUG 03:16:04 INFO 03:16:04 INFO [loop_until]: kubectl --namespace=xlou top pods 03:16:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:16:04 INFO [loop_until]: OK (rc = 0) 03:16:04 DEBUG --- stdout --- 03:16:04 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 8m 2856Mi am-869fdb5db9-8dg94 11m 4467Mi am-869fdb5db9-wt7sg 11m 2766Mi ds-cts-0 9m 
398Mi ds-cts-1 7m 372Mi ds-cts-2 7m 373Mi ds-idrepo-0 522m 13793Mi ds-idrepo-1 14m 13673Mi ds-idrepo-2 12m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 402m 3358Mi idm-65858d8c4c-pt5s9 371m 3483Mi lodemon-66684b7694-c5c6m 6m 66Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 46m 517Mi 03:16:04 DEBUG --- stderr --- 03:16:04 DEBUG 03:16:05 INFO 03:16:05 INFO [loop_until]: kubectl --namespace=xlou top node 03:16:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:16:05 INFO [loop_until]: OK (rc = 0) 03:16:05 DEBUG --- stdout --- 03:16:05 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 63m 0% 3640Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 485m 3% 4026Mi 6% gke-xlou-cdm-default-pool-f05840a3-jqvg 73m 0% 986Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 65m 0% 3622Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 139m 0% 2564Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 68m 0% 5268Mi 8% gke-xlou-cdm-default-pool-f05840a3-zj9v 454m 2% 4149Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 60m 0% 943Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 607m 3% 14212Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 65m 0% 14059Mi 23% gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 939Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 59m 0% 971Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 69m 0% 14096Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 114m 0% 2309Mi 3% 03:16:05 DEBUG --- stderr --- 03:16:05 DEBUG 03:17:04 INFO 03:17:04 INFO [loop_until]: kubectl --namespace=xlou top pods 03:17:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:17:04 INFO [loop_until]: OK (rc = 0) 03:17:04 DEBUG --- stdout --- 03:17:04 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 6m 2856Mi am-869fdb5db9-8dg94 7m 4467Mi am-869fdb5db9-wt7sg 6m 2766Mi ds-cts-0 6m 398Mi ds-cts-1 9m 372Mi ds-cts-2 6m 374Mi ds-idrepo-0 559m 13819Mi ds-idrepo-1 13m 13672Mi ds-idrepo-2 13m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 392m 
3351Mi idm-65858d8c4c-pt5s9 377m 3489Mi lodemon-66684b7694-c5c6m 4m 66Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 57m 517Mi 03:17:04 DEBUG --- stderr --- 03:17:04 DEBUG 03:17:05 INFO 03:17:05 INFO [loop_until]: kubectl --namespace=xlou top node 03:17:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:17:05 INFO [loop_until]: OK (rc = 0) 03:17:05 DEBUG --- stdout --- 03:17:05 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 64m 0% 3636Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 521m 3% 4021Mi 6% gke-xlou-cdm-default-pool-f05840a3-jqvg 73m 0% 981Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 60m 0% 3623Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 143m 0% 2566Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 70m 0% 5267Mi 8% gke-xlou-cdm-default-pool-f05840a3-zj9v 503m 3% 4168Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 60m 0% 944Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 637m 4% 14219Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 61m 0% 14058Mi 23% gke-xlou-cdm-ds-32e4dcb1-l2t2 60m 0% 942Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 60m 0% 971Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 66m 0% 14098Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 116m 0% 2312Mi 3% 03:17:05 DEBUG --- stderr --- 03:17:05 DEBUG 03:18:05 INFO 03:18:05 INFO [loop_until]: kubectl --namespace=xlou top pods 03:18:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:18:05 INFO [loop_until]: OK (rc = 0) 03:18:05 DEBUG --- stdout --- 03:18:05 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 6m 2856Mi am-869fdb5db9-8dg94 8m 4467Mi am-869fdb5db9-wt7sg 6m 2766Mi ds-cts-0 6m 399Mi ds-cts-1 8m 372Mi ds-cts-2 9m 373Mi ds-idrepo-0 423m 13798Mi ds-idrepo-1 12m 13672Mi ds-idrepo-2 16m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 387m 3352Mi idm-65858d8c4c-pt5s9 317m 3489Mi lodemon-66684b7694-c5c6m 2m 66Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 48m 517Mi 03:18:05 DEBUG --- stderr --- 
03:18:05 DEBUG 03:18:05 INFO 03:18:05 INFO [loop_until]: kubectl --namespace=xlou top node 03:18:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:18:05 INFO [loop_until]: OK (rc = 0) 03:18:05 DEBUG --- stdout --- 03:18:05 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 63m 0% 3637Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 495m 3% 4020Mi 6% gke-xlou-cdm-default-pool-f05840a3-jqvg 75m 0% 982Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 63m 0% 3624Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 137m 0% 2566Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 69m 0% 5264Mi 8% gke-xlou-cdm-default-pool-f05840a3-zj9v 442m 2% 4156Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 62m 0% 945Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 586m 3% 14242Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 67m 0% 14062Mi 23% gke-xlou-cdm-ds-32e4dcb1-l2t2 60m 0% 941Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 60m 0% 968Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 66m 0% 14101Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 122m 0% 2311Mi 3% 03:18:05 DEBUG --- stderr --- 03:18:05 DEBUG 03:19:05 INFO 03:19:05 INFO [loop_until]: kubectl --namespace=xlou top pods 03:19:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:19:05 INFO [loop_until]: OK (rc = 0) 03:19:05 DEBUG --- stdout --- 03:19:05 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 6m 2856Mi am-869fdb5db9-8dg94 7m 4468Mi am-869fdb5db9-wt7sg 6m 2767Mi ds-cts-0 6m 398Mi ds-cts-1 7m 372Mi ds-cts-2 7m 374Mi ds-idrepo-0 523m 13802Mi ds-idrepo-1 14m 13672Mi ds-idrepo-2 13m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 465m 3353Mi idm-65858d8c4c-pt5s9 423m 3489Mi lodemon-66684b7694-c5c6m 6m 66Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 55m 517Mi 03:19:05 DEBUG --- stderr --- 03:19:05 DEBUG 03:19:05 INFO 03:19:05 INFO [loop_until]: kubectl --namespace=xlou top node 03:19:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:19:05 INFO 
[loop_until]: OK (rc = 0) 03:19:05 DEBUG --- stdout --- 03:19:05 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 65m 0% 3638Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 549m 3% 4018Mi 6% gke-xlou-cdm-default-pool-f05840a3-jqvg 75m 0% 984Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 63m 0% 3623Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 146m 0% 2565Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 65m 0% 5266Mi 8% gke-xlou-cdm-default-pool-f05840a3-zj9v 477m 3% 4156Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 62m 0% 943Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 632m 3% 14238Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 67m 0% 14059Mi 23% gke-xlou-cdm-ds-32e4dcb1-l2t2 58m 0% 940Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 62m 0% 966Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 68m 0% 14103Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 118m 0% 2312Mi 3% 03:19:05 DEBUG --- stderr --- 03:19:05 DEBUG 03:20:05 INFO 03:20:05 INFO [loop_until]: kubectl --namespace=xlou top pods 03:20:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:20:05 INFO [loop_until]: OK (rc = 0) 03:20:05 DEBUG --- stdout --- 03:20:05 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 6m 2856Mi am-869fdb5db9-8dg94 7m 4468Mi am-869fdb5db9-wt7sg 7m 2766Mi ds-cts-0 8m 399Mi ds-cts-1 8m 372Mi ds-cts-2 7m 373Mi ds-idrepo-0 587m 13822Mi ds-idrepo-1 13m 13672Mi ds-idrepo-2 12m 13643Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 370m 3352Mi idm-65858d8c4c-pt5s9 417m 3490Mi lodemon-66684b7694-c5c6m 4m 66Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 47m 516Mi 03:20:05 DEBUG --- stderr --- 03:20:05 DEBUG 03:20:06 INFO 03:20:06 INFO [loop_until]: kubectl --namespace=xlou top node 03:20:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:20:06 INFO [loop_until]: OK (rc = 0) 03:20:06 DEBUG --- stdout --- 03:20:06 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 66m 0% 3638Mi 6% 
gke-xlou-cdm-default-pool-f05840a3-jnx6 436m 2% 4029Mi 6% gke-xlou-cdm-default-pool-f05840a3-jqvg 78m 0% 983Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 63m 0% 3624Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 144m 0% 2564Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 67m 0% 5270Mi 8% gke-xlou-cdm-default-pool-f05840a3-zj9v 329m 2% 4159Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 59m 0% 943Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 526m 3% 14243Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 61m 0% 14060Mi 23% gke-xlou-cdm-ds-32e4dcb1-l2t2 58m 0% 942Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 58m 0% 968Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 67m 0% 14100Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 108m 0% 2312Mi 3% 03:20:06 DEBUG --- stderr --- 03:20:06 DEBUG 03:21:05 INFO 03:21:05 INFO [loop_until]: kubectl --namespace=xlou top pods 03:21:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:21:05 INFO [loop_until]: OK (rc = 0) 03:21:05 DEBUG --- stdout --- 03:21:05 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 9m 2856Mi am-869fdb5db9-8dg94 7m 4468Mi am-869fdb5db9-wt7sg 8m 2766Mi ds-cts-0 6m 399Mi ds-cts-1 9m 372Mi ds-cts-2 6m 374Mi ds-idrepo-0 11m 13821Mi ds-idrepo-1 15m 13672Mi ds-idrepo-2 12m 13643Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 6m 3353Mi idm-65858d8c4c-pt5s9 7m 3490Mi lodemon-66684b7694-c5c6m 1m 66Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 11m 101Mi 03:21:05 DEBUG --- stderr --- 03:21:05 DEBUG 03:21:06 INFO 03:21:06 INFO [loop_until]: kubectl --namespace=xlou top node 03:21:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:21:06 INFO [loop_until]: OK (rc = 0) 03:21:06 DEBUG --- stdout --- 03:21:06 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 67m 0% 3639Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 85m 0% 4023Mi 6% gke-xlou-cdm-default-pool-f05840a3-jqvg 75m 0% 985Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 64m 0% 3621Mi 6% 
gke-xlou-cdm-default-pool-f05840a3-tnc9 111m 0% 2562Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 66m 0% 5269Mi 8% gke-xlou-cdm-default-pool-f05840a3-zj9v 71m 0% 4156Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 56m 0% 945Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 63m 0% 14239Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 62m 0% 14059Mi 23% gke-xlou-cdm-ds-32e4dcb1-l2t2 60m 0% 939Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 59m 0% 967Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 68m 0% 14101Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 66m 0% 1899Mi 3% 03:21:06 DEBUG --- stderr --- 03:21:06 DEBUG 127.0.0.1 - - [16/Aug/2023 03:21:49] "GET /monitoring/average?start_time=23-08-16_01:51:18&stop_time=23-08-16_02:19:48 HTTP/1.1" 200 - 03:22:05 INFO 03:22:05 INFO [loop_until]: kubectl --namespace=xlou top pods 03:22:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:22:05 INFO [loop_until]: OK (rc = 0) 03:22:05 DEBUG --- stdout --- 03:22:05 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 6m 2856Mi am-869fdb5db9-8dg94 7m 4468Mi am-869fdb5db9-wt7sg 5m 2766Mi ds-cts-0 7m 400Mi ds-cts-1 7m 372Mi ds-cts-2 8m 375Mi ds-idrepo-0 10m 13824Mi ds-idrepo-1 14m 13672Mi ds-idrepo-2 14m 13643Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 6m 3352Mi idm-65858d8c4c-pt5s9 6m 3489Mi lodemon-66684b7694-c5c6m 2m 66Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 1m 101Mi 03:22:05 DEBUG --- stderr --- 03:22:05 DEBUG 03:22:06 INFO 03:22:06 INFO [loop_until]: kubectl --namespace=xlou top node 03:22:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:22:06 INFO [loop_until]: OK (rc = 0) 03:22:06 DEBUG --- stdout --- 03:22:06 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 65m 0% 3637Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 81m 0% 4020Mi 6% gke-xlou-cdm-default-pool-f05840a3-jqvg 73m 0% 983Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 59m 0% 3620Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 102m 0% 
2564Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 68m 0% 5265Mi 8% gke-xlou-cdm-default-pool-f05840a3-zj9v 72m 0% 4159Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 61m 0% 944Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 61m 0% 14244Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14060Mi 23% gke-xlou-cdm-ds-32e4dcb1-l2t2 58m 0% 941Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 60m 0% 966Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 67m 0% 14100Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 68m 0% 1902Mi 3% 03:22:06 DEBUG --- stderr --- 03:22:06 DEBUG 03:23:05 INFO 03:23:05 INFO [loop_until]: kubectl --namespace=xlou top pods 03:23:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:23:05 INFO [loop_until]: OK (rc = 0) 03:23:05 DEBUG --- stdout --- 03:23:05 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 13m 2857Mi am-869fdb5db9-8dg94 7m 4468Mi am-869fdb5db9-wt7sg 6m 2767Mi ds-cts-0 9m 399Mi ds-cts-1 9m 372Mi ds-cts-2 7m 375Mi ds-idrepo-0 563m 13822Mi ds-idrepo-1 12m 13672Mi ds-idrepo-2 12m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 483m 3361Mi idm-65858d8c4c-pt5s9 231m 3493Mi lodemon-66684b7694-c5c6m 1m 66Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 237m 492Mi 03:23:05 DEBUG --- stderr --- 03:23:05 DEBUG 03:23:06 INFO 03:23:06 INFO [loop_until]: kubectl --namespace=xlou top node 03:23:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:23:06 INFO [loop_until]: OK (rc = 0) 03:23:06 DEBUG --- stdout --- 03:23:06 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 67m 0% 3637Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 434m 2% 4028Mi 6% gke-xlou-cdm-default-pool-f05840a3-jqvg 81m 0% 985Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 64m 0% 3621Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 133m 0% 2568Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 68m 0% 5265Mi 8% gke-xlou-cdm-default-pool-f05840a3-zj9v 525m 3% 4156Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 61m 0% 948Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-7x9g 692m 4% 14214Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 62m 0% 14063Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 942Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 63m 0% 970Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 67m 0% 14099Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 436m 2% 2287Mi 3%
03:23:06 DEBUG --- stderr ---
03:23:06 DEBUG
03:24:05 INFO
03:24:05 INFO [loop_until]: kubectl --namespace=xlou top pods
03:24:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:24:05 INFO [loop_until]: OK (rc = 0)
03:24:05 DEBUG --- stdout ---
03:24:05 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 6m 2857Mi
am-869fdb5db9-8dg94 7m 4468Mi
am-869fdb5db9-wt7sg 11m 2766Mi
ds-cts-0 6m 399Mi
ds-cts-1 8m 373Mi
ds-cts-2 6m 374Mi
ds-idrepo-0 1241m 13823Mi
ds-idrepo-1 16m 13672Mi
ds-idrepo-2 12m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 793m 3363Mi
idm-65858d8c4c-pt5s9 808m 3489Mi
lodemon-66684b7694-c5c6m 6m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 169m 499Mi
03:24:05 DEBUG --- stderr ---
03:24:05 DEBUG
03:24:06 INFO
03:24:06 INFO [loop_until]: kubectl --namespace=xlou top node
03:24:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:24:06 INFO [loop_until]: OK (rc = 0)
03:24:06 DEBUG --- stdout ---
03:24:06 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 65m 0% 3638Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 879m 5% 4034Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 83m 0% 985Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 78m 0% 3633Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 174m 1% 2568Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 67m 0% 5268Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 786m 4% 4162Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 62m 0% 947Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 1189m 7% 14226Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14063Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 63m 0% 943Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 61m 0% 968Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 68m 0% 14100Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 259m 1% 2294Mi 3%
03:24:06 DEBUG --- stderr ---
03:24:06 DEBUG
03:25:05 INFO
03:25:05 INFO [loop_until]: kubectl --namespace=xlou top pods
03:25:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:25:05 INFO [loop_until]: OK (rc = 0)
03:25:05 DEBUG --- stdout ---
03:25:05 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 6m 2857Mi
am-869fdb5db9-8dg94 9m 4468Mi
am-869fdb5db9-wt7sg 8m 2774Mi
ds-cts-0 14m 402Mi
ds-cts-1 8m 373Mi
ds-cts-2 9m 375Mi
ds-idrepo-0 1001m 13807Mi
ds-idrepo-1 13m 13671Mi
ds-idrepo-2 12m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 839m 3359Mi
idm-65858d8c4c-pt5s9 592m 3496Mi
lodemon-66684b7694-c5c6m 6m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 108m 510Mi
03:25:05 DEBUG --- stderr ---
03:25:05 DEBUG
03:25:06 INFO
03:25:06 INFO [loop_until]: kubectl --namespace=xlou top node
03:25:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:25:06 INFO [loop_until]: OK (rc = 0)
03:25:06 DEBUG --- stdout ---
03:25:06 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 68m 0% 3638Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 1023m 6% 4033Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 79m 0% 983Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 63m 0% 3632Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 173m 1% 2567Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 71m 0% 5269Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 713m 4% 4164Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 59m 0% 947Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 1165m 7% 14244Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 65m 0% 14064Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 61m 0% 942Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 66m 0% 974Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 68m 0% 14102Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 176m 1% 2305Mi 3%
03:25:06 DEBUG --- stderr ---
03:25:06 DEBUG
03:26:05 INFO
03:26:05 INFO [loop_until]: kubectl --namespace=xlou top pods
03:26:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:26:05 INFO [loop_until]: OK (rc = 0)
03:26:05 DEBUG --- stdout ---
03:26:05 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 6m 2857Mi
am-869fdb5db9-8dg94 6m 4468Mi
am-869fdb5db9-wt7sg 6m 2783Mi
ds-cts-0 6m 402Mi
ds-cts-1 8m 373Mi
ds-cts-2 7m 375Mi
ds-idrepo-0 930m 13806Mi
ds-idrepo-1 13m 13673Mi
ds-idrepo-2 12m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 793m 3361Mi
idm-65858d8c4c-pt5s9 681m 3492Mi
lodemon-66684b7694-c5c6m 5m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 108m 511Mi
03:26:05 DEBUG --- stderr ---
03:26:05 DEBUG
03:26:06 INFO
03:26:06 INFO [loop_until]: kubectl --namespace=xlou top node
03:26:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:26:06 INFO [loop_until]: OK (rc = 0)
03:26:06 DEBUG --- stdout ---
03:26:06 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 66m 0% 3639Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 873m 5% 4033Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 81m 0% 987Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 63m 0% 3641Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 168m 1% 2565Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 69m 0% 5270Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 736m 4% 4160Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 59m 0% 946Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 1070m 6% 14244Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 61m 0% 14065Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 58m 0% 944Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 57m 0% 973Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 67m 0% 14100Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 174m 1% 2303Mi 3%
03:26:06 DEBUG --- stderr ---
03:26:06 DEBUG
03:27:05 INFO
03:27:05 INFO [loop_until]: kubectl --namespace=xlou top pods
03:27:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:27:06 INFO [loop_until]: OK (rc = 0)
03:27:06 DEBUG --- stdout ---
03:27:06 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 6m 2857Mi
am-869fdb5db9-8dg94 7m 4468Mi
am-869fdb5db9-wt7sg 7m 2793Mi
ds-cts-0 6m 402Mi
ds-cts-1 7m 373Mi
ds-cts-2 6m 375Mi
ds-idrepo-0 1156m 13822Mi
ds-idrepo-1 20m 13672Mi
ds-idrepo-2 12m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 819m 3361Mi
idm-65858d8c4c-pt5s9 752m 3493Mi
lodemon-66684b7694-c5c6m 2m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 101m 511Mi
03:27:06 DEBUG --- stderr ---
03:27:06 DEBUG
03:27:06 INFO
03:27:06 INFO [loop_until]: kubectl --namespace=xlou top node
03:27:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:27:06 INFO [loop_until]: OK (rc = 0)
03:27:06 DEBUG --- stdout ---
03:27:06 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 64m 0% 3640Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 865m 5% 4029Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 83m 0% 987Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 64m 0% 3653Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 171m 1% 2567Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 64m 0% 5270Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 854m 5% 4161Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 59m 0% 946Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 1131m 7% 14247Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14066Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 61m 0% 943Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 59m 0% 975Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 74m 0% 14101Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 167m 1% 2307Mi 3%
03:27:06 DEBUG --- stderr ---
03:27:06 DEBUG
03:28:06 INFO
03:28:06 INFO [loop_until]: kubectl --namespace=xlou top pods
03:28:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:28:06 INFO [loop_until]: OK (rc = 0)
03:28:06 DEBUG --- stdout ---
03:28:06 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 6m 2857Mi
am-869fdb5db9-8dg94 10m 4468Mi
am-869fdb5db9-wt7sg 13m 2804Mi
ds-cts-0 8m 402Mi
ds-cts-1 8m 373Mi
ds-cts-2 6m 375Mi
ds-idrepo-0 1044m 13823Mi
ds-idrepo-1 13m 13672Mi
ds-idrepo-2 13m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 811m 3361Mi
idm-65858d8c4c-pt5s9 642m 3493Mi
lodemon-66684b7694-c5c6m 2m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 89m 512Mi
03:28:06 DEBUG --- stderr ---
03:28:06 DEBUG
03:28:06 INFO
03:28:06 INFO [loop_until]: kubectl --namespace=xlou top node
03:28:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:28:06 INFO [loop_until]: OK (rc = 0)
03:28:06 DEBUG --- stdout ---
03:28:06 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 64m 0% 3640Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 850m 5% 4034Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 79m 0% 989Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 68m 0% 3663Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 168m 1% 2567Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 70m 0% 5268Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 681m 4% 4162Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 60m 0% 944Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 1072m 6% 14220Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14066Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 62m 0% 943Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 60m 0% 973Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 66m 0% 14101Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 175m 1% 2304Mi 3%
03:28:06 DEBUG --- stderr ---
03:28:06 DEBUG
03:29:06 INFO
03:29:06 INFO [loop_until]: kubectl --namespace=xlou top pods
03:29:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:29:06 INFO [loop_until]: OK (rc = 0)
03:29:06 DEBUG --- stdout ---
03:29:06 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 6m 2857Mi
am-869fdb5db9-8dg94 10m 4469Mi
am-869fdb5db9-wt7sg 7m 2815Mi
ds-cts-0 6m 402Mi
ds-cts-1 9m 373Mi
ds-cts-2 8m 375Mi
ds-idrepo-0 921m 13803Mi
ds-idrepo-1 14m 13672Mi
ds-idrepo-2 12m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 728m 3362Mi
idm-65858d8c4c-pt5s9 547m 3494Mi
lodemon-66684b7694-c5c6m 2m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 90m 513Mi
03:29:06 DEBUG --- stderr ---
03:29:06 DEBUG
03:29:07 INFO
03:29:07 INFO [loop_until]: kubectl --namespace=xlou top node
03:29:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:29:07 INFO [loop_until]: OK (rc = 0)
03:29:07 DEBUG --- stdout ---
03:29:07 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 64m 0% 3642Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 817m 5% 4034Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 79m 0% 987Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 61m 0% 3673Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 173m 1% 2569Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 72m 0% 5268Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 704m 4% 4166Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 66m 0% 943Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 1034m 6% 14244Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 67m 0% 14067Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 67m 0% 940Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 64m 0% 973Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 79m 0% 14116Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 157m 0% 2304Mi 3%
03:29:07 DEBUG --- stderr ---
03:29:07 DEBUG
03:30:06 INFO
03:30:06 INFO [loop_until]: kubectl --namespace=xlou top pods
03:30:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:30:06 INFO [loop_until]: OK (rc = 0)
03:30:06 DEBUG --- stdout ---
03:30:06 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 7m 2857Mi
am-869fdb5db9-8dg94 7m 4468Mi
am-869fdb5db9-wt7sg 9m 2822Mi
ds-cts-0 7m 402Mi
ds-cts-1 8m 373Mi
ds-cts-2 7m 375Mi
ds-idrepo-0 1260m 13800Mi
ds-idrepo-1 15m 13673Mi
ds-idrepo-2 13m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 902m 3364Mi
idm-65858d8c4c-pt5s9 767m 3501Mi
lodemon-66684b7694-c5c6m 5m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 88m 514Mi
03:30:06 DEBUG --- stderr ---
03:30:06 DEBUG
03:30:07 INFO
03:30:07 INFO [loop_until]: kubectl --namespace=xlou top node
03:30:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:30:07 INFO [loop_until]: OK (rc = 0)
03:30:07 DEBUG --- stdout ---
03:30:07 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 70m 0% 3637Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 1050m 6% 4036Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 79m 0% 985Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 69m 0% 3681Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 165m 1% 2565Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 75m 0% 5270Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 850m 5% 4171Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 65m 0% 956Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 1265m 7% 14221Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 62m 0% 14067Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 61m 0% 939Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 58m 0% 974Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 66m 0% 14104Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 156m 0% 2307Mi 3%
03:30:07 DEBUG --- stderr ---
03:30:07 DEBUG
03:31:06 INFO
03:31:06 INFO [loop_until]: kubectl --namespace=xlou top pods
03:31:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:31:06 INFO [loop_until]: OK (rc = 0)
03:31:06 DEBUG --- stdout ---
03:31:06 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 10m 2857Mi
am-869fdb5db9-8dg94 9m 4468Mi
am-869fdb5db9-wt7sg 7m 2835Mi
ds-cts-0 6m 402Mi
ds-cts-1 7m 374Mi
ds-cts-2 8m 375Mi
ds-idrepo-0 1127m 13794Mi
ds-idrepo-1 15m 13672Mi
ds-idrepo-2 12m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 680m 3371Mi
idm-65858d8c4c-pt5s9 658m 3502Mi
lodemon-66684b7694-c5c6m 2m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 88m 518Mi
03:31:06 DEBUG --- stderr ---
03:31:06 DEBUG
03:31:07 INFO
03:31:07 INFO [loop_until]: kubectl --namespace=xlou top node
03:31:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:31:07 INFO [loop_until]: OK (rc = 0)
03:31:07 DEBUG --- stdout ---
03:31:07 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 69m 0% 3637Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 913m 5% 4042Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 78m 0% 984Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 61m 0% 3690Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 179m 1% 2573Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 66m 0% 5271Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 809m 5% 4168Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 62m 0% 943Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 949m 5% 14222Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 65m 0% 14065Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 62m 0% 941Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 62m 0% 974Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 65m 0% 14106Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 153m 0% 2313Mi 3%
03:31:07 DEBUG --- stderr ---
03:31:07 DEBUG
03:32:06 INFO
03:32:06 INFO [loop_until]: kubectl --namespace=xlou top pods
03:32:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:32:06 INFO [loop_until]: OK (rc = 0)
03:32:06 DEBUG --- stdout ---
03:32:06 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 8m 2857Mi
am-869fdb5db9-8dg94 10m 4469Mi
am-869fdb5db9-wt7sg 6m 2845Mi
ds-cts-0 6m 402Mi
ds-cts-1 8m 373Mi
ds-cts-2 7m 375Mi
ds-idrepo-0 1157m 13823Mi
ds-idrepo-1 14m 13672Mi
ds-idrepo-2 13m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 970m 3375Mi
idm-65858d8c4c-pt5s9 781m 3504Mi
lodemon-66684b7694-c5c6m 7m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 94m 519Mi
03:32:06 DEBUG --- stderr ---
03:32:06 DEBUG
03:32:07 INFO
03:32:07 INFO [loop_until]: kubectl --namespace=xlou top node
03:32:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:32:07 INFO [loop_until]: OK (rc = 0)
03:32:07 DEBUG --- stdout ---
03:32:07 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 63m 0% 3640Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 1082m 6% 4047Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 78m 0% 985Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 58m 0% 3701Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 190m 1% 2573Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 69m 0% 5268Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 868m 5% 4169Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 61m 0% 946Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 1149m 7% 14219Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 65m 0% 14068Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 941Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 56m 0% 978Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 63m 0% 14105Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 161m 1% 2314Mi 3%
03:32:07 DEBUG --- stderr ---
03:32:07 DEBUG
03:33:06 INFO
03:33:06 INFO [loop_until]: kubectl --namespace=xlou top pods
03:33:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:33:06 INFO [loop_until]: OK (rc = 0)
03:33:06 DEBUG --- stdout ---
03:33:06 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 15m 2857Mi
am-869fdb5db9-8dg94 8m 4468Mi
am-869fdb5db9-wt7sg 6m 2855Mi
ds-cts-0 7m 402Mi
ds-cts-1 8m 374Mi
ds-cts-2 6m 372Mi
ds-idrepo-0 1094m 13822Mi
ds-idrepo-1 22m 13673Mi
ds-idrepo-2 13m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 957m 3376Mi
idm-65858d8c4c-pt5s9 764m 3503Mi
lodemon-66684b7694-c5c6m 8m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 91m 519Mi
03:33:06 DEBUG --- stderr ---
03:33:06 DEBUG
03:33:07 INFO
03:33:07 INFO [loop_until]: kubectl --namespace=xlou top node
03:33:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:33:07 INFO [loop_until]: OK (rc = 0)
03:33:07 DEBUG --- stdout ---
03:33:07 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 67m 0% 3641Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 959m 6% 4048Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 74m 0% 981Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 60m 0% 3706Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 181m 1% 2574Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 66m 0% 5271Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 881m 5% 4169Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 57m 0% 942Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 1199m 7% 14245Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 69m 0% 14076Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 66m 0% 950Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 61m 0% 976Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 75m 0% 14104Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 151m 0% 2313Mi 3%
03:33:07 DEBUG --- stderr ---
03:33:07 DEBUG
03:34:06 INFO
03:34:06 INFO [loop_until]: kubectl --namespace=xlou top pods
03:34:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:34:06 INFO [loop_until]: OK (rc = 0)
03:34:06 DEBUG --- stdout ---
03:34:06 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 7m 2857Mi
am-869fdb5db9-8dg94 7m 4469Mi
am-869fdb5db9-wt7sg 6m 2864Mi
ds-cts-0 6m 402Mi
ds-cts-1 9m 373Mi
ds-cts-2 6m 372Mi
ds-idrepo-0 1087m 13794Mi
ds-idrepo-1 13m 13672Mi
ds-idrepo-2 13m 13643Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 923m 3377Mi
idm-65858d8c4c-pt5s9 710m 3504Mi
lodemon-66684b7694-c5c6m 2m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 86m 519Mi
03:34:06 DEBUG --- stderr ---
03:34:06 DEBUG
03:34:07 INFO
03:34:07 INFO [loop_until]: kubectl --namespace=xlou top node
03:34:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:34:07 INFO [loop_until]: OK (rc = 0)
03:34:07 DEBUG --- stdout ---
03:34:07 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 64m 0% 3641Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 868m 5% 4047Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 70m 0% 986Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 59m 0% 3721Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 178m 1% 2573Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 65m 0% 5271Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 765m 4% 4170Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 61m 0% 943Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 1129m 7% 14229Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 65m 0% 14066Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 64m 0% 940Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 61m 0% 975Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 65m 0% 14106Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 150m 0% 2316Mi 3%
03:34:07 DEBUG --- stderr ---
03:34:07 DEBUG
03:35:06 INFO
03:35:06 INFO [loop_until]: kubectl --namespace=xlou top pods
03:35:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:35:06 INFO [loop_until]: OK (rc = 0)
03:35:06 DEBUG --- stdout ---
03:35:06 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 5Mi
am-869fdb5db9-5j69v 7m 2857Mi
am-869fdb5db9-8dg94 7m 4469Mi
am-869fdb5db9-wt7sg 6m 2874Mi
ds-cts-0 6m 404Mi
ds-cts-1 12m 373Mi
ds-cts-2 6m 372Mi
ds-idrepo-0 1071m 13822Mi
ds-idrepo-1 13m 13673Mi
ds-idrepo-2 19m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 799m 3380Mi
idm-65858d8c4c-pt5s9 806m 3505Mi
lodemon-66684b7694-c5c6m 1m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 89m 519Mi
03:35:06 DEBUG --- stderr ---
03:35:06 DEBUG
03:35:07 INFO
03:35:07 INFO [loop_until]: kubectl --namespace=xlou top node
03:35:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:35:07 INFO [loop_until]: OK (rc = 0)
03:35:07 DEBUG --- stdout ---
03:35:07 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 65m 0% 3642Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 881m 5% 4050Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 73m 0% 983Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 59m 0% 3731Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 177m 1% 2570Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 68m 0% 5271Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 955m 6% 4174Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 60m 0% 941Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 1135m 7% 14223Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 69m 0% 14064Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 62m 0% 941Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 58m 0% 977Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 64m 0% 14108Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 156m 0% 2317Mi 3%
03:35:07 DEBUG --- stderr ---
03:35:07 DEBUG
03:36:06 INFO
03:36:06 INFO [loop_until]: kubectl --namespace=xlou top pods
03:36:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:36:06 INFO [loop_until]: OK (rc = 0)
03:36:06 DEBUG --- stdout ---
03:36:06 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 8m 2857Mi
am-869fdb5db9-8dg94 11m 4469Mi
am-869fdb5db9-wt7sg 6m 2884Mi
ds-cts-0 6m 402Mi
ds-cts-1 12m 373Mi
ds-cts-2 6m 372Mi
ds-idrepo-0 1040m 13823Mi
ds-idrepo-1 12m 13672Mi
ds-idrepo-2 17m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 881m 3379Mi
idm-65858d8c4c-pt5s9 610m 3504Mi
lodemon-66684b7694-c5c6m 6m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 87m 520Mi
03:36:06 DEBUG --- stderr ---
03:36:06 DEBUG
03:36:07 INFO
03:36:07 INFO [loop_until]: kubectl --namespace=xlou top node
03:36:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:36:08 INFO [loop_until]: OK (rc = 0)
03:36:08 DEBUG --- stdout ---
03:36:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 65m 0% 3641Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 953m 5% 4058Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 72m 0% 987Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 61m 0% 3740Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 182m 1% 2570Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 69m 0% 5270Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 744m 4% 4174Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 59m 0% 942Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 1112m 6% 14245Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 68m 0% 14066Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 65m 0% 941Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 58m 0% 976Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 61m 0% 14108Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 150m 0% 2311Mi 3%
03:36:08 DEBUG --- stderr ---
03:36:08 DEBUG
03:37:07 INFO
03:37:07 INFO [loop_until]: kubectl --namespace=xlou top pods
03:37:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:37:07 INFO [loop_until]: OK (rc = 0)
03:37:07 DEBUG --- stdout ---
03:37:07 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 7m 2857Mi
am-869fdb5db9-8dg94 7m 4469Mi
am-869fdb5db9-wt7sg 6m 2895Mi
ds-cts-0 6m 402Mi
ds-cts-1 11m 373Mi
ds-cts-2 6m 372Mi
ds-idrepo-0 1060m 13804Mi
ds-idrepo-1 14m 13672Mi
ds-idrepo-2 14m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 927m 3381Mi
idm-65858d8c4c-pt5s9 608m 3505Mi
lodemon-66684b7694-c5c6m 6m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 87m 519Mi
03:37:07 DEBUG --- stderr ---
03:37:07 DEBUG
03:37:08 INFO
03:37:08 INFO [loop_until]: kubectl --namespace=xlou top node
03:37:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:37:08 INFO [loop_until]: OK (rc = 0)
03:37:08 DEBUG --- stdout ---
03:37:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 64m 0% 3640Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 999m 6% 4053Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 74m 0% 988Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 58m 0% 3750Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 172m 1% 2574Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 63m 0% 5272Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 656m 4% 4177Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 60m 0% 944Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 993m 6% 14229Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 68m 0% 14064Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 63m 0% 938Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 58m 0% 976Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 67m 0% 14109Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 148m 0% 2314Mi 3%
03:37:08 DEBUG --- stderr ---
03:37:08 DEBUG
03:38:07 INFO
03:38:07 INFO [loop_until]: kubectl --namespace=xlou top pods
03:38:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:38:07 INFO [loop_until]: OK (rc = 0)
03:38:07 DEBUG --- stdout ---
03:38:07 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 6m 2857Mi
am-869fdb5db9-8dg94 9m 4469Mi
am-869fdb5db9-wt7sg 8m 2905Mi
ds-cts-0 7m 402Mi
ds-cts-1 11m 373Mi
ds-cts-2 6m 372Mi
ds-idrepo-0 1087m 13822Mi
ds-idrepo-1 12m 13671Mi
ds-idrepo-2 13m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 852m 3384Mi
idm-65858d8c4c-pt5s9 675m 3507Mi
lodemon-66684b7694-c5c6m 6m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 92m 520Mi
03:38:07 DEBUG --- stderr ---
03:38:07 DEBUG
03:38:08 INFO
03:38:08 INFO [loop_until]: kubectl --namespace=xlou top node
03:38:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:38:08 INFO [loop_until]: OK (rc = 0)
03:38:08 DEBUG --- stdout ---
03:38:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 62m 0% 3638Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 940m 5% 4053Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 74m 0% 990Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 62m 0% 3763Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 176m 1% 2572Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 68m 0% 5272Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 744m 4% 4173Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 60m 0% 941Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 1103m 6% 14221Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 66m 0% 14067Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 64m 0% 942Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 58m 0% 974Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 64m 0% 14109Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 154m 0% 2314Mi 3%
03:38:08 DEBUG --- stderr ---
03:38:08 DEBUG
03:39:07 INFO
03:39:07 INFO [loop_until]: kubectl --namespace=xlou top pods
03:39:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:39:07 INFO [loop_until]: OK (rc = 0) 03:39:07 DEBUG --- stdout --- 03:39:07 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 5Mi am-869fdb5db9-5j69v 6m 2857Mi am-869fdb5db9-8dg94 8m 4469Mi am-869fdb5db9-wt7sg 5m 2916Mi ds-cts-0 6m 402Mi ds-cts-1 10m 373Mi ds-cts-2 6m 372Mi ds-idrepo-0 1105m 13825Mi ds-idrepo-1 12m 13671Mi ds-idrepo-2 13m 13643Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 904m 3391Mi idm-65858d8c4c-pt5s9 711m 3509Mi lodemon-66684b7694-c5c6m 7m 66Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 85m 520Mi 03:39:07 DEBUG --- stderr --- 03:39:07 DEBUG 03:39:08 INFO 03:39:08 INFO [loop_until]: kubectl --namespace=xlou top node 03:39:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:39:08 INFO [loop_until]: OK (rc = 0) 03:39:08 DEBUG --- stdout --- 03:39:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 64m 0% 3639Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 951m 5% 4063Mi 6% gke-xlou-cdm-default-pool-f05840a3-jqvg 75m 0% 1002Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 61m 0% 3773Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 177m 1% 2572Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 67m 0% 5273Mi 8% gke-xlou-cdm-default-pool-f05840a3-zj9v 802m 5% 4179Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 65m 0% 940Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 1148m 7% 14225Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 65m 0% 14063Mi 23% gke-xlou-cdm-ds-32e4dcb1-l2t2 64m 0% 941Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 60m 0% 976Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 67m 0% 14107Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 154m 0% 2316Mi 3% 03:39:08 DEBUG --- stderr --- 03:39:08 DEBUG 03:40:07 INFO 03:40:07 INFO [loop_until]: kubectl --namespace=xlou top pods 03:40:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:40:07 INFO [loop_until]: OK (rc = 0) 03:40:07 DEBUG --- stdout --- 03:40:07 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 6m 
2857Mi am-869fdb5db9-8dg94 8m 4469Mi am-869fdb5db9-wt7sg 6m 2926Mi ds-cts-0 6m 402Mi ds-cts-1 8m 373Mi ds-cts-2 6m 372Mi ds-idrepo-0 975m 13797Mi ds-idrepo-1 12m 13673Mi ds-idrepo-2 12m 13643Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 767m 3386Mi idm-65858d8c4c-pt5s9 746m 3509Mi lodemon-66684b7694-c5c6m 1m 66Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 81m 520Mi 03:40:07 DEBUG --- stderr --- 03:40:07 DEBUG 03:40:08 INFO 03:40:08 INFO [loop_until]: kubectl --namespace=xlou top node 03:40:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:40:08 INFO [loop_until]: OK (rc = 0) 03:40:08 DEBUG --- stdout --- 03:40:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 63m 0% 3638Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 878m 5% 4057Mi 6% gke-xlou-cdm-default-pool-f05840a3-jqvg 75m 0% 988Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 56m 0% 3785Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 169m 1% 2572Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 66m 0% 5272Mi 8% gke-xlou-cdm-default-pool-f05840a3-zj9v 890m 5% 4181Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 58m 0% 941Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 1094m 6% 14250Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 65m 0% 14065Mi 23% gke-xlou-cdm-ds-32e4dcb1-l2t2 60m 0% 943Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 59m 0% 974Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 64m 0% 14107Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 150m 0% 2316Mi 3% 03:40:08 DEBUG --- stderr --- 03:40:08 DEBUG 03:41:07 INFO 03:41:07 INFO [loop_until]: kubectl --namespace=xlou top pods 03:41:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:41:07 INFO [loop_until]: OK (rc = 0) 03:41:07 DEBUG --- stdout --- 03:41:07 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 5Mi am-869fdb5db9-5j69v 7m 2857Mi am-869fdb5db9-8dg94 7m 4469Mi am-869fdb5db9-wt7sg 8m 2937Mi ds-cts-0 6m 402Mi ds-cts-1 7m 373Mi ds-cts-2 6m 372Mi ds-idrepo-0 890m 13794Mi ds-idrepo-1 12m 13671Mi 
ds-idrepo-2 12m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 727m 3387Mi idm-65858d8c4c-pt5s9 638m 3515Mi lodemon-66684b7694-c5c6m 2m 66Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 79m 520Mi 03:41:07 DEBUG --- stderr --- 03:41:07 DEBUG 03:41:08 INFO 03:41:08 INFO [loop_until]: kubectl --namespace=xlou top node 03:41:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:41:08 INFO [loop_until]: OK (rc = 0) 03:41:08 DEBUG --- stdout --- 03:41:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 66m 0% 3640Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 866m 5% 4058Mi 6% gke-xlou-cdm-default-pool-f05840a3-jqvg 73m 0% 985Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 60m 0% 3795Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 164m 1% 2571Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 67m 0% 5274Mi 8% gke-xlou-cdm-default-pool-f05840a3-zj9v 663m 4% 4185Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 59m 0% 941Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 1002m 6% 14219Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 65m 0% 14065Mi 23% gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 941Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 57m 0% 975Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 65m 0% 14107Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 145m 0% 2314Mi 3% 03:41:08 DEBUG --- stderr --- 03:41:08 DEBUG 03:42:07 INFO 03:42:07 INFO [loop_until]: kubectl --namespace=xlou top pods 03:42:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:42:07 INFO [loop_until]: OK (rc = 0) 03:42:07 DEBUG --- stdout --- 03:42:07 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 6m 2857Mi am-869fdb5db9-8dg94 7m 4469Mi am-869fdb5db9-wt7sg 6m 2947Mi ds-cts-0 6m 402Mi ds-cts-1 7m 373Mi ds-cts-2 7m 372Mi ds-idrepo-0 1093m 13802Mi ds-idrepo-1 12m 13672Mi ds-idrepo-2 12m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 876m 3388Mi idm-65858d8c4c-pt5s9 735m 3513Mi lodemon-66684b7694-c5c6m 6m 66Mi 
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 84m 520Mi
03:42:07 DEBUG --- stderr ---
03:42:07 DEBUG
03:42:08 INFO
03:42:08 INFO [loop_until]: kubectl --namespace=xlou top node
03:42:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:42:08 INFO [loop_until]: OK (rc = 0)
03:42:08 DEBUG --- stdout ---
03:42:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 63m 0% 3641Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 996m 6% 4049Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 73m 0% 985Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 60m 0% 3801Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 171m 1% 2568Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 64m 0% 5274Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 778m 4% 4184Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 59m 0% 944Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 1136m 7% 14227Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14065Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 57m 0% 943Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 58m 0% 974Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 66m 0% 14108Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 151m 0% 2314Mi 3%
03:42:08 DEBUG --- stderr ---
03:42:08 DEBUG
03:43:07 INFO
03:43:07 INFO [loop_until]: kubectl --namespace=xlou top pods
03:43:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:43:07 INFO [loop_until]: OK (rc = 0)
03:43:07 DEBUG --- stdout ---
03:43:07 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 5Mi
am-869fdb5db9-5j69v 7m 2857Mi
am-869fdb5db9-8dg94 7m 4470Mi
am-869fdb5db9-wt7sg 7m 2960Mi
ds-cts-0 7m 403Mi
ds-cts-1 7m 374Mi
ds-cts-2 6m 373Mi
ds-idrepo-0 998m 13823Mi
ds-idrepo-1 12m 13671Mi
ds-idrepo-2 13m 13643Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 857m 3390Mi
idm-65858d8c4c-pt5s9 778m 3515Mi
lodemon-66684b7694-c5c6m 8m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 85m 521Mi
03:43:07 DEBUG --- stderr ---
03:43:07 DEBUG
03:43:08 INFO
03:43:08 INFO [loop_until]: kubectl --namespace=xlou top node
03:43:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:43:08 INFO [loop_until]: OK (rc = 0)
03:43:08 DEBUG --- stdout ---
03:43:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 64m 0% 3642Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 852m 5% 4052Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 72m 0% 987Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 63m 0% 3815Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 177m 1% 2573Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 66m 0% 5274Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 789m 4% 4183Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 59m 0% 944Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 1111m 6% 14253Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 64m 0% 14068Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 944Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 59m 0% 976Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 66m 0% 14110Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 151m 0% 2311Mi 3%
03:43:08 DEBUG --- stderr ---
03:43:08 DEBUG
03:44:07 INFO
03:44:07 INFO [loop_until]: kubectl --namespace=xlou top pods
03:44:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:44:07 INFO [loop_until]: OK (rc = 0)
03:44:07 DEBUG --- stdout ---
03:44:07 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 8m 2857Mi
am-869fdb5db9-8dg94 8m 4470Mi
am-869fdb5db9-wt7sg 6m 2970Mi
ds-cts-0 6m 402Mi
ds-cts-1 8m 375Mi
ds-cts-2 7m 373Mi
ds-idrepo-0 1067m 13801Mi
ds-idrepo-1 12m 13672Mi
ds-idrepo-2 12m 13643Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 769m 3390Mi
idm-65858d8c4c-pt5s9 775m 3521Mi
lodemon-66684b7694-c5c6m 5m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 90m 521Mi
03:44:07 DEBUG --- stderr ---
03:44:07 DEBUG
03:44:08 INFO
03:44:08 INFO [loop_until]: kubectl --namespace=xlou top node
03:44:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:44:09 INFO [loop_until]: OK (rc = 0)
03:44:09 DEBUG --- stdout ---
03:44:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 65m 0% 3643Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 988m 6% 4056Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 70m 0% 986Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 59m 0% 3824Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 179m 1% 2570Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 67m 0% 5274Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 856m 5% 4192Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 60m 0% 941Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 1167m 7% 14229Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 65m 0% 14067Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 62m 0% 943Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 61m 0% 976Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 66m 0% 14109Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 155m 0% 2315Mi 3%
03:44:09 DEBUG --- stderr ---
03:44:09 DEBUG
03:45:07 INFO
03:45:07 INFO [loop_until]: kubectl --namespace=xlou top pods
03:45:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:45:08 INFO [loop_until]: OK (rc = 0)
03:45:08 DEBUG --- stdout ---
03:45:08 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 6m 2857Mi
am-869fdb5db9-8dg94 9m 4470Mi
am-869fdb5db9-wt7sg 7m 2978Mi
ds-cts-0 6m 403Mi
ds-cts-1 7m 374Mi
ds-cts-2 6m 372Mi
ds-idrepo-0 1060m 13822Mi
ds-idrepo-1 17m 13672Mi
ds-idrepo-2 19m 13643Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 934m 3392Mi
idm-65858d8c4c-pt5s9 745m 3515Mi
lodemon-66684b7694-c5c6m 6m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 92m 521Mi
03:45:08 DEBUG --- stderr ---
03:45:08 DEBUG
03:45:09 INFO
03:45:09 INFO [loop_until]: kubectl --namespace=xlou top node
03:45:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:45:09 INFO [loop_until]: OK (rc = 0)
03:45:09 DEBUG --- stdout ---
03:45:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 65m 0% 3644Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 974m 6% 4060Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 72m 0% 992Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 61m 0% 3836Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 175m 1% 2567Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 69m 0% 5274Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 741m 4% 4187Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 58m 0% 939Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 1145m 7% 14253Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 70m 0% 14068Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 942Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 60m 0% 976Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 73m 0% 14109Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 154m 0% 2316Mi 3%
03:45:09 DEBUG --- stderr ---
03:45:09 DEBUG
03:46:08 INFO
03:46:08 INFO [loop_until]: kubectl --namespace=xlou top pods
03:46:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:46:08 INFO [loop_until]: OK (rc = 0)
03:46:08 DEBUG --- stdout ---
03:46:08 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 8m 2857Mi
am-869fdb5db9-8dg94 7m 4470Mi
am-869fdb5db9-wt7sg 6m 2989Mi
ds-cts-0 6m 402Mi
ds-cts-1 7m 373Mi
ds-cts-2 6m 372Mi
ds-idrepo-0 1027m 13822Mi
ds-idrepo-1 13m 13672Mi
ds-idrepo-2 12m 13643Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 817m 3394Mi
idm-65858d8c4c-pt5s9 603m 3517Mi
lodemon-66684b7694-c5c6m 6m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 82m 521Mi
03:46:08 DEBUG --- stderr ---
03:46:08 DEBUG
03:46:09 INFO
03:46:09 INFO [loop_until]: kubectl --namespace=xlou top node
03:46:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:46:09 INFO [loop_until]: OK (rc = 0)
03:46:09 DEBUG --- stdout ---
03:46:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 66m 0% 3642Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 885m 5% 4062Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 72m 0% 987Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 61m 0% 3847Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 167m 1% 2573Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 68m 0% 5273Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 684m 4% 4186Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 59m 0% 941Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 1004m 6% 14229Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14069Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 58m 0% 944Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 60m 0% 978Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 68m 0% 14108Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 146m 0% 2315Mi 3%
03:46:09 DEBUG --- stderr ---
03:46:09 DEBUG
03:47:08 INFO
03:47:08 INFO [loop_until]: kubectl --namespace=xlou top pods
03:47:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:47:08 INFO [loop_until]: OK (rc = 0)
03:47:08 DEBUG --- stdout ---
03:47:08 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 6m 2857Mi
am-869fdb5db9-8dg94 7m 4470Mi
am-869fdb5db9-wt7sg 7m 3002Mi
ds-cts-0 14m 402Mi
ds-cts-1 7m 373Mi
ds-cts-2 6m 373Mi
ds-idrepo-0 1177m 13822Mi
ds-idrepo-1 12m 13672Mi
ds-idrepo-2 12m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 1026m 3396Mi
idm-65858d8c4c-pt5s9 771m 3518Mi
lodemon-66684b7694-c5c6m 1m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 96m 521Mi
03:47:08 DEBUG --- stderr ---
03:47:08 DEBUG
03:47:09 INFO
03:47:09 INFO [loop_until]: kubectl --namespace=xlou top node
03:47:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:47:09 INFO [loop_until]: OK (rc = 0)
03:47:09 DEBUG --- stdout ---
03:47:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 62m 0% 3640Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 1131m 7% 4060Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 71m 0% 987Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 60m 0% 3859Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 176m 1% 2574Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 66m 0% 5273Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 825m 5% 4189Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 58m 0% 942Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 1219m 7% 14228Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 64m 0% 14069Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 58m 0% 946Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 67m 0% 979Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 65m 0% 14112Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 155m 0% 2316Mi 3%
03:47:09 DEBUG --- stderr ---
03:47:09 DEBUG
03:48:08 INFO
03:48:08 INFO [loop_until]: kubectl --namespace=xlou top pods
03:48:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:48:08 INFO [loop_until]: OK (rc = 0)
03:48:08 DEBUG --- stdout ---
03:48:08 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 6m 2857Mi
am-869fdb5db9-8dg94 9m 4470Mi
am-869fdb5db9-wt7sg 8m 3010Mi
ds-cts-0 6m 403Mi
ds-cts-1 16m 371Mi
ds-cts-2 6m 372Mi
ds-idrepo-0 1002m 13822Mi
ds-idrepo-1 13m 13672Mi
ds-idrepo-2 14m 13643Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 731m 3404Mi
idm-65858d8c4c-pt5s9 705m 3520Mi
lodemon-66684b7694-c5c6m 1m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 85m 521Mi
03:48:08 DEBUG --- stderr ---
03:48:08 DEBUG
03:48:09 INFO
03:48:09 INFO [loop_until]: kubectl --namespace=xlou top node
03:48:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:48:09 INFO [loop_until]: OK (rc = 0)
03:48:09 DEBUG --- stdout ---
03:48:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 62m 0% 3643Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 786m 4% 4064Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 74m 0% 987Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 62m 0% 3866Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 166m 1% 2576Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 66m 0% 5274Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 769m 4% 4188Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 60m 0% 941Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 1028m 6% 14251Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 66m 0% 14068Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 69m 0% 941Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 60m 0% 976Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 63m 0% 14112Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 154m 0% 2317Mi 3%
03:48:09 DEBUG --- stderr ---
03:48:09 DEBUG
03:49:08 INFO
03:49:08 INFO [loop_until]: kubectl --namespace=xlou top pods
03:49:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:49:08 INFO [loop_until]: OK (rc = 0)
03:49:08 DEBUG --- stdout ---
03:49:08 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 7m 2857Mi
am-869fdb5db9-8dg94 8m 4470Mi
am-869fdb5db9-wt7sg 7m 3020Mi
ds-cts-0 6m 403Mi
ds-cts-1 7m 371Mi
ds-cts-2 6m 374Mi
ds-idrepo-0 980m 13822Mi
ds-idrepo-1 14m 13671Mi
ds-idrepo-2 12m 13643Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 759m 3397Mi
idm-65858d8c4c-pt5s9 826m 3521Mi
lodemon-66684b7694-c5c6m 2m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 91m 522Mi
03:49:08 DEBUG --- stderr ---
03:49:08 DEBUG
03:49:09 INFO
03:49:09 INFO [loop_until]: kubectl --namespace=xlou top node
03:49:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:49:09 INFO [loop_until]: OK (rc = 0)
03:49:09 DEBUG --- stdout ---
03:49:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 62m 0% 3643Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 757m 4% 4068Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 74m 0% 985Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 63m 0% 3878Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 177m 1% 2571Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 65m 0% 5273Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 939m 5% 4189Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 62m 0% 943Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 1029m 6% 14237Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 65m 0% 14068Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 60m 0% 941Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 60m 0% 978Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 68m 0% 14112Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 163m 1% 2318Mi 3%
03:49:09 DEBUG --- stderr ---
03:49:09 DEBUG
03:50:08 INFO
03:50:08 INFO [loop_until]: kubectl --namespace=xlou top pods
03:50:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:50:08 INFO [loop_until]: OK (rc = 0)
03:50:08 DEBUG --- stdout ---
03:50:08 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 6m 2857Mi
am-869fdb5db9-8dg94 7m 4470Mi
am-869fdb5db9-wt7sg 7m 3033Mi
ds-cts-0 6m 403Mi
ds-cts-1 13m 371Mi
ds-cts-2 6m 374Mi
ds-idrepo-0 860m 13802Mi
ds-idrepo-1 12m 13672Mi
ds-idrepo-2 12m 13643Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 817m 3400Mi
idm-65858d8c4c-pt5s9 604m 3522Mi
lodemon-66684b7694-c5c6m 6m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 83m 521Mi
03:50:08 DEBUG --- stderr ---
03:50:08 DEBUG
03:50:09 INFO
03:50:09 INFO [loop_until]: kubectl --namespace=xlou top node
03:50:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:50:09 INFO [loop_until]: OK (rc = 0)
03:50:09 DEBUG --- stdout ---
03:50:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 69m 0% 3641Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 947m 5% 4066Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 75m 0% 988Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 61m 0% 3890Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 170m 1% 2569Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 69m 0% 5271Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 695m 4% 4187Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 58m 0% 944Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 908m 5% 14234Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 65m 0% 14070Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 944Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 59m 0% 977Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 66m 0% 14114Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 150m 0% 2319Mi 3%
03:50:09 DEBUG --- stderr ---
03:50:09 DEBUG
03:51:08 INFO
03:51:08 INFO [loop_until]: kubectl --namespace=xlou top pods
03:51:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:51:08 INFO [loop_until]: OK (rc = 0)
03:51:08 DEBUG --- stdout ---
03:51:08 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 6m 2857Mi
am-869fdb5db9-8dg94 12m 4466Mi
am-869fdb5db9-wt7sg 6m 3043Mi
ds-cts-0 6m 403Mi
ds-cts-1 7m 372Mi
ds-cts-2 6m 374Mi
ds-idrepo-0 993m 13823Mi
ds-idrepo-1 12m 13671Mi
ds-idrepo-2 14m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 764m 3401Mi
idm-65858d8c4c-pt5s9 699m 3523Mi
lodemon-66684b7694-c5c6m 7m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 83m 522Mi
03:51:08 DEBUG --- stderr ---
03:51:08 DEBUG
03:51:09 INFO
03:51:09 INFO [loop_until]: kubectl --namespace=xlou top node
03:51:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:51:09 INFO [loop_until]: OK (rc = 0)
03:51:09 DEBUG --- stdout ---
03:51:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 63m 0% 3641Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 833m 5% 4064Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 75m 0% 988Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 60m 0% 3898Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 171m 1% 2568Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 67m 0% 5271Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 797m 5% 4189Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 59m 0% 943Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 1079m 6% 14233Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 64m 0% 14070Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 944Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 60m 0% 977Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 65m 0% 14115Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 152m 0% 2319Mi 3%
03:51:09 DEBUG --- stderr ---
03:51:09 DEBUG
03:52:08 INFO
03:52:08 INFO [loop_until]: kubectl --namespace=xlou top pods
03:52:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:52:08 INFO [loop_until]: OK (rc = 0)
03:52:08 DEBUG --- stdout ---
03:52:08 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 10m 2857Mi
am-869fdb5db9-8dg94 7m 4466Mi
am-869fdb5db9-wt7sg 6m 3052Mi
ds-cts-0 6m 403Mi
ds-cts-1 7m 372Mi
ds-cts-2 6m 374Mi
ds-idrepo-0 1011m 13808Mi
ds-idrepo-1 12m 13671Mi
ds-idrepo-2 12m 13643Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 642m 3403Mi
idm-65858d8c4c-pt5s9 776m 3524Mi
lodemon-66684b7694-c5c6m 6m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 90m 521Mi
03:52:08 DEBUG --- stderr ---
03:52:08 DEBUG
03:52:09 INFO
03:52:09 INFO [loop_until]: kubectl --namespace=xlou top node
03:52:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:52:09 INFO [loop_until]: OK (rc = 0)
03:52:09 DEBUG --- stdout ---
03:52:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 71m 0% 3641Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 786m 4% 4068Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 76m 0% 986Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 61m 0% 3911Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 167m 1% 2569Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 68m 0% 5267Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 835m 5% 4194Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 57m 0% 946Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 957m 6% 14241Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 62m 0% 14072Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 944Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 58m 0% 977Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 64m 0% 14112Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 150m 0% 2319Mi 3%
03:52:09 DEBUG --- stderr ---
03:52:09 DEBUG
03:53:08 INFO
03:53:08 INFO [loop_until]: kubectl --namespace=xlou top pods
03:53:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:53:08 INFO [loop_until]: OK (rc = 0)
03:53:08 DEBUG --- stdout ---
03:53:08 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 7m 2857Mi
am-869fdb5db9-8dg94 8m 4466Mi
am-869fdb5db9-wt7sg 6m 3063Mi
ds-cts-0 7m 403Mi
ds-cts-1 9m 372Mi
ds-cts-2 7m 374Mi
ds-idrepo-0 11m 13809Mi
ds-idrepo-1 12m 13672Mi
ds-idrepo-2 12m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 5m 3404Mi
idm-65858d8c4c-pt5s9 184m 3525Mi
lodemon-66684b7694-c5c6m 5m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 35m 522Mi
03:53:08 DEBUG --- stderr ---
03:53:08 DEBUG
03:53:10 INFO
03:53:10 INFO [loop_until]: kubectl --namespace=xlou top node
03:53:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:53:10 INFO [loop_until]: OK (rc = 0)
03:53:10 DEBUG --- stdout ---
03:53:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 62m 0% 3641Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 84m 0% 4069Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 75m 0% 991Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 60m 0% 3918Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 99m 0% 2572Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 68m 0% 5268Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 73m 0% 4194Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 61m 0% 948Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 62m 0% 14240Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 64m 0% 14068Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 945Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 59m 0% 978Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 66m 0% 14114Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 102m 0% 1903Mi 3%
03:53:10 DEBUG --- stderr ---
03:53:10 DEBUG
03:54:08 INFO
03:54:08 INFO [loop_until]: kubectl --namespace=xlou top pods
03:54:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:54:08 INFO [loop_until]: OK (rc = 0)
03:54:08 DEBUG --- stdout ---
03:54:08 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 6m 2857Mi
am-869fdb5db9-8dg94 8m 4466Mi
am-869fdb5db9-wt7sg 6m 3071Mi
ds-cts-0 6m 403Mi
ds-cts-1 7m 372Mi
ds-cts-2 7m 375Mi
ds-idrepo-0 11m 13809Mi
ds-idrepo-1 12m 13672Mi
ds-idrepo-2 13m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 6m 3407Mi
idm-65858d8c4c-pt5s9 7m 3525Mi
lodemon-66684b7694-c5c6m 2m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 1m 101Mi
03:54:08 DEBUG --- stderr ---
03:54:08 DEBUG
03:54:10 INFO
03:54:10 INFO [loop_until]: kubectl --namespace=xlou top node
03:54:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:54:10 INFO [loop_until]: OK (rc = 0)
03:54:10 DEBUG --- stdout ---
03:54:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 63m 0% 3641Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 85m 0% 4078Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 73m 0% 986Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 63m 0% 3932Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 101m 0% 2573Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 66m 0% 5270Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 72m 0% 4192Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 63m 0% 944Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 60m 0% 14238Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 64m 0% 14071Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 61m 0% 944Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 61m 0% 977Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 67m 0% 14113Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 63m 0% 1906Mi 3%
03:54:10 DEBUG --- stderr ---
03:54:10 DEBUG
127.0.0.1 - - [16/Aug/2023 03:54:20] "GET /monitoring/average?start_time=23-08-16_02:23:49&stop_time=23-08-16_02:52:19 HTTP/1.1" 200 -
03:55:09 INFO
03:55:09 INFO [loop_until]: kubectl --namespace=xlou top pods
03:55:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:55:09 INFO [loop_until]: OK (rc = 0)
03:55:09 DEBUG --- stdout ---
03:55:09 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 11m 2857Mi
am-869fdb5db9-8dg94 7m 4466Mi
am-869fdb5db9-wt7sg 6m 3084Mi
ds-cts-0 6m 403Mi
ds-cts-1 8m 373Mi
ds-cts-2 7m 376Mi
ds-idrepo-0 10m 13808Mi
ds-idrepo-1 12m 13673Mi
ds-idrepo-2 12m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 6m 3407Mi
idm-65858d8c4c-pt5s9 35m 3529Mi
lodemon-66684b7694-c5c6m 7m 67Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 469m 410Mi
03:55:09 DEBUG --- stderr ---
03:55:09 DEBUG
03:55:10 INFO
03:55:10 INFO [loop_until]: kubectl --namespace=xlou top node
03:55:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:55:10 INFO [loop_until]: OK (rc = 0)
03:55:10 DEBUG --- stdout ---
03:55:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 71m 0% 3641Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 131m 0% 4075Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 85m 0% 980Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 63m 0% 3934Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 110m 0% 2569Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 67m 0% 5271Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 112m 0% 4199Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 60m 0% 944Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 65m 0% 14241Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 64m 0% 14070Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 945Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 60m 0% 979Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 66m 0% 14115Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 685m 4% 2298Mi 3%
03:55:10 DEBUG --- stderr ---
03:55:10 DEBUG
03:56:09 INFO
03:56:09 INFO [loop_until]: kubectl --namespace=xlou top pods
03:56:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:56:09 INFO [loop_until]: OK (rc = 0)
03:56:09 DEBUG --- stdout ---
03:56:09 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 5m 2857Mi
am-869fdb5db9-8dg94 7m 4466Mi
am-869fdb5db9-wt7sg 6m 3093Mi
ds-cts-0 7m 403Mi
ds-cts-1 13m 372Mi
ds-cts-2 7m 374Mi
ds-idrepo-0 1512m 13801Mi
ds-idrepo-1 14m 13672Mi
ds-idrepo-2 12m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 1494m 3427Mi
idm-65858d8c4c-pt5s9 1004m 3528Mi
lodemon-66684b7694-c5c6m 5m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 285m 511Mi
03:56:09 DEBUG --- stderr ---
03:56:09 DEBUG
03:56:10 INFO
03:56:10 INFO [loop_until]: kubectl --namespace=xlou top node
03:56:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:56:10 INFO [loop_until]: OK (rc = 0)
03:56:10 DEBUG --- stdout ---
03:56:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 62m 0% 3642Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 1638m 10% 4078Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 83m 0% 981Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 62m 0% 3946Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 214m 1% 2569Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 65m 0% 5269Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 1152m 7% 4192Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 67m 0% 947Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 1636m 10% 14230Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 62m 0% 14068Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 65m 0% 941Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 59m 0% 981Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 65m 0% 14117Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 335m 2% 2308Mi 3%
03:56:10 DEBUG --- stderr ---
03:56:10 DEBUG
03:57:09 INFO
03:57:09 INFO [loop_until]: kubectl --namespace=xlou top pods
03:57:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:57:09 INFO [loop_until]: OK (rc = 0)
03:57:09 DEBUG --- stdout ---
03:57:09 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 5m 2857Mi
am-869fdb5db9-8dg94 7m 4466Mi
am-869fdb5db9-wt7sg 6m 3104Mi
ds-cts-0 7m 403Mi
ds-cts-1 7m 372Mi
ds-cts-2 6m 374Mi
ds-idrepo-0 1579m 13805Mi
ds-idrepo-1 12m 13671Mi
ds-idrepo-2 14m 13643Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 1309m 3409Mi
idm-65858d8c4c-pt5s9 1112m 3530Mi
lodemon-66684b7694-c5c6m 5m 66Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 188m 524Mi
03:57:09 DEBUG --- stderr ---
03:57:09 DEBUG
03:57:10 INFO
03:57:10 INFO [loop_until]: kubectl --namespace=xlou top node
03:57:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:57:10 INFO [loop_until]: OK (rc = 0)
03:57:10 DEBUG --- stdout ---
03:57:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 65m 0% 3642Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 1355m 8% 4076Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 83m 0% 983Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 62m 0% 3960Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 217m 1% 2571Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 65m 0% 5272Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 1232m 7% 4198Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 61m 0% 947Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 1654m 10% 14239Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 66m 0% 14070Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 58m 0% 943Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 61m 0% 981Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 66m 0% 14114Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 247m 1% 2322Mi 3%
03:57:10 DEBUG --- stderr ---
03:57:10 DEBUG
03:58:09 INFO
03:58:09 INFO [loop_until]: kubectl --namespace=xlou top pods
03:58:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:58:09 INFO [loop_until]: OK (rc = 0)
03:58:09 DEBUG --- stdout ---
03:58:09 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 5Mi
am-869fdb5db9-5j69v 7m 2857Mi
am-869fdb5db9-8dg94 8m 4466Mi
am-869fdb5db9-wt7sg 6m 3117Mi
ds-cts-0 11m 403Mi
ds-cts-1 7m 372Mi
ds-cts-2 6m 374Mi
ds-idrepo-0 1571m 13809Mi
ds-idrepo-1 12m 13671Mi
ds-idrepo-2 12m 13643Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 1303m 3410Mi
idm-65858d8c4c-pt5s9 1057m 3537Mi
lodemon-66684b7694-c5c6m 2m 67Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 174m 525Mi
03:58:09 DEBUG --- stderr ---
03:58:09 DEBUG
03:58:10 INFO
03:58:10 INFO [loop_until]: kubectl --namespace=xlou top node
03:58:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:58:10 INFO [loop_until]: OK (rc = 0)
03:58:10 DEBUG --- stdout ---
03:58:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 65m 0% 3640Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 1425m 8% 4080Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 81m 0% 985Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 59m 0% 3974Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 210m 1% 2573Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 68m 0% 5272Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 1187m 7% 4194Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 59m 0% 944Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 1495m 9% 14244Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 61m 0% 14072Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 58m 0% 943Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 62m 0% 979Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 65m 0% 14117Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 230m 1% 2323Mi 3%
03:58:10 DEBUG --- stderr ---
03:58:10 DEBUG
03:59:09 INFO
03:59:09 INFO [loop_until]: kubectl --namespace=xlou top pods
03:59:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:59:09 INFO [loop_until]: OK (rc = 0)
03:59:09 DEBUG --- stdout ---
03:59:09 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 6m 2857Mi
am-869fdb5db9-8dg94 7m 4466Mi
am-869fdb5db9-wt7sg 6m 3127Mi
ds-cts-0 6m 403Mi
ds-cts-1 7m 372Mi
ds-cts-2 6m 375Mi
ds-idrepo-0 1574m 13803Mi
ds-idrepo-1 26m 13671Mi
ds-idrepo-2 11m 13643Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 1423m 3414Mi
idm-65858d8c4c-pt5s9 1033m 3533Mi
lodemon-66684b7694-c5c6m 5m 67Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 166m 538Mi
03:59:09 DEBUG --- stderr ---
03:59:09 DEBUG
03:59:10 INFO
03:59:10 INFO [loop_until]: kubectl --namespace=xlou top node
03:59:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
03:59:10 INFO [loop_until]: OK (rc = 0)
03:59:10 DEBUG --- stdout ---
03:59:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 65m 0% 3641Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 1438m 9% 4081Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 82m 0% 987Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 60m 0% 3987Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 214m 1% 2573Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 68m 0% 5270Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 1069m 6% 4196Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 61m 0% 948Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 1702m 10% 14236Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 65m 0% 14065Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 62m 0% 943Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 62m 0% 978Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 74m 0% 14118Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 239m 1% 2336Mi 3%
03:59:10 DEBUG --- stderr ---
03:59:10 DEBUG
04:00:09 INFO
04:00:09 INFO [loop_until]: kubectl --namespace=xlou top pods
04:00:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
04:00:09 INFO [loop_until]: OK (rc = 0)
04:00:09 DEBUG --- stdout ---
04:00:09 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 6m 2857Mi
am-869fdb5db9-8dg94 12m 4471Mi
am-869fdb5db9-wt7sg 8m 3137Mi
ds-cts-0 6m 403Mi
ds-cts-1 7m 372Mi
ds-cts-2 6m 375Mi
ds-idrepo-0 1573m 13800Mi
ds-idrepo-1 12m 13671Mi
ds-idrepo-2 11m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 1206m 3416Mi
idm-65858d8c4c-pt5s9 1265m 3539Mi
lodemon-66684b7694-c5c6m 1m 67Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 129m 538Mi
04:00:09 DEBUG --- stderr ---
04:00:09 DEBUG
04:00:10 INFO
04:00:10 INFO [loop_until]: kubectl --namespace=xlou top node
04:00:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
04:00:10 INFO [loop_until]: OK (rc = 0)
04:00:10 DEBUG --- stdout ---
04:00:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 67m 0% 3642Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 1235m 7% 4085Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 81m 0% 989Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 71m 0% 4008Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 216m 1% 2573Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 75m 0% 5272Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 1369m 8% 4201Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 57m 0% 945Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 1748m 11% 14242Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 62m 0% 14068Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 58m 0% 944Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 60m 0% 978Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 62m 0% 14120Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 203m 1% 2333Mi 3%
04:00:10 DEBUG --- stderr ---
04:00:10 DEBUG
04:01:09 INFO
04:01:09 INFO [loop_until]: kubectl --namespace=xlou top pods
04:01:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
04:01:09 INFO [loop_until]: OK (rc = 0)
04:01:09 DEBUG --- stdout ---
04:01:09 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 6m 2857Mi
am-869fdb5db9-8dg94 8m 4471Mi
am-869fdb5db9-wt7sg 5m 3148Mi
ds-cts-0 11m 403Mi
ds-cts-1 7m 372Mi
ds-cts-2 6m 374Mi
ds-idrepo-0 1461m 13801Mi
ds-idrepo-1 15m 13671Mi
ds-idrepo-2 11m 13643Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 1295m 3427Mi
idm-65858d8c4c-pt5s9 892m 3535Mi
lodemon-66684b7694-c5c6m 6m 67Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 125m 539Mi
04:01:09 DEBUG --- stderr ---
04:01:09 DEBUG
04:01:10 INFO
04:01:10 INFO [loop_until]: kubectl --namespace=xlou top node
04:01:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
04:01:11 INFO [loop_until]: OK (rc = 0)
04:01:11 DEBUG --- stdout ---
04:01:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 63m 0% 3643Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 1398m 8% 4086Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jqvg 80m 0% 988Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 60m 0% 4006Mi 6%
gke-xlou-cdm-default-pool-f05840a3-tnc9 209m 1% 2570Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 68m 0% 5276Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 985m 6% 4203Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 60m 0% 947Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 1474m 9% 14236Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 64m 0% 14067Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 57m 0% 945Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 63m 0% 978Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 68m 0% 14121Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 182m 1% 2334Mi 3% 04:01:11 DEBUG --- stderr --- 04:01:11 DEBUG 04:02:09 INFO 04:02:09 INFO [loop_until]: kubectl --namespace=xlou top pods 04:02:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:02:09 INFO [loop_until]: OK (rc = 0) 04:02:09 DEBUG --- stdout --- 04:02:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 5m 2857Mi am-869fdb5db9-8dg94 8m 4472Mi am-869fdb5db9-wt7sg 6m 3159Mi ds-cts-0 6m 404Mi ds-cts-1 7m 372Mi ds-cts-2 6m 374Mi ds-idrepo-0 1630m 13802Mi ds-idrepo-1 12m 13673Mi ds-idrepo-2 12m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 1573m 3419Mi idm-65858d8c4c-pt5s9 1008m 3537Mi lodemon-66684b7694-c5c6m 6m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 141m 539Mi 04:02:09 DEBUG --- stderr --- 04:02:09 DEBUG 04:02:11 INFO 04:02:11 INFO [loop_until]: kubectl --namespace=xlou top node 04:02:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:02:11 INFO [loop_until]: OK (rc = 0) 04:02:11 DEBUG --- stdout --- 04:02:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 67m 0% 3640Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 1561m 9% 4085Mi 6% gke-xlou-cdm-default-pool-f05840a3-jqvg 79m 0% 989Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 62m 0% 4018Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 225m 1% 2576Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 68m 0% 5275Mi 8% gke-xlou-cdm-default-pool-f05840a3-zj9v 1168m 7% 4204Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 61m 0% 945Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 1708m 10% 14236Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 62m 0% 14070Mi 23% gke-xlou-cdm-ds-32e4dcb1-l2t2 60m 0% 941Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 58m 0% 979Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 66m 0% 14118Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 212m 1% 2337Mi 3% 04:02:11 DEBUG --- stderr --- 04:02:11 DEBUG 04:03:09 INFO 
04:03:09 INFO [loop_until]: kubectl --namespace=xlou top pods 04:03:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:03:09 INFO [loop_until]: OK (rc = 0) 04:03:09 DEBUG --- stdout --- 04:03:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 6m 2857Mi am-869fdb5db9-8dg94 8m 4471Mi am-869fdb5db9-wt7sg 15m 3169Mi ds-cts-0 6m 404Mi ds-cts-1 8m 372Mi ds-cts-2 7m 374Mi ds-idrepo-0 1679m 13822Mi ds-idrepo-1 18m 13673Mi ds-idrepo-2 12m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 1460m 3420Mi idm-65858d8c4c-pt5s9 1170m 3546Mi lodemon-66684b7694-c5c6m 7m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 136m 539Mi 04:03:09 DEBUG --- stderr --- 04:03:09 DEBUG 04:03:11 INFO 04:03:11 INFO [loop_until]: kubectl --namespace=xlou top node 04:03:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:03:11 INFO [loop_until]: OK (rc = 0) 04:03:11 DEBUG --- stdout --- 04:03:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 60m 0% 3642Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 1516m 9% 4086Mi 6% gke-xlou-cdm-default-pool-f05840a3-jqvg 76m 0% 991Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 71m 0% 4025Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 222m 1% 2575Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 67m 0% 5277Mi 8% gke-xlou-cdm-default-pool-f05840a3-zj9v 1254m 7% 4207Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 59m 0% 946Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 1743m 10% 14256Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 62m 0% 14073Mi 23% gke-xlou-cdm-ds-32e4dcb1-l2t2 61m 0% 943Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 58m 0% 981Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 65m 0% 14123Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 211m 1% 2337Mi 3% 04:03:11 DEBUG --- stderr --- 04:03:11 DEBUG 04:04:09 INFO 04:04:09 INFO [loop_until]: kubectl --namespace=xlou top pods 04:04:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:04:09 INFO [loop_until]: OK (rc 
= 0) 04:04:09 DEBUG --- stdout --- 04:04:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 8m 2857Mi am-869fdb5db9-8dg94 8m 4471Mi am-869fdb5db9-wt7sg 6m 3178Mi ds-cts-0 6m 404Mi ds-cts-1 7m 372Mi ds-cts-2 7m 375Mi ds-idrepo-0 1641m 13823Mi ds-idrepo-1 12m 13672Mi ds-idrepo-2 12m 13643Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 1293m 3431Mi idm-65858d8c4c-pt5s9 1146m 3541Mi lodemon-66684b7694-c5c6m 7m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 144m 540Mi 04:04:09 DEBUG --- stderr --- 04:04:09 DEBUG 04:04:11 INFO 04:04:11 INFO [loop_until]: kubectl --namespace=xlou top node 04:04:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:04:11 INFO [loop_until]: OK (rc = 0) 04:04:11 DEBUG --- stdout --- 04:04:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 66m 0% 3641Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 1437m 9% 4097Mi 6% gke-xlou-cdm-default-pool-f05840a3-jqvg 79m 0% 990Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 62m 0% 4037Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 205m 1% 2576Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 67m 0% 5278Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 1308m 8% 4207Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 60m 0% 947Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 1716m 10% 14235Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 64m 0% 14070Mi 23% gke-xlou-cdm-ds-32e4dcb1-l2t2 60m 0% 943Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 57m 0% 979Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 66m 0% 14121Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 203m 1% 2337Mi 3% 04:04:11 DEBUG --- stderr --- 04:04:11 DEBUG 04:05:10 INFO 04:05:10 INFO [loop_until]: kubectl --namespace=xlou top pods 04:05:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:05:10 INFO [loop_until]: OK (rc = 0) 04:05:10 DEBUG --- stdout --- 04:05:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 6m 2857Mi am-869fdb5db9-8dg94 9m 
4471Mi am-869fdb5db9-wt7sg 6m 3189Mi ds-cts-0 6m 404Mi ds-cts-1 7m 372Mi ds-cts-2 11m 374Mi ds-idrepo-0 1484m 13807Mi ds-idrepo-1 17m 13673Mi ds-idrepo-2 15m 13643Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 1234m 3423Mi idm-65858d8c4c-pt5s9 1145m 3542Mi lodemon-66684b7694-c5c6m 6m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 133m 541Mi 04:05:10 DEBUG --- stderr --- 04:05:10 DEBUG 04:05:11 INFO 04:05:11 INFO [loop_until]: kubectl --namespace=xlou top node 04:05:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:05:11 INFO [loop_until]: OK (rc = 0) 04:05:11 DEBUG --- stdout --- 04:05:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 65m 0% 3641Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 1355m 8% 4092Mi 6% gke-xlou-cdm-default-pool-f05840a3-jqvg 81m 0% 986Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 64m 0% 4047Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 207m 1% 2574Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 68m 0% 5279Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 1196m 7% 4208Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 62m 0% 946Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 1490m 9% 14236Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 67m 0% 14069Mi 23% gke-xlou-cdm-ds-32e4dcb1-l2t2 58m 0% 944Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 58m 0% 979Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 66m 0% 14121Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 201m 1% 2332Mi 3% 04:05:11 DEBUG --- stderr --- 04:05:11 DEBUG 04:06:10 INFO 04:06:10 INFO [loop_until]: kubectl --namespace=xlou top pods 04:06:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:06:10 INFO [loop_until]: OK (rc = 0) 04:06:10 DEBUG --- stdout --- 04:06:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 6m 2857Mi am-869fdb5db9-8dg94 7m 4471Mi am-869fdb5db9-wt7sg 6m 3199Mi ds-cts-0 6m 404Mi ds-cts-1 7m 372Mi ds-cts-2 8m 375Mi ds-idrepo-0 1635m 13822Mi ds-idrepo-1 12m 13672Mi ds-idrepo-2 15m 13643Mi 
end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 1425m 3431Mi idm-65858d8c4c-pt5s9 1074m 3544Mi lodemon-66684b7694-c5c6m 5m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 160m 573Mi 04:06:10 DEBUG --- stderr --- 04:06:10 DEBUG 04:06:11 INFO 04:06:11 INFO [loop_until]: kubectl --namespace=xlou top node 04:06:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:06:11 INFO [loop_until]: OK (rc = 0) 04:06:11 DEBUG --- stdout --- 04:06:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 65m 0% 3639Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 1567m 9% 4101Mi 6% gke-xlou-cdm-default-pool-f05840a3-jqvg 76m 0% 987Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 63m 0% 4057Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 203m 1% 2574Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 73m 0% 5289Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 1182m 7% 4211Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 61m 0% 945Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 1775m 11% 14230Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 64m 0% 14072Mi 23% gke-xlou-cdm-ds-32e4dcb1-l2t2 58m 0% 942Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 58m 0% 979Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 65m 0% 14121Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 228m 1% 2369Mi 4% 04:06:11 DEBUG --- stderr --- 04:06:11 DEBUG 04:07:10 INFO 04:07:10 INFO [loop_until]: kubectl --namespace=xlou top pods 04:07:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:07:10 INFO [loop_until]: OK (rc = 0) 04:07:10 DEBUG --- stdout --- 04:07:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 7m 2857Mi am-869fdb5db9-8dg94 7m 4471Mi am-869fdb5db9-wt7sg 7m 3210Mi ds-cts-0 6m 404Mi ds-cts-1 7m 372Mi ds-cts-2 7m 374Mi ds-idrepo-0 1610m 13825Mi ds-idrepo-1 26m 13672Mi ds-idrepo-2 12m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 1318m 3433Mi idm-65858d8c4c-pt5s9 1157m 3546Mi lodemon-66684b7694-c5c6m 2m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi 
overseer-0-788b4494cc-bdwtm 156m 573Mi 04:07:10 DEBUG --- stderr --- 04:07:10 DEBUG 04:07:11 INFO 04:07:11 INFO [loop_until]: kubectl --namespace=xlou top node 04:07:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:07:11 INFO [loop_until]: OK (rc = 0) 04:07:11 DEBUG --- stdout --- 04:07:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 63m 0% 3642Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 1426m 8% 4100Mi 6% gke-xlou-cdm-default-pool-f05840a3-jqvg 79m 0% 992Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 59m 0% 4068Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 215m 1% 2572Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 64m 0% 5275Mi 8% gke-xlou-cdm-default-pool-f05840a3-zj9v 1212m 7% 4214Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 62m 0% 947Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 1629m 10% 14233Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14073Mi 23% gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 942Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 59m 0% 981Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 76m 0% 14120Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 221m 1% 2370Mi 4% 04:07:11 DEBUG --- stderr --- 04:07:11 DEBUG 04:08:10 INFO 04:08:10 INFO [loop_until]: kubectl --namespace=xlou top pods 04:08:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:08:10 INFO [loop_until]: OK (rc = 0) 04:08:10 DEBUG --- stdout --- 04:08:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 6m 2857Mi am-869fdb5db9-8dg94 8m 4471Mi am-869fdb5db9-wt7sg 6m 3221Mi ds-cts-0 6m 404Mi ds-cts-1 7m 372Mi ds-cts-2 10m 375Mi ds-idrepo-0 1478m 13804Mi ds-idrepo-1 11m 13672Mi ds-idrepo-2 12m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 1201m 3430Mi idm-65858d8c4c-pt5s9 1135m 3547Mi lodemon-66684b7694-c5c6m 6m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 152m 574Mi 04:08:10 DEBUG --- stderr --- 04:08:10 DEBUG 04:08:11 INFO 04:08:11 INFO [loop_until]: kubectl --namespace=xlou top node 
04:08:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:08:11 INFO [loop_until]: OK (rc = 0) 04:08:11 DEBUG --- stdout --- 04:08:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 64m 0% 3642Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 1428m 8% 4099Mi 6% gke-xlou-cdm-default-pool-f05840a3-jqvg 79m 0% 992Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 58m 0% 4079Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 212m 1% 2575Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 67m 0% 5278Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 1171m 7% 4216Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 61m 0% 947Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 1659m 10% 14260Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 62m 0% 14071Mi 23% gke-xlou-cdm-ds-32e4dcb1-l2t2 58m 0% 943Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 57m 0% 977Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 69m 0% 14132Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 225m 1% 2368Mi 4% 04:08:11 DEBUG --- stderr --- 04:08:11 DEBUG 04:09:10 INFO 04:09:10 INFO [loop_until]: kubectl --namespace=xlou top pods 04:09:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:09:10 INFO [loop_until]: OK (rc = 0) 04:09:10 DEBUG --- stdout --- 04:09:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 5Mi am-869fdb5db9-5j69v 6m 2857Mi am-869fdb5db9-8dg94 7m 4471Mi am-869fdb5db9-wt7sg 6m 3232Mi ds-cts-0 6m 403Mi ds-cts-1 7m 372Mi ds-cts-2 7m 375Mi ds-idrepo-0 1734m 13801Mi ds-idrepo-1 12m 13672Mi ds-idrepo-2 11m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 1300m 3432Mi idm-65858d8c4c-pt5s9 1224m 3564Mi lodemon-66684b7694-c5c6m 6m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 123m 574Mi 04:09:10 DEBUG --- stderr --- 04:09:10 DEBUG 04:09:11 INFO 04:09:11 INFO [loop_until]: kubectl --namespace=xlou top node 04:09:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:09:11 INFO [loop_until]: OK (rc = 0) 04:09:11 DEBUG --- stdout --- 04:09:11 DEBUG NAME 
CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 65m 0% 3641Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 1390m 8% 4096Mi 6% gke-xlou-cdm-default-pool-f05840a3-jqvg 81m 0% 992Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 59m 0% 4089Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 216m 1% 2574Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 66m 0% 5276Mi 8% gke-xlou-cdm-default-pool-f05840a3-zj9v 1300m 8% 4212Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 68m 0% 956Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 1671m 10% 14242Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14071Mi 23% gke-xlou-cdm-ds-32e4dcb1-l2t2 62m 0% 942Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 59m 0% 978Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 66m 0% 14119Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 189m 1% 2371Mi 4% 04:09:11 DEBUG --- stderr --- 04:09:11 DEBUG 04:10:10 INFO 04:10:10 INFO [loop_until]: kubectl --namespace=xlou top pods 04:10:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:10:10 INFO [loop_until]: OK (rc = 0) 04:10:10 DEBUG --- stdout --- 04:10:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 7m 2857Mi am-869fdb5db9-8dg94 7m 4471Mi am-869fdb5db9-wt7sg 6m 3240Mi ds-cts-0 6m 404Mi ds-cts-1 7m 372Mi ds-cts-2 10m 374Mi ds-idrepo-0 1654m 13822Mi ds-idrepo-1 19m 13672Mi ds-idrepo-2 16m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 1407m 3434Mi idm-65858d8c4c-pt5s9 1250m 3550Mi lodemon-66684b7694-c5c6m 6m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 132m 574Mi 04:10:10 DEBUG --- stderr --- 04:10:10 DEBUG 04:10:12 INFO 04:10:12 INFO [loop_until]: kubectl --namespace=xlou top node 04:10:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:10:12 INFO [loop_until]: OK (rc = 0) 04:10:12 DEBUG --- stdout --- 04:10:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 69m 0% 3642Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 1518m 9% 4102Mi 6% 
gke-xlou-cdm-default-pool-f05840a3-jqvg 80m 0% 991Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 59m 0% 4099Mi 6% gke-xlou-cdm-default-pool-f05840a3-tnc9 216m 1% 2575Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 65m 0% 5278Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 1353m 8% 4225Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 63m 0% 944Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 1812m 11% 14258Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 62m 0% 14074Mi 23% gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 943Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 62m 0% 990Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 69m 0% 14123Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 196m 1% 2370Mi 4% 04:10:12 DEBUG --- stderr --- 04:10:12 DEBUG 04:11:10 INFO 04:11:10 INFO [loop_until]: kubectl --namespace=xlou top pods 04:11:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:11:10 INFO [loop_until]: OK (rc = 0) 04:11:10 DEBUG --- stdout --- 04:11:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 11m 2858Mi am-869fdb5db9-8dg94 16m 4466Mi am-869fdb5db9-wt7sg 14m 3254Mi ds-cts-0 6m 404Mi ds-cts-1 7m 372Mi ds-cts-2 6m 375Mi ds-idrepo-0 1620m 13805Mi ds-idrepo-1 14m 13672Mi ds-idrepo-2 14m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 1137m 3435Mi idm-65858d8c4c-pt5s9 1198m 3560Mi lodemon-66684b7694-c5c6m 6m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 119m 574Mi 04:11:10 DEBUG --- stderr --- 04:11:10 DEBUG 04:11:12 INFO 04:11:12 INFO [loop_until]: kubectl --namespace=xlou top node 04:11:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:11:12 INFO [loop_until]: OK (rc = 0) 04:11:12 DEBUG --- stdout --- 04:11:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 74m 0% 3644Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 1319m 8% 4103Mi 6% gke-xlou-cdm-default-pool-f05840a3-jqvg 85m 0% 998Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 68m 0% 4114Mi 7% gke-xlou-cdm-default-pool-f05840a3-tnc9 202m 1% 
2584Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 74m 0% 5274Mi 8% gke-xlou-cdm-default-pool-f05840a3-zj9v 1195m 7% 4226Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 59m 0% 948Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 1669m 10% 14241Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 65m 0% 14074Mi 23% gke-xlou-cdm-ds-32e4dcb1-l2t2 60m 0% 942Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 58m 0% 978Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 61m 0% 14125Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 177m 1% 2370Mi 4% 04:11:12 DEBUG --- stderr --- 04:11:12 DEBUG 04:12:10 INFO 04:12:10 INFO [loop_until]: kubectl --namespace=xlou top pods 04:12:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:12:10 INFO [loop_until]: OK (rc = 0) 04:12:10 DEBUG --- stdout --- 04:12:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 6m 2857Mi am-869fdb5db9-8dg94 7m 4467Mi am-869fdb5db9-wt7sg 6m 3263Mi ds-cts-0 6m 404Mi ds-cts-1 8m 372Mi ds-cts-2 6m 375Mi ds-idrepo-0 1547m 13805Mi ds-idrepo-1 13m 13672Mi ds-idrepo-2 12m 13643Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 1151m 3437Mi idm-65858d8c4c-pt5s9 1335m 3553Mi lodemon-66684b7694-c5c6m 2m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 120m 574Mi 04:12:10 DEBUG --- stderr --- 04:12:10 DEBUG 04:12:12 INFO 04:12:12 INFO [loop_until]: kubectl --namespace=xlou top node 04:12:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:12:12 INFO [loop_until]: OK (rc = 0) 04:12:12 DEBUG --- stdout --- 04:12:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 62m 0% 3645Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 1252m 7% 4105Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 78m 0% 990Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 60m 0% 4120Mi 7% gke-xlou-cdm-default-pool-f05840a3-tnc9 219m 1% 2572Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 64m 0% 5274Mi 8% gke-xlou-cdm-default-pool-f05840a3-zj9v 1370m 8% 4219Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 60m 0% 946Mi 
1% gke-xlou-cdm-ds-32e4dcb1-7x9g 1732m 10% 14235Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14073Mi 23% gke-xlou-cdm-ds-32e4dcb1-l2t2 61m 0% 943Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 58m 0% 979Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 65m 0% 14125Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 190m 1% 2370Mi 4% 04:12:12 DEBUG --- stderr --- 04:12:12 DEBUG 04:13:10 INFO 04:13:10 INFO [loop_until]: kubectl --namespace=xlou top pods 04:13:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:13:10 INFO [loop_until]: OK (rc = 0) 04:13:10 DEBUG --- stdout --- 04:13:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 6m 2857Mi am-869fdb5db9-8dg94 8m 4468Mi am-869fdb5db9-wt7sg 6m 3274Mi ds-cts-0 6m 404Mi ds-cts-1 7m 372Mi ds-cts-2 6m 375Mi ds-idrepo-0 1754m 13804Mi ds-idrepo-1 12m 13673Mi ds-idrepo-2 12m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 1542m 3439Mi idm-65858d8c4c-pt5s9 1224m 3556Mi lodemon-66684b7694-c5c6m 6m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 127m 575Mi 04:13:10 DEBUG --- stderr --- 04:13:10 DEBUG 04:13:12 INFO 04:13:12 INFO [loop_until]: kubectl --namespace=xlou top node 04:13:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:13:12 INFO [loop_until]: OK (rc = 0) 04:13:12 DEBUG --- stdout --- 04:13:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 64m 0% 3644Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 1536m 9% 4107Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 80m 0% 987Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 57m 0% 4132Mi 7% gke-xlou-cdm-default-pool-f05840a3-tnc9 225m 1% 2572Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 68m 0% 5275Mi 8% gke-xlou-cdm-default-pool-f05840a3-zj9v 1344m 8% 4231Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 60m 0% 947Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 1776m 11% 14252Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 69m 0% 14073Mi 23% gke-xlou-cdm-ds-32e4dcb1-l2t2 60m 0% 941Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-mt7t 60m 0% 977Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 63m 0% 14124Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 201m 1% 2371Mi 4% 04:13:12 DEBUG --- stderr --- 04:13:12 DEBUG 04:14:10 INFO 04:14:10 INFO [loop_until]: kubectl --namespace=xlou top pods 04:14:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:14:11 INFO [loop_until]: OK (rc = 0) 04:14:11 DEBUG --- stdout --- 04:14:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 7m 2857Mi am-869fdb5db9-8dg94 8m 4467Mi am-869fdb5db9-wt7sg 6m 3284Mi ds-cts-0 6m 404Mi ds-cts-1 7m 372Mi ds-cts-2 6m 374Mi ds-idrepo-0 1351m 13805Mi ds-idrepo-1 12m 13673Mi ds-idrepo-2 13m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 1220m 3440Mi idm-65858d8c4c-pt5s9 1046m 3556Mi lodemon-66684b7694-c5c6m 2m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 112m 575Mi 04:14:11 DEBUG --- stderr --- 04:14:11 DEBUG 04:14:12 INFO 04:14:12 INFO [loop_until]: kubectl --namespace=xlou top node 04:14:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:14:12 INFO [loop_until]: OK (rc = 0) 04:14:12 DEBUG --- stdout --- 04:14:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 64m 0% 3639Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 1241m 7% 4102Mi 6% gke-xlou-cdm-default-pool-f05840a3-jqvg 79m 0% 989Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 60m 0% 4141Mi 7% gke-xlou-cdm-default-pool-f05840a3-tnc9 203m 1% 2573Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 67m 0% 5272Mi 8% gke-xlou-cdm-default-pool-f05840a3-zj9v 1046m 6% 4224Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 61m 0% 946Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 1441m 9% 14243Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 66m 0% 14073Mi 23% gke-xlou-cdm-ds-32e4dcb1-l2t2 62m 0% 943Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 59m 0% 979Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 65m 0% 14123Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 178m 1% 2372Mi 4% 04:14:12 DEBUG --- 
stderr --- 04:14:12 DEBUG 04:15:11 INFO 04:15:11 INFO [loop_until]: kubectl --namespace=xlou top pods 04:15:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:15:11 INFO [loop_until]: OK (rc = 0) 04:15:11 DEBUG --- stdout --- 04:15:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 9m 2858Mi am-869fdb5db9-8dg94 7m 4467Mi am-869fdb5db9-wt7sg 6m 3295Mi ds-cts-0 6m 404Mi ds-cts-1 7m 372Mi ds-cts-2 6m 376Mi ds-idrepo-0 1875m 13823Mi ds-idrepo-1 11m 13673Mi ds-idrepo-2 11m 13643Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 1446m 3450Mi idm-65858d8c4c-pt5s9 1309m 3559Mi lodemon-66684b7694-c5c6m 6m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 128m 575Mi 04:15:11 DEBUG --- stderr --- 04:15:11 DEBUG 04:15:12 INFO 04:15:12 INFO [loop_until]: kubectl --namespace=xlou top node 04:15:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:15:12 INFO [loop_until]: OK (rc = 0) 04:15:12 DEBUG --- stdout --- 04:15:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 69m 0% 3641Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 1571m 9% 4119Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 78m 0% 988Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 61m 0% 4150Mi 7% gke-xlou-cdm-default-pool-f05840a3-tnc9 214m 1% 2575Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 67m 0% 5275Mi 8% gke-xlou-cdm-default-pool-f05840a3-zj9v 1274m 8% 4229Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 61m 0% 945Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 1845m 11% 14241Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 62m 0% 14076Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 944Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 58m 0% 979Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 59m 0% 14125Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 199m 1% 2373Mi 4% 04:15:12 DEBUG --- stderr --- 04:15:12 DEBUG 04:16:11 INFO 04:16:11 INFO [loop_until]: kubectl --namespace=xlou top pods 04:16:11 INFO [loop_until]: (max_time=180, interval=5, 
expected_rc=[0] 04:16:11 INFO [loop_until]: OK (rc = 0) 04:16:11 DEBUG --- stdout --- 04:16:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 6m 2857Mi am-869fdb5db9-8dg94 9m 4467Mi am-869fdb5db9-wt7sg 8m 3309Mi ds-cts-0 6m 404Mi ds-cts-1 7m 373Mi ds-cts-2 6m 375Mi ds-idrepo-0 1548m 13799Mi ds-idrepo-1 13m 13672Mi ds-idrepo-2 11m 13643Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 1365m 3444Mi idm-65858d8c4c-pt5s9 976m 3561Mi lodemon-66684b7694-c5c6m 6m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 118m 575Mi 04:16:11 DEBUG --- stderr --- 04:16:11 DEBUG 04:16:12 INFO 04:16:12 INFO [loop_until]: kubectl --namespace=xlou top node 04:16:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:16:12 INFO [loop_until]: OK (rc = 0) 04:16:12 DEBUG --- stdout --- 04:16:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 61m 0% 3643Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 1458m 9% 4111Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 80m 0% 988Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 59m 0% 4162Mi 7% gke-xlou-cdm-default-pool-f05840a3-tnc9 213m 1% 2576Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 67m 0% 5275Mi 8% gke-xlou-cdm-default-pool-f05840a3-zj9v 1180m 7% 4229Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 60m 0% 948Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 1595m 10% 14239Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14073Mi 23% gke-xlou-cdm-ds-32e4dcb1-l2t2 58m 0% 943Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 59m 0% 978Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 68m 0% 14124Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 189m 1% 2371Mi 4% 04:16:12 DEBUG --- stderr --- 04:16:12 DEBUG 04:17:11 INFO 04:17:11 INFO [loop_until]: kubectl --namespace=xlou top pods 04:17:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:17:11 INFO [loop_until]: OK (rc = 0) 04:17:11 DEBUG --- stdout --- 04:17:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi 
am-869fdb5db9-5j69v 6m 2858Mi
am-869fdb5db9-8dg94 12m 4467Mi
am-869fdb5db9-wt7sg 7m 3318Mi
ds-cts-0 6m 403Mi
ds-cts-1 7m 372Mi
ds-cts-2 6m 375Mi
ds-idrepo-0 1771m 13806Mi
ds-idrepo-1 12m 13672Mi
ds-idrepo-2 17m 13643Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 1366m 3467Mi
idm-65858d8c4c-pt5s9 1173m 3579Mi
lodemon-66684b7694-c5c6m 6m 67Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 123m 575Mi
04:17:11 DEBUG --- stderr ---
04:17:11 DEBUG
04:17:12 INFO
04:17:12 INFO [loop_until]: kubectl --namespace=xlou top node
04:17:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:17:12 INFO [loop_until]: OK (rc = 0)
04:17:12 DEBUG --- stdout ---
04:17:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 64m 0% 3641Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 1428m 8% 4135Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 78m 0% 989Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 60m 0% 4174Mi 7%
gke-xlou-cdm-default-pool-f05840a3-tnc9 209m 1% 2576Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 73m 0% 5271Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 1225m 7% 4249Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 59m 0% 947Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 1590m 10% 14248Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 64m 0% 14074Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 58m 0% 941Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 59m 0% 977Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 66m 0% 14124Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 189m 1% 2372Mi 4%
04:17:12 DEBUG --- stderr ---
04:17:12 DEBUG
04:18:11 INFO
04:18:11 INFO [loop_until]: kubectl --namespace=xlou top pods
04:18:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:18:11 INFO [loop_until]: OK (rc = 0)
04:18:11 DEBUG --- stdout ---
04:18:11 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 6m 2857Mi
am-869fdb5db9-8dg94 8m 4467Mi
am-869fdb5db9-wt7sg 14m 3328Mi
ds-cts-0 5m 404Mi
ds-cts-1 7m 372Mi
ds-cts-2 6m 375Mi
ds-idrepo-0 1576m 13806Mi
ds-idrepo-1 12m 13673Mi
ds-idrepo-2 14m 13643Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 1424m 3448Mi
idm-65858d8c4c-pt5s9 1093m 3564Mi
lodemon-66684b7694-c5c6m 5m 67Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 124m 575Mi
04:18:11 DEBUG --- stderr ---
04:18:11 DEBUG
04:18:12 INFO
04:18:12 INFO [loop_until]: kubectl --namespace=xlou top node
04:18:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:18:13 INFO [loop_until]: OK (rc = 0)
04:18:13 DEBUG --- stdout ---
04:18:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 61m 0% 3640Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 1434m 9% 4116Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 78m 0% 990Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 65m 0% 4188Mi 7%
gke-xlou-cdm-default-pool-f05840a3-tnc9 197m 1% 2572Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 61m 0% 5275Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 1134m 7% 4232Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 60m 0% 945Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 1614m 10% 14248Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 67m 0% 14074Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 942Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 58m 0% 974Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 65m 0% 14124Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 192m 1% 2373Mi 4%
04:18:13 DEBUG --- stderr ---
04:18:13 DEBUG
04:19:11 INFO
04:19:11 INFO [loop_until]: kubectl --namespace=xlou top pods
04:19:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:19:11 INFO [loop_until]: OK (rc = 0)
04:19:11 DEBUG --- stdout ---
04:19:11 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 8m 2858Mi
am-869fdb5db9-8dg94 7m 4467Mi
am-869fdb5db9-wt7sg 6m 3332Mi
ds-cts-0 6m 404Mi
ds-cts-1 7m 372Mi
ds-cts-2 6m 375Mi
ds-idrepo-0 1411m 13803Mi
ds-idrepo-1 12m 13672Mi
ds-idrepo-2 11m 13643Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 1208m 3449Mi
idm-65858d8c4c-pt5s9 951m 3564Mi
lodemon-66684b7694-c5c6m 12m 67Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 113m 575Mi
04:19:11 DEBUG --- stderr ---
04:19:11 DEBUG
04:19:13 INFO
04:19:13 INFO [loop_until]: kubectl --namespace=xlou top node
04:19:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:19:13 INFO [loop_until]: OK (rc = 0)
04:19:13 DEBUG --- stdout ---
04:19:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 65m 0% 3643Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 1320m 8% 4117Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 77m 0% 988Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 60m 0% 4187Mi 7%
gke-xlou-cdm-default-pool-f05840a3-tnc9 213m 1% 2575Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 66m 0% 5275Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 1021m 6% 4234Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 61m 0% 946Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 1438m 9% 14245Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 73m 0% 14073Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 60m 0% 942Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 60m 0% 975Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 65m 0% 14124Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 182m 1% 2373Mi 4%
04:19:13 DEBUG --- stderr ---
04:19:13 DEBUG
04:20:11 INFO
04:20:11 INFO [loop_until]: kubectl --namespace=xlou top pods
04:20:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:20:11 INFO [loop_until]: OK (rc = 0)
04:20:11 DEBUG --- stdout ---
04:20:11 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 5m 2858Mi
am-869fdb5db9-8dg94 7m 4468Mi
am-869fdb5db9-wt7sg 6m 3343Mi
ds-cts-0 6m 404Mi
ds-cts-1 7m 372Mi
ds-cts-2 6m 375Mi
ds-idrepo-0 1928m 13805Mi
ds-idrepo-1 12m 13673Mi
ds-idrepo-2 12m 13643Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 1595m 3452Mi
idm-65858d8c4c-pt5s9 1175m 3566Mi
lodemon-66684b7694-c5c6m 7m 67Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 132m 575Mi
04:20:11 DEBUG --- stderr ---
04:20:11 DEBUG
04:20:13 INFO
04:20:13 INFO [loop_until]: kubectl --namespace=xlou top node
04:20:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:20:13 INFO [loop_until]: OK (rc = 0)
04:20:13 DEBUG --- stdout ---
04:20:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 65m 0% 3642Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 1559m 9% 4117Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 81m 0% 992Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 61m 0% 4204Mi 7%
gke-xlou-cdm-default-pool-f05840a3-tnc9 221m 1% 2572Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 67m 0% 5275Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 1275m 8% 4232Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 60m 0% 948Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 1886m 11% 14239Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 74m 0% 14076Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 56m 0% 943Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 59m 0% 976Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 64m 0% 14126Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 196m 1% 2376Mi 4%
04:20:13 DEBUG --- stderr ---
04:20:13 DEBUG
04:21:11 INFO
04:21:11 INFO [loop_until]: kubectl --namespace=xlou top pods
04:21:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:21:11 INFO [loop_until]: OK (rc = 0)
04:21:11 DEBUG --- stdout ---
04:21:11 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 7m 2858Mi
am-869fdb5db9-8dg94 9m 4468Mi
am-869fdb5db9-wt7sg 6m 3353Mi
ds-cts-0 6m 404Mi
ds-cts-1 7m 373Mi
ds-cts-2 6m 375Mi
ds-idrepo-0 1805m 13801Mi
ds-idrepo-1 12m 13672Mi
ds-idrepo-2 14m 13643Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 1595m 3453Mi
idm-65858d8c4c-pt5s9 1125m 3575Mi
lodemon-66684b7694-c5c6m 6m 67Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 134m 576Mi
04:21:11 DEBUG --- stderr ---
04:21:11 DEBUG
04:21:13 INFO
04:21:13 INFO [loop_until]: kubectl --namespace=xlou top node
04:21:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:21:13 INFO [loop_until]: OK (rc = 0)
04:21:13 DEBUG --- stdout ---
04:21:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 64m 0% 3643Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 1629m 10% 4121Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 78m 0% 991Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 58m 0% 4213Mi 7%
gke-xlou-cdm-default-pool-f05840a3-tnc9 223m 1% 2574Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 73m 0% 5274Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 1240m 7% 4240Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 57m 0% 949Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 1741m 10% 14263Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 74m 0% 14077Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 941Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 59m 0% 979Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 64m 0% 14125Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 201m 1% 2373Mi 4%
04:21:13 DEBUG --- stderr ---
04:21:13 DEBUG
04:22:11 INFO
04:22:11 INFO [loop_until]: kubectl --namespace=xlou top pods
04:22:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:22:11 INFO [loop_until]: OK (rc = 0)
04:22:11 DEBUG --- stdout ---
04:22:11 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 6m 2858Mi
am-869fdb5db9-8dg94 8m 4468Mi
am-869fdb5db9-wt7sg 7m 3364Mi
ds-cts-0 6m 404Mi
ds-cts-1 7m 373Mi
ds-cts-2 6m 376Mi
ds-idrepo-0 1695m 13812Mi
ds-idrepo-1 12m 13672Mi
ds-idrepo-2 12m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 1293m 3457Mi
idm-65858d8c4c-pt5s9 1262m 3570Mi
lodemon-66684b7694-c5c6m 6m 67Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 130m 575Mi
04:22:11 DEBUG --- stderr ---
04:22:11 DEBUG
04:22:13 INFO
04:22:13 INFO [loop_until]: kubectl --namespace=xlou top node
04:22:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:22:13 INFO [loop_until]: OK (rc = 0)
04:22:13 DEBUG --- stdout ---
04:22:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 64m 0% 3639Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 1363m 8% 4126Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 77m 0% 992Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 64m 0% 4222Mi 7%
gke-xlou-cdm-default-pool-f05840a3-tnc9 223m 1% 2575Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 75m 0% 5274Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 1391m 8% 4238Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 61m 0% 950Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 1725m 10% 14253Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 76m 0% 14079Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 61m 0% 944Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 59m 0% 977Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 66m 0% 14128Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 195m 1% 2372Mi 4%
04:22:13 DEBUG --- stderr ---
04:22:13 DEBUG
04:23:11 INFO
04:23:11 INFO [loop_until]: kubectl --namespace=xlou top pods
04:23:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:23:11 INFO [loop_until]: OK (rc = 0)
04:23:11 DEBUG --- stdout ---
04:23:11 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 8m 2858Mi
am-869fdb5db9-8dg94 7m 4468Mi
am-869fdb5db9-wt7sg 6m 3376Mi
ds-cts-0 13m 405Mi
ds-cts-1 7m 373Mi
ds-cts-2 6m 376Mi
ds-idrepo-0 1505m 13801Mi
ds-idrepo-1 12m 13672Mi
ds-idrepo-2 12m 13643Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 1147m 3460Mi
idm-65858d8c4c-pt5s9 1170m 3572Mi
lodemon-66684b7694-c5c6m 8m 67Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 116m 575Mi
04:23:11 DEBUG --- stderr ---
04:23:11 DEBUG
04:23:13 INFO
04:23:13 INFO [loop_until]: kubectl --namespace=xlou top node
04:23:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:23:13 INFO [loop_until]: OK (rc = 0)
04:23:13 DEBUG --- stdout ---
04:23:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 66m 0% 3639Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 1203m 7% 4125Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 76m 0% 990Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 60m 0% 4233Mi 7%
gke-xlou-cdm-default-pool-f05840a3-tnc9 218m 1% 2570Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 73m 0% 5275Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 1221m 7% 4247Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 60m 0% 948Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 1631m 10% 14238Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 75m 0% 14075Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 60m 0% 944Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 66m 0% 978Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 63m 0% 14126Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 194m 1% 2385Mi 4%
04:23:13 DEBUG --- stderr ---
04:23:13 DEBUG
04:24:12 INFO
04:24:12 INFO [loop_until]: kubectl --namespace=xlou top pods
04:24:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:24:12 INFO [loop_until]: OK (rc = 0)
04:24:12 DEBUG --- stdout ---
04:24:12 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 6m 2858Mi
am-869fdb5db9-8dg94 7m 4468Mi
am-869fdb5db9-wt7sg 8m 3387Mi
ds-cts-0 6m 404Mi
ds-cts-1 7m 373Mi
ds-cts-2 6m 376Mi
ds-idrepo-0 1605m 13822Mi
ds-idrepo-1 11m 13672Mi
ds-idrepo-2 12m 13643Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 1290m 3467Mi
idm-65858d8c4c-pt5s9 1119m 3574Mi
lodemon-66684b7694-c5c6m 7m 67Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 130m 575Mi
04:24:12 DEBUG --- stderr ---
04:24:12 DEBUG
04:24:13 INFO
04:24:13 INFO [loop_until]: kubectl --namespace=xlou top node
04:24:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:24:13 INFO [loop_until]: OK (rc = 0)
04:24:13 DEBUG --- stdout ---
04:24:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 65m 0% 3644Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 1468m 9% 4133Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 81m 0% 993Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 64m 0% 4244Mi 7%
gke-xlou-cdm-default-pool-f05840a3-tnc9 217m 1% 2574Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 76m 0% 5274Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 1184m 7% 4243Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 62m 0% 946Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 1632m 10% 14254Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 77m 0% 14077Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 944Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 63m 0% 977Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 68m 0% 14128Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 192m 1% 2375Mi 4%
04:24:13 DEBUG --- stderr ---
04:24:13 DEBUG
04:25:12 INFO
04:25:12 INFO [loop_until]: kubectl --namespace=xlou top pods
04:25:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:25:12 INFO [loop_until]: OK (rc = 0)
04:25:12 DEBUG --- stdout ---
04:25:12 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 7m 2858Mi
am-869fdb5db9-8dg94 9m 4471Mi
am-869fdb5db9-wt7sg 8m 3395Mi
ds-cts-0 6m 404Mi
ds-cts-1 7m 373Mi
ds-cts-2 6m 375Mi
ds-idrepo-0 1152m 13822Mi
ds-idrepo-1 20m 13672Mi
ds-idrepo-2 27m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 864m 3462Mi
idm-65858d8c4c-pt5s9 579m 3574Mi
lodemon-66684b7694-c5c6m 6m 67Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 116m 576Mi
04:25:12 DEBUG --- stderr ---
04:25:12 DEBUG
04:25:13 INFO
04:25:13 INFO [loop_until]: kubectl --namespace=xlou top node
04:25:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:25:13 INFO [loop_until]: OK (rc = 0)
04:25:13 DEBUG --- stdout ---
04:25:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 63m 0% 3643Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 789m 4% 4130Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 73m 0% 987Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 65m 0% 4256Mi 7%
gke-xlou-cdm-default-pool-f05840a3-tnc9 162m 1% 2570Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 77m 0% 5276Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 598m 3% 4246Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 62m 0% 949Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 1043m 6% 14268Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 80m 0% 14081Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 948Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 60m 0% 976Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 73m 0% 14128Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 150m 0% 2377Mi 4%
04:25:13 DEBUG --- stderr ---
04:25:13 DEBUG
04:26:12 INFO
04:26:12 INFO [loop_until]: kubectl --namespace=xlou top pods
04:26:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:26:12 INFO [loop_until]: OK (rc = 0)
04:26:12 DEBUG --- stdout ---
04:26:12 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 6m 2858Mi
am-869fdb5db9-8dg94 8m 4473Mi
am-869fdb5db9-wt7sg 8m 3407Mi
ds-cts-0 6m 404Mi
ds-cts-1 7m 373Mi
ds-cts-2 6m 375Mi
ds-idrepo-0 13m 13822Mi
ds-idrepo-1 12m 13673Mi
ds-idrepo-2 12m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 6m 3462Mi
idm-65858d8c4c-pt5s9 7m 3574Mi
lodemon-66684b7694-c5c6m 6m 67Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 1m 103Mi
04:26:12 DEBUG --- stderr ---
04:26:12 DEBUG
04:26:13 INFO
04:26:13 INFO [loop_until]: kubectl --namespace=xlou top node
04:26:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:26:13 INFO [loop_until]: OK (rc = 0)
04:26:13 DEBUG --- stdout ---
04:26:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 64m 0% 3642Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 84m 0% 4131Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 73m 0% 989Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 62m 0% 4265Mi 7%
gke-xlou-cdm-default-pool-f05840a3-tnc9 110m 0% 2573Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 73m 0% 5277Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 71m 0% 4247Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 61m 0% 948Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 63m 0% 14268Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 72m 0% 14079Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 947Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 59m 0% 980Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 66m 0% 14126Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 65m 0% 1907Mi 3%
04:26:13 DEBUG --- stderr ---
04:26:13 DEBUG
127.0.0.1 - - [16/Aug/2023 04:26:45] "GET /monitoring/average?start_time=23-08-16_02:56:20&stop_time=23-08-16_03:24:45 HTTP/1.1" 200 -
04:27:12 INFO
04:27:12 INFO [loop_until]: kubectl --namespace=xlou top pods
04:27:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:27:12 INFO [loop_until]: OK (rc = 0)
04:27:12 DEBUG --- stdout ---
04:27:12 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 7m 2858Mi
am-869fdb5db9-8dg94 8m 4473Mi
am-869fdb5db9-wt7sg 7m 3416Mi
ds-cts-0 6m 404Mi
ds-cts-1 7m 373Mi
ds-cts-2 6m 375Mi
ds-idrepo-0 11m 13822Mi
ds-idrepo-1 12m 13672Mi
ds-idrepo-2 12m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 6m 3462Mi
idm-65858d8c4c-pt5s9 10m 3574Mi
lodemon-66684b7694-c5c6m 6m 67Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 1m 103Mi
04:27:12 DEBUG --- stderr ---
04:27:12 DEBUG
04:27:14 INFO
04:27:14 INFO [loop_until]: kubectl --namespace=xlou top node
04:27:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:27:14 INFO [loop_until]: OK (rc = 0)
04:27:14 DEBUG --- stdout ---
04:27:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 68m 0% 3650Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 87m 0% 4128Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 83m 0% 992Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 61m 0% 4278Mi 7%
gke-xlou-cdm-default-pool-f05840a3-tnc9 111m 0% 2575Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 76m 0% 5279Mi 9%
gke-xlou-cdm-default-pool-f05840a3-zj9v 75m 0% 4244Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 62m 0% 948Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 61m 0% 14268Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 76m 0% 14078Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 945Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 59m 0% 979Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 62m 0% 14130Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 69m 0% 1908Mi 3%
04:27:14 DEBUG --- stderr ---
04:27:14 DEBUG
04:28:12 INFO
04:28:12 INFO [loop_until]: kubectl --namespace=xlou top pods
04:28:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:28:12 INFO [loop_until]: OK (rc = 0)
04:28:12 DEBUG --- stdout ---
04:28:12 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 12m 2858Mi
am-869fdb5db9-8dg94 8m 4473Mi
am-869fdb5db9-wt7sg 7m 3427Mi
ds-cts-0 8m 404Mi
ds-cts-1 7m 373Mi
ds-cts-2 7m 375Mi
ds-idrepo-0 1579m 13823Mi
ds-idrepo-1 12m 13672Mi
ds-idrepo-2 12m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 1704m 3481Mi
idm-65858d8c4c-pt5s9 1043m 3586Mi
lodemon-66684b7694-c5c6m 7m 67Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 407m 488Mi
04:28:12 DEBUG --- stderr ---
04:28:12 DEBUG
04:28:14 INFO
04:28:14 INFO [loop_until]: kubectl --namespace=xlou top node
04:28:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:28:14 INFO [loop_until]: OK (rc = 0)
04:28:14 DEBUG --- stdout ---
04:28:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 65m 0% 3644Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 1848m 11% 4145Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 83m 0% 988Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 61m 0% 4289Mi 7%
gke-xlou-cdm-default-pool-f05840a3-tnc9 213m 1% 2570Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 77m 0% 5280Mi 9%
gke-xlou-cdm-default-pool-f05840a3-zj9v 1373m 8% 4258Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 60m 0% 946Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 2075m 13% 14267Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 70m 0% 14080Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 58m 0% 946Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 57m 0% 980Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 65m 0% 14128Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 409m 2% 2290Mi 3%
04:28:14 DEBUG --- stderr ---
04:28:14 DEBUG
04:29:12 INFO
04:29:12 INFO [loop_until]: kubectl --namespace=xlou top pods
04:29:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:29:12 INFO [loop_until]: OK (rc = 0)
04:29:12 DEBUG --- stdout ---
04:29:12 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 6m 2858Mi
am-869fdb5db9-8dg94 8m 4473Mi
am-869fdb5db9-wt7sg 6m 3437Mi
ds-cts-0 6m 404Mi
ds-cts-1 7m 373Mi
ds-cts-2 6m 375Mi
ds-idrepo-0 2379m 13822Mi
ds-idrepo-1 11m 13672Mi
ds-idrepo-2 12m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 1773m 3470Mi
idm-65858d8c4c-pt5s9 1668m 3579Mi
lodemon-66684b7694-c5c6m 5m 67Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 227m 496Mi
04:29:12 DEBUG --- stderr ---
04:29:12 DEBUG
04:29:14 INFO
04:29:14 INFO [loop_until]: kubectl --namespace=xlou top node
04:29:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:29:14 INFO [loop_until]: OK (rc = 0)
04:29:14 DEBUG --- stdout ---
04:29:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 61m 0% 3645Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 2133m 13% 4138Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 79m 0% 990Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 63m 0% 4304Mi 7%
gke-xlou-cdm-default-pool-f05840a3-tnc9 264m 1% 2575Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 74m 0% 5280Mi 9%
gke-xlou-cdm-default-pool-f05840a3-zj9v 1555m 9% 4251Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 60m 0% 949Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 2450m 15% 14263Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 62m 0% 14081Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 63m 0% 943Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 55m 0% 979Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 63m 0% 14129Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 284m 1% 2300Mi 3%
04:29:14 DEBUG --- stderr ---
04:29:14 DEBUG
04:30:12 INFO
04:30:12 INFO [loop_until]: kubectl --namespace=xlou top pods
04:30:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:30:12 INFO [loop_until]: OK (rc = 0)
04:30:12 DEBUG --- stdout ---
04:30:12 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 6m 2858Mi
am-869fdb5db9-8dg94 8m 4473Mi
am-869fdb5db9-wt7sg 6m 3448Mi
ds-cts-0 5m 404Mi
ds-cts-1 9m 375Mi
ds-cts-2 6m 376Mi
ds-idrepo-0 2411m 13810Mi
ds-idrepo-1 12m 13673Mi
ds-idrepo-2 12m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 1841m 3472Mi
idm-65858d8c4c-pt5s9 1703m 3582Mi
lodemon-66684b7694-c5c6m 7m 67Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 239m 497Mi
04:30:12 DEBUG --- stderr ---
04:30:12 DEBUG
04:30:14 INFO
04:30:14 INFO [loop_until]: kubectl --namespace=xlou top node
04:30:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:30:14 INFO [loop_until]: OK (rc = 0)
04:30:14 DEBUG --- stdout ---
04:30:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 62m 0% 3642Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 1962m 12% 4150Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 83m 0% 987Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 64m 0% 4313Mi 7%
gke-xlou-cdm-default-pool-f05840a3-tnc9 254m 1% 2574Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 71m 0% 5278Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 1550m 9% 4251Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 57m 0% 951Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 2288m 14% 14255Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 65m 0% 14079Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 62m 0% 949Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 55m 0% 978Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 61m 0% 14132Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 280m 1% 2298Mi 3%
04:30:14 DEBUG --- stderr ---
04:30:14 DEBUG
04:31:12 INFO
04:31:12 INFO [loop_until]: kubectl --namespace=xlou top pods
04:31:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:31:12 INFO [loop_until]: OK (rc = 0)
04:31:12 DEBUG --- stdout ---
04:31:12 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 7m 2858Mi
am-869fdb5db9-8dg94 9m 4473Mi
am-869fdb5db9-wt7sg 6m 3457Mi
ds-cts-0 6m 404Mi
ds-cts-1 7m 375Mi
ds-cts-2 6m 375Mi
ds-idrepo-0 2262m 13808Mi
ds-idrepo-1 12m 13672Mi
ds-idrepo-2 12m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 1865m 3476Mi
idm-65858d8c4c-pt5s9 1566m 3584Mi
lodemon-66684b7694-c5c6m 7m 67Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 191m 500Mi
04:31:12 DEBUG --- stderr ---
04:31:12 DEBUG
04:31:14 INFO
04:31:14 INFO [loop_until]: kubectl --namespace=xlou top node
04:31:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:31:14 INFO [loop_until]: OK (rc = 0)
04:31:14 DEBUG --- stdout ---
04:31:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 61m 0% 3642Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 1988m 12% 4153Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 75m 0% 987Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 63m 0% 4323Mi 7%
gke-xlou-cdm-default-pool-f05840a3-tnc9 256m 1% 2577Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 70m 0% 5275Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 1818m 11% 4257Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 54m 0% 950Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 2387m 15% 14269Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 62m 0% 14080Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 56m 0% 947Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 57m 0% 974Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 61m 0% 14131Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 242m 1% 2301Mi 3%
04:31:14 DEBUG --- stderr ---
04:31:14 DEBUG
04:32:12 INFO
04:32:12 INFO [loop_until]: kubectl --namespace=xlou top pods
04:32:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:32:12 INFO [loop_until]: OK (rc = 0)
04:32:12 DEBUG --- stdout ---
04:32:12 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 6m 2858Mi
am-869fdb5db9-8dg94 8m 4473Mi
am-869fdb5db9-wt7sg 6m 3470Mi
ds-cts-0 7m 400Mi
ds-cts-1 7m 375Mi
ds-cts-2 6m 376Mi
ds-idrepo-0 2729m 13822Mi
ds-idrepo-1 12m 13672Mi
ds-idrepo-2 19m 13638Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 1951m 3481Mi
idm-65858d8c4c-pt5s9 1482m 3586Mi
lodemon-66684b7694-c5c6m 5m 67Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 167m 501Mi
04:32:12 DEBUG --- stderr ---
04:32:12 DEBUG
04:32:14 INFO
04:32:14 INFO [loop_until]: kubectl --namespace=xlou top node
04:32:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:32:14 INFO [loop_until]: OK (rc = 0)
04:32:14 DEBUG --- stdout ---
04:32:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 68m 0% 3641Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 2105m 13% 4144Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 78m 0% 986Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 63m 0% 4335Mi 7%
gke-xlou-cdm-default-pool-f05840a3-tnc9 248m 1% 2578Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 71m 0% 5277Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 1563m 9% 4255Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 55m 0% 948Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 2362m 14% 14249Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14072Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 58m 0% 948Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 55m 0% 975Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 69m 0% 14133Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 234m 1% 2301Mi 3%
04:32:14 DEBUG --- stderr ---
04:32:14 DEBUG
04:33:12 INFO
04:33:12 INFO [loop_until]: kubectl --namespace=xlou top pods
04:33:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:33:13 INFO [loop_until]: OK (rc = 0)
04:33:13 DEBUG --- stdout ---
04:33:13 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 7m 2858Mi
am-869fdb5db9-8dg94 8m 4473Mi
am-869fdb5db9-wt7sg 8m 3481Mi
ds-cts-0 7m 400Mi
ds-cts-1 7m 375Mi
ds-cts-2 6m 375Mi
ds-idrepo-0 2451m 13823Mi
ds-idrepo-1 12m 13673Mi
ds-idrepo-2 13m 13637Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 1993m 3484Mi
idm-65858d8c4c-pt5s9 1742m 3595Mi
lodemon-66684b7694-c5c6m 4m 67Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 187m 502Mi
04:33:13 DEBUG --- stderr ---
04:33:13 DEBUG
04:33:14 INFO
04:33:14 INFO [loop_until]: kubectl --namespace=xlou top node
04:33:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:33:14 INFO [loop_until]: OK (rc = 0)
04:33:14 DEBUG --- stdout ---
04:33:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 62m 0% 3641Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 1829m 11% 4146Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 77m 0% 990Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 63m 0% 4343Mi 7%
gke-xlou-cdm-default-pool-f05840a3-tnc9 253m 1% 2578Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 70m 0% 5276Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 1606m 10% 4257Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 56m 0% 948Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 2258m 14% 14247Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 62m 0% 14075Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 61m 0% 945Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 56m 0% 977Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 63m 0% 14132Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 230m 1% 2300Mi 3%
04:33:14 DEBUG --- stderr ---
04:33:14 DEBUG
04:34:13 INFO
04:34:13 INFO [loop_until]: kubectl --namespace=xlou top pods
04:34:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:34:13 INFO [loop_until]: OK (rc = 0)
04:34:13 DEBUG --- stdout ---
04:34:13 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 7m 2858Mi
am-869fdb5db9-8dg94 9m 4474Mi
am-869fdb5db9-wt7sg 6m 3490Mi
ds-cts-0 6m 400Mi
ds-cts-1 7m 375Mi
ds-cts-2 6m 376Mi
ds-idrepo-0 2387m 13803Mi
ds-idrepo-1 14m 13672Mi
ds-idrepo-2 11m 13638Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 1822m 3485Mi
idm-65858d8c4c-pt5s9 1489m 3590Mi
lodemon-66684b7694-c5c6m 7m 67Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 160m 502Mi
04:34:13 DEBUG --- stderr ---
04:34:13 DEBUG
04:34:14 INFO
04:34:14 INFO [loop_until]: kubectl --namespace=xlou top node
04:34:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:34:14 INFO [loop_until]: OK (rc = 0)
04:34:14 DEBUG --- stdout ---
04:34:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 61m 0% 3646Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 2052m 12% 4151Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 80m 0% 989Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 61m 0% 4352Mi 7%
gke-xlou-cdm-default-pool-f05840a3-tnc9 263m 1% 2576Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 71m 0% 5279Mi 9%
gke-xlou-cdm-default-pool-f05840a3-zj9v 1664m 10% 4257Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 60m 0% 948Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 2323m 14% 14251Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14075Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 60m 0% 947Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 56m 0% 973Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 61m 0% 14133Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 239m 1% 2302Mi 3%
04:34:14 DEBUG --- stderr ---
04:34:14 DEBUG
04:35:13 INFO
04:35:13 INFO [loop_until]: kubectl --namespace=xlou top pods
04:35:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:35:13 INFO [loop_until]: OK (rc = 0)
04:35:13 DEBUG --- stdout ---
04:35:13 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 6m 2858Mi
am-869fdb5db9-8dg94 6m 4474Mi
am-869fdb5db9-wt7sg 6m 3500Mi
ds-cts-0 6m 400Mi
ds-cts-1 7m 375Mi
ds-cts-2 11m 378Mi
ds-idrepo-0 1940m 13802Mi
ds-idrepo-1 12m 13672Mi
ds-idrepo-2 11m 13637Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 1748m 3487Mi
idm-65858d8c4c-pt5s9 1430m 3592Mi
lodemon-66684b7694-c5c6m 7m 67Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 169m 503Mi
04:35:13 DEBUG --- stderr ---
04:35:13 DEBUG
04:35:14 INFO
04:35:14 INFO [loop_until]: kubectl --namespace=xlou top node
04:35:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:35:15 INFO [loop_until]: OK (rc = 0)
04:35:15 DEBUG --- stdout ---
04:35:15 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 62m 0% 3645Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 1926m 12% 4151Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 79m 0% 989Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 63m 0% 4364Mi 7%
gke-xlou-cdm-default-pool-f05840a3-tnc9 263m 1% 2578Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 71m 0% 5279Mi 9%
gke-xlou-cdm-default-pool-f05840a3-zj9v 1667m 10% 4261Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 54m 0% 950Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 2197m 13% 14258Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14076Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 946Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 53m 0% 976Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 61m 0% 14134Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 233m 1% 2302Mi 3%
04:35:15 DEBUG --- stderr ---
04:35:15 DEBUG
04:36:13 INFO
04:36:13 INFO [loop_until]: kubectl --namespace=xlou top pods
04:36:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:36:13 INFO [loop_until]: OK (rc = 0)
04:36:13 DEBUG --- stdout ---
04:36:13 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 7m 2858Mi
am-869fdb5db9-8dg94 7m 4474Mi
am-869fdb5db9-wt7sg 7m 3511Mi
ds-cts-0 7m 400Mi
ds-cts-1 7m 375Mi
ds-cts-2 6m 377Mi
ds-idrepo-0 2108m 13822Mi
ds-idrepo-1 12m 13672Mi
ds-idrepo-2 11m 13639Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 2049m 3490Mi
idm-65858d8c4c-pt5s9 1374m 3595Mi
lodemon-66684b7694-c5c6m 6m 67Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 169m 504Mi
04:36:13 DEBUG --- stderr ---
04:36:13 DEBUG
04:36:15 INFO
04:36:15 INFO [loop_until]: kubectl --namespace=xlou top node
04:36:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:36:15 INFO [loop_until]: OK (rc = 0)
04:36:15 DEBUG --- stdout ---
04:36:15 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 59m 0% 3646Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 1900m 11% 4154Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 80m 0% 993Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 70m 0% 4375Mi 7%
gke-xlou-cdm-default-pool-f05840a3-tnc9 270m 1% 2571Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 63m 0% 5276Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 1835m 11% 4261Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 55m 0% 951Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 2308m 14% 14250Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14076Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 946Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 55m 0% 975Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 63m 0% 14134Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 239m 1% 2302Mi 3%
04:36:15 DEBUG --- stderr ---
04:36:15 DEBUG
04:37:13 INFO
04:37:13 INFO [loop_until]: kubectl --namespace=xlou top pods
04:37:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:37:13 INFO [loop_until]: OK (rc = 0)
04:37:13 DEBUG --- stdout ---
04:37:13 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 6m 2859Mi
am-869fdb5db9-8dg94 8m 4474Mi
am-869fdb5db9-wt7sg 6m 3520Mi
ds-cts-0 6m 400Mi
ds-cts-1 7m 375Mi
ds-cts-2 6m 378Mi
ds-idrepo-0 2229m 13823Mi
ds-idrepo-1 11m 13672Mi
ds-idrepo-2 12m 13638Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 1777m 3493Mi
idm-65858d8c4c-pt5s9 1617m 3597Mi
lodemon-66684b7694-c5c6m 6m 67Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 161m 504Mi
04:37:13 DEBUG --- stderr ---
04:37:13 DEBUG
04:37:15 INFO
04:37:15 INFO [loop_until]: kubectl --namespace=xlou top node
04:37:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:37:15 INFO [loop_until]: OK (rc = 0)
04:37:15 DEBUG --- stdout ---
04:37:15 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 62m 0% 3646Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 1956m 12% 4163Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 79m 0% 994Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 60m 0% 4387Mi 7%
gke-xlou-cdm-default-pool-f05840a3-tnc9 255m 1% 2574Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 64m 0% 5277Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 1500m 9% 4265Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 57m 0% 948Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 2312m 14% 14270Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14073Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 61m 0% 947Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 58m 0% 976Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 63m 0% 14136Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 228m 1% 2304Mi 3%
04:37:15 DEBUG --- stderr ---
04:37:15 DEBUG
04:38:13 INFO
04:38:13 INFO [loop_until]: kubectl --namespace=xlou top pods
04:38:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:38:13 INFO [loop_until]: OK (rc = 0)
04:38:13 DEBUG --- stdout ---
04:38:13 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 7m 2858Mi
am-869fdb5db9-8dg94 9m 4474Mi
am-869fdb5db9-wt7sg 6m 3532Mi
ds-cts-0 6m 400Mi
ds-cts-1 7m 375Mi
ds-cts-2 6m 377Mi
ds-idrepo-0 2352m 13806Mi
ds-idrepo-1 12m 13672Mi
ds-idrepo-2 11m 13638Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 1912m 3501Mi
idm-65858d8c4c-pt5s9 1592m 3599Mi
lodemon-66684b7694-c5c6m 10m 67Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 175m 504Mi
04:38:13 DEBUG --- stderr ---
04:38:13 DEBUG
04:38:15 INFO
04:38:15 INFO [loop_until]: kubectl --namespace=xlou top node
04:38:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:38:15 INFO [loop_until]: OK (rc = 0)
04:38:15 DEBUG --- stdout ---
04:38:15 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 62m 0% 3644Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 2064m 12% 4169Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 80m 0% 991Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 64m 0% 4396Mi 7%
gke-xlou-cdm-default-pool-f05840a3-tnc9 254m 1% 2574Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 63m 0% 5278Mi 9%
gke-xlou-cdm-default-pool-f05840a3-zj9v 1351m 8% 4265Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 56m 0% 949Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 2101m 13% 14258Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14073Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 60m 0% 947Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 56m 0% 977Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 62m 0% 14136Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 220m 1% 2302Mi 3%
04:38:15 DEBUG --- stderr ---
04:38:15 DEBUG
04:39:13 INFO
04:39:13 INFO [loop_until]: kubectl --namespace=xlou top pods
04:39:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:39:13 INFO [loop_until]: OK (rc = 0)
04:39:13 DEBUG --- stdout ---
04:39:13 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 6m 2858Mi
am-869fdb5db9-8dg94 23m 4483Mi
am-869fdb5db9-wt7sg 6m 3544Mi
ds-cts-0 6m 400Mi
ds-cts-1 7m 375Mi
ds-cts-2 6m 377Mi
ds-idrepo-0 2205m 13799Mi
ds-idrepo-1 11m 13672Mi
ds-idrepo-2 11m 13638Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 1826m 3503Mi
idm-65858d8c4c-pt5s9 1592m 3600Mi
lodemon-66684b7694-c5c6m 7m 67Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 163m 504Mi
04:39:13 DEBUG --- stderr ---
04:39:13 DEBUG
04:39:15 INFO
04:39:15 INFO [loop_until]: kubectl --namespace=xlou top node
04:39:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0])
04:39:15 INFO [loop_until]: OK (rc = 0)
04:39:15 DEBUG --- stdout ---
04:39:15 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 59m 0% 3645Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 1891m 11% 4170Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 79m 0% 993Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 70m 0% 4414Mi 7%
gke-xlou-cdm-default-pool-f05840a3-tnc9 254m 1% 2572Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 62m 0% 5277Mi 8%
gke-xlou-cdm-default-pool-f05840a3-zj9v 1611m 10% 4269Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 55m 0% 952Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 2361m 14% 14270Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14074Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 61m 0% 945Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 55m 0% 975Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 65m 0% 14132Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 230m 1% 2304Mi 3%
04:39:15 DEBUG
--- stderr --- 04:39:15 DEBUG 04:40:13 INFO 04:40:13 INFO [loop_until]: kubectl --namespace=xlou top pods 04:40:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:40:13 INFO [loop_until]: OK (rc = 0) 04:40:13 DEBUG --- stdout --- 04:40:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 7m 2858Mi am-869fdb5db9-8dg94 10m 4474Mi am-869fdb5db9-wt7sg 8m 3560Mi ds-cts-0 6m 400Mi ds-cts-1 7m 375Mi ds-cts-2 6m 378Mi ds-idrepo-0 2296m 13822Mi ds-idrepo-1 12m 13672Mi ds-idrepo-2 20m 13638Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 1925m 3507Mi idm-65858d8c4c-pt5s9 1655m 3608Mi lodemon-66684b7694-c5c6m 7m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 167m 504Mi 04:40:13 DEBUG --- stderr --- 04:40:13 DEBUG 04:40:15 INFO 04:40:15 INFO [loop_until]: kubectl --namespace=xlou top node 04:40:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:40:15 INFO [loop_until]: OK (rc = 0) 04:40:15 DEBUG --- stdout --- 04:40:15 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 59m 0% 3644Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 1890m 11% 4169Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 81m 0% 986Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 62m 0% 4426Mi 7% gke-xlou-cdm-default-pool-f05840a3-tnc9 250m 1% 2572Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 64m 0% 5277Mi 8% gke-xlou-cdm-default-pool-f05840a3-zj9v 1460m 9% 4273Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 55m 0% 951Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 2031m 12% 14253Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 62m 0% 14074Mi 23% gke-xlou-cdm-ds-32e4dcb1-l2t2 61m 0% 947Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 56m 0% 977Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 64m 0% 14136Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 226m 1% 2304Mi 3% 04:40:15 DEBUG --- stderr --- 04:40:15 DEBUG 04:41:13 INFO 04:41:13 INFO [loop_until]: kubectl --namespace=xlou top pods 04:41:13 INFO [loop_until]: (max_time=180, interval=5, 
expected_rc=[0] 04:41:13 INFO [loop_until]: OK (rc = 0) 04:41:13 DEBUG --- stdout --- 04:41:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 6m 2858Mi am-869fdb5db9-8dg94 8m 4474Mi am-869fdb5db9-wt7sg 8m 3570Mi ds-cts-0 6m 400Mi ds-cts-1 7m 375Mi ds-cts-2 6m 377Mi ds-idrepo-0 2109m 13801Mi ds-idrepo-1 13m 13673Mi ds-idrepo-2 11m 13638Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 1719m 3508Mi idm-65858d8c4c-pt5s9 1507m 3605Mi lodemon-66684b7694-c5c6m 7m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 157m 504Mi 04:41:13 DEBUG --- stderr --- 04:41:13 DEBUG 04:41:15 INFO 04:41:15 INFO [loop_until]: kubectl --namespace=xlou top node 04:41:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:41:15 INFO [loop_until]: OK (rc = 0) 04:41:15 DEBUG --- stdout --- 04:41:15 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 59m 0% 3645Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 2194m 13% 4174Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 80m 0% 992Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 62m 0% 4435Mi 7% gke-xlou-cdm-default-pool-f05840a3-tnc9 261m 1% 2575Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 63m 0% 5279Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 1810m 11% 4275Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 55m 0% 951Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 2512m 15% 14253Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14075Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 60m 0% 946Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 54m 0% 975Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 62m 0% 14136Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 235m 1% 2305Mi 3% 04:41:15 DEBUG --- stderr --- 04:41:15 DEBUG 04:42:13 INFO 04:42:13 INFO [loop_until]: kubectl --namespace=xlou top pods 04:42:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:42:13 INFO [loop_until]: OK (rc = 0) 04:42:13 DEBUG --- stdout --- 04:42:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 5Mi 
am-869fdb5db9-5j69v 6m 2858Mi am-869fdb5db9-8dg94 8m 4474Mi am-869fdb5db9-wt7sg 6m 3580Mi ds-cts-0 6m 400Mi ds-cts-1 7m 375Mi ds-cts-2 6m 377Mi ds-idrepo-0 2050m 13801Mi ds-idrepo-1 12m 13672Mi ds-idrepo-2 11m 13638Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 1831m 3510Mi idm-65858d8c4c-pt5s9 1167m 3606Mi lodemon-66684b7694-c5c6m 6m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 159m 505Mi 04:42:13 DEBUG --- stderr --- 04:42:13 DEBUG 04:42:15 INFO 04:42:15 INFO [loop_until]: kubectl --namespace=xlou top node 04:42:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:42:15 INFO [loop_until]: OK (rc = 0) 04:42:15 DEBUG --- stdout --- 04:42:15 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 62m 0% 3641Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 1927m 12% 4175Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 84m 0% 1000Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 65m 0% 4444Mi 7% gke-xlou-cdm-default-pool-f05840a3-tnc9 260m 1% 2580Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 68m 0% 5280Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 1604m 10% 4273Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 55m 0% 951Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 2181m 13% 14257Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 60m 0% 14074Mi 23% gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 949Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 54m 0% 976Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 66m 0% 14136Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 232m 1% 2302Mi 3% 04:42:15 DEBUG --- stderr --- 04:42:15 DEBUG 04:43:13 INFO 04:43:13 INFO [loop_until]: kubectl --namespace=xlou top pods 04:43:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:43:13 INFO [loop_until]: OK (rc = 0) 04:43:13 DEBUG --- stdout --- 04:43:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 7m 2858Mi am-869fdb5db9-8dg94 7m 4474Mi am-869fdb5db9-wt7sg 6m 3589Mi ds-cts-0 6m 400Mi ds-cts-1 7m 376Mi ds-cts-2 6m 377Mi ds-idrepo-0 2189m 
13806Mi ds-idrepo-1 12m 13673Mi ds-idrepo-2 12m 13638Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 1796m 3513Mi idm-65858d8c4c-pt5s9 1419m 3608Mi lodemon-66684b7694-c5c6m 6m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 164m 506Mi 04:43:13 DEBUG --- stderr --- 04:43:13 DEBUG 04:43:16 INFO 04:43:16 INFO [loop_until]: kubectl --namespace=xlou top node 04:43:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:43:16 INFO [loop_until]: OK (rc = 0) 04:43:16 DEBUG --- stdout --- 04:43:16 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 61m 0% 3645Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 1866m 11% 4178Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 86m 0% 993Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 65m 0% 4456Mi 7% gke-xlou-cdm-default-pool-f05840a3-tnc9 254m 1% 2577Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 63m 0% 5282Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 1665m 10% 4272Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 54m 0% 951Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 2260m 14% 14248Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14078Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 60m 0% 951Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 55m 0% 976Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 64m 0% 14136Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 238m 1% 2303Mi 3% 04:43:16 DEBUG --- stderr --- 04:43:16 DEBUG 04:44:14 INFO 04:44:14 INFO [loop_until]: kubectl --namespace=xlou top pods 04:44:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:44:14 INFO [loop_until]: OK (rc = 0) 04:44:14 DEBUG --- stdout --- 04:44:14 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 6m 2858Mi am-869fdb5db9-8dg94 8m 4475Mi am-869fdb5db9-wt7sg 6m 3601Mi ds-cts-0 6m 400Mi ds-cts-1 6m 375Mi ds-cts-2 6m 378Mi ds-idrepo-0 1919m 13803Mi ds-idrepo-1 12m 13672Mi ds-idrepo-2 13m 13638Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 1763m 3515Mi idm-65858d8c4c-pt5s9 1366m 3612Mi 
lodemon-66684b7694-c5c6m 6m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 163m 505Mi 04:44:14 DEBUG --- stderr --- 04:44:14 DEBUG 04:44:16 INFO 04:44:16 INFO [loop_until]: kubectl --namespace=xlou top node 04:44:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:44:16 INFO [loop_until]: OK (rc = 0) 04:44:16 DEBUG --- stdout --- 04:44:16 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 61m 0% 3646Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 2015m 12% 4181Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 79m 0% 993Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 64m 0% 4463Mi 7% gke-xlou-cdm-default-pool-f05840a3-tnc9 254m 1% 2573Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 64m 0% 5283Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 1676m 10% 4279Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 56m 0% 951Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 2255m 14% 14258Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14077Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 947Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 55m 0% 977Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 69m 0% 14137Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 241m 1% 2303Mi 3% 04:44:16 DEBUG --- stderr --- 04:44:16 DEBUG 04:45:14 INFO 04:45:14 INFO [loop_until]: kubectl --namespace=xlou top pods 04:45:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:45:14 INFO [loop_until]: OK (rc = 0) 04:45:14 DEBUG --- stdout --- 04:45:14 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 6m 2859Mi am-869fdb5db9-8dg94 8m 4475Mi am-869fdb5db9-wt7sg 6m 3612Mi ds-cts-0 6m 400Mi ds-cts-1 7m 375Mi ds-cts-2 12m 373Mi ds-idrepo-0 2280m 13823Mi ds-idrepo-1 18m 13672Mi ds-idrepo-2 14m 13639Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 1851m 3518Mi idm-65858d8c4c-pt5s9 1806m 3615Mi lodemon-66684b7694-c5c6m 6m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 174m 506Mi 04:45:14 DEBUG --- stderr --- 04:45:14 DEBUG 04:45:16 INFO 
04:45:16 INFO [loop_until]: kubectl --namespace=xlou top node 04:45:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:45:16 INFO [loop_until]: OK (rc = 0) 04:45:16 DEBUG --- stdout --- 04:45:16 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 61m 0% 3641Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 1698m 10% 4182Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 82m 0% 993Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 65m 0% 4474Mi 7% gke-xlou-cdm-default-pool-f05840a3-tnc9 243m 1% 2577Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 63m 0% 5282Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 1635m 10% 4284Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 55m 0% 947Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 2119m 13% 14274Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14076Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 947Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 55m 0% 976Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 62m 0% 14136Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 225m 1% 2306Mi 3% 04:45:16 DEBUG --- stderr --- 04:45:16 DEBUG 04:46:14 INFO 04:46:14 INFO [loop_until]: kubectl --namespace=xlou top pods 04:46:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:46:14 INFO [loop_until]: OK (rc = 0) 04:46:14 DEBUG --- stdout --- 04:46:14 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 7m 2861Mi am-869fdb5db9-8dg94 8m 4475Mi am-869fdb5db9-wt7sg 6m 3620Mi ds-cts-0 6m 400Mi ds-cts-1 7m 375Mi ds-cts-2 6m 373Mi ds-idrepo-0 2082m 13803Mi ds-idrepo-1 12m 13672Mi ds-idrepo-2 12m 13639Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 1820m 3518Mi idm-65858d8c4c-pt5s9 1532m 3617Mi lodemon-66684b7694-c5c6m 6m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 161m 506Mi 04:46:14 DEBUG --- stderr --- 04:46:14 DEBUG 04:46:16 INFO 04:46:16 INFO [loop_until]: kubectl --namespace=xlou top node 04:46:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:46:16 INFO [loop_until]: OK 
(rc = 0) 04:46:16 DEBUG --- stdout --- 04:46:16 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 59m 0% 3648Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 2071m 13% 4188Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 79m 0% 993Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 62m 0% 4486Mi 7% gke-xlou-cdm-default-pool-f05840a3-tnc9 240m 1% 2577Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 64m 0% 5278Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 1585m 9% 4282Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 55m 0% 948Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 2260m 14% 14273Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 62m 0% 14075Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 60m 0% 947Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 54m 0% 976Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 82m 0% 14147Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 239m 1% 2306Mi 3% 04:46:16 DEBUG --- stderr --- 04:46:16 DEBUG 04:47:14 INFO 04:47:14 INFO [loop_until]: kubectl --namespace=xlou top pods 04:47:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:47:14 INFO [loop_until]: OK (rc = 0) 04:47:14 DEBUG --- stdout --- 04:47:14 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 7m 2862Mi am-869fdb5db9-8dg94 7m 4475Mi am-869fdb5db9-wt7sg 5m 3637Mi ds-cts-0 6m 400Mi ds-cts-1 7m 375Mi ds-cts-2 6m 373Mi ds-idrepo-0 2240m 13822Mi ds-idrepo-1 15m 13672Mi ds-idrepo-2 11m 13638Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 2015m 3524Mi idm-65858d8c4c-pt5s9 1370m 3617Mi lodemon-66684b7694-c5c6m 2m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 174m 507Mi 04:47:14 DEBUG --- stderr --- 04:47:14 DEBUG 04:47:16 INFO 04:47:16 INFO [loop_until]: kubectl --namespace=xlou top node 04:47:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:47:16 INFO [loop_until]: OK (rc = 0) 04:47:16 DEBUG --- stdout --- 04:47:16 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 59m 0% 3647Mi 6% 
gke-xlou-cdm-default-pool-f05840a3-jnx6 2075m 13% 4187Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 76m 0% 993Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 65m 0% 4495Mi 7% gke-xlou-cdm-default-pool-f05840a3-tnc9 252m 1% 2578Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 68m 0% 5282Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 1486m 9% 4284Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 54m 0% 945Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 2257m 14% 14258Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14077Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 946Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 52m 0% 979Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 68m 0% 14137Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 238m 1% 2307Mi 3% 04:47:16 DEBUG --- stderr --- 04:47:16 DEBUG 04:48:14 INFO 04:48:14 INFO [loop_until]: kubectl --namespace=xlou top pods 04:48:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:48:14 INFO [loop_until]: OK (rc = 0) 04:48:14 DEBUG --- stdout --- 04:48:14 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 6m 2866Mi am-869fdb5db9-8dg94 8m 4478Mi am-869fdb5db9-wt7sg 6m 3647Mi ds-cts-0 6m 400Mi ds-cts-1 7m 375Mi ds-cts-2 6m 374Mi ds-idrepo-0 2152m 13822Mi ds-idrepo-1 13m 13672Mi ds-idrepo-2 13m 13638Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 1965m 3526Mi idm-65858d8c4c-pt5s9 1441m 3620Mi lodemon-66684b7694-c5c6m 2m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 166m 507Mi 04:48:14 DEBUG --- stderr --- 04:48:14 DEBUG 04:48:16 INFO 04:48:16 INFO [loop_until]: kubectl --namespace=xlou top node 04:48:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:48:16 INFO [loop_until]: OK (rc = 0) 04:48:16 DEBUG --- stdout --- 04:48:16 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 63m 0% 3651Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 1980m 12% 4191Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 79m 0% 992Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 64m 0% 
4507Mi 7% gke-xlou-cdm-default-pool-f05840a3-tnc9 254m 1% 2574Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 63m 0% 5283Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 1508m 9% 4289Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 62m 0% 946Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 2071m 13% 14252Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 64m 0% 14077Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 61m 0% 949Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 55m 0% 978Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 67m 0% 14133Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 231m 1% 2305Mi 3% 04:48:16 DEBUG --- stderr --- 04:48:16 DEBUG 04:49:14 INFO 04:49:14 INFO [loop_until]: kubectl --namespace=xlou top pods 04:49:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:49:14 INFO [loop_until]: OK (rc = 0) 04:49:14 DEBUG --- stdout --- 04:49:14 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 7m 2868Mi am-869fdb5db9-8dg94 8m 4478Mi am-869fdb5db9-wt7sg 6m 3658Mi ds-cts-0 6m 400Mi ds-cts-1 6m 375Mi ds-cts-2 6m 374Mi ds-idrepo-0 2018m 13800Mi ds-idrepo-1 13m 13672Mi ds-idrepo-2 18m 13640Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 1906m 3533Mi idm-65858d8c4c-pt5s9 1390m 3621Mi lodemon-66684b7694-c5c6m 2m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 191m 542Mi 04:49:14 DEBUG --- stderr --- 04:49:14 DEBUG 04:49:16 INFO 04:49:16 INFO [loop_until]: kubectl --namespace=xlou top node 04:49:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:49:16 INFO [loop_until]: OK (rc = 0) 04:49:16 DEBUG --- stdout --- 04:49:16 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 63m 0% 3650Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 1896m 11% 4196Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 79m 0% 993Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 62m 0% 4520Mi 7% gke-xlou-cdm-default-pool-f05840a3-tnc9 253m 1% 2573Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 66m 0% 5279Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 
1480m 9% 4288Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 54m 0% 949Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 2016m 12% 14258Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 71m 0% 14077Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 63m 0% 957Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 62m 0% 987Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 65m 0% 14135Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 260m 1% 2339Mi 3% 04:49:16 DEBUG --- stderr --- 04:49:16 DEBUG 04:50:14 INFO 04:50:14 INFO [loop_until]: kubectl --namespace=xlou top pods 04:50:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:50:14 INFO [loop_until]: OK (rc = 0) 04:50:14 DEBUG --- stdout --- 04:50:14 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 6m 2868Mi am-869fdb5db9-8dg94 8m 4478Mi am-869fdb5db9-wt7sg 7m 3668Mi ds-cts-0 6m 400Mi ds-cts-1 6m 375Mi ds-cts-2 6m 374Mi ds-idrepo-0 2079m 13804Mi ds-idrepo-1 14m 13672Mi ds-idrepo-2 15m 13640Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 1696m 3535Mi idm-65858d8c4c-pt5s9 1537m 3624Mi lodemon-66684b7694-c5c6m 1m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 185m 542Mi 04:50:14 DEBUG --- stderr --- 04:50:14 DEBUG 04:50:16 INFO 04:50:16 INFO [loop_until]: kubectl --namespace=xlou top node 04:50:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:50:16 INFO [loop_until]: OK (rc = 0) 04:50:16 DEBUG --- stdout --- 04:50:16 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 62m 0% 3653Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 1738m 10% 4196Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 80m 0% 992Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 66m 0% 4527Mi 7% gke-xlou-cdm-default-pool-f05840a3-tnc9 248m 1% 2574Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 61m 0% 5284Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 1687m 10% 4289Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 56m 0% 947Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 2133m 13% 14255Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14078Mi 24% 
gke-xlou-cdm-ds-32e4dcb1-l2t2 60m 0% 949Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 54m 0% 977Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 68m 0% 14135Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 255m 1% 2341Mi 3% 04:50:16 DEBUG --- stderr --- 04:50:16 DEBUG 04:51:14 INFO 04:51:14 INFO [loop_until]: kubectl --namespace=xlou top pods 04:51:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:51:14 INFO [loop_until]: OK (rc = 0) 04:51:14 DEBUG --- stdout --- 04:51:14 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 6m 2869Mi am-869fdb5db9-8dg94 7m 4478Mi am-869fdb5db9-wt7sg 6m 3679Mi ds-cts-0 7m 402Mi ds-cts-1 7m 375Mi ds-cts-2 6m 373Mi ds-idrepo-0 1860m 13808Mi ds-idrepo-1 16m 13672Mi ds-idrepo-2 11m 13641Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 1651m 3536Mi idm-65858d8c4c-pt5s9 1350m 3625Mi lodemon-66684b7694-c5c6m 2m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 180m 542Mi 04:51:14 DEBUG --- stderr --- 04:51:14 DEBUG 04:51:17 INFO 04:51:17 INFO [loop_until]: kubectl --namespace=xlou top node 04:51:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:51:17 INFO [loop_until]: OK (rc = 0) 04:51:17 DEBUG --- stdout --- 04:51:17 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 60m 0% 3655Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 1895m 11% 4199Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 78m 0% 993Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 64m 0% 4541Mi 7% gke-xlou-cdm-default-pool-f05840a3-tnc9 246m 1% 2572Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 65m 0% 5281Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 1532m 9% 4294Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 55m 0% 946Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 2122m 13% 14253Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 62m 0% 14082Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 61m 0% 951Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 56m 0% 976Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 68m 0% 14138Mi 24% 
gke-xlou-cdm-frontend-a8771548-k40m 251m 1% 2343Mi 3% 04:51:17 DEBUG --- stderr --- 04:51:17 DEBUG 04:52:14 INFO 04:52:14 INFO [loop_until]: kubectl --namespace=xlou top pods 04:52:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:52:14 INFO [loop_until]: OK (rc = 0) 04:52:14 DEBUG --- stdout --- 04:52:14 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 7m 2865Mi am-869fdb5db9-8dg94 7m 4478Mi am-869fdb5db9-wt7sg 7m 3690Mi ds-cts-0 6m 402Mi ds-cts-1 6m 376Mi ds-cts-2 6m 374Mi ds-idrepo-0 2275m 13802Mi ds-idrepo-1 13m 13672Mi ds-idrepo-2 17m 13640Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 1780m 3539Mi idm-65858d8c4c-pt5s9 1750m 3628Mi lodemon-66684b7694-c5c6m 2m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 159m 542Mi 04:52:14 DEBUG --- stderr --- 04:52:14 DEBUG 04:52:17 INFO 04:52:17 INFO [loop_until]: kubectl --namespace=xlou top node 04:52:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:52:17 INFO [loop_until]: OK (rc = 0) 04:52:17 DEBUG --- stdout --- 04:52:17 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 62m 0% 3652Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 1920m 12% 4201Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 79m 0% 995Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 61m 0% 4552Mi 7% gke-xlou-cdm-default-pool-f05840a3-tnc9 257m 1% 2572Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 64m 0% 5284Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 1695m 10% 4293Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 56m 0% 949Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 2247m 14% 14269Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 70m 0% 14077Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 60m 0% 949Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 56m 0% 976Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 67m 0% 14134Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 229m 1% 2343Mi 3% 04:52:17 DEBUG --- stderr --- 04:52:17 DEBUG 04:53:15 INFO 04:53:15 INFO [loop_until]: kubectl --namespace=xlou 
top pods 04:53:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:53:15 INFO [loop_until]: OK (rc = 0) 04:53:15 DEBUG --- stdout --- 04:53:15 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 7m 2865Mi am-869fdb5db9-8dg94 8m 4478Mi am-869fdb5db9-wt7sg 6m 3700Mi ds-cts-0 6m 402Mi ds-cts-1 7m 377Mi ds-cts-2 6m 373Mi ds-idrepo-0 1988m 13801Mi ds-idrepo-1 11m 13672Mi ds-idrepo-2 11m 13640Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 1742m 3538Mi idm-65858d8c4c-pt5s9 1238m 3630Mi lodemon-66684b7694-c5c6m 2m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 146m 543Mi 04:53:15 DEBUG --- stderr --- 04:53:15 DEBUG 04:53:17 INFO 04:53:17 INFO [loop_until]: kubectl --namespace=xlou top node 04:53:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:53:17 INFO [loop_until]: OK (rc = 0) 04:53:17 DEBUG --- stdout --- 04:53:17 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 63m 0% 3650Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 1749m 11% 4201Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 79m 0% 993Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 64m 0% 4561Mi 7% gke-xlou-cdm-default-pool-f05840a3-tnc9 238m 1% 2570Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 63m 0% 5282Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 1371m 8% 4298Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 55m 0% 947Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 2000m 12% 14258Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 69m 0% 14080Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 951Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 55m 0% 977Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 64m 0% 14138Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 214m 1% 2341Mi 3% 04:53:17 DEBUG --- stderr --- 04:53:17 DEBUG 04:54:15 INFO 04:54:15 INFO [loop_until]: kubectl --namespace=xlou top pods 04:54:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:54:15 INFO [loop_until]: OK (rc = 0) 04:54:15 DEBUG --- stdout --- 04:54:15 DEBUG 
NAME   CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv   1m   4Mi
am-869fdb5db9-5j69v   6m   2866Mi
am-869fdb5db9-8dg94   8m   4478Mi
am-869fdb5db9-wt7sg   6m   3712Mi
ds-cts-0   6m   402Mi
ds-cts-1   8m   376Mi
ds-cts-2   6m   373Mi
ds-idrepo-0   2005m   13800Mi
ds-idrepo-1   12m   13672Mi
ds-idrepo-2   17m   13641Mi
end-user-ui-6845bc78c7-sqnhx   1m   4Mi
idm-65858d8c4c-d6c9h   1582m   3539Mi
idm-65858d8c4c-pt5s9   1420m   3631Mi
lodemon-66684b7694-c5c6m   2m   67Mi
login-ui-74d6fb46c-qcg59   1m   3Mi
overseer-0-788b4494cc-bdwtm   145m   543Mi
04:54:15 DEBUG --- stderr ---
04:54:15 DEBUG
04:54:17 INFO
04:54:17 INFO [loop_until]: kubectl --namespace=xlou top node
04:54:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
04:54:17 INFO [loop_until]: OK (rc = 0)
04:54:17 DEBUG --- stdout ---
04:54:17 DEBUG
NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5   61m   0%   3647Mi   6%
gke-xlou-cdm-default-pool-f05840a3-jnx6   1648m   10%   4206Mi   7%
gke-xlou-cdm-default-pool-f05840a3-jqvg   81m   0%   991Mi   1%
gke-xlou-cdm-default-pool-f05840a3-rt14   63m   0%   4572Mi   7%
gke-xlou-cdm-default-pool-f05840a3-tnc9   258m   1%   2571Mi   4%
gke-xlou-cdm-default-pool-f05840a3-vslq   64m   0%   5283Mi   9%
gke-xlou-cdm-default-pool-f05840a3-zj9v   1579m   9%   4301Mi   7%
gke-xlou-cdm-ds-32e4dcb1-02kn   55m   0%   949Mi   1%
gke-xlou-cdm-ds-32e4dcb1-7x9g   2079m   13%   14256Mi   24%
gke-xlou-cdm-ds-32e4dcb1-hbvk   69m   0%   14080Mi   24%
gke-xlou-cdm-ds-32e4dcb1-l2t2   60m   0%   951Mi   1%
gke-xlou-cdm-ds-32e4dcb1-mt7t   55m   0%   976Mi   1%
gke-xlou-cdm-ds-32e4dcb1-zmqj   63m   0%   14135Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m   215m   1%   2344Mi   3%
04:54:17 DEBUG --- stderr ---
04:54:17 DEBUG
04:55:15 INFO
04:55:15 INFO [loop_until]: kubectl --namespace=xlou top pods
04:55:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
04:55:15 INFO [loop_until]: OK (rc = 0)
04:55:15 DEBUG --- stdout ---
04:55:15 DEBUG
NAME   CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv   1m   4Mi
am-869fdb5db9-5j69v   6m   2866Mi
am-869fdb5db9-8dg94   8m   4478Mi
am-869fdb5db9-wt7sg   7m   3721Mi
ds-cts-0   6m   402Mi
ds-cts-1   7m   376Mi
ds-cts-2   6m   373Mi
ds-idrepo-0   2120m   13805Mi
ds-idrepo-1   11m   13672Mi
ds-idrepo-2   15m   13640Mi
end-user-ui-6845bc78c7-sqnhx   1m   4Mi
idm-65858d8c4c-d6c9h   1900m   3542Mi
idm-65858d8c4c-pt5s9   1474m   3633Mi
lodemon-66684b7694-c5c6m   2m   67Mi
login-ui-74d6fb46c-qcg59   1m   3Mi
overseer-0-788b4494cc-bdwtm   151m   542Mi
04:55:15 DEBUG --- stderr ---
04:55:15 DEBUG
04:55:17 INFO
04:55:17 INFO [loop_until]: kubectl --namespace=xlou top node
04:55:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
04:55:17 INFO [loop_until]: OK (rc = 0)
04:55:17 DEBUG --- stdout ---
04:55:17 DEBUG
NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5   60m   0%   3651Mi   6%
gke-xlou-cdm-default-pool-f05840a3-jnx6   1765m   11%   4209Mi   7%
gke-xlou-cdm-default-pool-f05840a3-jqvg   79m   0%   994Mi   1%
gke-xlou-cdm-default-pool-f05840a3-rt14   64m   0%   4581Mi   7%
gke-xlou-cdm-default-pool-f05840a3-tnc9   253m   1%   2575Mi   4%
gke-xlou-cdm-default-pool-f05840a3-vslq   65m   0%   5284Mi   9%
gke-xlou-cdm-default-pool-f05840a3-zj9v   1547m   9%   4311Mi   7%
gke-xlou-cdm-ds-32e4dcb1-02kn   54m   0%   946Mi   1%
gke-xlou-cdm-ds-32e4dcb1-7x9g   2212m   13%   14267Mi   24%
gke-xlou-cdm-ds-32e4dcb1-hbvk   62m   0%   14080Mi   24%
gke-xlou-cdm-ds-32e4dcb1-l2t2   59m   0%   951Mi   1%
gke-xlou-cdm-ds-32e4dcb1-mt7t   55m   0%   978Mi   1%
gke-xlou-cdm-ds-32e4dcb1-zmqj   61m   0%   14139Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m   211m   1%   2344Mi   3%
04:55:17 DEBUG --- stderr ---
04:55:17 DEBUG
04:56:15 INFO
04:56:15 INFO [loop_until]: kubectl --namespace=xlou top pods
04:56:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
04:56:15 INFO [loop_until]: OK (rc = 0)
04:56:15 DEBUG --- stdout ---
04:56:15 DEBUG
NAME   CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv   1m   4Mi
am-869fdb5db9-5j69v   7m   2867Mi
am-869fdb5db9-8dg94   7m   4478Mi
am-869fdb5db9-wt7sg   6m   3733Mi
ds-cts-0   6m   402Mi
ds-cts-1   7m   376Mi
ds-cts-2   6m   373Mi
ds-idrepo-0   2068m   13804Mi
ds-idrepo-1   11m   13673Mi
ds-idrepo-2   13m   13640Mi
end-user-ui-6845bc78c7-sqnhx   1m   4Mi
idm-65858d8c4c-d6c9h   1767m   3544Mi
idm-65858d8c4c-pt5s9   1388m   3635Mi
lodemon-66684b7694-c5c6m   2m   67Mi
login-ui-74d6fb46c-qcg59   1m   3Mi
overseer-0-788b4494cc-bdwtm   154m   543Mi
04:56:15 DEBUG --- stderr ---
04:56:15 DEBUG
04:56:17 INFO
04:56:17 INFO [loop_until]: kubectl --namespace=xlou top node
04:56:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
04:56:17 INFO [loop_until]: OK (rc = 0)
04:56:17 DEBUG --- stdout ---
04:56:17 DEBUG
NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5   59m   0%   3652Mi   6%
gke-xlou-cdm-default-pool-f05840a3-jnx6   1780m   11%   4207Mi   7%
gke-xlou-cdm-default-pool-f05840a3-jqvg   80m   0%   996Mi   1%
gke-xlou-cdm-default-pool-f05840a3-rt14   64m   0%   4591Mi   7%
gke-xlou-cdm-default-pool-f05840a3-tnc9   251m   1%   2577Mi   4%
gke-xlou-cdm-default-pool-f05840a3-vslq   62m   0%   5284Mi   9%
gke-xlou-cdm-default-pool-f05840a3-zj9v   1380m   8%   4298Mi   7%
gke-xlou-cdm-ds-32e4dcb1-02kn   55m   0%   948Mi   1%
gke-xlou-cdm-ds-32e4dcb1-7x9g   2061m   12%   14279Mi   24%
gke-xlou-cdm-ds-32e4dcb1-hbvk   65m   0%   14082Mi   24%
gke-xlou-cdm-ds-32e4dcb1-l2t2   61m   0%   948Mi   1%
gke-xlou-cdm-ds-32e4dcb1-mt7t   55m   0%   980Mi   1%
gke-xlou-cdm-ds-32e4dcb1-zmqj   62m   0%   14143Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m   222m   1%   2356Mi   4%
04:56:17 DEBUG --- stderr ---
04:56:17 DEBUG
04:57:15 INFO
04:57:15 INFO [loop_until]: kubectl --namespace=xlou top pods
04:57:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
04:57:15 INFO [loop_until]: OK (rc = 0)
04:57:15 DEBUG --- stdout ---
04:57:15 DEBUG
NAME   CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv   1m   4Mi
am-869fdb5db9-5j69v   8m   2869Mi
am-869fdb5db9-8dg94   8m   4478Mi
am-869fdb5db9-wt7sg   6m   3744Mi
ds-cts-0   6m   402Mi
ds-cts-1   6m   376Mi
ds-cts-2   6m   374Mi
ds-idrepo-0   1272m   13802Mi
ds-idrepo-1   11m   13672Mi
ds-idrepo-2   11m   13640Mi
end-user-ui-6845bc78c7-sqnhx   1m   4Mi
idm-65858d8c4c-d6c9h   1050m   3545Mi
idm-65858d8c4c-pt5s9   818m   3638Mi
lodemon-66684b7694-c5c6m   2m   67Mi
login-ui-74d6fb46c-qcg59   1m   3Mi
overseer-0-788b4494cc-bdwtm   138m   545Mi
04:57:15 DEBUG --- stderr ---
04:57:15 DEBUG
04:57:17 INFO
04:57:17 INFO [loop_until]: kubectl --namespace=xlou top node
04:57:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
04:57:17 INFO [loop_until]: OK (rc = 0)
04:57:17 DEBUG --- stdout ---
04:57:17 DEBUG
NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5   65m   0%   3650Mi   6%
gke-xlou-cdm-default-pool-f05840a3-jnx6   764m   4%   4208Mi   7%
gke-xlou-cdm-default-pool-f05840a3-jqvg   78m   0%   997Mi   1%
gke-xlou-cdm-default-pool-f05840a3-rt14   62m   0%   4601Mi   7%
gke-xlou-cdm-default-pool-f05840a3-tnc9   191m   1%   2580Mi   4%
gke-xlou-cdm-default-pool-f05840a3-vslq   63m   0%   5284Mi   9%
gke-xlou-cdm-default-pool-f05840a3-zj9v   739m   4%   4303Mi   7%
gke-xlou-cdm-ds-32e4dcb1-02kn   56m   0%   947Mi   1%
gke-xlou-cdm-ds-32e4dcb1-7x9g   1355m   8%   14256Mi   24%
gke-xlou-cdm-ds-32e4dcb1-hbvk   62m   0%   14079Mi   24%
gke-xlou-cdm-ds-32e4dcb1-l2t2   60m   0%   950Mi   1%
gke-xlou-cdm-ds-32e4dcb1-mt7t   55m   0%   981Mi   1%
gke-xlou-cdm-ds-32e4dcb1-zmqj   61m   0%   14144Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m   193m   1%   2346Mi   4%
04:57:17 DEBUG --- stderr ---
04:57:17 DEBUG
04:58:15 INFO
04:58:15 INFO [loop_until]: kubectl --namespace=xlou top pods
04:58:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
04:58:15 INFO [loop_until]: OK (rc = 0)
04:58:15 DEBUG --- stdout ---
04:58:15 DEBUG
NAME   CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv   1m   4Mi
am-869fdb5db9-5j69v   6m   2870Mi
am-869fdb5db9-8dg94   8m   4478Mi
am-869fdb5db9-wt7sg   6m   3754Mi
ds-cts-0   6m   402Mi
ds-cts-1   6m   376Mi
ds-cts-2   6m   374Mi
ds-idrepo-0   11m   13802Mi
ds-idrepo-1   10m   13673Mi
ds-idrepo-2   20m   13642Mi
end-user-ui-6845bc78c7-sqnhx   1m   4Mi
idm-65858d8c4c-d6c9h   7m   3545Mi
idm-65858d8c4c-pt5s9   6m   3637Mi
lodemon-66684b7694-c5c6m   2m   67Mi
login-ui-74d6fb46c-qcg59   1m   3Mi
overseer-0-788b4494cc-bdwtm   1m   104Mi
04:58:15 DEBUG --- stderr ---
04:58:15 DEBUG
04:58:17 INFO
04:58:17 INFO [loop_until]: kubectl --namespace=xlou top node
04:58:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
04:58:17 INFO [loop_until]: OK (rc = 0)
04:58:17 DEBUG --- stdout ---
04:58:17 DEBUG
NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5   61m   0%   3654Mi   6%
gke-xlou-cdm-default-pool-f05840a3-jnx6   82m   0%   4208Mi   7%
gke-xlou-cdm-default-pool-f05840a3-jqvg   78m   0%   997Mi   1%
gke-xlou-cdm-default-pool-f05840a3-rt14   63m   0%   4612Mi   7%
gke-xlou-cdm-default-pool-f05840a3-tnc9   111m   0%   2586Mi   4%
gke-xlou-cdm-default-pool-f05840a3-vslq   65m   0%   5282Mi   9%
gke-xlou-cdm-default-pool-f05840a3-zj9v   71m   0%   4304Mi   7%
gke-xlou-cdm-ds-32e4dcb1-02kn   53m   0%   947Mi   1%
gke-xlou-cdm-ds-32e4dcb1-7x9g   59m   0%   14258Mi   24%
gke-xlou-cdm-ds-32e4dcb1-hbvk   71m   0%   14083Mi   24%
gke-xlou-cdm-ds-32e4dcb1-l2t2   58m   0%   948Mi   1%
gke-xlou-cdm-ds-32e4dcb1-mt7t   56m   0%   980Mi   1%
gke-xlou-cdm-ds-32e4dcb1-zmqj   63m   0%   14143Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m   67m   0%   1911Mi   3%
04:58:17 DEBUG --- stderr ---
04:58:17 DEBUG
127.0.0.1 - - [16/Aug/2023 04:59:11] "GET /monitoring/average?start_time=23-08-16_03:28:45&stop_time=23-08-16_03:57:10 HTTP/1.1" 200 -
04:59:15 INFO
04:59:15 INFO [loop_until]: kubectl --namespace=xlou top pods
04:59:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
04:59:15 INFO [loop_until]: OK (rc = 0)
04:59:15 DEBUG --- stdout ---
04:59:15 DEBUG
NAME   CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv   1m   4Mi
am-869fdb5db9-5j69v   6m   2871Mi
am-869fdb5db9-8dg94   8m   4478Mi
am-869fdb5db9-wt7sg   6m   3763Mi
ds-cts-0   6m   402Mi
ds-cts-1   8m   376Mi
ds-cts-2   6m   373Mi
ds-idrepo-0   11m   13802Mi
ds-idrepo-1   11m   13672Mi
ds-idrepo-2   12m   13642Mi
end-user-ui-6845bc78c7-sqnhx   1m   4Mi
idm-65858d8c4c-d6c9h   6m   3545Mi
idm-65858d8c4c-pt5s9   5m   3637Mi
lodemon-66684b7694-c5c6m   2m   67Mi
login-ui-74d6fb46c-qcg59   1m   3Mi
overseer-0-788b4494cc-bdwtm   2m   104Mi
04:59:15 DEBUG --- stderr ---
04:59:15 DEBUG
04:59:17 INFO
04:59:17 INFO [loop_until]: kubectl --namespace=xlou top node
04:59:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
04:59:18 INFO [loop_until]: OK (rc = 0)
04:59:18 DEBUG --- stdout ---
04:59:18 DEBUG
NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5   62m   0%   3653Mi   6%
gke-xlou-cdm-default-pool-f05840a3-jnx6   83m   0%   4209Mi   7%
gke-xlou-cdm-default-pool-f05840a3-jqvg   78m   0%   994Mi   1%
gke-xlou-cdm-default-pool-f05840a3-rt14   61m   0%   4624Mi   7%
gke-xlou-cdm-default-pool-f05840a3-tnc9   120m   0%   2574Mi   4%
gke-xlou-cdm-default-pool-f05840a3-vslq   63m   0%   5281Mi   9%
gke-xlou-cdm-default-pool-f05840a3-zj9v   70m   0%   4305Mi   7%
gke-xlou-cdm-ds-32e4dcb1-02kn   55m   0%   947Mi   1%
gke-xlou-cdm-ds-32e4dcb1-7x9g   58m   0%   14257Mi   24%
gke-xlou-cdm-ds-32e4dcb1-hbvk   64m   0%   14083Mi   24%
gke-xlou-cdm-ds-32e4dcb1-l2t2   60m   0%   949Mi   1%
gke-xlou-cdm-ds-32e4dcb1-mt7t   55m   0%   980Mi   1%
gke-xlou-cdm-ds-32e4dcb1-zmqj   63m   0%   14144Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m   75m   0%   1908Mi   3%
04:59:18 DEBUG --- stderr ---
04:59:18 DEBUG
05:00:15 INFO
05:00:15 INFO [loop_until]: kubectl --namespace=xlou top pods
05:00:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:00:15 INFO [loop_until]: OK (rc = 0)
05:00:15 DEBUG --- stdout ---
05:00:15 DEBUG
NAME   CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv   1m   4Mi
am-869fdb5db9-5j69v   6m   2871Mi
am-869fdb5db9-8dg94   8m   4480Mi
am-869fdb5db9-wt7sg   7m   3774Mi
ds-cts-0   6m   402Mi
ds-cts-1   7m   376Mi
ds-cts-2   6m   374Mi
ds-idrepo-0   2616m   13823Mi
ds-idrepo-1   11m   13672Mi
ds-idrepo-2   11m   13641Mi
end-user-ui-6845bc78c7-sqnhx   1m   4Mi
idm-65858d8c4c-d6c9h   2097m   3567Mi
idm-65858d8c4c-pt5s9   1573m   3640Mi
lodemon-66684b7694-c5c6m   2m   67Mi
login-ui-74d6fb46c-qcg59   1m   3Mi
overseer-0-788b4494cc-bdwtm   400m   495Mi
05:00:15 DEBUG --- stderr ---
05:00:15 DEBUG
05:00:18 INFO
05:00:18 INFO [loop_until]: kubectl --namespace=xlou top node
05:00:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:00:18 INFO [loop_until]: OK (rc = 0)
05:00:18 DEBUG --- stdout ---
05:00:18 DEBUG
NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5   61m   0%   3656Mi   6%
gke-xlou-cdm-default-pool-f05840a3-jnx6   2529m   15%   4231Mi   7%
gke-xlou-cdm-default-pool-f05840a3-jqvg   79m   0%   995Mi   1%
gke-xlou-cdm-default-pool-f05840a3-rt14   63m   0%   4633Mi   7%
gke-xlou-cdm-default-pool-f05840a3-tnc9   279m   1%   2574Mi   4%
gke-xlou-cdm-default-pool-f05840a3-vslq   65m   0%   5282Mi   9%
gke-xlou-cdm-default-pool-f05840a3-zj9v   2034m   12%   4323Mi   7%
gke-xlou-cdm-ds-32e4dcb1-02kn   56m   0%   952Mi   1%
gke-xlou-cdm-ds-32e4dcb1-7x9g   2493m   15%   14252Mi   24%
gke-xlou-cdm-ds-32e4dcb1-hbvk   63m   0%   14081Mi   24%
gke-xlou-cdm-ds-32e4dcb1-l2t2   62m   0%   950Mi   1%
gke-xlou-cdm-ds-32e4dcb1-mt7t   55m   0%   980Mi   1%
gke-xlou-cdm-ds-32e4dcb1-zmqj   63m   0%   14141Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m   445m   2%   2297Mi   3%
05:00:18 DEBUG --- stderr ---
05:00:18 DEBUG
05:01:15 INFO
05:01:15 INFO [loop_until]: kubectl --namespace=xlou top pods
05:01:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:01:15 INFO [loop_until]: OK (rc = 0)
05:01:15 DEBUG --- stdout ---
05:01:15 DEBUG
NAME   CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv   1m   4Mi
am-869fdb5db9-5j69v   8m   2873Mi
am-869fdb5db9-8dg94   8m   4480Mi
am-869fdb5db9-wt7sg   6m   3786Mi
ds-cts-0   6m   402Mi
ds-cts-1   7m   376Mi
ds-cts-2   6m   374Mi
ds-idrepo-0   2570m   13807Mi
ds-idrepo-1   11m   13672Mi
ds-idrepo-2   11m   13641Mi
end-user-ui-6845bc78c7-sqnhx   1m   4Mi
idm-65858d8c4c-d6c9h   2082m   3551Mi
idm-65858d8c4c-pt5s9   1767m   3644Mi
lodemon-66684b7694-c5c6m   1m   67Mi
login-ui-74d6fb46c-qcg59   1m   3Mi
overseer-0-788b4494cc-bdwtm   243m   494Mi
05:01:15 DEBUG --- stderr ---
05:01:15 DEBUG
05:01:18 INFO
05:01:18 INFO [loop_until]: kubectl --namespace=xlou top node
05:01:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:01:18 INFO [loop_until]: OK (rc = 0)
05:01:18 DEBUG --- stdout ---
05:01:18 DEBUG
NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5   61m   0%   3659Mi   6%
gke-xlou-cdm-default-pool-f05840a3-jnx6   2416m   15%   4203Mi   7%
gke-xlou-cdm-default-pool-f05840a3-jqvg   77m   0%   996Mi   1%
gke-xlou-cdm-default-pool-f05840a3-rt14   63m   0%   4643Mi   7%
gke-xlou-cdm-default-pool-f05840a3-tnc9   288m   1%   2573Mi   4%
gke-xlou-cdm-default-pool-f05840a3-vslq   62m   0%   5283Mi   9%
gke-xlou-cdm-default-pool-f05840a3-zj9v   2105m   13%   4307Mi   7%
gke-xlou-cdm-ds-32e4dcb1-02kn   55m   0%   952Mi   1%
gke-xlou-cdm-ds-32e4dcb1-7x9g   2788m   17%   14259Mi   24%
gke-xlou-cdm-ds-32e4dcb1-hbvk   62m   0%   14083Mi   24%
gke-xlou-cdm-ds-32e4dcb1-l2t2   60m   0%   949Mi   1%
gke-xlou-cdm-ds-32e4dcb1-mt7t   54m   0%   981Mi   1%
gke-xlou-cdm-ds-32e4dcb1-zmqj   62m   0%   14143Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m   322m   2%   2295Mi   3%
05:01:18 DEBUG --- stderr ---
05:01:18 DEBUG
05:02:16 INFO
05:02:16 INFO [loop_until]: kubectl --namespace=xlou top pods
05:02:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:02:16 INFO [loop_until]: OK (rc = 0)
05:02:16 DEBUG --- stdout ---
05:02:16 DEBUG
NAME   CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv   1m   4Mi
am-869fdb5db9-5j69v   7m   2873Mi
am-869fdb5db9-8dg94   7m   4480Mi
am-869fdb5db9-wt7sg   7m   3796Mi
ds-cts-0   6m   402Mi
ds-cts-1   7m   376Mi
ds-cts-2   6m   374Mi
ds-idrepo-0   2800m   13801Mi
ds-idrepo-1   11m   13673Mi
ds-idrepo-2   11m   13642Mi
end-user-ui-6845bc78c7-sqnhx   1m   4Mi
idm-65858d8c4c-d6c9h   2477m   3557Mi
idm-65858d8c4c-pt5s9   1983m   3647Mi
lodemon-66684b7694-c5c6m   2m   67Mi
login-ui-74d6fb46c-qcg59   1m   3Mi
overseer-0-788b4494cc-bdwtm   246m   505Mi
05:02:16 DEBUG --- stderr ---
05:02:16 DEBUG
05:02:18 INFO
05:02:18 INFO [loop_until]: kubectl --namespace=xlou top node
05:02:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:02:18 INFO [loop_until]: OK (rc = 0)
05:02:18 DEBUG --- stdout ---
05:02:18 DEBUG
NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5   62m   0%   3654Mi   6%
gke-xlou-cdm-default-pool-f05840a3-jnx6   2544m   16%   4221Mi   7%
gke-xlou-cdm-default-pool-f05840a3-jqvg   79m   0%   999Mi   1%
gke-xlou-cdm-default-pool-f05840a3-rt14   63m   0%   4655Mi   7%
gke-xlou-cdm-default-pool-f05840a3-tnc9   290m   1%   2576Mi   4%
gke-xlou-cdm-default-pool-f05840a3-vslq   62m   0%   5286Mi   9%
gke-xlou-cdm-default-pool-f05840a3-zj9v   1984m   12%   4314Mi   7%
gke-xlou-cdm-ds-32e4dcb1-02kn   54m   0%   948Mi   1%
gke-xlou-cdm-ds-32e4dcb1-7x9g   2880m   18%   14267Mi   24%
gke-xlou-cdm-ds-32e4dcb1-hbvk   61m   0%   14083Mi   24%
gke-xlou-cdm-ds-32e4dcb1-l2t2   58m   0%   948Mi   1%
gke-xlou-cdm-ds-32e4dcb1-mt7t   55m   0%   979Mi   1%
gke-xlou-cdm-ds-32e4dcb1-zmqj   63m   0%   14144Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m   319m   2%   2308Mi   3%
05:02:18 DEBUG --- stderr ---
05:02:18 DEBUG
05:03:16 INFO
05:03:16 INFO [loop_until]: kubectl --namespace=xlou top pods
05:03:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:03:16 INFO [loop_until]: OK (rc = 0)
05:03:16 DEBUG --- stdout ---
05:03:16 DEBUG
NAME   CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv   1m   4Mi
am-869fdb5db9-5j69v   7m   2873Mi
am-869fdb5db9-8dg94   8m   4480Mi
am-869fdb5db9-wt7sg   6m   3807Mi
ds-cts-0   6m   402Mi
ds-cts-1   7m   376Mi
ds-cts-2   6m   374Mi
ds-idrepo-0   2766m   13806Mi
ds-idrepo-1   12m   13672Mi
ds-idrepo-2   11m   13641Mi
end-user-ui-6845bc78c7-sqnhx   1m   4Mi
idm-65858d8c4c-d6c9h   2444m   3560Mi
idm-65858d8c4c-pt5s9   1369m   3648Mi
lodemon-66684b7694-c5c6m   2m   67Mi
login-ui-74d6fb46c-qcg59   1m   3Mi
overseer-0-788b4494cc-bdwtm   202m   501Mi
05:03:16 DEBUG --- stderr ---
05:03:16 DEBUG
05:03:18 INFO
05:03:18 INFO [loop_until]: kubectl --namespace=xlou top node
05:03:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:03:18 INFO [loop_until]: OK (rc = 0)
05:03:18 DEBUG --- stdout ---
05:03:18 DEBUG
NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5   61m   0%   3655Mi   6%
gke-xlou-cdm-default-pool-f05840a3-jnx6   2666m   16%   4224Mi   7%
gke-xlou-cdm-default-pool-f05840a3-jqvg   74m   0%   996Mi   1%
gke-xlou-cdm-default-pool-f05840a3-rt14   60m   0%   4667Mi   7%
gke-xlou-cdm-default-pool-f05840a3-tnc9   281m   1%   2579Mi   4%
gke-xlou-cdm-default-pool-f05840a3-vslq   65m   0%   5286Mi   9%
gke-xlou-cdm-default-pool-f05840a3-zj9v   1771m   11%   4318Mi   7%
gke-xlou-cdm-ds-32e4dcb1-02kn   55m   0%   951Mi   1%
gke-xlou-cdm-ds-32e4dcb1-7x9g   2786m   17%   14266Mi   24%
gke-xlou-cdm-ds-32e4dcb1-hbvk   62m   0%   14082Mi   24%
gke-xlou-cdm-ds-32e4dcb1-l2t2   60m   0%   952Mi   1%
gke-xlou-cdm-ds-32e4dcb1-mt7t   53m   0%   977Mi   1%
gke-xlou-cdm-ds-32e4dcb1-zmqj   63m   0%   14144Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m   269m   1%   2302Mi   3%
05:03:18 DEBUG --- stderr ---
05:03:18 DEBUG
05:04:16 INFO
05:04:16 INFO [loop_until]: kubectl --namespace=xlou top pods
05:04:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:04:16 INFO [loop_until]: OK (rc = 0)
05:04:16 DEBUG --- stdout ---
05:04:16 DEBUG
NAME   CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv   1m   4Mi
am-869fdb5db9-5j69v   7m   2874Mi
am-869fdb5db9-8dg94   8m   4480Mi
am-869fdb5db9-wt7sg   6m   3817Mi
ds-cts-0   11m   403Mi
ds-cts-1   7m   376Mi
ds-cts-2   6m   374Mi
ds-idrepo-0   2742m   13806Mi
ds-idrepo-1   12m   13672Mi
ds-idrepo-2   11m   13641Mi
end-user-ui-6845bc78c7-sqnhx   1m   4Mi
idm-65858d8c4c-d6c9h   2261m   3562Mi
idm-65858d8c4c-pt5s9   1893m   3652Mi
lodemon-66684b7694-c5c6m   2m   67Mi
login-ui-74d6fb46c-qcg59   1m   3Mi
overseer-0-788b4494cc-bdwtm   189m   502Mi
05:04:16 DEBUG --- stderr ---
05:04:16 DEBUG
05:04:18 INFO
05:04:18 INFO [loop_until]: kubectl --namespace=xlou top node
05:04:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:04:18 INFO [loop_until]: OK (rc = 0)
05:04:18 DEBUG --- stdout ---
05:04:18 DEBUG
NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5   66m   0%   3660Mi   6%
gke-xlou-cdm-default-pool-f05840a3-jnx6   2399m   15%   4238Mi   7%
gke-xlou-cdm-default-pool-f05840a3-jqvg   76m   0%   996Mi   1%
gke-xlou-cdm-default-pool-f05840a3-rt14   63m   0%   4677Mi   7%
gke-xlou-cdm-default-pool-f05840a3-tnc9   289m   1%   2575Mi   4%
gke-xlou-cdm-default-pool-f05840a3-vslq   65m   0%   5288Mi   9%
gke-xlou-cdm-default-pool-f05840a3-zj9v   2030m   12%   4321Mi   7%
gke-xlou-cdm-ds-32e4dcb1-02kn   56m   0%   950Mi   1%
gke-xlou-cdm-ds-32e4dcb1-7x9g   2668m   16%   14264Mi   24%
gke-xlou-cdm-ds-32e4dcb1-hbvk   63m   0%   14082Mi   24%
gke-xlou-cdm-ds-32e4dcb1-l2t2   60m   0%   951Mi   1%
gke-xlou-cdm-ds-32e4dcb1-mt7t   62m   0%   979Mi   1%
gke-xlou-cdm-ds-32e4dcb1-zmqj   63m   0%   14144Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m   268m   1%   2305Mi   3%
05:04:18 DEBUG --- stderr ---
05:04:18 DEBUG
05:05:16 INFO
05:05:16 INFO [loop_until]: kubectl --namespace=xlou top pods
05:05:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:05:16 INFO [loop_until]: OK (rc = 0)
05:05:16 DEBUG --- stdout ---
05:05:16 DEBUG
NAME   CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv   1m   4Mi
am-869fdb5db9-5j69v   6m   2874Mi
am-869fdb5db9-8dg94   8m   4480Mi
am-869fdb5db9-wt7sg   6m   3826Mi
ds-cts-0   6m   403Mi
ds-cts-1   7m   374Mi
ds-cts-2   6m   377Mi
ds-idrepo-0   2985m   13801Mi
ds-idrepo-1   11m   13672Mi
ds-idrepo-2   11m   13641Mi
end-user-ui-6845bc78c7-sqnhx   1m   4Mi
idm-65858d8c4c-d6c9h   2466m   3565Mi
idm-65858d8c4c-pt5s9   2042m   3654Mi
lodemon-66684b7694-c5c6m   2m   67Mi
login-ui-74d6fb46c-qcg59   1m   3Mi
overseer-0-788b4494cc-bdwtm   188m   502Mi
05:05:16 DEBUG --- stderr ---
05:05:16 DEBUG
05:05:18 INFO
05:05:18 INFO [loop_until]: kubectl --namespace=xlou top node
05:05:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:05:18 INFO [loop_until]: OK (rc = 0)
05:05:18 DEBUG --- stdout ---
05:05:18 DEBUG
NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5   62m   0%   3656Mi   6%
gke-xlou-cdm-default-pool-f05840a3-jnx6   2702m   17%   4229Mi   7%
gke-xlou-cdm-default-pool-f05840a3-jqvg   76m   0%   996Mi   1%
gke-xlou-cdm-default-pool-f05840a3-rt14   63m   0%   4685Mi   8%
gke-xlou-cdm-default-pool-f05840a3-tnc9   278m   1%   2577Mi   4%
gke-xlou-cdm-default-pool-f05840a3-vslq   65m   0%   5286Mi   9%
gke-xlou-cdm-default-pool-f05840a3-zj9v   2124m   13%   4320Mi   7%
gke-xlou-cdm-ds-32e4dcb1-02kn   55m   0%   955Mi   1%
gke-xlou-cdm-ds-32e4dcb1-7x9g   2982m   18%   14284Mi   24%
gke-xlou-cdm-ds-32e4dcb1-hbvk   63m   0%   14083Mi   24%
gke-xlou-cdm-ds-32e4dcb1-l2t2   60m   0%   944Mi   1%
gke-xlou-cdm-ds-32e4dcb1-mt7t   56m   0%   982Mi   1%
gke-xlou-cdm-ds-32e4dcb1-zmqj   63m   0%   14145Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m   273m   1%   2305Mi   3%
05:05:18 DEBUG --- stderr ---
05:05:18 DEBUG
05:06:16 INFO
05:06:16 INFO [loop_until]: kubectl --namespace=xlou top pods
05:06:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:06:16 INFO [loop_until]: OK (rc = 0)
05:06:16 DEBUG --- stdout ---
05:06:16 DEBUG
NAME   CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv   1m   4Mi
am-869fdb5db9-5j69v   6m   2874Mi
am-869fdb5db9-8dg94   8m   4480Mi
am-869fdb5db9-wt7sg   6m   3838Mi
ds-cts-0   7m   403Mi
ds-cts-1   7m   374Mi
ds-cts-2   6m   377Mi
ds-idrepo-0   2704m   13821Mi
ds-idrepo-1   12m   13673Mi
ds-idrepo-2   11m   13642Mi
end-user-ui-6845bc78c7-sqnhx   1m   4Mi
idm-65858d8c4c-d6c9h   2204m   3568Mi
idm-65858d8c4c-pt5s9   1963m   3656Mi
lodemon-66684b7694-c5c6m   2m   67Mi
login-ui-74d6fb46c-qcg59   1m   3Mi
overseer-0-788b4494cc-bdwtm   195m   503Mi
05:06:16 DEBUG --- stderr ---
05:06:16 DEBUG
05:06:18 INFO
05:06:18 INFO [loop_until]: kubectl --namespace=xlou top node
05:06:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:06:18 INFO [loop_until]: OK (rc = 0)
05:06:18 DEBUG --- stdout ---
05:06:18 DEBUG
NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5   61m   0%   3659Mi   6%
gke-xlou-cdm-default-pool-f05840a3-jnx6   2304m   14%   4234Mi   7%
gke-xlou-cdm-default-pool-f05840a3-jqvg   73m   0%   996Mi   1%
gke-xlou-cdm-default-pool-f05840a3-rt14   64m   0%   4694Mi   8%
gke-xlou-cdm-default-pool-f05840a3-tnc9   286m   1%   2575Mi   4%
gke-xlou-cdm-default-pool-f05840a3-vslq   65m   0%   5285Mi   9%
gke-xlou-cdm-default-pool-f05840a3-zj9v   1936m   12%   4322Mi   7%
gke-xlou-cdm-ds-32e4dcb1-02kn   54m   0%   952Mi   1%
gke-xlou-cdm-ds-32e4dcb1-7x9g   2726m   17%   14265Mi   24%
gke-xlou-cdm-ds-32e4dcb1-hbvk   63m   0%   14083Mi   24%
gke-xlou-cdm-ds-32e4dcb1-l2t2   58m   0%   944Mi   1%
gke-xlou-cdm-ds-32e4dcb1-mt7t   55m   0%   982Mi   1%
gke-xlou-cdm-ds-32e4dcb1-zmqj   62m   0%   14146Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m   264m   1%   2303Mi   3%
05:06:18 DEBUG --- stderr ---
05:06:18 DEBUG
05:07:16 INFO
05:07:16 INFO [loop_until]: kubectl --namespace=xlou top pods
05:07:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:07:16 INFO [loop_until]: OK (rc = 0)
05:07:16 DEBUG --- stdout ---
05:07:16 DEBUG
NAME   CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv   1m   4Mi
am-869fdb5db9-5j69v   6m   2876Mi
am-869fdb5db9-8dg94   8m   4480Mi
am-869fdb5db9-wt7sg   6m   3847Mi
ds-cts-0   6m   403Mi
ds-cts-1   6m   374Mi
ds-cts-2   6m   378Mi
ds-idrepo-0   2822m   13799Mi
ds-idrepo-1   11m   13672Mi
ds-idrepo-2   11m   13641Mi
end-user-ui-6845bc78c7-sqnhx   1m   4Mi
idm-65858d8c4c-d6c9h   2520m   3570Mi
idm-65858d8c4c-pt5s9   1919m   3667Mi
lodemon-66684b7694-c5c6m   2m   67Mi
login-ui-74d6fb46c-qcg59   1m   3Mi
overseer-0-788b4494cc-bdwtm   200m   502Mi
05:07:16 DEBUG --- stderr ---
05:07:16 DEBUG
05:07:18 INFO
05:07:18 INFO [loop_until]: kubectl --namespace=xlou top node
05:07:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:07:19 INFO [loop_until]: OK (rc = 0)
05:07:19 DEBUG --- stdout ---
05:07:19 DEBUG
NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5   63m   0%   3658Mi   6%
gke-xlou-cdm-default-pool-f05840a3-jnx6   2504m   15%   4237Mi   7%
gke-xlou-cdm-default-pool-f05840a3-jqvg   75m   0%   994Mi   1%
gke-xlou-cdm-default-pool-f05840a3-rt14   61m   0%   4704Mi   8%
gke-xlou-cdm-default-pool-f05840a3-tnc9   282m   1%   2575Mi   4%
gke-xlou-cdm-default-pool-f05840a3-vslq   66m   0%   5287Mi   9%
gke-xlou-cdm-default-pool-f05840a3-zj9v   2007m   12%   4328Mi   7%
gke-xlou-cdm-ds-32e4dcb1-02kn   56m   0%   953Mi   1%
gke-xlou-cdm-ds-32e4dcb1-7x9g   2795m   17%   14260Mi   24%
gke-xlou-cdm-ds-32e4dcb1-hbvk   63m   0%   14085Mi   24%
gke-xlou-cdm-ds-32e4dcb1-l2t2   60m   0%   947Mi   1%
gke-xlou-cdm-ds-32e4dcb1-mt7t   55m   0%   980Mi   1%
gke-xlou-cdm-ds-32e4dcb1-zmqj   64m   0%   14146Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m   269m   1%   2302Mi   3%
05:07:19 DEBUG --- stderr ---
05:07:19 DEBUG
05:08:16 INFO
05:08:16 INFO [loop_until]: kubectl --namespace=xlou top pods
05:08:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:08:16 INFO [loop_until]: OK (rc = 0)
05:08:16 DEBUG --- stdout ---
05:08:16 DEBUG
NAME   CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv   1m   4Mi
am-869fdb5db9-5j69v   8m   2878Mi
am-869fdb5db9-8dg94   8m   4480Mi
am-869fdb5db9-wt7sg   7m   3857Mi
ds-cts-0   6m   403Mi
ds-cts-1   7m   374Mi
ds-cts-2   5m   377Mi
ds-idrepo-0   2720m   13801Mi
ds-idrepo-1   11m   13672Mi
ds-idrepo-2   11m   13641Mi
end-user-ui-6845bc78c7-sqnhx   1m   4Mi
idm-65858d8c4c-d6c9h   2191m   3572Mi
idm-65858d8c4c-pt5s9   1833m   3663Mi
lodemon-66684b7694-c5c6m   2m   67Mi
login-ui-74d6fb46c-qcg59   1m   3Mi
overseer-0-788b4494cc-bdwtm   205m   503Mi
05:08:16 DEBUG --- stderr ---
05:08:16 DEBUG
05:08:19 INFO
05:08:19 INFO [loop_until]: kubectl --namespace=xlou top node
05:08:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:08:19 INFO [loop_until]: OK (rc = 0)
05:08:19 DEBUG --- stdout ---
05:08:19 DEBUG
NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5   62m   0%   3657Mi   6%
gke-xlou-cdm-default-pool-f05840a3-jnx6   2398m   15%   4239Mi   7%
gke-xlou-cdm-default-pool-f05840a3-jqvg   80m   0%   996Mi   1%
gke-xlou-cdm-default-pool-f05840a3-rt14   64m   0%   4717Mi   8%
gke-xlou-cdm-default-pool-f05840a3-tnc9   273m   1%   2573Mi   4%
gke-xlou-cdm-default-pool-f05840a3-vslq   65m   0%   5286Mi   9%
gke-xlou-cdm-default-pool-f05840a3-zj9v   1935m   12%   4331Mi   7%
gke-xlou-cdm-ds-32e4dcb1-02kn   55m   0%   954Mi   1%
gke-xlou-cdm-ds-32e4dcb1-7x9g   2610m   16%   14260Mi   24%
gke-xlou-cdm-ds-32e4dcb1-hbvk   63m   0%   14082Mi   24%
gke-xlou-cdm-ds-32e4dcb1-l2t2   60m   0%   945Mi   1%
gke-xlou-cdm-ds-32e4dcb1-mt7t   54m   0%   980Mi   1%
gke-xlou-cdm-ds-32e4dcb1-zmqj   61m   0%   14146Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m   274m   1%   2305Mi   3%
05:08:19 DEBUG --- stderr ---
05:08:19 DEBUG
05:09:16 INFO
05:09:16 INFO [loop_until]: kubectl --namespace=xlou top pods
05:09:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:09:16 INFO [loop_until]: OK (rc = 0)
05:09:16 DEBUG --- stdout ---
05:09:16 DEBUG
NAME   CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv   1m   4Mi
am-869fdb5db9-5j69v   6m   2879Mi
am-869fdb5db9-8dg94   8m   4480Mi
am-869fdb5db9-wt7sg   6m   3869Mi
ds-cts-0   6m   403Mi
ds-cts-1   7m   375Mi
ds-cts-2   6m   377Mi
ds-idrepo-0   2718m   13807Mi
ds-idrepo-1   12m   13673Mi
ds-idrepo-2   10m   13641Mi
end-user-ui-6845bc78c7-sqnhx   1m   4Mi
idm-65858d8c4c-d6c9h   2316m   3576Mi
idm-65858d8c4c-pt5s9   1837m   3664Mi
lodemon-66684b7694-c5c6m   2m   67Mi
login-ui-74d6fb46c-qcg59   1m   3Mi
overseer-0-788b4494cc-bdwtm   227m   536Mi
05:09:16 DEBUG --- stderr ---
05:09:16 DEBUG
05:09:19 INFO
05:09:19 INFO [loop_until]: kubectl --namespace=xlou top node
05:09:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:09:19 INFO [loop_until]: OK (rc = 0)
05:09:19 DEBUG --- stdout ---
05:09:19 DEBUG
NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5   61m   0%   3663Mi   6%
gke-xlou-cdm-default-pool-f05840a3-jnx6   2480m   15%   4237Mi   7%
gke-xlou-cdm-default-pool-f05840a3-jqvg   79m   0%   994Mi   1%
gke-xlou-cdm-default-pool-f05840a3-rt14   65m   0%   4728Mi   8%
gke-xlou-cdm-default-pool-f05840a3-tnc9   281m   1%   2570Mi   4%
gke-xlou-cdm-default-pool-f05840a3-vslq   66m   0%   5286Mi   9%
gke-xlou-cdm-default-pool-f05840a3-zj9v   1812m   11%   4327Mi   7%
gke-xlou-cdm-ds-32e4dcb1-02kn   55m   0%   952Mi   1%
gke-xlou-cdm-ds-32e4dcb1-7x9g   2682m   16%   14266Mi   24%
gke-xlou-cdm-ds-32e4dcb1-hbvk   62m   0%   14086Mi   24%
gke-xlou-cdm-ds-32e4dcb1-l2t2   59m   0%   944Mi   1%
gke-xlou-cdm-ds-32e4dcb1-mt7t   55m   0%   981Mi   1%
gke-xlou-cdm-ds-32e4dcb1-zmqj   63m   0%   14147Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m   298m   1%   2337Mi   3%
05:09:19 DEBUG --- stderr ---
05:09:19 DEBUG
05:10:16 INFO
05:10:16 INFO [loop_until]: kubectl --namespace=xlou top pods
05:10:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:10:16 INFO [loop_until]: OK (rc = 0)
05:10:16 DEBUG --- stdout ---
05:10:16 DEBUG
NAME   CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv   1m   4Mi
am-869fdb5db9-5j69v   8m   2880Mi
am-869fdb5db9-8dg94   10m   4481Mi
am-869fdb5db9-wt7sg   7m   3880Mi
ds-cts-0   6m   403Mi
ds-cts-1   7m   374Mi
ds-cts-2   6m   377Mi
ds-idrepo-0   2718m   13798Mi
ds-idrepo-1   12m   13673Mi
ds-idrepo-2   11m   13641Mi
end-user-ui-6845bc78c7-sqnhx   1m   4Mi
idm-65858d8c4c-d6c9h   2251m   3579Mi
idm-65858d8c4c-pt5s9   1767m   3667Mi
lodemon-66684b7694-c5c6m   2m   67Mi
login-ui-74d6fb46c-qcg59   1m   3Mi
overseer-0-788b4494cc-bdwtm   227m   536Mi
05:10:16 DEBUG --- stderr ---
05:10:16 DEBUG
05:10:19 INFO
05:10:19 INFO [loop_until]: kubectl --namespace=xlou top node
05:10:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:10:19 INFO [loop_until]: OK (rc = 0)
05:10:19 DEBUG --- stdout ---
05:10:19 DEBUG
NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5   59m   0%   3665Mi   6%
gke-xlou-cdm-default-pool-f05840a3-jnx6   2541m   15%   4236Mi   7%
gke-xlou-cdm-default-pool-f05840a3-jqvg   85m   0%   994Mi   1%
gke-xlou-cdm-default-pool-f05840a3-rt14   60m   0%   4737Mi   8%
gke-xlou-cdm-default-pool-f05840a3-tnc9   288m   1%   2577Mi   4%
gke-xlou-cdm-default-pool-f05840a3-vslq   65m   0%   5285Mi   9%
gke-xlou-cdm-default-pool-f05840a3-zj9v   1758m   11%   4331Mi   7%
gke-xlou-cdm-ds-32e4dcb1-02kn   54m   0%   953Mi   1%
gke-xlou-cdm-ds-32e4dcb1-7x9g   2798m   17%   14283Mi   24%
gke-xlou-cdm-ds-32e4dcb1-hbvk   64m   0%   14087Mi   24%
gke-xlou-cdm-ds-32e4dcb1-l2t2   58m   0%   947Mi   1%
gke-xlou-cdm-ds-32e4dcb1-mt7t   56m   0%   981Mi   1%
gke-xlou-cdm-ds-32e4dcb1-zmqj   62m   0%   14147Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m   303m   1%   2333Mi   3%
05:10:19 DEBUG --- stderr ---
05:10:19 DEBUG
05:11:16 INFO
05:11:16 INFO [loop_until]: kubectl --namespace=xlou top pods
05:11:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:11:17 INFO [loop_until]: OK (rc = 0)
05:11:17 DEBUG --- stdout ---
05:11:17 DEBUG
NAME   CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv   1m   4Mi
am-869fdb5db9-5j69v   15m   2881Mi
am-869fdb5db9-8dg94   8m   4480Mi
am-869fdb5db9-wt7sg   6m   3890Mi
ds-cts-0   6m   403Mi
ds-cts-1   7m   376Mi
ds-cts-2   6m   378Mi
ds-idrepo-0   2677m   13808Mi
ds-idrepo-1   12m   13672Mi
ds-idrepo-2   12m   13641Mi
end-user-ui-6845bc78c7-sqnhx   1m   4Mi
idm-65858d8c4c-d6c9h   2521m   3582Mi
idm-65858d8c4c-pt5s9   1766m   3670Mi
lodemon-66684b7694-c5c6m   1m   67Mi
login-ui-74d6fb46c-qcg59   1m   3Mi
overseer-0-788b4494cc-bdwtm   181m   536Mi
05:11:17 DEBUG --- stderr ---
05:11:17 DEBUG
05:11:19 INFO
05:11:19 INFO [loop_until]: kubectl --namespace=xlou top node
05:11:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:11:19 INFO [loop_until]: OK (rc = 0)
05:11:19 DEBUG --- stdout ---
05:11:19 DEBUG
NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5   62m   0%   3665Mi   6%
gke-xlou-cdm-default-pool-f05840a3-jnx6   2411m   15%   4241Mi   7%
gke-xlou-cdm-default-pool-f05840a3-jqvg   82m   0%   993Mi   1%
gke-xlou-cdm-default-pool-f05840a3-rt14   63m   0%   4747Mi   8%
gke-xlou-cdm-default-pool-f05840a3-tnc9   278m   1%   2573Mi   4%
gke-xlou-cdm-default-pool-f05840a3-vslq   66m   0%   5284Mi   9%
gke-xlou-cdm-default-pool-f05840a3-zj9v   1895m   11%   4339Mi   7%
gke-xlou-cdm-ds-32e4dcb1-02kn   54m   0%   952Mi   1%
gke-xlou-cdm-ds-32e4dcb1-7x9g   2705m   17%   14268Mi   24%
gke-xlou-cdm-ds-32e4dcb1-hbvk   63m   0%   14087Mi   24%
gke-xlou-cdm-ds-32e4dcb1-l2t2   59m   0%   946Mi   1%
gke-xlou-cdm-ds-32e4dcb1-mt7t   53m   0%   982Mi   1%
gke-xlou-cdm-ds-32e4dcb1-zmqj   63m   0%   14147Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m   252m   1%   2335Mi   3%
05:11:19 DEBUG --- stderr ---
05:11:19 DEBUG
05:12:17 INFO
05:12:17 INFO [loop_until]: kubectl --namespace=xlou top pods
05:12:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:12:17 INFO [loop_until]: OK (rc = 0)
05:12:17 DEBUG --- stdout ---
05:12:17 DEBUG
NAME   CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv   1m   4Mi
am-869fdb5db9-5j69v   6m   2884Mi
am-869fdb5db9-8dg94   9m   4480Mi
am-869fdb5db9-wt7sg   8m   3899Mi
ds-cts-0   6m   403Mi
ds-cts-1   7m   374Mi
ds-cts-2   6m   377Mi
ds-idrepo-0   2668m   13809Mi
ds-idrepo-1   11m   13673Mi
ds-idrepo-2   11m   13641Mi
end-user-ui-6845bc78c7-sqnhx   1m   4Mi
idm-65858d8c4c-d6c9h   2377m   3585Mi
idm-65858d8c4c-pt5s9   1932m   3673Mi
lodemon-66684b7694-c5c6m   2m   67Mi
login-ui-74d6fb46c-qcg59   1m   3Mi
overseer-0-788b4494cc-bdwtm   189m   536Mi
05:12:17 DEBUG --- stderr ---
05:12:17 DEBUG
05:12:19 INFO
05:12:19 INFO [loop_until]: kubectl --namespace=xlou top node
05:12:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:12:19 INFO [loop_until]: OK (rc = 0)
05:12:19 DEBUG --- stdout ---
05:12:19 DEBUG
NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5   60m   0%   3667Mi   6%
gke-xlou-cdm-default-pool-f05840a3-jnx6   2444m   15%   4246Mi   7%
gke-xlou-cdm-default-pool-f05840a3-jqvg   81m   0%   996Mi   1%
gke-xlou-cdm-default-pool-f05840a3-rt14   68m   0%   4757Mi   8%
gke-xlou-cdm-default-pool-f05840a3-tnc9   278m   1%   2576Mi   4%
gke-xlou-cdm-default-pool-f05840a3-vslq   66m   0%   5287Mi   9%
gke-xlou-cdm-default-pool-f05840a3-zj9v   2068m   13%   4336Mi   7%
gke-xlou-cdm-ds-32e4dcb1-02kn   56m   0%   953Mi   1%
gke-xlou-cdm-ds-32e4dcb1-7x9g   2756m   17%   14272Mi   24%
gke-xlou-cdm-ds-32e4dcb1-hbvk   63m   0%   14086Mi   24%
gke-xlou-cdm-ds-32e4dcb1-l2t2   60m   0%   947Mi   1%
gke-xlou-cdm-ds-32e4dcb1-mt7t   54m   0%   982Mi   1%
gke-xlou-cdm-ds-32e4dcb1-zmqj   61m   0%   14145Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m   249m   1%   2336Mi   3%
05:12:19 DEBUG --- stderr ---
05:12:19 DEBUG
05:13:17 INFO
05:13:17 INFO [loop_until]: kubectl --namespace=xlou top pods
05:13:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:13:17 INFO [loop_until]: OK (rc = 0)
05:13:17 DEBUG --- stdout ---
05:13:17 DEBUG
NAME   CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv   1m   4Mi
am-869fdb5db9-5j69v   7m   2891Mi
am-869fdb5db9-8dg94   8m   4480Mi
am-869fdb5db9-wt7sg   6m   3908Mi
ds-cts-0   6m   403Mi
ds-cts-1   7m   375Mi
ds-cts-2   6m   377Mi
ds-idrepo-0   2732m   13797Mi
ds-idrepo-1   12m   13673Mi
ds-idrepo-2   11m   13641Mi
end-user-ui-6845bc78c7-sqnhx   1m   4Mi
idm-65858d8c4c-d6c9h   2532m   3589Mi
idm-65858d8c4c-pt5s9   1723m   3674Mi
lodemon-66684b7694-c5c6m   2m   67Mi
login-ui-74d6fb46c-qcg59   1m   3Mi
overseer-0-788b4494cc-bdwtm   186m   536Mi
05:13:17 DEBUG --- stderr ---
05:13:17 DEBUG
05:13:19 INFO
05:13:19 INFO [loop_until]: kubectl --namespace=xlou top node
05:13:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:13:19 INFO [loop_until]: OK (rc = 0)
05:13:19 DEBUG --- stdout ---
05:13:19 DEBUG
NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5   59m   0%   3674Mi   6%
gke-xlou-cdm-default-pool-f05840a3-jnx6   2602m   16%   4250Mi   7%
gke-xlou-cdm-default-pool-f05840a3-jqvg   81m   0%   995Mi   1%
gke-xlou-cdm-default-pool-f05840a3-rt14   61m   0%   4769Mi   8%
gke-xlou-cdm-default-pool-f05840a3-tnc9   286m   1%   2578Mi   4%
gke-xlou-cdm-default-pool-f05840a3-vslq   64m   0%   5285Mi   9%
gke-xlou-cdm-default-pool-f05840a3-zj9v   1799m   11%   4338Mi   7%
gke-xlou-cdm-ds-32e4dcb1-02kn   56m   0%   953Mi   1%
gke-xlou-cdm-ds-32e4dcb1-7x9g   2800m   17%   14271Mi   24%
gke-xlou-cdm-ds-32e4dcb1-hbvk   62m   0%   14088Mi   24%
gke-xlou-cdm-ds-32e4dcb1-l2t2   59m   0%   949Mi   1%
gke-xlou-cdm-ds-32e4dcb1-mt7t   55m   0%   980Mi   1%
gke-xlou-cdm-ds-32e4dcb1-zmqj   60m   0%   14145Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m   255m   1%   2338Mi   3%
05:13:19 DEBUG --- stderr ---
05:13:19 DEBUG
05:14:17 INFO
05:14:17 INFO [loop_until]: kubectl --namespace=xlou top pods
05:14:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:14:17 INFO [loop_until]: OK (rc = 0)
05:14:17 DEBUG --- stdout ---
05:14:17 DEBUG
NAME   CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv   1m   4Mi
am-869fdb5db9-5j69v   7m   2900Mi
am-869fdb5db9-8dg94   8m   4481Mi
am-869fdb5db9-wt7sg   5m   3918Mi
ds-cts-0   6m   403Mi
ds-cts-1   7m   374Mi
ds-cts-2   6m   377Mi
ds-idrepo-0   2715m   13822Mi
ds-idrepo-1   11m   13672Mi
ds-idrepo-2   11m   13641Mi
end-user-ui-6845bc78c7-sqnhx   1m   4Mi
idm-65858d8c4c-d6c9h   2274m   3591Mi
idm-65858d8c4c-pt5s9   1824m   3676Mi
lodemon-66684b7694-c5c6m   2m   67Mi
login-ui-74d6fb46c-qcg59   1m   3Mi
overseer-0-788b4494cc-bdwtm   185m   536Mi
05:14:17 DEBUG --- stderr ---
05:14:17 DEBUG
05:14:19 INFO
05:14:19 INFO [loop_until]: kubectl --namespace=xlou top node
05:14:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:14:19 INFO [loop_until]: OK (rc = 0)
05:14:19 DEBUG --- stdout ---
05:14:19 DEBUG
NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5   60m   0%   3686Mi   6%
gke-xlou-cdm-default-pool-f05840a3-jnx6   2403m   15%   4252Mi   7%
gke-xlou-cdm-default-pool-f05840a3-jqvg   84m   0%   1001Mi   1%
gke-xlou-cdm-default-pool-f05840a3-rt14   62m   0%   4782Mi   8%
gke-xlou-cdm-default-pool-f05840a3-tnc9   266m   1%   2577Mi   4%
gke-xlou-cdm-default-pool-f05840a3-vslq   65m   0%   5287Mi   9%
gke-xlou-cdm-default-pool-f05840a3-zj9v   1935m   12%   4340Mi   7%
gke-xlou-cdm-ds-32e4dcb1-02kn   55m   0%   952Mi   1%
gke-xlou-cdm-ds-32e4dcb1-7x9g   2710m   17%   14266Mi   24%
gke-xlou-cdm-ds-32e4dcb1-hbvk   59m   0%   14088Mi   24%
gke-xlou-cdm-ds-32e4dcb1-l2t2   58m   0%   947Mi   1%
gke-xlou-cdm-ds-32e4dcb1-mt7t   54m   0%   981Mi   1%
gke-xlou-cdm-ds-32e4dcb1-zmqj   63m   0%   14146Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m   254m   1%   2336Mi   3%
05:14:19 DEBUG --- stderr ---
05:14:19 DEBUG
05:15:17 INFO
05:15:17 INFO [loop_until]: kubectl --namespace=xlou top pods
05:15:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:15:17 INFO [loop_until]: OK (rc = 0)
05:15:17 DEBUG --- stdout ---
05:15:17 DEBUG
NAME   CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv   1m   4Mi
am-869fdb5db9-5j69v   6m   2909Mi
am-869fdb5db9-8dg94   7m   4481Mi
am-869fdb5db9-wt7sg   6m   3932Mi
ds-cts-0   7m   403Mi
ds-cts-1   7m   375Mi
ds-cts-2   6m   377Mi
ds-idrepo-0   2779m   13822Mi
ds-idrepo-1   15m   13673Mi
ds-idrepo-2   11m   13641Mi
end-user-ui-6845bc78c7-sqnhx   1m   4Mi
idm-65858d8c4c-d6c9h   2314m   3616Mi
idm-65858d8c4c-pt5s9   2012m   3680Mi
lodemon-66684b7694-c5c6m   2m   67Mi
login-ui-74d6fb46c-qcg59   1m   3Mi
overseer-0-788b4494cc-bdwtm   182m   536Mi
05:15:17 DEBUG --- stderr ---
05:15:17 DEBUG
05:15:19 INFO
05:15:19 INFO [loop_until]: kubectl --namespace=xlou top node
05:15:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:15:19 INFO [loop_until]: OK (rc = 0)
05:15:19 DEBUG --- stdout ---
05:15:19 DEBUG
NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5   66m   0%   3695Mi   6%
gke-xlou-cdm-default-pool-f05840a3-jnx6   2330m   14%   4279Mi   7%
gke-xlou-cdm-default-pool-f05840a3-jqvg   81m   0%   991Mi   1%
gke-xlou-cdm-default-pool-f05840a3-rt14   61m   0%   4788Mi   8%
gke-xlou-cdm-default-pool-f05840a3-tnc9   283m   1%   2578Mi   4%
gke-xlou-cdm-default-pool-f05840a3-vslq   65m   0%   5288Mi   9%
gke-xlou-cdm-default-pool-f05840a3-zj9v   2190m   13%   4344Mi   7%
gke-xlou-cdm-ds-32e4dcb1-02kn   56m   0%   953Mi   1%
gke-xlou-cdm-ds-32e4dcb1-7x9g   2934m   18%   14267Mi   24%
gke-xlou-cdm-ds-32e4dcb1-hbvk   63m   0%   14086Mi   24%
gke-xlou-cdm-ds-32e4dcb1-l2t2   59m   0%   949Mi   1%
gke-xlou-cdm-ds-32e4dcb1-mt7t   54m   0%   980Mi   1%
gke-xlou-cdm-ds-32e4dcb1-zmqj   65m   0%   14149Mi   24%
gke-xlou-cdm-frontend-a8771548-k40m   258m   1%   2338Mi   3%
05:15:19 DEBUG --- stderr ---
05:15:19 DEBUG
05:16:17 INFO
05:16:17 INFO [loop_until]: kubectl --namespace=xlou top pods
05:16:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:16:17 INFO [loop_until]: OK (rc = 0)
05:16:17 DEBUG --- stdout ---
05:16:17 DEBUG
NAME   CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv   1m   4Mi
am-869fdb5db9-5j69v   7m   2920Mi
am-869fdb5db9-8dg94   8m   4481Mi
am-869fdb5db9-wt7sg   6m   3942Mi
ds-cts-0   6m   403Mi
ds-cts-1   7m   375Mi
ds-cts-2   6m   377Mi
ds-idrepo-0   2943m   13804Mi
ds-idrepo-1   10m   13672Mi
ds-idrepo-2   12m   13642Mi
end-user-ui-6845bc78c7-sqnhx   1m   4Mi
idm-65858d8c4c-d6c9h   2617m   3595Mi
idm-65858d8c4c-pt5s9   2027m   3682Mi
lodemon-66684b7694-c5c6m   2m   67Mi
login-ui-74d6fb46c-qcg59   1m   3Mi
overseer-0-788b4494cc-bdwtm   201m   536Mi
05:16:17 DEBUG --- stderr ---
05:16:17 DEBUG
05:16:19 INFO
05:16:19 INFO [loop_until]: kubectl --namespace=xlou top node
05:16:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:16:19 INFO [loop_until]: OK (rc = 0)
05:16:19 DEBUG --- stdout ---
05:16:19 DEBUG
NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5   62m   0%   3706Mi   6%
gke-xlou-cdm-default-pool-f05840a3-jnx6   2656m   16%   4258Mi   7%
gke-xlou-cdm-default-pool-f05840a3-jqvg   77m   0%   994Mi   1%
gke-xlou-cdm-default-pool-f05840a3-rt14   62m   0%   4800Mi   8%
gke-xlou-cdm-default-pool-f05840a3-tnc9   289m   1%   2572Mi   4%
gke-xlou-cdm-default-pool-f05840a3-vslq   62m   0%   5287Mi   9%
gke-xlou-cdm-default-pool-f05840a3-zj9v   2088m   13%   4345Mi   7%
gke-xlou-cdm-ds-32e4dcb1-02kn   55m   0%   953Mi   1%
gke-xlou-cdm-ds-32e4dcb1-7x9g   2867m   18%   14259Mi   24%
gke-xlou-cdm-ds-32e4dcb1-hbvk   64m   0%   14087Mi   24%
gke-xlou-cdm-ds-32e4dcb1-l2t2   61m   0%   947Mi   1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 54m 0% 982Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 62m 0% 14150Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 266m 1% 2337Mi 3% 05:16:19 DEBUG --- stderr --- 05:16:19 DEBUG 05:17:17 INFO 05:17:17 INFO [loop_until]: kubectl --namespace=xlou top pods 05:17:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:17:17 INFO [loop_until]: OK (rc = 0) 05:17:17 DEBUG --- stdout --- 05:17:17 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 7m 2930Mi am-869fdb5db9-8dg94 8m 4480Mi am-869fdb5db9-wt7sg 6m 3953Mi ds-cts-0 6m 403Mi ds-cts-1 7m 375Mi ds-cts-2 6m 377Mi ds-idrepo-0 2789m 13823Mi ds-idrepo-1 11m 13673Mi ds-idrepo-2 10m 13641Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 2206m 3599Mi idm-65858d8c4c-pt5s9 2039m 3685Mi lodemon-66684b7694-c5c6m 2m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 191m 537Mi 05:17:17 DEBUG --- stderr --- 05:17:17 DEBUG 05:17:20 INFO 05:17:20 INFO [loop_until]: kubectl --namespace=xlou top node 05:17:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:17:20 INFO [loop_until]: OK (rc = 0) 05:17:20 DEBUG --- stdout --- 05:17:20 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 60m 0% 3718Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 2297m 14% 4263Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 79m 0% 992Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 68m 0% 4807Mi 8% gke-xlou-cdm-default-pool-f05840a3-tnc9 287m 1% 2603Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 66m 0% 5286Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 2145m 13% 4351Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 56m 0% 952Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 2750m 17% 14261Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14086Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 60m 0% 948Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 55m 0% 984Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 62m 0% 14149Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 262m 1% 2339Mi 3% 05:17:20 DEBUG 
--- stderr --- 05:17:20 DEBUG 05:18:17 INFO 05:18:17 INFO [loop_until]: kubectl --namespace=xlou top pods 05:18:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:18:17 INFO [loop_until]: OK (rc = 0) 05:18:17 DEBUG --- stdout --- 05:18:17 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 7m 2941Mi am-869fdb5db9-8dg94 8m 4480Mi am-869fdb5db9-wt7sg 6m 3961Mi ds-cts-0 6m 403Mi ds-cts-1 6m 374Mi ds-cts-2 6m 377Mi ds-idrepo-0 3133m 13802Mi ds-idrepo-1 12m 13672Mi ds-idrepo-2 11m 13641Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 2649m 3603Mi idm-65858d8c4c-pt5s9 1862m 3687Mi lodemon-66684b7694-c5c6m 2m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 189m 537Mi 05:18:17 DEBUG --- stderr --- 05:18:17 DEBUG 05:18:20 INFO 05:18:20 INFO [loop_until]: kubectl --namespace=xlou top node 05:18:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:18:20 INFO [loop_until]: OK (rc = 0) 05:18:20 DEBUG --- stdout --- 05:18:20 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 61m 0% 3723Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 2678m 16% 4264Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 77m 0% 995Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 64m 0% 4820Mi 8% gke-xlou-cdm-default-pool-f05840a3-tnc9 288m 1% 2600Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 71m 0% 5284Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 2018m 12% 4348Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 56m 0% 952Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 2961m 18% 14261Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 61m 0% 14084Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 57m 0% 946Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 54m 0% 983Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 62m 0% 14152Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 264m 1% 2339Mi 3% 05:18:20 DEBUG --- stderr --- 05:18:20 DEBUG 05:19:17 INFO 05:19:17 INFO [loop_until]: kubectl --namespace=xlou top pods 05:19:17 INFO [loop_until]: (max_time=180, interval=5, 
expected_rc=[0] 05:19:17 INFO [loop_until]: OK (rc = 0) 05:19:17 DEBUG --- stdout --- 05:19:17 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 7m 2952Mi am-869fdb5db9-8dg94 9m 4480Mi am-869fdb5db9-wt7sg 6m 3973Mi ds-cts-0 6m 403Mi ds-cts-1 7m 374Mi ds-cts-2 6m 378Mi ds-idrepo-0 2667m 13824Mi ds-idrepo-1 12m 13672Mi ds-idrepo-2 11m 13641Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 2286m 3605Mi idm-65858d8c4c-pt5s9 2068m 3689Mi lodemon-66684b7694-c5c6m 1m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 190m 537Mi 05:19:17 DEBUG --- stderr --- 05:19:17 DEBUG 05:19:20 INFO 05:19:20 INFO [loop_until]: kubectl --namespace=xlou top node 05:19:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:19:20 INFO [loop_until]: OK (rc = 0) 05:19:20 DEBUG --- stdout --- 05:19:20 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 61m 0% 3734Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 2339m 14% 4270Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 76m 0% 994Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 65m 0% 4829Mi 8% gke-xlou-cdm-default-pool-f05840a3-tnc9 284m 1% 2589Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 68m 0% 5285Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 1981m 12% 4356Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 55m 0% 952Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 2712m 17% 14269Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 64m 0% 14085Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 61m 0% 947Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 53m 0% 984Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 63m 0% 14147Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 250m 1% 2339Mi 3% 05:19:20 DEBUG --- stderr --- 05:19:20 DEBUG 05:20:18 INFO 05:20:18 INFO [loop_until]: kubectl --namespace=xlou top pods 05:20:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:20:18 INFO [loop_until]: OK (rc = 0) 05:20:18 DEBUG --- stdout --- 05:20:18 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi 
am-869fdb5db9-5j69v 7m 2963Mi am-869fdb5db9-8dg94 8m 4480Mi am-869fdb5db9-wt7sg 9m 3983Mi ds-cts-0 6m 403Mi ds-cts-1 7m 375Mi ds-cts-2 6m 376Mi ds-idrepo-0 2716m 13822Mi ds-idrepo-1 12m 13672Mi ds-idrepo-2 18m 13642Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 2438m 3609Mi idm-65858d8c4c-pt5s9 1890m 3692Mi lodemon-66684b7694-c5c6m 2m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 189m 538Mi 05:20:18 DEBUG --- stderr --- 05:20:18 DEBUG 05:20:20 INFO 05:20:20 INFO [loop_until]: kubectl --namespace=xlou top node 05:20:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:20:20 INFO [loop_until]: OK (rc = 0) 05:20:20 DEBUG --- stdout --- 05:20:20 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 61m 0% 3746Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 2524m 15% 4263Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 79m 0% 994Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 67m 0% 4842Mi 8% gke-xlou-cdm-default-pool-f05840a3-tnc9 280m 1% 2589Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 66m 0% 5287Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 2179m 13% 4380Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 55m 0% 953Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 2917m 18% 14286Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 64m 0% 14085Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 61m 0% 948Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 55m 0% 980Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 63m 0% 14150Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 261m 1% 2336Mi 3% 05:20:20 DEBUG --- stderr --- 05:20:20 DEBUG 05:21:18 INFO 05:21:18 INFO [loop_until]: kubectl --namespace=xlou top pods 05:21:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:21:18 INFO [loop_until]: OK (rc = 0) 05:21:18 DEBUG --- stdout --- 05:21:18 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 7m 2973Mi am-869fdb5db9-8dg94 8m 4480Mi am-869fdb5db9-wt7sg 7m 3993Mi ds-cts-0 6m 403Mi ds-cts-1 7m 376Mi ds-cts-2 6m 377Mi ds-idrepo-0 2721m 
13823Mi ds-idrepo-1 11m 13673Mi ds-idrepo-2 11m 13641Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 2338m 3611Mi idm-65858d8c4c-pt5s9 2075m 3695Mi lodemon-66684b7694-c5c6m 2m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 185m 538Mi 05:21:18 DEBUG --- stderr --- 05:21:18 DEBUG 05:21:20 INFO 05:21:20 INFO [loop_until]: kubectl --namespace=xlou top node 05:21:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:21:20 INFO [loop_until]: OK (rc = 0) 05:21:20 DEBUG --- stdout --- 05:21:20 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 61m 0% 3754Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 2390m 15% 4274Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 79m 0% 997Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 66m 0% 4852Mi 8% gke-xlou-cdm-default-pool-f05840a3-tnc9 284m 1% 2581Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 67m 0% 5286Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 2007m 12% 4359Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 54m 0% 951Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 2844m 17% 14290Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14088Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 60m 0% 948Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 55m 0% 980Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 67m 0% 14147Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 262m 1% 2336Mi 3% 05:21:20 DEBUG --- stderr --- 05:21:20 DEBUG 05:22:18 INFO 05:22:18 INFO [loop_until]: kubectl --namespace=xlou top pods 05:22:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:22:18 INFO [loop_until]: OK (rc = 0) 05:22:18 DEBUG --- stdout --- 05:22:18 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 6m 2982Mi am-869fdb5db9-8dg94 8m 4482Mi am-869fdb5db9-wt7sg 6m 4004Mi ds-cts-0 6m 403Mi ds-cts-1 7m 375Mi ds-cts-2 6m 377Mi ds-idrepo-0 2990m 13804Mi ds-idrepo-1 12m 13673Mi ds-idrepo-2 14m 13641Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 2529m 3614Mi idm-65858d8c4c-pt5s9 1746m 3698Mi 
lodemon-66684b7694-c5c6m 1m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 195m 538Mi 05:22:18 DEBUG --- stderr --- 05:22:18 DEBUG 05:22:20 INFO 05:22:20 INFO [loop_until]: kubectl --namespace=xlou top node 05:22:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:22:20 INFO [loop_until]: OK (rc = 0) 05:22:20 DEBUG --- stdout --- 05:22:20 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 61m 0% 3766Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 2713m 17% 4275Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 77m 0% 994Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 63m 0% 4862Mi 8% gke-xlou-cdm-default-pool-f05840a3-tnc9 295m 1% 2581Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 66m 0% 5291Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 1811m 11% 4363Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 56m 0% 954Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 2823m 17% 14279Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 66m 0% 14087Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 60m 0% 949Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 56m 0% 982Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 64m 0% 14151Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 266m 1% 2337Mi 3% 05:22:20 DEBUG --- stderr --- 05:22:20 DEBUG 05:23:18 INFO 05:23:18 INFO [loop_until]: kubectl --namespace=xlou top pods 05:23:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:23:18 INFO [loop_until]: OK (rc = 0) 05:23:18 DEBUG --- stdout --- 05:23:18 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 7m 2994Mi am-869fdb5db9-8dg94 7m 4489Mi am-869fdb5db9-wt7sg 6m 4016Mi ds-cts-0 6m 403Mi ds-cts-1 7m 375Mi ds-cts-2 7m 377Mi ds-idrepo-0 2734m 13808Mi ds-idrepo-1 15m 13673Mi ds-idrepo-2 12m 13642Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 2191m 3617Mi idm-65858d8c4c-pt5s9 1938m 3700Mi lodemon-66684b7694-c5c6m 2m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 179m 538Mi 05:23:18 DEBUG --- stderr --- 05:23:18 DEBUG 05:23:20 INFO 
05:23:20 INFO [loop_until]: kubectl --namespace=xlou top node 05:23:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:23:20 INFO [loop_until]: OK (rc = 0) 05:23:20 DEBUG --- stdout --- 05:23:20 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 61m 0% 3779Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 2363m 14% 4278Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 80m 0% 995Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 66m 0% 4871Mi 8% gke-xlou-cdm-default-pool-f05840a3-tnc9 286m 1% 2577Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 64m 0% 5297Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 2097m 13% 4364Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 56m 0% 954Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 2768m 17% 14278Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 65m 0% 14085Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 948Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 57m 0% 982Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 66m 0% 14154Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 260m 1% 2338Mi 3% 05:23:20 DEBUG --- stderr --- 05:23:20 DEBUG 05:24:18 INFO 05:24:18 INFO [loop_until]: kubectl --namespace=xlou top pods 05:24:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:24:18 INFO [loop_until]: OK (rc = 0) 05:24:18 DEBUG --- stdout --- 05:24:18 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 7m 3004Mi am-869fdb5db9-8dg94 8m 4502Mi am-869fdb5db9-wt7sg 6m 4025Mi ds-cts-0 6m 403Mi ds-cts-1 7m 374Mi ds-cts-2 6m 377Mi ds-idrepo-0 2615m 13822Mi ds-idrepo-1 11m 13673Mi ds-idrepo-2 12m 13645Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 2431m 3627Mi idm-65858d8c4c-pt5s9 1753m 3701Mi lodemon-66684b7694-c5c6m 2m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 182m 538Mi 05:24:18 DEBUG --- stderr --- 05:24:18 DEBUG 05:24:20 INFO 05:24:20 INFO [loop_until]: kubectl --namespace=xlou top node 05:24:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:24:20 INFO [loop_until]: OK 
(rc = 0) 05:24:20 DEBUG --- stdout --- 05:24:20 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 61m 0% 3787Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 2497m 15% 4291Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 80m 0% 995Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 61m 0% 4882Mi 8% gke-xlou-cdm-default-pool-f05840a3-tnc9 293m 1% 2580Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 65m 0% 5304Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 1983m 12% 4370Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 57m 0% 953Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 2873m 18% 14267Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14092Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 60m 0% 949Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 55m 0% 982Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 62m 0% 14158Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 258m 1% 2340Mi 3% 05:24:20 DEBUG --- stderr --- 05:24:20 DEBUG 05:25:18 INFO 05:25:18 INFO [loop_until]: kubectl --namespace=xlou top pods 05:25:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:25:18 INFO [loop_until]: OK (rc = 0) 05:25:18 DEBUG --- stdout --- 05:25:18 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 6m 3012Mi am-869fdb5db9-8dg94 8m 4512Mi am-869fdb5db9-wt7sg 7m 4033Mi ds-cts-0 13m 404Mi ds-cts-1 6m 375Mi ds-cts-2 6m 377Mi ds-idrepo-0 2561m 13809Mi ds-idrepo-1 12m 13674Mi ds-idrepo-2 12m 13645Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 2392m 3623Mi idm-65858d8c4c-pt5s9 1668m 3704Mi lodemon-66684b7694-c5c6m 2m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 180m 538Mi 05:25:18 DEBUG --- stderr --- 05:25:18 DEBUG 05:25:20 INFO 05:25:20 INFO [loop_until]: kubectl --namespace=xlou top node 05:25:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:25:20 INFO [loop_until]: OK (rc = 0) 05:25:20 DEBUG --- stdout --- 05:25:20 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 66m 0% 3797Mi 6% 
gke-xlou-cdm-default-pool-f05840a3-jnx6 2405m 15% 4283Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 79m 0% 993Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 64m 0% 4892Mi 8% gke-xlou-cdm-default-pool-f05840a3-tnc9 282m 1% 2575Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 68m 0% 5316Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 1763m 11% 4367Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 55m 0% 952Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 2718m 17% 14272Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14092Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 68m 0% 957Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 61m 0% 983Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 59m 0% 14154Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 252m 1% 2340Mi 3% 05:25:20 DEBUG --- stderr --- 05:25:20 DEBUG 05:26:18 INFO 05:26:18 INFO [loop_until]: kubectl --namespace=xlou top pods 05:26:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:26:18 INFO [loop_until]: OK (rc = 0) 05:26:18 DEBUG --- stdout --- 05:26:18 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 7m 3025Mi am-869fdb5db9-8dg94 8m 4518Mi am-869fdb5db9-wt7sg 9m 4046Mi ds-cts-0 6m 404Mi ds-cts-1 7m 375Mi ds-cts-2 11m 378Mi ds-idrepo-0 2781m 13808Mi ds-idrepo-1 11m 13673Mi ds-idrepo-2 11m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 2336m 3624Mi idm-65858d8c4c-pt5s9 1803m 3706Mi lodemon-66684b7694-c5c6m 2m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 192m 539Mi 05:26:18 DEBUG --- stderr --- 05:26:18 DEBUG 05:26:21 INFO 05:26:21 INFO [loop_until]: kubectl --namespace=xlou top node 05:26:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:26:21 INFO [loop_until]: OK (rc = 0) 05:26:21 DEBUG --- stdout --- 05:26:21 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 63m 0% 3805Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 2453m 15% 4286Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 79m 0% 993Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 69m 0% 
4904Mi 8% gke-xlou-cdm-default-pool-f05840a3-tnc9 279m 1% 2577Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 64m 0% 5326Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 1950m 12% 4372Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 62m 0% 954Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 2717m 17% 14270Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14089Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 61m 0% 950Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 56m 0% 984Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 67m 0% 14165Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 254m 1% 2340Mi 3% 05:26:21 DEBUG --- stderr --- 05:26:21 DEBUG 05:27:18 INFO 05:27:18 INFO [loop_until]: kubectl --namespace=xlou top pods 05:27:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:27:18 INFO [loop_until]: OK (rc = 0) 05:27:18 DEBUG --- stdout --- 05:27:18 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 7m 3034Mi am-869fdb5db9-8dg94 5m 4636Mi am-869fdb5db9-wt7sg 7m 4057Mi ds-cts-0 6m 404Mi ds-cts-1 7m 374Mi ds-cts-2 6m 378Mi ds-idrepo-0 2678m 13822Mi ds-idrepo-1 11m 13673Mi ds-idrepo-2 11m 13645Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 2042m 3626Mi idm-65858d8c4c-pt5s9 2063m 3709Mi lodemon-66684b7694-c5c6m 2m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 181m 540Mi 05:27:18 DEBUG --- stderr --- 05:27:18 DEBUG 05:27:21 INFO 05:27:21 INFO [loop_until]: kubectl --namespace=xlou top node 05:27:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:27:21 INFO [loop_until]: OK (rc = 0) 05:27:21 DEBUG --- stdout --- 05:27:21 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 63m 0% 3819Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 2214m 13% 4290Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 79m 0% 992Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 63m 0% 4914Mi 8% gke-xlou-cdm-default-pool-f05840a3-tnc9 281m 1% 2580Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 64m 0% 5439Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 
2099m 13% 4371Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 54m 0% 957Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 2668m 16% 14291Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14087Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 60m 0% 949Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 54m 0% 985Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 63m 0% 14155Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 257m 1% 2337Mi 3% 05:27:21 DEBUG --- stderr --- 05:27:21 DEBUG 05:28:18 INFO 05:28:18 INFO [loop_until]: kubectl --namespace=xlou top pods 05:28:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:28:18 INFO [loop_until]: OK (rc = 0) 05:28:18 DEBUG --- stdout --- 05:28:18 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 7m 3044Mi am-869fdb5db9-8dg94 18m 4634Mi am-869fdb5db9-wt7sg 6m 4067Mi ds-cts-0 6m 404Mi ds-cts-1 7m 374Mi ds-cts-2 6m 378Mi ds-idrepo-0 2784m 13800Mi ds-idrepo-1 12m 13672Mi ds-idrepo-2 12m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 2415m 3636Mi idm-65858d8c4c-pt5s9 1889m 3711Mi lodemon-66684b7694-c5c6m 2m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 194m 539Mi 05:28:18 DEBUG --- stderr --- 05:28:18 DEBUG 05:28:21 INFO 05:28:21 INFO [loop_until]: kubectl --namespace=xlou top node 05:28:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:28:21 INFO [loop_until]: OK (rc = 0) 05:28:21 DEBUG --- stdout --- 05:28:21 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 64m 0% 3829Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 2415m 15% 4296Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 76m 0% 995Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 62m 0% 4925Mi 8% gke-xlou-cdm-default-pool-f05840a3-tnc9 282m 1% 2578Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 76m 0% 5437Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 1976m 12% 4402Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 60m 0% 957Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 2640m 16% 14292Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 64m 0% 14090Mi 
24% gke-xlou-cdm-ds-32e4dcb1-l2t2 61m 0% 958Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 55m 0% 981Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 64m 0% 14153Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 265m 1% 2341Mi 3% 05:28:21 DEBUG --- stderr --- 05:28:21 DEBUG 05:29:18 INFO 05:29:18 INFO [loop_until]: kubectl --namespace=xlou top pods 05:29:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:19 INFO [loop_until]: OK (rc = 0) 05:29:19 DEBUG --- stdout --- 05:29:19 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 7m 3056Mi am-869fdb5db9-8dg94 5m 4634Mi am-869fdb5db9-wt7sg 6m 4076Mi ds-cts-0 6m 404Mi ds-cts-1 7m 374Mi ds-cts-2 6m 378Mi ds-idrepo-0 2860m 13803Mi ds-idrepo-1 12m 13673Mi ds-idrepo-2 11m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 2356m 3633Mi idm-65858d8c4c-pt5s9 2198m 3714Mi lodemon-66684b7694-c5c6m 1m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 193m 540Mi 05:29:19 DEBUG --- stderr --- 05:29:19 DEBUG 05:29:21 INFO 05:29:21 INFO [loop_until]: kubectl --namespace=xlou top node 05:29:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:29:21 INFO [loop_until]: OK (rc = 0) 05:29:21 DEBUG --- stdout --- 05:29:21 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 62m 0% 3839Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 2371m 14% 4294Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 76m 0% 996Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 66m 0% 4935Mi 8% gke-xlou-cdm-default-pool-f05840a3-tnc9 288m 1% 2576Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 62m 0% 5441Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 2254m 14% 4377Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 55m 0% 957Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 2985m 18% 14267Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 64m 0% 14090Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 60m 0% 951Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 56m 0% 980Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 66m 0% 14152Mi 24% 
gke-xlou-cdm-frontend-a8771548-k40m 262m 1% 2340Mi 3% 05:29:21 DEBUG --- stderr --- 05:29:21 DEBUG 05:30:19 INFO 05:30:19 INFO [loop_until]: kubectl --namespace=xlou top pods 05:30:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:30:19 INFO [loop_until]: OK (rc = 0) 05:30:19 DEBUG --- stdout --- 05:30:19 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 7m 3067Mi am-869fdb5db9-8dg94 6m 4636Mi am-869fdb5db9-wt7sg 10m 4088Mi ds-cts-0 6m 404Mi ds-cts-1 7m 374Mi ds-cts-2 6m 378Mi ds-idrepo-0 11m 13800Mi ds-idrepo-1 11m 13672Mi ds-idrepo-2 11m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 5m 3634Mi idm-65858d8c4c-pt5s9 8m 3714Mi lodemon-66684b7694-c5c6m 1m 67Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 41m 105Mi 05:30:19 DEBUG --- stderr --- 05:30:19 DEBUG 05:30:21 INFO 05:30:21 INFO [loop_until]: kubectl --namespace=xlou top node 05:30:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:30:21 INFO [loop_until]: OK (rc = 0) 05:30:21 DEBUG --- stdout --- 05:30:21 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 64m 0% 3848Mi 6% gke-xlou-cdm-default-pool-f05840a3-jnx6 84m 0% 4293Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 81m 0% 998Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 69m 0% 4945Mi 8% gke-xlou-cdm-default-pool-f05840a3-tnc9 105m 0% 2576Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 65m 0% 5442Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 77m 0% 4380Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 55m 0% 954Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 59m 0% 14270Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 62m 0% 14092Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 951Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 60m 0% 994Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 63m 0% 14154Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 88m 0% 1914Mi 3% 05:30:21 DEBUG --- stderr --- 05:30:21 DEBUG 05:31:19 INFO 05:31:19 INFO [loop_until]: kubectl --namespace=xlou top pods 05:31:19 
INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:31:19 INFO [loop_until]: OK (rc = 0)
05:31:19 DEBUG --- stdout ---
05:31:19 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 7m 3075Mi
am-869fdb5db9-8dg94 8m 4636Mi
am-869fdb5db9-wt7sg 7m 4099Mi
ds-cts-0 6m 404Mi
ds-cts-1 7m 374Mi
ds-cts-2 6m 378Mi
ds-idrepo-0 10m 13801Mi
ds-idrepo-1 12m 13673Mi
ds-idrepo-2 11m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 5m 3634Mi
idm-65858d8c4c-pt5s9 7m 3713Mi
lodemon-66684b7694-c5c6m 2m 67Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 1m 105Mi
05:31:19 DEBUG --- stderr ---
05:31:19 DEBUG
05:31:21 INFO
05:31:21 INFO [loop_until]: kubectl --namespace=xlou top node
05:31:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:31:21 INFO [loop_until]: OK (rc = 0)
05:31:21 DEBUG --- stdout ---
05:31:21 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 63m 0% 3863Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 81m 0% 4298Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 77m 0% 999Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 65m 0% 4955Mi 8%
gke-xlou-cdm-default-pool-f05840a3-tnc9 107m 0% 2573Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 66m 0% 5443Mi 9%
gke-xlou-cdm-default-pool-f05840a3-zj9v 74m 0% 4378Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 55m 0% 956Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 62m 0% 14281Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 62m 0% 14091Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 60m 0% 950Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 55m 0% 984Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 64m 0% 14156Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 78m 0% 1923Mi 3%
05:31:21 DEBUG --- stderr ---
05:31:21 DEBUG
127.0.0.1 - - [16/Aug/2023 05:31:36] "GET /monitoring/average?start_time=23-08-16_04:01:11&stop_time=23-08-16_04:29:35 HTTP/1.1" 200 -
05:32:19 INFO
05:32:19 INFO [loop_until]: kubectl --namespace=xlou top pods
05:32:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:32:19 INFO [loop_until]: OK (rc = 0)
05:32:19 DEBUG --- stdout ---
05:32:19 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 7m 3088Mi
am-869fdb5db9-8dg94 5m 4636Mi
am-869fdb5db9-wt7sg 58m 4227Mi
ds-cts-0 7m 404Mi
ds-cts-1 8m 374Mi
ds-cts-2 7m 378Mi
ds-idrepo-0 896m 13823Mi
ds-idrepo-1 12m 13673Mi
ds-idrepo-2 11m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 38m 3634Mi
idm-65858d8c4c-pt5s9 805m 3717Mi
lodemon-66684b7694-c5c6m 4m 68Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 708m 475Mi
05:32:19 DEBUG --- stderr ---
05:32:19 DEBUG
05:32:21 INFO
05:32:21 INFO [loop_until]: kubectl --namespace=xlou top node
05:32:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:32:21 INFO [loop_until]: OK (rc = 0)
05:32:21 DEBUG --- stdout ---
05:32:21 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 63m 0% 3873Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 1101m 6% 4315Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 81m 0% 996Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 126m 0% 5083Mi 8%
gke-xlou-cdm-default-pool-f05840a3-tnc9 180m 1% 2577Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 64m 0% 5442Mi 9%
gke-xlou-cdm-default-pool-f05840a3-zj9v 912m 5% 4380Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 56m 0% 952Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 571m 3% 14272Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 62m 0% 14093Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 60m 0% 948Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 56m 0% 981Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 64m 0% 14155Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 724m 4% 2276Mi 3%
05:32:21 DEBUG --- stderr ---
05:32:21 DEBUG
05:33:19 INFO
05:33:19 INFO [loop_until]: kubectl --namespace=xlou top pods
05:33:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:33:19 INFO [loop_until]: OK (rc = 0)
05:33:19 DEBUG --- stdout ---
05:33:19 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 7m 3097Mi
am-869fdb5db9-8dg94 5m 4636Mi
am-869fdb5db9-wt7sg 6m 4236Mi
ds-cts-0 7m 404Mi
ds-cts-1 7m 374Mi
ds-cts-2 6m 378Mi
ds-idrepo-0 3834m 13806Mi
ds-idrepo-1 12m 13673Mi
ds-idrepo-2 11m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 3254m 3642Mi
idm-65858d8c4c-pt5s9 2489m 3721Mi
lodemon-66684b7694-c5c6m 2m 68Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 345m 503Mi
05:33:19 DEBUG --- stderr ---
05:33:19 DEBUG
05:33:21 INFO
05:33:21 INFO [loop_until]: kubectl --namespace=xlou top node
05:33:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:33:21 INFO [loop_until]: OK (rc = 0)
05:33:21 DEBUG --- stdout ---
05:33:21 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 62m 0% 3884Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 3610m 22% 4301Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 74m 0% 996Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 65m 0% 5094Mi 8%
gke-xlou-cdm-default-pool-f05840a3-tnc9 309m 1% 2578Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 63m 0% 5445Mi 9%
gke-xlou-cdm-default-pool-f05840a3-zj9v 2495m 15% 4386Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 55m 0% 955Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 3565m 22% 14278Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 61m 0% 14095Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 57m 0% 950Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 55m 0% 985Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 62m 0% 14157Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 425m 2% 2305Mi 3%
05:33:21 DEBUG --- stderr ---
05:33:21 DEBUG
05:34:19 INFO
05:34:19 INFO [loop_until]: kubectl --namespace=xlou top pods
05:34:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:34:19 INFO [loop_until]: OK (rc = 0)
05:34:19 DEBUG --- stdout ---
05:34:19 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 7m 3108Mi
am-869fdb5db9-8dg94 6m 4636Mi
am-869fdb5db9-wt7sg 7m 4246Mi
ds-cts-0 6m 404Mi
ds-cts-1 7m 374Mi
ds-cts-2 6m 378Mi
ds-idrepo-0 3252m 13804Mi
ds-idrepo-1 11m 13673Mi
ds-idrepo-2 11m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 3096m 3646Mi
idm-65858d8c4c-pt5s9 2301m 3724Mi
lodemon-66684b7694-c5c6m 2m 68Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 328m 502Mi
05:34:19 DEBUG --- stderr ---
05:34:19 DEBUG
05:34:21 INFO
05:34:21 INFO [loop_until]: kubectl --namespace=xlou top node
05:34:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:34:22 INFO [loop_until]: OK (rc = 0)
05:34:22 DEBUG --- stdout ---
05:34:22 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 63m 0% 3894Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 2974m 18% 4307Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 76m 0% 995Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 66m 0% 5104Mi 8%
gke-xlou-cdm-default-pool-f05840a3-tnc9 313m 1% 2579Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 63m 0% 5442Mi 9%
gke-xlou-cdm-default-pool-f05840a3-zj9v 2320m 14% 4389Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 55m 0% 954Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 3342m 21% 14291Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 64m 0% 14094Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 946Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 56m 0% 983Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 62m 0% 14155Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 365m 2% 2311Mi 3%
05:34:22 DEBUG --- stderr ---
05:34:22 DEBUG
05:35:19 INFO
05:35:19 INFO [loop_until]: kubectl --namespace=xlou top pods
05:35:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:35:19 INFO [loop_until]: OK (rc = 0)
05:35:19 DEBUG --- stdout ---
05:35:19 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 7m 3119Mi
am-869fdb5db9-8dg94 5m 4636Mi
am-869fdb5db9-wt7sg 7m 4257Mi
ds-cts-0 6m 405Mi
ds-cts-1 7m 375Mi
ds-cts-2 6m 378Mi
ds-idrepo-0 3033m 13802Mi
ds-idrepo-1 12m 13672Mi
ds-idrepo-2 11m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 2772m 3650Mi
idm-65858d8c4c-pt5s9 1976m 3727Mi
lodemon-66684b7694-c5c6m 2m 68Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 244m 506Mi
05:35:19 DEBUG --- stderr ---
05:35:19 DEBUG
05:35:22 INFO
05:35:22 INFO [loop_until]: kubectl --namespace=xlou top node
05:35:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:35:22 INFO [loop_until]: OK (rc = 0)
05:35:22 DEBUG --- stdout ---
05:35:22 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 63m 0% 3904Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 2874m 18% 4313Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 75m 0% 997Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 61m 0% 5114Mi 8%
gke-xlou-cdm-default-pool-f05840a3-tnc9 302m 1% 2582Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 62m 0% 5442Mi 9%
gke-xlou-cdm-default-pool-f05840a3-zj9v 2046m 12% 4392Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 55m 0% 955Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 3051m 19% 14276Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14093Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 62m 0% 949Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 54m 0% 984Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 62m 0% 14156Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 319m 2% 2307Mi 3%
05:35:22 DEBUG --- stderr ---
05:35:22 DEBUG
05:36:19 INFO
05:36:19 INFO [loop_until]: kubectl --namespace=xlou top pods
05:36:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:36:19 INFO [loop_until]: OK (rc = 0)
05:36:19 DEBUG --- stdout ---
05:36:19 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 7m 3129Mi
am-869fdb5db9-8dg94 5m 4636Mi
am-869fdb5db9-wt7sg 17m 4269Mi
ds-cts-0 6m 404Mi
ds-cts-1 7m 374Mi
ds-cts-2 6m 378Mi
ds-idrepo-0 3381m 13798Mi
ds-idrepo-1 12m 13673Mi
ds-idrepo-2 11m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 2769m 3653Mi
idm-65858d8c4c-pt5s9 2509m 3729Mi
lodemon-66684b7694-c5c6m 2m 68Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 274m 540Mi
05:36:19 DEBUG --- stderr ---
05:36:19 DEBUG
05:36:22 INFO
05:36:22 INFO [loop_until]: kubectl --namespace=xlou top node
05:36:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:36:22 INFO [loop_until]: OK (rc = 0)
05:36:22 DEBUG --- stdout ---
05:36:22 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 62m 0% 3912Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 2954m 18% 4314Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 76m 0% 993Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 74m 0% 5125Mi 8%
gke-xlou-cdm-default-pool-f05840a3-tnc9 323m 2% 2578Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 64m 0% 5445Mi 9%
gke-xlou-cdm-default-pool-f05840a3-zj9v 2606m 16% 4389Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 56m 0% 955Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 3414m 21% 14268Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14093Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 60m 0% 952Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 55m 0% 985Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 62m 0% 14155Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 324m 2% 2341Mi 3%
05:36:22 DEBUG --- stderr ---
05:36:22 DEBUG
05:37:19 INFO
05:37:19 INFO [loop_until]: kubectl --namespace=xlou top pods
05:37:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:37:19 INFO [loop_until]: OK (rc = 0)
05:37:19 DEBUG --- stdout ---
05:37:19 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 7m 3140Mi
am-869fdb5db9-8dg94 5m 4636Mi
am-869fdb5db9-wt7sg 9m 4279Mi
ds-cts-0 6m 404Mi
ds-cts-1 7m 374Mi
ds-cts-2 6m 378Mi
ds-idrepo-0 3239m 13805Mi
ds-idrepo-1 12m 13672Mi
ds-idrepo-2 11m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 2778m 3654Mi
idm-65858d8c4c-pt5s9 2252m 3731Mi
lodemon-66684b7694-c5c6m 2m 68Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 223m 539Mi
05:37:19 DEBUG --- stderr ---
05:37:19 DEBUG
05:37:22 INFO
05:37:22 INFO [loop_until]: kubectl --namespace=xlou top node
05:37:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:37:22 INFO [loop_until]: OK (rc = 0)
05:37:22 DEBUG --- stdout ---
05:37:22 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 63m 0% 3924Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 2964m 18% 4327Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 76m 0% 997Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 67m 0% 5138Mi 8%
gke-xlou-cdm-default-pool-f05840a3-tnc9 316m 1% 2573Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 59m 0% 5443Mi 9%
gke-xlou-cdm-default-pool-f05840a3-zj9v 2342m 14% 4397Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 55m 0% 955Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 3372m 21% 14270Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14093Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 61m 0% 949Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 56m 0% 984Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 60m 0% 14155Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 294m 1% 2341Mi 3%
05:37:22 DEBUG --- stderr ---
05:37:22 DEBUG
05:38:19 INFO
05:38:19 INFO [loop_until]: kubectl --namespace=xlou top pods
05:38:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:38:19 INFO [loop_until]: OK (rc = 0)
05:38:19 DEBUG --- stdout ---
05:38:19 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 8m 3150Mi
am-869fdb5db9-8dg94 6m 4636Mi
am-869fdb5db9-wt7sg 7m 4289Mi
ds-cts-0 6m 404Mi
ds-cts-1 7m 374Mi
ds-cts-2 6m 378Mi
ds-idrepo-0 3220m 13803Mi
ds-idrepo-1 12m 13673Mi
ds-idrepo-2 11m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 2779m 3657Mi
idm-65858d8c4c-pt5s9 2303m 3734Mi
lodemon-66684b7694-c5c6m 2m 68Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 218m 542Mi
05:38:19 DEBUG --- stderr ---
05:38:19 DEBUG
05:38:22 INFO
05:38:22 INFO [loop_until]: kubectl --namespace=xlou top node
05:38:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:38:22 INFO [loop_until]: OK (rc = 0)
05:38:22 DEBUG --- stdout ---
05:38:22 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 63m 0% 3937Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 2931m 18% 4317Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 75m 0% 995Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 63m 0% 5144Mi 8%
gke-xlou-cdm-default-pool-f05840a3-tnc9 311m 1% 2575Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 64m 0% 5443Mi 9%
gke-xlou-cdm-default-pool-f05840a3-zj9v 2310m 14% 4395Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 53m 0% 951Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 3330m 20% 14278Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 60m 0% 14093Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 60m 0% 951Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 56m 0% 984Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 61m 0% 14152Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 285m 1% 2344Mi 3%
05:38:22 DEBUG --- stderr ---
05:38:22 DEBUG
05:39:19 INFO
05:39:19 INFO [loop_until]: kubectl --namespace=xlou top pods
05:39:19 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:39:19 INFO [loop_until]: OK (rc = 0)
05:39:19 DEBUG --- stdout ---
05:39:19 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 7m 3161Mi
am-869fdb5db9-8dg94 5m 4636Mi
am-869fdb5db9-wt7sg 7m 4301Mi
ds-cts-0 6m 404Mi
ds-cts-1 7m 374Mi
ds-cts-2 6m 379Mi
ds-idrepo-0 3662m 13806Mi
ds-idrepo-1 11m 13672Mi
ds-idrepo-2 11m 13645Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 3366m 3661Mi
idm-65858d8c4c-pt5s9 2576m 3739Mi
lodemon-66684b7694-c5c6m 2m 68Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 236m 542Mi
05:39:19 DEBUG --- stderr ---
05:39:19 DEBUG
05:39:22 INFO
05:39:22 INFO [loop_until]: kubectl --namespace=xlou top node
05:39:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:39:22 INFO [loop_until]: OK (rc = 0)
05:39:22 DEBUG --- stdout ---
05:39:22 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 61m 0% 3947Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 3348m 21% 4321Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 77m 0% 997Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 64m 0% 5157Mi 8%
gke-xlou-cdm-default-pool-f05840a3-tnc9 336m 2% 2570Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 62m 0% 5445Mi 9%
gke-xlou-cdm-default-pool-f05840a3-zj9v 2698m 16% 4396Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 57m 0% 952Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 3639m 22% 14275Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14093Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 951Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 55m 0% 984Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 61m 0% 14152Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 303m 1% 2346Mi 4%
05:39:22 DEBUG --- stderr ---
05:39:22 DEBUG
05:40:20 INFO
05:40:20 INFO [loop_until]: kubectl --namespace=xlou top pods
05:40:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:40:20 INFO [loop_until]: OK (rc = 0)
05:40:20 DEBUG --- stdout ---
05:40:20 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 7m 3173Mi
am-869fdb5db9-8dg94 6m 4636Mi
am-869fdb5db9-wt7sg 8m 4309Mi
ds-cts-0 6m 404Mi
ds-cts-1 7m 374Mi
ds-cts-2 6m 378Mi
ds-idrepo-0 3233m 13822Mi
ds-idrepo-1 11m 13672Mi
ds-idrepo-2 11m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 2880m 3691Mi
idm-65858d8c4c-pt5s9 2451m 3741Mi
lodemon-66684b7694-c5c6m 2m 68Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 228m 543Mi
05:40:20 DEBUG --- stderr ---
05:40:20 DEBUG
05:40:22 INFO
05:40:22 INFO [loop_until]: kubectl --namespace=xlou top node
05:40:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:40:22 INFO [loop_until]: OK (rc = 0)
05:40:22 DEBUG --- stdout ---
05:40:22 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 64m 0% 3956Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 2975m 18% 4351Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 83m 0% 995Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 62m 0% 5167Mi 8%
gke-xlou-cdm-default-pool-f05840a3-tnc9 312m 1% 2573Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 62m 0% 5441Mi 9%
gke-xlou-cdm-default-pool-f05840a3-zj9v 2498m 15% 4404Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 56m 0% 955Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 3398m 21% 14279Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 62m 0% 14095Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 61m 0% 949Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 55m 0% 982Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 60m 0% 14158Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 293m 1% 2345Mi 3%
05:40:22 DEBUG --- stderr ---
05:40:22 DEBUG
05:41:20 INFO
05:41:20 INFO [loop_until]: kubectl --namespace=xlou top pods
05:41:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:41:20 INFO [loop_until]: OK (rc = 0)
05:41:20 DEBUG --- stdout ---
05:41:20 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 7m 3182Mi
am-869fdb5db9-8dg94 7m 4636Mi
am-869fdb5db9-wt7sg 7m 4321Mi
ds-cts-0 6m 404Mi
ds-cts-1 7m 374Mi
ds-cts-2 6m 378Mi
ds-idrepo-0 3320m 13805Mi
ds-idrepo-1 11m 13672Mi
ds-idrepo-2 11m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 3059m 3669Mi
idm-65858d8c4c-pt5s9 2309m 3742Mi
lodemon-66684b7694-c5c6m 2m 68Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 220m 543Mi
05:41:20 DEBUG --- stderr ---
05:41:20 DEBUG
05:41:22 INFO
05:41:22 INFO [loop_until]: kubectl --namespace=xlou top node
05:41:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:41:23 INFO [loop_until]: OK (rc = 0)
05:41:23 DEBUG --- stdout ---
05:41:23 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 67m 0% 3978Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 2994m 18% 4329Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 79m 0% 995Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 64m 0% 5177Mi 8%
gke-xlou-cdm-default-pool-f05840a3-tnc9 320m 2% 2573Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 62m 0% 5446Mi 9%
gke-xlou-cdm-default-pool-f05840a3-zj9v 2375m 14% 4405Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 54m 0% 954Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 3462m 21% 14286Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 62m 0% 14092Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 947Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 55m 0% 983Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 60m 0% 14160Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 291m 1% 2345Mi 3%
05:41:23 DEBUG --- stderr ---
05:41:23 DEBUG
05:42:20 INFO
05:42:20 INFO [loop_until]: kubectl --namespace=xlou top pods
05:42:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:42:20 INFO [loop_until]: OK (rc = 0)
05:42:20 DEBUG --- stdout ---
05:42:20 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 7m 3193Mi
am-869fdb5db9-8dg94 8m 4636Mi
am-869fdb5db9-wt7sg 10m 4433Mi
ds-cts-0 6m 404Mi
ds-cts-1 6m 375Mi
ds-cts-2 6m 379Mi
ds-idrepo-0 3607m 13822Mi
ds-idrepo-1 11m 13673Mi
ds-idrepo-2 11m 13645Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 2981m 3688Mi
idm-65858d8c4c-pt5s9 2333m 3746Mi
lodemon-66684b7694-c5c6m 2m 68Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 223m 543Mi
05:42:20 DEBUG --- stderr ---
05:42:20 DEBUG
05:42:23 INFO
05:42:23 INFO [loop_until]: kubectl --namespace=xlou top node
05:42:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:42:23 INFO [loop_until]: OK (rc = 0)
05:42:23 DEBUG --- stdout ---
05:42:23 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 64m 0% 3974Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 3161m 19% 4335Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 81m 0% 992Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 63m 0% 5190Mi 8%
gke-xlou-cdm-default-pool-f05840a3-tnc9 321m 2% 2573Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 65m 0% 5445Mi 9%
gke-xlou-cdm-default-pool-f05840a3-zj9v 2540m 15% 4410Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 54m 0% 958Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 3486m 21% 14283Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 59m 0% 14095Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 948Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 55m 0% 983Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 62m 0% 14159Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 301m 1% 2345Mi 3%
05:42:23 DEBUG --- stderr ---
05:42:23 DEBUG
05:43:20 INFO
05:43:20 INFO [loop_until]: kubectl --namespace=xlou top pods
05:43:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:43:20 INFO [loop_until]: OK (rc = 0)
05:43:20 DEBUG --- stdout ---
05:43:20 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 8m 3202Mi
am-869fdb5db9-8dg94 18m 4637Mi
am-869fdb5db9-wt7sg 12m 4341Mi
ds-cts-0 6m 405Mi
ds-cts-1 7m 375Mi
ds-cts-2 6m 379Mi
ds-idrepo-0 3355m 13801Mi
ds-idrepo-1 11m 13672Mi
ds-idrepo-2 11m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 2778m 3677Mi
idm-65858d8c4c-pt5s9 2598m 3750Mi
lodemon-66684b7694-c5c6m 2m 68Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 230m 544Mi
05:43:20 DEBUG --- stderr ---
05:43:20 DEBUG
05:43:23 INFO
05:43:23 INFO [loop_until]: kubectl --namespace=xlou top node
05:43:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:43:23 INFO [loop_until]: OK (rc = 0)
05:43:23 DEBUG --- stdout ---
05:43:23 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 58m 0% 3989Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 2955m 18% 4334Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 81m 0% 994Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 61m 0% 5203Mi 8%
gke-xlou-cdm-default-pool-f05840a3-tnc9 321m 2% 2576Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 70m 0% 5444Mi 9%
gke-xlou-cdm-default-pool-f05840a3-zj9v 2490m 15% 4412Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 55m 0% 959Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 3418m 21% 14279Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 61m 0% 14096Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 61m 0% 951Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 54m 0% 983Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 62m 0% 14157Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 287m 1% 2347Mi 4%
05:43:23 DEBUG --- stderr ---
05:43:23 DEBUG
05:44:20 INFO
05:44:20 INFO [loop_until]: kubectl --namespace=xlou top pods
05:44:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:44:20 INFO [loop_until]: OK (rc = 0)
05:44:20 DEBUG --- stdout ---
05:44:20 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 7m 3213Mi
am-869fdb5db9-8dg94 5m 4637Mi
am-869fdb5db9-wt7sg 7m 4352Mi
ds-cts-0 6m 404Mi
ds-cts-1 7m 375Mi
ds-cts-2 6m 378Mi
ds-idrepo-0 3610m 13824Mi
ds-idrepo-1 11m 13672Mi
ds-idrepo-2 11m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 3095m 3679Mi
idm-65858d8c4c-pt5s9 2589m 3775Mi
lodemon-66684b7694-c5c6m 2m 68Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 249m 544Mi
05:44:20 DEBUG --- stderr ---
05:44:20 DEBUG
05:44:23 INFO
05:44:23 INFO [loop_until]: kubectl --namespace=xlou top node
05:44:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:44:23 INFO [loop_until]: OK (rc = 0)
05:44:23 DEBUG --- stdout ---
05:44:23 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 64m 0% 3998Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 3141m 19% 4351Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 78m 0% 993Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 66m 0% 5211Mi 8%
gke-xlou-cdm-default-pool-f05840a3-tnc9 332m 2% 2578Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 61m 0% 5444Mi 9%
gke-xlou-cdm-default-pool-f05840a3-zj9v 2709m 17% 4439Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 55m 0% 956Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 3805m 23% 14296Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 64m 0% 14092Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 947Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 55m 0% 983Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 61m 0% 14160Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 311m 1% 2345Mi 4%
05:44:23 DEBUG --- stderr ---
05:44:23 DEBUG
05:45:20 INFO
05:45:20 INFO [loop_until]: kubectl --namespace=xlou top pods
05:45:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:45:20 INFO [loop_until]: OK (rc = 0)
05:45:20 DEBUG --- stdout ---
05:45:20 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 6m 3225Mi
am-869fdb5db9-8dg94 6m 4637Mi
am-869fdb5db9-wt7sg 6m 4364Mi
ds-cts-0 6m 404Mi
ds-cts-1 7m 375Mi
ds-cts-2 6m 378Mi
ds-idrepo-0 3309m 13822Mi
ds-idrepo-1 11m 13672Mi
ds-idrepo-2 12m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 2642m 3681Mi
idm-65858d8c4c-pt5s9 2399m 3755Mi
lodemon-66684b7694-c5c6m 2m 68Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 220m 544Mi
05:45:20 DEBUG --- stderr ---
05:45:20 DEBUG
05:45:23 INFO
05:45:23 INFO [loop_until]: kubectl --namespace=xlou top node
05:45:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:45:23 INFO [loop_until]: OK (rc = 0)
05:45:23 DEBUG --- stdout ---
05:45:23 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 61m 0% 4008Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 2848m 17% 4341Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 80m 0% 992Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 64m 0% 5220Mi 8%
gke-xlou-cdm-default-pool-f05840a3-tnc9 324m 2% 2583Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 63m 0% 5444Mi 9%
gke-xlou-cdm-default-pool-f05840a3-zj9v 2481m 15% 4419Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 56m 0% 956Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 3421m 21% 14295Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 64m 0% 14095Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 60m 0% 948Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 54m 0% 980Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 60m 0% 14158Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 293m 1% 2344Mi 3%
05:45:23 DEBUG --- stderr ---
05:45:23 DEBUG
05:46:20 INFO
05:46:20 INFO [loop_until]: kubectl --namespace=xlou top pods
05:46:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:46:20 INFO [loop_until]: OK (rc = 0)
05:46:20 DEBUG --- stdout ---
05:46:20 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 8m 3235Mi
am-869fdb5db9-8dg94 6m 4637Mi
am-869fdb5db9-wt7sg 7m 4373Mi
ds-cts-0 6m 404Mi
ds-cts-1 7m 375Mi
ds-cts-2 6m 378Mi
ds-idrepo-0 3892m 13822Mi
ds-idrepo-1 11m 13672Mi
ds-idrepo-2 11m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 3446m 3686Mi
idm-65858d8c4c-pt5s9 2547m 3761Mi
lodemon-66684b7694-c5c6m 2m 68Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 233m 545Mi
05:46:20 DEBUG --- stderr ---
05:46:20 DEBUG
05:46:23 INFO
05:46:23 INFO [loop_until]: kubectl --namespace=xlou top node
05:46:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:46:23 INFO [loop_until]: OK (rc = 0)
05:46:23 DEBUG --- stdout ---
05:46:23 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 63m 0% 4018Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 3258m 20% 4343Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 87m 0% 1017Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 61m 0% 5230Mi 8%
gke-xlou-cdm-default-pool-f05840a3-tnc9 333m 2% 2579Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 62m 0% 5445Mi 9%
gke-xlou-cdm-default-pool-f05840a3-zj9v 2697m 16% 4425Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 52m 0% 957Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 3634m 22% 14282Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 59m 0% 14097Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 61m 0% 950Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 55m 0% 981Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 61m 0% 14157Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 315m 1% 2345Mi 4%
05:46:23 DEBUG --- stderr ---
05:46:23 DEBUG
05:47:20 INFO
05:47:20 INFO [loop_until]: kubectl --namespace=xlou top pods
05:47:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:47:20 INFO [loop_until]: OK (rc = 0)
05:47:20 DEBUG --- stdout ---
05:47:20 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 5Mi
am-869fdb5db9-5j69v 7m 3246Mi
am-869fdb5db9-8dg94 6m 4637Mi
am-869fdb5db9-wt7sg 7m 4382Mi
ds-cts-0 6m 404Mi
ds-cts-1 7m 375Mi
ds-cts-2 6m 378Mi
ds-idrepo-0 3172m 13805Mi
ds-idrepo-1 15m 13672Mi
ds-idrepo-2 10m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 2761m 3688Mi
idm-65858d8c4c-pt5s9 2250m 3770Mi
lodemon-66684b7694-c5c6m 2m 68Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 234m 544Mi
05:47:20 DEBUG --- stderr ---
05:47:20 DEBUG
05:47:23 INFO
05:47:23 INFO [loop_until]: kubectl --namespace=xlou top node
05:47:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:47:23 INFO [loop_until]: OK (rc = 0)
05:47:23 DEBUG --- stdout ---
05:47:23 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 60m 0% 4029Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 2891m 18% 4346Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 77m 0% 998Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 65m 0% 5244Mi 8%
gke-xlou-cdm-default-pool-f05840a3-tnc9 310m 1% 2583Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 64m 0% 5444Mi 9%
gke-xlou-cdm-default-pool-f05840a3-zj9v 2326m 14% 4433Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 56m 0% 953Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 3269m 20% 14283Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 61m 0% 14096Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 60m 0% 950Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 54m 0% 977Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 66m 0% 14156Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 294m 1% 2348Mi 4%
05:47:23 DEBUG --- stderr ---
05:47:23 DEBUG
05:48:20 INFO
05:48:20 INFO [loop_until]: kubectl --namespace=xlou top pods
05:48:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:48:20 INFO [loop_until]: OK (rc = 0)
05:48:20 DEBUG --- stdout ---
05:48:20 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 8m 3257Mi
am-869fdb5db9-8dg94 8m 4637Mi
am-869fdb5db9-wt7sg 6m 4395Mi
ds-cts-0 6m 404Mi
ds-cts-1 7m 375Mi
ds-cts-2 6m 378Mi
ds-idrepo-0 3558m 13804Mi
ds-idrepo-1 11m 13672Mi
ds-idrepo-2 15m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 3223m 3692Mi
idm-65858d8c4c-pt5s9 2382m 3766Mi
lodemon-66684b7694-c5c6m 2m 68Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 225m 545Mi
05:48:20 DEBUG --- stderr ---
05:48:20 DEBUG
05:48:23 INFO
05:48:23 INFO [loop_until]: kubectl --namespace=xlou top node
05:48:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:48:23 INFO [loop_until]: OK (rc = 0)
05:48:23 DEBUG --- stdout ---
05:48:23 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 63m 0% 4041Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 3216m 20% 4349Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 80m 0% 997Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 65m 0% 5254Mi 8%
gke-xlou-cdm-default-pool-f05840a3-tnc9 327m 2% 2581Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 61m 0% 5444Mi 9%
gke-xlou-cdm-default-pool-f05840a3-zj9v 2424m 15% 4429Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 56m 0% 959Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 3694m 23% 14280Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 67m 0% 14095Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 951Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 57m 0% 980Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 63m 0% 14161Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 304m 1% 2345Mi 4%
05:48:23 DEBUG --- stderr ---
05:48:23 DEBUG
05:49:20 INFO
05:49:20 INFO [loop_until]: kubectl --namespace=xlou top pods
05:49:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:49:20 INFO [loop_until]: OK (rc = 0)
05:49:20 DEBUG --- stdout ---
05:49:20 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 10m 3267Mi
am-869fdb5db9-8dg94 5m 4637Mi
am-869fdb5db9-wt7sg 9m 4405Mi
ds-cts-0 6m 404Mi
ds-cts-1 8m 375Mi
ds-cts-2 6m 379Mi
ds-idrepo-0 3483m 13822Mi
ds-idrepo-1 11m 13672Mi
ds-idrepo-2 13m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 2980m 3696Mi
idm-65858d8c4c-pt5s9 2583m 3769Mi
lodemon-66684b7694-c5c6m 2m 68Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 231m 545Mi
05:49:20 DEBUG --- stderr ---
05:49:20 DEBUG
05:49:23 INFO
05:49:23 INFO [loop_until]: kubectl --namespace=xlou top node
05:49:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:49:23 INFO [loop_until]: OK (rc = 0)
05:49:23 DEBUG --- stdout ---
05:49:23 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 64m 0% 4049Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 3183m 20% 4354Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 79m 0% 996Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 67m 0% 5265Mi 8%
gke-xlou-cdm-default-pool-f05840a3-tnc9 316m 1% 2575Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 63m 0% 5446Mi 9%
gke-xlou-cdm-default-pool-f05840a3-zj9v 2704m 17% 4430Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 55m 0% 957Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 3549m 22% 14289Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 65m 0% 14097Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 63m 0% 949Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 56m 0% 981Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 61m 0% 14161Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 308m 1% 2349Mi 4%
05:49:23 DEBUG --- stderr ---
05:49:23 DEBUG
05:50:21 INFO
05:50:21 INFO [loop_until]: kubectl --namespace=xlou top pods
05:50:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:50:21 INFO [loop_until]: OK (rc = 0)
05:50:21 DEBUG --- stdout ---
05:50:21 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 7m 3278Mi
am-869fdb5db9-8dg94 6m 4637Mi
am-869fdb5db9-wt7sg 7m 4415Mi
ds-cts-0 6m 404Mi
ds-cts-1 7m 375Mi
ds-cts-2 6m 379Mi
ds-idrepo-0 3469m 13822Mi
ds-idrepo-1 12m 13672Mi
ds-idrepo-2 12m 13645Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 3104m 3699Mi
idm-65858d8c4c-pt5s9 2250m 3772Mi
lodemon-66684b7694-c5c6m 2m 68Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 231m 545Mi
05:50:21 DEBUG --- stderr ---
05:50:21 DEBUG
05:50:23 INFO
05:50:23 INFO [loop_until]: kubectl --namespace=xlou top node
05:50:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:50:24 INFO [loop_until]: OK (rc = 0)
05:50:24 DEBUG --- stdout ---
05:50:24 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 62m 0% 4060Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 3221m 20% 4355Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 83m 0% 998Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 64m 0% 5275Mi 8%
gke-xlou-cdm-default-pool-f05840a3-tnc9 322m 2% 2579Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 64m 0% 5442Mi 9%
gke-xlou-cdm-default-pool-f05840a3-zj9v 2383m 14% 4434Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 55m 0% 960Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 3486m 21% 14282Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14098Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 60m 0% 950Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 56m 0% 981Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 64m 0% 14160Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 302m 1% 2348Mi 4%
05:50:24 DEBUG --- stderr ---
05:50:24 DEBUG
05:51:21 INFO
05:51:21 INFO [loop_until]: kubectl --namespace=xlou top pods
05:51:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:51:21 INFO [loop_until]: OK (rc = 0)
05:51:21 DEBUG --- stdout ---
05:51:21 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 7m 3287Mi
am-869fdb5db9-8dg94 5m 4637Mi
am-869fdb5db9-wt7sg 7m 4424Mi
ds-cts-0 7m 404Mi
ds-cts-1 7m 377Mi
ds-cts-2 6m 379Mi
ds-idrepo-0 3387m 13805Mi
ds-idrepo-1 26m 13672Mi
ds-idrepo-2 12m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 2959m 3707Mi
idm-65858d8c4c-pt5s9 2222m 3774Mi
lodemon-66684b7694-c5c6m 2m 68Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 226m 545Mi
05:51:21 DEBUG --- stderr ---
05:51:21 DEBUG
05:51:24 INFO
05:51:24 INFO [loop_until]: kubectl --namespace=xlou top node
05:51:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:51:24 INFO [loop_until]: OK (rc = 0)
05:51:24 DEBUG --- stdout ---
05:51:24 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 62m 0% 4072Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 3002m 18% 4366Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 81m 0% 996Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 63m 0% 5286Mi 9%
gke-xlou-cdm-default-pool-f05840a3-tnc9 319m 2% 2580Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 62m 0% 5442Mi 9%
gke-xlou-cdm-default-pool-f05840a3-zj9v 2404m 15% 4435Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 56m 0% 958Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 3231m 20% 14304Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 67m 0% 14097Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 60m 0% 951Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 53m 0% 981Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 80m 0% 14160Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 294m 1% 2348Mi 4%
05:51:24 DEBUG --- stderr ---
05:51:24 DEBUG
05:52:21 INFO
05:52:21 INFO [loop_until]: kubectl --namespace=xlou top pods
05:52:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:52:21 INFO [loop_until]: OK (rc = 0)
05:52:21 DEBUG --- stdout ---
05:52:21 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 7m 3299Mi
am-869fdb5db9-8dg94 7m 4637Mi
am-869fdb5db9-wt7sg 10m 4433Mi
ds-cts-0 6m 404Mi
ds-cts-1 7m 375Mi
ds-cts-2 6m 379Mi
ds-idrepo-0 3209m 13797Mi
ds-idrepo-1 11m 13672Mi
ds-idrepo-2 11m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 2635m 3709Mi
idm-65858d8c4c-pt5s9 2234m 3777Mi
lodemon-66684b7694-c5c6m 2m 68Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 220m 546Mi
05:52:21 DEBUG --- stderr ---
05:52:21 DEBUG
05:52:24 INFO
05:52:24 INFO [loop_until]: kubectl --namespace=xlou top node
05:52:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:52:24 INFO [loop_until]: OK (rc = 0)
05:52:24 DEBUG --- stdout ---
05:52:24 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 60m 0% 4085Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 2727m 17% 4367Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 80m 0% 994Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 67m 0% 5297Mi 9%
gke-xlou-cdm-default-pool-f05840a3-tnc9 320m 2% 2576Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 63m 0% 5442Mi 9%
gke-xlou-cdm-default-pool-f05840a3-zj9v 2316m 14% 4443Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 57m 0% 959Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 3233m 20% 14277Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14096Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 62m 0% 950Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 57m 0% 980Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 62m 0% 14163Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 291m 1% 2348Mi 4%
05:52:24 DEBUG --- stderr ---
05:52:24 DEBUG
05:53:21 INFO
05:53:21 INFO [loop_until]: kubectl --namespace=xlou top pods
05:53:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:53:21 INFO [loop_until]: OK (rc = 0)
05:53:21 DEBUG --- stdout ---
05:53:21 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 5Mi
am-869fdb5db9-5j69v 7m 3308Mi
am-869fdb5db9-8dg94 5m 4637Mi
am-869fdb5db9-wt7sg 7m 4445Mi
ds-cts-0 8m 404Mi
ds-cts-1 7m 375Mi
ds-cts-2 6m 379Mi
ds-idrepo-0 3619m 13822Mi
ds-idrepo-1 12m 13672Mi
ds-idrepo-2 11m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 3131m 3711Mi
idm-65858d8c4c-pt5s9 2470m 3800Mi
lodemon-66684b7694-c5c6m 1m 68Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 236m 546Mi
05:53:21 DEBUG --- stderr ---
05:53:21 DEBUG
05:53:24 INFO
05:53:24 INFO [loop_until]: kubectl --namespace=xlou top node
05:53:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
05:53:24 INFO [loop_until]: OK (rc = 0)
05:53:24 DEBUG --- stdout ---
05:53:24 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 61m 0% 4097Mi 6%
gke-xlou-cdm-default-pool-f05840a3-jnx6 3323m 20% 4368Mi 7%
gke-xlou-cdm-default-pool-f05840a3-jqvg 80m 0% 996Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 60m 0% 5308Mi 9%
gke-xlou-cdm-default-pool-f05840a3-tnc9 320m 2% 2578Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 64m 0% 5444Mi 9%
gke-xlou-cdm-default-pool-f05840a3-zj9v 2553m 16% 4464Mi 7%
gke-xlou-cdm-ds-32e4dcb1-02kn 56m 0%
960Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 3713m 23% 14302Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 62m 0% 14098Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 61m 0% 951Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 56m 0% 983Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 60m 0% 14161Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 299m 1% 2347Mi 4% 05:53:24 DEBUG --- stderr --- 05:53:24 DEBUG 05:54:21 INFO 05:54:21 INFO [loop_until]: kubectl --namespace=xlou top pods 05:54:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:54:21 INFO [loop_until]: OK (rc = 0) 05:54:21 DEBUG --- stdout --- 05:54:21 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 8m 3321Mi am-869fdb5db9-8dg94 5m 4637Mi am-869fdb5db9-wt7sg 12m 4456Mi ds-cts-0 6m 404Mi ds-cts-1 7m 377Mi ds-cts-2 6m 380Mi ds-idrepo-0 3551m 13822Mi ds-idrepo-1 11m 13672Mi ds-idrepo-2 11m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 3129m 3717Mi idm-65858d8c4c-pt5s9 2429m 3781Mi lodemon-66684b7694-c5c6m 1m 68Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 249m 546Mi 05:54:21 DEBUG --- stderr --- 05:54:21 DEBUG 05:54:24 INFO 05:54:24 INFO [loop_until]: kubectl --namespace=xlou top node 05:54:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:54:24 INFO [loop_until]: OK (rc = 0) 05:54:24 DEBUG --- stdout --- 05:54:24 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 58m 0% 4105Mi 7% gke-xlou-cdm-default-pool-f05840a3-jnx6 3096m 19% 4374Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 81m 0% 999Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 65m 0% 5317Mi 9% gke-xlou-cdm-default-pool-f05840a3-tnc9 323m 2% 2575Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 66m 0% 5457Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 2412m 15% 4445Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 56m 0% 958Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 3405m 21% 14287Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14099Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 62m 0% 951Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-mt7t 56m 0% 980Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 61m 0% 14163Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 307m 1% 2349Mi 4% 05:54:24 DEBUG --- stderr --- 05:54:24 DEBUG 05:55:21 INFO 05:55:21 INFO [loop_until]: kubectl --namespace=xlou top pods 05:55:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:55:21 INFO [loop_until]: OK (rc = 0) 05:55:21 DEBUG --- stdout --- 05:55:21 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 8m 3329Mi am-869fdb5db9-8dg94 6m 4637Mi am-869fdb5db9-wt7sg 7m 4467Mi ds-cts-0 6m 404Mi ds-cts-1 7m 375Mi ds-cts-2 7m 379Mi ds-idrepo-0 3645m 13822Mi ds-idrepo-1 11m 13673Mi ds-idrepo-2 12m 13643Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 3191m 3721Mi idm-65858d8c4c-pt5s9 2659m 3785Mi lodemon-66684b7694-c5c6m 2m 68Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 238m 547Mi 05:55:21 DEBUG --- stderr --- 05:55:21 DEBUG 05:55:24 INFO 05:55:24 INFO [loop_until]: kubectl --namespace=xlou top node 05:55:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:55:24 INFO [loop_until]: OK (rc = 0) 05:55:24 DEBUG --- stdout --- 05:55:24 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 61m 0% 4117Mi 7% gke-xlou-cdm-default-pool-f05840a3-jnx6 3186m 20% 4374Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 79m 0% 999Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 62m 0% 5328Mi 9% gke-xlou-cdm-default-pool-f05840a3-tnc9 317m 1% 2578Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 60m 0% 5446Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 2673m 16% 4446Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 56m 0% 956Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 3550m 22% 14277Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 61m 0% 14100Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 58m 0% 952Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 52m 0% 981Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 61m 0% 14162Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 307m 1% 2350Mi 4% 05:55:24 DEBUG 
--- stderr --- 05:55:24 DEBUG 05:56:21 INFO 05:56:21 INFO [loop_until]: kubectl --namespace=xlou top pods 05:56:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:56:21 INFO [loop_until]: OK (rc = 0) 05:56:21 DEBUG --- stdout --- 05:56:21 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 8m 3342Mi am-869fdb5db9-8dg94 7m 4637Mi am-869fdb5db9-wt7sg 7m 4477Mi ds-cts-0 6m 404Mi ds-cts-1 7m 375Mi ds-cts-2 6m 379Mi ds-idrepo-0 3164m 13803Mi ds-idrepo-1 11m 13672Mi ds-idrepo-2 27m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 2792m 3722Mi idm-65858d8c4c-pt5s9 2361m 3788Mi lodemon-66684b7694-c5c6m 2m 68Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 222m 547Mi 05:56:21 DEBUG --- stderr --- 05:56:21 DEBUG 05:56:24 INFO 05:56:24 INFO [loop_until]: kubectl --namespace=xlou top node 05:56:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:56:24 INFO [loop_until]: OK (rc = 0) 05:56:24 DEBUG --- stdout --- 05:56:24 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 61m 0% 4129Mi 7% gke-xlou-cdm-default-pool-f05840a3-jnx6 2955m 18% 4380Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 77m 0% 997Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 65m 0% 5336Mi 9% gke-xlou-cdm-default-pool-f05840a3-tnc9 299m 1% 2579Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 62m 0% 5445Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 2397m 15% 4446Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 54m 0% 958Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 3362m 21% 14282Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 80m 0% 14098Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 60m 0% 951Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 54m 0% 983Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 62m 0% 14164Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 297m 1% 2352Mi 4% 05:56:24 DEBUG --- stderr --- 05:56:24 DEBUG 05:57:21 INFO 05:57:21 INFO [loop_until]: kubectl --namespace=xlou top pods 05:57:21 INFO [loop_until]: (max_time=180, interval=5, 
expected_rc=[0] 05:57:21 INFO [loop_until]: OK (rc = 0) 05:57:21 DEBUG --- stdout --- 05:57:21 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 7m 3350Mi am-869fdb5db9-8dg94 7m 4637Mi am-869fdb5db9-wt7sg 9m 4489Mi ds-cts-0 6m 404Mi ds-cts-1 6m 375Mi ds-cts-2 6m 379Mi ds-idrepo-0 3175m 13804Mi ds-idrepo-1 11m 13672Mi ds-idrepo-2 22m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 2793m 3727Mi idm-65858d8c4c-pt5s9 2278m 3789Mi lodemon-66684b7694-c5c6m 2m 68Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 239m 547Mi 05:57:21 DEBUG --- stderr --- 05:57:21 DEBUG 05:57:24 INFO 05:57:24 INFO [loop_until]: kubectl --namespace=xlou top node 05:57:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:57:24 INFO [loop_until]: OK (rc = 0) 05:57:24 DEBUG --- stdout --- 05:57:24 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 60m 0% 4136Mi 7% gke-xlou-cdm-default-pool-f05840a3-jnx6 2720m 17% 4384Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 76m 0% 996Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 65m 0% 5348Mi 9% gke-xlou-cdm-default-pool-f05840a3-tnc9 310m 1% 2581Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 64m 0% 5446Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 2312m 14% 4448Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 55m 0% 960Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 3238m 20% 14309Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 66m 0% 14097Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 949Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 54m 0% 983Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 59m 0% 14166Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 299m 1% 2348Mi 4% 05:57:24 DEBUG --- stderr --- 05:57:24 DEBUG 05:58:21 INFO 05:58:21 INFO [loop_until]: kubectl --namespace=xlou top pods 05:58:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:58:21 INFO [loop_until]: OK (rc = 0) 05:58:21 DEBUG --- stdout --- 05:58:21 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi 
am-869fdb5db9-5j69v 7m 3360Mi am-869fdb5db9-8dg94 7m 4638Mi am-869fdb5db9-wt7sg 7m 4499Mi ds-cts-0 6m 404Mi ds-cts-1 6m 375Mi ds-cts-2 6m 380Mi ds-idrepo-0 3408m 13823Mi ds-idrepo-1 17m 13673Mi ds-idrepo-2 10m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 3141m 3729Mi idm-65858d8c4c-pt5s9 2401m 3794Mi lodemon-66684b7694-c5c6m 2m 68Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 229m 547Mi 05:58:21 DEBUG --- stderr --- 05:58:21 DEBUG 05:58:24 INFO 05:58:24 INFO [loop_until]: kubectl --namespace=xlou top node 05:58:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:58:24 INFO [loop_until]: OK (rc = 0) 05:58:24 DEBUG --- stdout --- 05:58:24 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 60m 0% 4145Mi 7% gke-xlou-cdm-default-pool-f05840a3-jnx6 3138m 19% 4388Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 76m 0% 996Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 67m 0% 5357Mi 9% gke-xlou-cdm-default-pool-f05840a3-tnc9 295m 1% 2579Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 63m 0% 5447Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 2599m 16% 4454Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 53m 0% 960Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 3481m 21% 14271Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 62m 0% 14097Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 950Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 55m 0% 983Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 67m 0% 14163Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 304m 1% 2349Mi 4% 05:58:24 DEBUG --- stderr --- 05:58:24 DEBUG 05:59:21 INFO 05:59:21 INFO [loop_until]: kubectl --namespace=xlou top pods 05:59:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:59:22 INFO [loop_until]: OK (rc = 0) 05:59:22 DEBUG --- stdout --- 05:59:22 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 7m 3371Mi am-869fdb5db9-8dg94 5m 4637Mi am-869fdb5db9-wt7sg 7m 4510Mi ds-cts-0 6m 404Mi ds-cts-1 7m 375Mi ds-cts-2 6m 380Mi ds-idrepo-0 3472m 
13806Mi ds-idrepo-1 11m 13672Mi ds-idrepo-2 12m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 3195m 3732Mi idm-65858d8c4c-pt5s9 2303m 3797Mi lodemon-66684b7694-c5c6m 2m 68Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 238m 548Mi 05:59:22 DEBUG --- stderr --- 05:59:22 DEBUG 05:59:25 INFO 05:59:25 INFO [loop_until]: kubectl --namespace=xlou top node 05:59:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 05:59:25 INFO [loop_until]: OK (rc = 0) 05:59:25 DEBUG --- stdout --- 05:59:25 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 60m 0% 4157Mi 7% gke-xlou-cdm-default-pool-f05840a3-jnx6 2976m 18% 4390Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 75m 0% 998Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 65m 0% 5367Mi 9% gke-xlou-cdm-default-pool-f05840a3-tnc9 315m 1% 2581Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 63m 0% 5444Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 2461m 15% 4453Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 54m 0% 955Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 3262m 20% 14283Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14097Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 56m 0% 951Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 56m 0% 985Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 59m 0% 14163Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 304m 1% 2349Mi 4% 05:59:25 DEBUG --- stderr --- 05:59:25 DEBUG 06:00:22 INFO 06:00:22 INFO [loop_until]: kubectl --namespace=xlou top pods 06:00:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:00:22 INFO [loop_until]: OK (rc = 0) 06:00:22 DEBUG --- stdout --- 06:00:22 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 7m 3384Mi am-869fdb5db9-8dg94 6m 4637Mi am-869fdb5db9-wt7sg 8m 4519Mi ds-cts-0 6m 404Mi ds-cts-1 7m 375Mi ds-cts-2 6m 379Mi ds-idrepo-0 3542m 13822Mi ds-idrepo-1 11m 13674Mi ds-idrepo-2 13m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 2916m 3736Mi idm-65858d8c4c-pt5s9 2407m 3799Mi 
lodemon-66684b7694-c5c6m 2m 68Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 232m 548Mi 06:00:22 DEBUG --- stderr --- 06:00:22 DEBUG 06:00:25 INFO 06:00:25 INFO [loop_until]: kubectl --namespace=xlou top node 06:00:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:00:25 INFO [loop_until]: OK (rc = 0) 06:00:25 DEBUG --- stdout --- 06:00:25 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 63m 0% 4168Mi 7% gke-xlou-cdm-default-pool-f05840a3-jnx6 2941m 18% 4392Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 75m 0% 997Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 65m 0% 5378Mi 9% gke-xlou-cdm-default-pool-f05840a3-tnc9 315m 1% 2578Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 63m 0% 5444Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 2446m 15% 4457Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 56m 0% 955Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 3292m 20% 14299Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 64m 0% 14099Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 951Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 58m 0% 985Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 61m 0% 14167Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 302m 1% 2349Mi 4% 06:00:25 DEBUG --- stderr --- 06:00:25 DEBUG 06:01:22 INFO 06:01:22 INFO [loop_until]: kubectl --namespace=xlou top pods 06:01:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:01:22 INFO [loop_until]: OK (rc = 0) 06:01:22 DEBUG --- stdout --- 06:01:22 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 10m 3393Mi am-869fdb5db9-8dg94 6m 4637Mi am-869fdb5db9-wt7sg 7m 4531Mi ds-cts-0 6m 404Mi ds-cts-1 7m 375Mi ds-cts-2 6m 380Mi ds-idrepo-0 3396m 13804Mi ds-idrepo-1 10m 13672Mi ds-idrepo-2 11m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 2609m 3740Mi idm-65858d8c4c-pt5s9 2613m 3802Mi lodemon-66684b7694-c5c6m 2m 68Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 231m 548Mi 06:01:22 DEBUG --- stderr --- 06:01:22 DEBUG 06:01:25 INFO 
06:01:25 INFO [loop_until]: kubectl --namespace=xlou top node 06:01:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:01:25 INFO [loop_until]: OK (rc = 0) 06:01:25 DEBUG --- stdout --- 06:01:25 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 62m 0% 4179Mi 7% gke-xlou-cdm-default-pool-f05840a3-jnx6 2743m 17% 4395Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 74m 0% 997Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 65m 0% 5388Mi 9% gke-xlou-cdm-default-pool-f05840a3-tnc9 319m 2% 2574Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 62m 0% 5441Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 2594m 16% 4462Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 53m 0% 958Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 3379m 21% 14290Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 62m 0% 14097Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 953Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 55m 0% 985Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 59m 0% 14163Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 296m 1% 2352Mi 4% 06:01:25 DEBUG --- stderr --- 06:01:25 DEBUG 06:02:22 INFO 06:02:22 INFO [loop_until]: kubectl --namespace=xlou top pods 06:02:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:02:22 INFO [loop_until]: OK (rc = 0) 06:02:22 DEBUG --- stdout --- 06:02:22 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 7m 3402Mi am-869fdb5db9-8dg94 6m 4637Mi am-869fdb5db9-wt7sg 9m 4561Mi ds-cts-0 8m 404Mi ds-cts-1 7m 375Mi ds-cts-2 6m 380Mi ds-idrepo-0 581m 13828Mi ds-idrepo-1 11m 13673Mi ds-idrepo-2 11m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 386m 3740Mi idm-65858d8c4c-pt5s9 538m 3803Mi lodemon-66684b7694-c5c6m 2m 68Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 121m 553Mi 06:02:22 DEBUG --- stderr --- 06:02:22 DEBUG 06:02:25 INFO 06:02:25 INFO [loop_until]: kubectl --namespace=xlou top node 06:02:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:02:25 INFO [loop_until]: OK (rc = 
0) 06:02:25 DEBUG --- stdout --- 06:02:25 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 63m 0% 4190Mi 7% gke-xlou-cdm-default-pool-f05840a3-jnx6 418m 2% 4386Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 75m 0% 999Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 67m 0% 5418Mi 9% gke-xlou-cdm-default-pool-f05840a3-tnc9 179m 1% 2578Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 63m 0% 5445Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 227m 1% 4460Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 54m 0% 959Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 381m 2% 14309Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 62m 0% 14099Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 950Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 56m 0% 985Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 59m 0% 14168Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 162m 1% 2349Mi 4% 06:02:25 DEBUG --- stderr --- 06:02:25 DEBUG 06:03:22 INFO 06:03:22 INFO [loop_until]: kubectl --namespace=xlou top pods 06:03:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:03:22 INFO [loop_until]: OK (rc = 0) 06:03:22 DEBUG --- stdout --- 06:03:22 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 7m 3414Mi am-869fdb5db9-8dg94 6m 4637Mi am-869fdb5db9-wt7sg 7m 4571Mi ds-cts-0 6m 404Mi ds-cts-1 6m 375Mi ds-cts-2 6m 379Mi ds-idrepo-0 12m 13828Mi ds-idrepo-1 10m 13673Mi ds-idrepo-2 11m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 5m 3740Mi idm-65858d8c4c-pt5s9 5m 3802Mi lodemon-66684b7694-c5c6m 2m 68Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 1m 106Mi 06:03:22 DEBUG --- stderr --- 06:03:22 DEBUG 06:03:25 INFO 06:03:25 INFO [loop_until]: kubectl --namespace=xlou top node 06:03:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:03:25 INFO [loop_until]: OK (rc = 0) 06:03:25 DEBUG --- stdout --- 06:03:25 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 61m 0% 4200Mi 7% 
gke-xlou-cdm-default-pool-f05840a3-jnx6 77m 0% 4396Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 70m 0% 998Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 63m 0% 5427Mi 9% gke-xlou-cdm-default-pool-f05840a3-tnc9 110m 0% 2578Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 64m 0% 5446Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 70m 0% 4461Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 55m 0% 959Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 63m 0% 14312Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14100Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 55m 0% 954Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 54m 0% 986Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 60m 0% 14170Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 67m 0% 1919Mi 3% 06:03:25 DEBUG --- stderr --- 06:03:25 DEBUG 127.0.0.1 - - [16/Aug/2023 06:04:01] "GET /monitoring/average?start_time=23-08-16_04:33:36&stop_time=23-08-16_05:02:00 HTTP/1.1" 200 - 06:04:22 INFO 06:04:22 INFO [loop_until]: kubectl --namespace=xlou top pods 06:04:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:04:22 INFO [loop_until]: OK (rc = 0) 06:04:22 DEBUG --- stdout --- 06:04:22 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 13m 3425Mi am-869fdb5db9-8dg94 6m 4635Mi am-869fdb5db9-wt7sg 7m 4579Mi ds-cts-0 6m 404Mi ds-cts-1 6m 375Mi ds-cts-2 6m 379Mi ds-idrepo-0 11m 13827Mi ds-idrepo-1 11m 13673Mi ds-idrepo-2 10m 13645Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 5m 3740Mi idm-65858d8c4c-pt5s9 5m 3802Mi lodemon-66684b7694-c5c6m 4m 68Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 492m 419Mi 06:04:22 DEBUG --- stderr --- 06:04:22 DEBUG 06:04:25 INFO 06:04:25 INFO [loop_until]: kubectl --namespace=xlou top node 06:04:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:04:25 INFO [loop_until]: OK (rc = 0) 06:04:25 DEBUG --- stdout --- 06:04:25 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 70m 0% 4212Mi 7% gke-xlou-cdm-default-pool-f05840a3-jnx6 80m 0% 
4397Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 79m 0% 997Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 64m 0% 5437Mi 9% gke-xlou-cdm-default-pool-f05840a3-tnc9 114m 0% 2577Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 61m 0% 5444Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 73m 0% 4460Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 55m 0% 954Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 59m 0% 14308Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 62m 0% 14100Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 58m 0% 951Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 53m 0% 986Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 61m 0% 14168Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 588m 3% 2212Mi 3% 06:04:25 DEBUG --- stderr --- 06:04:25 DEBUG 06:05:22 INFO 06:05:22 INFO [loop_until]: kubectl --namespace=xlou top pods 06:05:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:05:22 INFO [loop_until]: OK (rc = 0) 06:05:22 DEBUG --- stdout --- 06:05:22 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 10m 3441Mi am-869fdb5db9-8dg94 6m 4635Mi am-869fdb5db9-wt7sg 7m 4590Mi ds-cts-0 6m 405Mi ds-cts-1 7m 375Mi ds-cts-2 6m 380Mi ds-idrepo-0 4220m 13822Mi ds-idrepo-1 11m 13674Mi ds-idrepo-2 11m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 3348m 3744Mi idm-65858d8c4c-pt5s9 2891m 3810Mi lodemon-66684b7694-c5c6m 2m 68Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 445m 524Mi 06:05:22 DEBUG --- stderr --- 06:05:22 DEBUG 06:05:25 INFO 06:05:25 INFO [loop_until]: kubectl --namespace=xlou top node 06:05:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:05:25 INFO [loop_until]: OK (rc = 0) 06:05:25 DEBUG --- stdout --- 06:05:25 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 63m 0% 4227Mi 7% gke-xlou-cdm-default-pool-f05840a3-jnx6 3784m 23% 4431Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 76m 0% 999Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 64m 0% 5447Mi 9% gke-xlou-cdm-default-pool-f05840a3-tnc9 338m 2% 
2579Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 64m 0% 5439Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 2976m 18% 4470Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 55m 0% 954Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 4467m 28% 14291Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 62m 0% 14096Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 952Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 56m 0% 988Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 58m 0% 14169Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 496m 3% 2329Mi 3% 06:05:25 DEBUG --- stderr --- 06:05:25 DEBUG 06:06:22 INFO 06:06:22 INFO [loop_until]: kubectl --namespace=xlou top pods 06:06:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:06:22 INFO [loop_until]: OK (rc = 0) 06:06:22 DEBUG --- stdout --- 06:06:22 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 8m 3451Mi am-869fdb5db9-8dg94 8m 4635Mi am-869fdb5db9-wt7sg 9m 4602Mi ds-cts-0 6m 405Mi ds-cts-1 6m 375Mi ds-cts-2 6m 380Mi ds-idrepo-0 4398m 13827Mi ds-idrepo-1 10m 13672Mi ds-idrepo-2 11m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 3677m 3749Mi idm-65858d8c4c-pt5s9 2815m 3814Mi lodemon-66684b7694-c5c6m 2m 68Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 398m 528Mi 06:06:22 DEBUG --- stderr --- 06:06:22 DEBUG 06:06:25 INFO 06:06:25 INFO [loop_until]: kubectl --namespace=xlou top node 06:06:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:06:25 INFO [loop_until]: OK (rc = 0) 06:06:25 DEBUG --- stdout --- 06:06:25 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 64m 0% 4239Mi 7% gke-xlou-cdm-default-pool-f05840a3-jnx6 3670m 23% 4406Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 75m 0% 997Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 67m 0% 5460Mi 9% gke-xlou-cdm-default-pool-f05840a3-tnc9 357m 2% 2579Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 63m 0% 5440Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 2996m 18% 4473Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 54m 0% 
956Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 4479m 28% 14288Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 62m 0% 14100Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 63m 0% 964Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 54m 0% 987Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 66m 0% 14170Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 416m 2% 2332Mi 3% 06:06:25 DEBUG --- stderr --- 06:06:25 DEBUG 06:07:22 INFO 06:07:22 INFO [loop_until]: kubectl --namespace=xlou top pods 06:07:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:07:22 INFO [loop_until]: OK (rc = 0) 06:07:22 DEBUG --- stdout --- 06:07:22 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 5Mi am-869fdb5db9-5j69v 19m 3468Mi am-869fdb5db9-8dg94 6m 4635Mi am-869fdb5db9-wt7sg 9m 4612Mi ds-cts-0 5m 405Mi ds-cts-1 6m 375Mi ds-cts-2 6m 381Mi ds-idrepo-0 4303m 13807Mi ds-idrepo-1 11m 13673Mi ds-idrepo-2 11m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 3604m 3754Mi idm-65858d8c4c-pt5s9 2781m 3818Mi lodemon-66684b7694-c5c6m 2m 68Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 284m 534Mi 06:07:22 DEBUG --- stderr --- 06:07:22 DEBUG 06:07:26 INFO 06:07:26 INFO [loop_until]: kubectl --namespace=xlou top node 06:07:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:07:26 INFO [loop_until]: OK (rc = 0) 06:07:26 DEBUG --- stdout --- 06:07:26 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 61m 0% 4251Mi 7% gke-xlou-cdm-default-pool-f05840a3-jnx6 3422m 21% 4411Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 76m 0% 998Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 66m 0% 5471Mi 9% gke-xlou-cdm-default-pool-f05840a3-tnc9 353m 2% 2581Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 60m 0% 5443Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 2654m 16% 4478Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 55m 0% 969Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 4261m 26% 14288Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14101Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 58m 0% 951Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-mt7t 56m 0% 983Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 63m 0% 14169Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 338m 2% 2340Mi 3% 06:07:26 DEBUG --- stderr --- 06:07:26 DEBUG 06:08:23 INFO 06:08:23 INFO [loop_until]: kubectl --namespace=xlou top pods 06:08:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:08:23 INFO [loop_until]: OK (rc = 0) 06:08:23 DEBUG --- stdout --- 06:08:23 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 8m 3478Mi am-869fdb5db9-8dg94 21m 4641Mi am-869fdb5db9-wt7sg 7m 4620Mi ds-cts-0 6m 405Mi ds-cts-1 6m 375Mi ds-cts-2 6m 380Mi ds-idrepo-0 4218m 13822Mi ds-idrepo-1 10m 13673Mi ds-idrepo-2 19m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 3493m 3757Mi idm-65858d8c4c-pt5s9 2451m 3821Mi lodemon-66684b7694-c5c6m 2m 68Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 280m 570Mi 06:08:23 DEBUG --- stderr --- 06:08:23 DEBUG 06:08:26 INFO 06:08:26 INFO [loop_until]: kubectl --namespace=xlou top node 06:08:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:08:26 INFO [loop_until]: OK (rc = 0) 06:08:26 DEBUG --- stdout --- 06:08:26 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 64m 0% 4264Mi 7% gke-xlou-cdm-default-pool-f05840a3-jnx6 3628m 22% 4416Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 77m 0% 996Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 62m 0% 5481Mi 9% gke-xlou-cdm-default-pool-f05840a3-tnc9 357m 2% 2579Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 75m 0% 5445Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 2656m 16% 4483Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 57m 0% 958Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 4007m 25% 14291Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 70m 0% 14103Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 57m 0% 953Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 54m 0% 985Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 61m 0% 14170Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 346m 2% 2374Mi 4% 06:08:26 DEBUG 
--- stderr --- 06:08:26 DEBUG 06:09:23 INFO 06:09:23 INFO [loop_until]: kubectl --namespace=xlou top pods 06:09:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:09:23 INFO [loop_until]: OK (rc = 0) 06:09:23 DEBUG --- stdout --- 06:09:23 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 8m 3489Mi am-869fdb5db9-8dg94 5m 4635Mi am-869fdb5db9-wt7sg 6m 4632Mi ds-cts-0 6m 404Mi ds-cts-1 6m 376Mi ds-cts-2 6m 380Mi ds-idrepo-0 4097m 13827Mi ds-idrepo-1 11m 13672Mi ds-idrepo-2 20m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 3512m 3761Mi idm-65858d8c4c-pt5s9 3023m 3823Mi lodemon-66684b7694-c5c6m 1m 68Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 256m 572Mi 06:09:23 DEBUG --- stderr --- 06:09:23 DEBUG 06:09:26 INFO 06:09:26 INFO [loop_until]: kubectl --namespace=xlou top node 06:09:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:09:26 INFO [loop_until]: OK (rc = 0) 06:09:26 DEBUG --- stdout --- 06:09:26 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 62m 0% 4275Mi 7% gke-xlou-cdm-default-pool-f05840a3-jnx6 3572m 22% 4420Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 75m 0% 996Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 64m 0% 5492Mi 9% gke-xlou-cdm-default-pool-f05840a3-tnc9 355m 2% 2578Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 60m 0% 5446Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 3020m 19% 4482Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 55m 0% 957Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 4283m 26% 14289Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 71m 0% 14097Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 57m 0% 951Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 56m 0% 983Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 63m 0% 14168Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 333m 2% 2376Mi 4% 06:09:26 DEBUG --- stderr --- 06:09:26 DEBUG 06:10:23 INFO 06:10:23 INFO [loop_until]: kubectl --namespace=xlou top pods 06:10:23 INFO [loop_until]: (max_time=180, interval=5, 
expected_rc=[0] 06:10:23 INFO [loop_until]: OK (rc = 0) 06:10:23 DEBUG --- stdout --- 06:10:23 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 13m 3499Mi am-869fdb5db9-8dg94 9m 4636Mi am-869fdb5db9-wt7sg 7m 4643Mi ds-cts-0 6m 404Mi ds-cts-1 6m 375Mi ds-cts-2 6m 380Mi ds-idrepo-0 4311m 13806Mi ds-idrepo-1 11m 13673Mi ds-idrepo-2 12m 13643Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 3893m 3765Mi idm-65858d8c4c-pt5s9 2858m 3825Mi lodemon-66684b7694-c5c6m 2m 68Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 266m 573Mi 06:10:23 DEBUG --- stderr --- 06:10:23 DEBUG 06:10:26 INFO 06:10:26 INFO [loop_until]: kubectl --namespace=xlou top node 06:10:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:10:26 INFO [loop_until]: OK (rc = 0) 06:10:26 DEBUG --- stdout --- 06:10:26 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 69m 0% 4281Mi 7% gke-xlou-cdm-default-pool-f05840a3-jnx6 3851m 24% 4423Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 79m 0% 994Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 63m 0% 5504Mi 9% gke-xlou-cdm-default-pool-f05840a3-tnc9 357m 2% 2576Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 64m 0% 5446Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 2962m 18% 4482Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 54m 0% 959Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 4531m 28% 14303Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14099Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 56m 0% 951Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 62m 0% 998Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 63m 0% 14167Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 331m 2% 2378Mi 4% 06:10:26 DEBUG --- stderr --- 06:10:26 DEBUG 06:11:23 INFO 06:11:23 INFO [loop_until]: kubectl --namespace=xlou top pods 06:11:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:11:23 INFO [loop_until]: OK (rc = 0) 06:11:23 DEBUG --- stdout --- 06:11:23 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi 
am-869fdb5db9-5j69v 7m 3508Mi am-869fdb5db9-8dg94 7m 4636Mi am-869fdb5db9-wt7sg 9m 4653Mi ds-cts-0 6m 404Mi ds-cts-1 6m 377Mi ds-cts-2 6m 380Mi ds-idrepo-0 4036m 13802Mi ds-idrepo-1 11m 13673Mi ds-idrepo-2 11m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 3846m 3768Mi idm-65858d8c4c-pt5s9 2519m 3828Mi lodemon-66684b7694-c5c6m 2m 68Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 256m 572Mi 06:11:23 DEBUG --- stderr --- 06:11:23 DEBUG 06:11:26 INFO 06:11:26 INFO [loop_until]: kubectl --namespace=xlou top node 06:11:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:11:26 INFO [loop_until]: OK (rc = 0) 06:11:26 DEBUG --- stdout --- 06:11:26 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 62m 0% 4297Mi 7% gke-xlou-cdm-default-pool-f05840a3-jnx6 3943m 24% 4437Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 76m 0% 996Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 66m 0% 5511Mi 9% gke-xlou-cdm-default-pool-f05840a3-tnc9 352m 2% 2578Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 64m 0% 5444Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 2869m 18% 4490Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 53m 0% 959Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 4230m 26% 14291Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14098Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 57m 0% 952Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 56m 0% 986Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 65m 0% 14170Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 324m 2% 2377Mi 4% 06:11:26 DEBUG --- stderr --- 06:11:26 DEBUG 06:12:23 INFO 06:12:23 INFO [loop_until]: kubectl --namespace=xlou top pods 06:12:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:12:23 INFO [loop_until]: OK (rc = 0) 06:12:23 DEBUG --- stdout --- 06:12:23 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 5Mi am-869fdb5db9-5j69v 10m 3522Mi am-869fdb5db9-8dg94 8m 4636Mi am-869fdb5db9-wt7sg 6m 4664Mi ds-cts-0 6m 405Mi ds-cts-1 6m 375Mi ds-cts-2 6m 380Mi ds-idrepo-0 3864m 
13802Mi ds-idrepo-1 11m 13673Mi ds-idrepo-2 13m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 3473m 3772Mi idm-65858d8c4c-pt5s9 2799m 3830Mi lodemon-66684b7694-c5c6m 2m 68Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 261m 573Mi 06:12:23 DEBUG --- stderr --- 06:12:23 DEBUG 06:12:26 INFO 06:12:26 INFO [loop_until]: kubectl --namespace=xlou top node 06:12:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:12:26 INFO [loop_until]: OK (rc = 0) 06:12:26 DEBUG --- stdout --- 06:12:26 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 65m 0% 4308Mi 7% gke-xlou-cdm-default-pool-f05840a3-jnx6 3662m 23% 4426Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 81m 0% 993Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 62m 0% 5524Mi 9% gke-xlou-cdm-default-pool-f05840a3-tnc9 347m 2% 2582Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 62m 0% 5444Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 2983m 18% 4494Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 55m 0% 960Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 4169m 26% 14288Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 64m 0% 14098Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 55m 0% 955Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 55m 0% 987Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 63m 0% 14172Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 325m 2% 2379Mi 4% 06:12:26 DEBUG --- stderr --- 06:12:26 DEBUG 06:13:23 INFO 06:13:23 INFO [loop_until]: kubectl --namespace=xlou top pods 06:13:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:13:23 INFO [loop_until]: OK (rc = 0) 06:13:23 DEBUG --- stdout --- 06:13:23 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 13m 3530Mi am-869fdb5db9-8dg94 10m 4636Mi am-869fdb5db9-wt7sg 7m 4676Mi ds-cts-0 6m 405Mi ds-cts-1 6m 375Mi ds-cts-2 6m 380Mi ds-idrepo-0 4091m 13822Mi ds-idrepo-1 11m 13672Mi ds-idrepo-2 11m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 3629m 3775Mi idm-65858d8c4c-pt5s9 2859m 3835Mi 
lodemon-66684b7694-c5c6m 2m 68Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 249m 574Mi 06:13:23 DEBUG --- stderr --- 06:13:23 DEBUG 06:13:26 INFO 06:13:26 INFO [loop_until]: kubectl --namespace=xlou top node 06:13:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:13:26 INFO [loop_until]: OK (rc = 0) 06:13:26 DEBUG --- stdout --- 06:13:26 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 63m 0% 4319Mi 7% gke-xlou-cdm-default-pool-f05840a3-jnx6 3799m 23% 4425Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 77m 0% 996Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 61m 0% 5534Mi 9% gke-xlou-cdm-default-pool-f05840a3-tnc9 356m 2% 2577Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 63m 0% 5444Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 2976m 18% 4493Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 54m 0% 957Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 4229m 26% 14314Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 64m 0% 14104Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 57m 0% 951Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 54m 0% 987Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 61m 0% 14173Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 326m 2% 2380Mi 4% 06:13:26 DEBUG --- stderr --- 06:13:26 DEBUG 06:14:23 INFO 06:14:23 INFO [loop_until]: kubectl --namespace=xlou top pods 06:14:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:14:23 INFO [loop_until]: OK (rc = 0) 06:14:23 DEBUG --- stdout --- 06:14:23 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 5Mi am-869fdb5db9-5j69v 9m 3543Mi am-869fdb5db9-8dg94 6m 4636Mi am-869fdb5db9-wt7sg 14m 4686Mi ds-cts-0 6m 405Mi ds-cts-1 5m 375Mi ds-cts-2 6m 380Mi ds-idrepo-0 3647m 13806Mi ds-idrepo-1 10m 13673Mi ds-idrepo-2 11m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 3490m 3779Mi idm-65858d8c4c-pt5s9 2742m 3837Mi lodemon-66684b7694-c5c6m 2m 68Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 262m 573Mi 06:14:23 DEBUG --- stderr --- 06:14:23 DEBUG 06:14:26 INFO 
06:14:26 INFO [loop_until]: kubectl --namespace=xlou top node 06:14:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:14:26 INFO [loop_until]: OK (rc = 0) 06:14:26 DEBUG --- stdout --- 06:14:26 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 63m 0% 4325Mi 7% gke-xlou-cdm-default-pool-f05840a3-jnx6 3472m 21% 4432Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 77m 0% 999Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 72m 0% 5544Mi 9% gke-xlou-cdm-default-pool-f05840a3-tnc9 340m 2% 2578Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 62m 0% 5443Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 2699m 16% 4494Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 52m 0% 961Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 4025m 25% 14285Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 64m 0% 14102Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 56m 0% 949Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 55m 0% 984Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 60m 0% 14171Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 323m 2% 2380Mi 4% 06:14:26 DEBUG --- stderr --- 06:14:26 DEBUG 06:15:23 INFO 06:15:23 INFO [loop_until]: kubectl --namespace=xlou top pods 06:15:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:15:23 INFO [loop_until]: OK (rc = 0) 06:15:23 DEBUG --- stdout --- 06:15:23 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 5Mi am-869fdb5db9-5j69v 9m 3554Mi am-869fdb5db9-8dg94 5m 4636Mi am-869fdb5db9-wt7sg 6m 4694Mi ds-cts-0 6m 405Mi ds-cts-1 6m 375Mi ds-cts-2 6m 380Mi ds-idrepo-0 4118m 13822Mi ds-idrepo-1 11m 13673Mi ds-idrepo-2 12m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 3637m 3783Mi idm-65858d8c4c-pt5s9 2772m 3839Mi lodemon-66684b7694-c5c6m 2m 68Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 268m 574Mi 06:15:23 DEBUG --- stderr --- 06:15:23 DEBUG 06:15:26 INFO 06:15:26 INFO [loop_until]: kubectl --namespace=xlou top node 06:15:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:15:27 INFO [loop_until]: OK 
(rc = 0) 06:15:27 DEBUG --- stdout --- 06:15:27 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 63m 0% 4337Mi 7% gke-xlou-cdm-default-pool-f05840a3-jnx6 3743m 23% 4433Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 80m 0% 995Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 64m 0% 5557Mi 9% gke-xlou-cdm-default-pool-f05840a3-tnc9 350m 2% 2579Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 59m 0% 5444Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 2934m 18% 4499Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 53m 0% 960Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 3915m 24% 14282Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 64m 0% 14103Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 55m 0% 950Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 55m 0% 984Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 60m 0% 14171Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 328m 2% 2377Mi 4% 06:15:27 DEBUG --- stderr --- 06:15:27 DEBUG 06:16:23 INFO 06:16:23 INFO [loop_until]: kubectl --namespace=xlou top pods 06:16:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:16:23 INFO [loop_until]: OK (rc = 0) 06:16:23 DEBUG --- stdout --- 06:16:23 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 5Mi am-869fdb5db9-5j69v 7m 3563Mi am-869fdb5db9-8dg94 7m 4636Mi am-869fdb5db9-wt7sg 13m 4706Mi ds-cts-0 6m 405Mi ds-cts-1 6m 375Mi ds-cts-2 6m 380Mi ds-idrepo-0 4205m 13808Mi ds-idrepo-1 11m 13673Mi ds-idrepo-2 11m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 3474m 3787Mi idm-65858d8c4c-pt5s9 2942m 3844Mi lodemon-66684b7694-c5c6m 1m 68Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 259m 574Mi 06:16:23 DEBUG --- stderr --- 06:16:23 DEBUG 06:16:27 INFO 06:16:27 INFO [loop_until]: kubectl --namespace=xlou top node 06:16:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:16:27 INFO [loop_until]: OK (rc = 0) 06:16:27 DEBUG --- stdout --- 06:16:27 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 61m 0% 4345Mi 7% 
gke-xlou-cdm-default-pool-f05840a3-jnx6 3676m 23% 4442Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 75m 0% 996Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 66m 0% 5566Mi 9% gke-xlou-cdm-default-pool-f05840a3-tnc9 345m 2% 2576Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 61m 0% 5445Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 2977m 18% 4500Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 54m 0% 963Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 4246m 26% 14308Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 64m 0% 14100Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 55m 0% 952Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 53m 0% 985Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 61m 0% 14170Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 319m 2% 2379Mi 4% 06:16:27 DEBUG --- stderr --- 06:16:27 DEBUG 06:17:24 INFO 06:17:24 INFO [loop_until]: kubectl --namespace=xlou top pods 06:17:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:17:24 INFO [loop_until]: OK (rc = 0) 06:17:24 DEBUG --- stdout --- 06:17:24 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 7m 3575Mi am-869fdb5db9-8dg94 7m 4636Mi am-869fdb5db9-wt7sg 6m 4715Mi ds-cts-0 5m 405Mi ds-cts-1 6m 375Mi ds-cts-2 6m 379Mi ds-idrepo-0 4186m 13822Mi ds-idrepo-1 11m 13673Mi ds-idrepo-2 11m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 3619m 3792Mi idm-65858d8c4c-pt5s9 2816m 3847Mi lodemon-66684b7694-c5c6m 2m 68Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 256m 575Mi 06:17:24 DEBUG --- stderr --- 06:17:24 DEBUG 06:17:27 INFO 06:17:27 INFO [loop_until]: kubectl --namespace=xlou top node 06:17:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:17:27 INFO [loop_until]: OK (rc = 0) 06:17:27 DEBUG --- stdout --- 06:17:27 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 63m 0% 4360Mi 7% gke-xlou-cdm-default-pool-f05840a3-jnx6 3645m 22% 4447Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 77m 0% 999Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 63m 0% 
5577Mi 9% gke-xlou-cdm-default-pool-f05840a3-tnc9 355m 2% 2576Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 61m 0% 5443Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 3098m 19% 4504Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 56m 0% 962Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 4123m 25% 14286Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 64m 0% 14104Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 57m 0% 949Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 55m 0% 982Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 60m 0% 14171Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 333m 2% 2379Mi 4% 06:17:27 DEBUG --- stderr --- 06:17:27 DEBUG 06:18:24 INFO 06:18:24 INFO [loop_until]: kubectl --namespace=xlou top pods 06:18:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:18:24 INFO [loop_until]: OK (rc = 0) 06:18:24 DEBUG --- stdout --- 06:18:24 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 7m 3584Mi am-869fdb5db9-8dg94 6m 4636Mi am-869fdb5db9-wt7sg 8m 4727Mi ds-cts-0 7m 405Mi ds-cts-1 6m 375Mi ds-cts-2 6m 381Mi ds-idrepo-0 3960m 13803Mi ds-idrepo-1 11m 13673Mi ds-idrepo-2 10m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 3410m 3794Mi idm-65858d8c4c-pt5s9 3024m 3850Mi lodemon-66684b7694-c5c6m 1m 68Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 274m 574Mi 06:18:24 DEBUG --- stderr --- 06:18:24 DEBUG 06:18:27 INFO 06:18:27 INFO [loop_until]: kubectl --namespace=xlou top node 06:18:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:18:27 INFO [loop_until]: OK (rc = 0) 06:18:27 DEBUG --- stdout --- 06:18:27 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 68m 0% 4381Mi 7% gke-xlou-cdm-default-pool-f05840a3-jnx6 3591m 22% 4451Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 85m 0% 995Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 64m 0% 5586Mi 9% gke-xlou-cdm-default-pool-f05840a3-tnc9 354m 2% 2576Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 58m 0% 5444Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 
2967m 18% 4505Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 53m 0% 958Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 4115m 25% 14308Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 60m 0% 14101Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 57m 0% 953Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 56m 0% 984Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 63m 0% 14170Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 334m 2% 2379Mi 4% 06:18:27 DEBUG --- stderr --- 06:18:27 DEBUG 06:19:24 INFO 06:19:24 INFO [loop_until]: kubectl --namespace=xlou top pods 06:19:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:19:24 INFO [loop_until]: OK (rc = 0) 06:19:24 DEBUG --- stdout --- 06:19:24 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 7m 3595Mi am-869fdb5db9-8dg94 7m 4636Mi am-869fdb5db9-wt7sg 32m 4740Mi ds-cts-0 6m 405Mi ds-cts-1 6m 375Mi ds-cts-2 8m 380Mi ds-idrepo-0 4369m 13825Mi ds-idrepo-1 11m 13672Mi ds-idrepo-2 11m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 3842m 3799Mi idm-65858d8c4c-pt5s9 3045m 3853Mi lodemon-66684b7694-c5c6m 2m 68Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 268m 575Mi 06:19:24 DEBUG --- stderr --- 06:19:24 DEBUG 06:19:27 INFO 06:19:27 INFO [loop_until]: kubectl --namespace=xlou top node 06:19:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:19:27 INFO [loop_until]: OK (rc = 0) 06:19:27 DEBUG --- stdout --- 06:19:27 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 64m 0% 4379Mi 7% gke-xlou-cdm-default-pool-f05840a3-jnx6 3994m 25% 4451Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 77m 0% 996Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 68m 0% 5600Mi 9% gke-xlou-cdm-default-pool-f05840a3-tnc9 358m 2% 2576Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 63m 0% 5446Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 3182m 20% 4517Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 57m 0% 957Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 4434m 27% 14309Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 61m 0% 14102Mi 
24% gke-xlou-cdm-ds-32e4dcb1-l2t2 58m 0% 953Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 55m 0% 984Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 62m 0% 14171Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 343m 2% 2381Mi 4% 06:19:27 DEBUG --- stderr --- 06:19:27 DEBUG 06:20:24 INFO 06:20:24 INFO [loop_until]: kubectl --namespace=xlou top pods 06:20:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:20:24 INFO [loop_until]: OK (rc = 0) 06:20:24 DEBUG --- stdout --- 06:20:24 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 7m 3607Mi am-869fdb5db9-8dg94 6m 4636Mi am-869fdb5db9-wt7sg 10m 4750Mi ds-cts-0 6m 405Mi ds-cts-1 6m 376Mi ds-cts-2 6m 380Mi ds-idrepo-0 3781m 13805Mi ds-idrepo-1 11m 13673Mi ds-idrepo-2 11m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 3637m 3803Mi idm-65858d8c4c-pt5s9 2863m 3856Mi lodemon-66684b7694-c5c6m 2m 68Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 260m 575Mi 06:20:24 DEBUG --- stderr --- 06:20:24 DEBUG 06:20:27 INFO 06:20:27 INFO [loop_until]: kubectl --namespace=xlou top node 06:20:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:20:27 INFO [loop_until]: OK (rc = 0) 06:20:27 DEBUG --- stdout --- 06:20:27 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 61m 0% 4390Mi 7% gke-xlou-cdm-default-pool-f05840a3-jnx6 3754m 23% 4446Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 82m 0% 998Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 64m 0% 5612Mi 9% gke-xlou-cdm-default-pool-f05840a3-tnc9 349m 2% 2574Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 61m 0% 5446Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 2862m 18% 4511Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 56m 0% 957Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 4010m 25% 14292Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 61m 0% 14102Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 58m 0% 952Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 54m 0% 985Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 62m 0% 14172Mi 24% 
gke-xlou-cdm-frontend-a8771548-k40m 326m 2% 2378Mi 4% 06:20:27 DEBUG --- stderr --- 06:20:27 DEBUG 06:21:24 INFO 06:21:24 INFO [loop_until]: kubectl --namespace=xlou top pods 06:21:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:21:24 INFO [loop_until]: OK (rc = 0) 06:21:24 DEBUG --- stdout --- 06:21:24 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 7m 3616Mi am-869fdb5db9-8dg94 5m 4636Mi am-869fdb5db9-wt7sg 8m 4759Mi ds-cts-0 6m 405Mi ds-cts-1 6m 375Mi ds-cts-2 6m 381Mi ds-idrepo-0 4076m 13806Mi ds-idrepo-1 11m 13672Mi ds-idrepo-2 11m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 3703m 3807Mi idm-65858d8c4c-pt5s9 2601m 3858Mi lodemon-66684b7694-c5c6m 2m 68Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 256m 576Mi 06:21:24 DEBUG --- stderr --- 06:21:24 DEBUG 06:21:27 INFO 06:21:27 INFO [loop_until]: kubectl --namespace=xlou top node 06:21:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:21:27 INFO [loop_until]: OK (rc = 0) 06:21:27 DEBUG --- stdout --- 06:21:27 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 62m 0% 4402Mi 7% gke-xlou-cdm-default-pool-f05840a3-jnx6 4026m 25% 4450Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 81m 0% 997Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 65m 0% 5619Mi 9% gke-xlou-cdm-default-pool-f05840a3-tnc9 335m 2% 2577Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 58m 0% 5446Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 2733m 17% 4515Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 55m 0% 959Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 4188m 26% 14294Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 62m 0% 14104Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 57m 0% 951Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 55m 0% 985Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 61m 0% 14174Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 331m 2% 2376Mi 4% 06:21:27 DEBUG --- stderr --- 06:21:27 DEBUG 06:22:24 INFO 06:22:24 INFO [loop_until]: kubectl --namespace=xlou 
top pods 06:22:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:22:24 INFO [loop_until]: OK (rc = 0) 06:22:24 DEBUG --- stdout --- 06:22:24 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 7m 3627Mi am-869fdb5db9-8dg94 5m 4636Mi am-869fdb5db9-wt7sg 8m 4771Mi ds-cts-0 6m 405Mi ds-cts-1 6m 376Mi ds-cts-2 6m 381Mi ds-idrepo-0 4278m 13809Mi ds-idrepo-1 11m 13673Mi ds-idrepo-2 11m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 3722m 3875Mi idm-65858d8c4c-pt5s9 3219m 3937Mi lodemon-66684b7694-c5c6m 2m 68Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 269m 577Mi 06:22:24 DEBUG --- stderr --- 06:22:24 DEBUG 06:22:27 INFO 06:22:27 INFO [loop_until]: kubectl --namespace=xlou top node 06:22:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:22:27 INFO [loop_until]: OK (rc = 0) 06:22:27 DEBUG --- stdout --- 06:22:27 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 61m 0% 4413Mi 7% gke-xlou-cdm-default-pool-f05840a3-jnx6 3977m 25% 4531Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 81m 0% 998Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 62m 0% 5630Mi 9% gke-xlou-cdm-default-pool-f05840a3-tnc9 357m 2% 2577Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 60m 0% 5447Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 3493m 21% 4595Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 52m 0% 959Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 4689m 29% 14299Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14104Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 55m 0% 955Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 55m 0% 984Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 62m 0% 14176Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 339m 2% 2379Mi 4% 06:22:27 DEBUG --- stderr --- 06:22:27 DEBUG 06:23:24 INFO 06:23:24 INFO [loop_until]: kubectl --namespace=xlou top pods 06:23:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:23:24 INFO [loop_until]: OK (rc = 0) 06:23:24 DEBUG --- stdout --- 06:23:24 DEBUG 
NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 7m 3638Mi am-869fdb5db9-8dg94 5m 4636Mi am-869fdb5db9-wt7sg 7m 4778Mi ds-cts-0 6m 405Mi ds-cts-1 5m 376Mi ds-cts-2 6m 379Mi ds-idrepo-0 4116m 13802Mi ds-idrepo-1 11m 13672Mi ds-idrepo-2 11m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 3705m 3880Mi idm-65858d8c4c-pt5s9 3169m 3945Mi lodemon-66684b7694-c5c6m 2m 68Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 272m 577Mi 06:23:24 DEBUG --- stderr --- 06:23:24 DEBUG 06:23:27 INFO 06:23:27 INFO [loop_until]: kubectl --namespace=xlou top node 06:23:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:23:28 INFO [loop_until]: OK (rc = 0) 06:23:28 DEBUG --- stdout --- 06:23:28 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 60m 0% 4423Mi 7% gke-xlou-cdm-default-pool-f05840a3-jnx6 3737m 23% 4533Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 78m 0% 998Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 65m 0% 5639Mi 9% gke-xlou-cdm-default-pool-f05840a3-tnc9 350m 2% 2577Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 60m 0% 5444Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 3285m 20% 4601Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 54m 0% 960Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 4178m 26% 14314Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14106Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 56m 0% 954Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 53m 0% 983Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 61m 0% 14173Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 326m 2% 2381Mi 4% 06:23:28 DEBUG --- stderr --- 06:23:28 DEBUG 06:24:24 INFO 06:24:24 INFO [loop_until]: kubectl --namespace=xlou top pods 06:24:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:24:24 INFO [loop_until]: OK (rc = 0) 06:24:24 DEBUG --- stdout --- 06:24:24 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 5Mi am-869fdb5db9-5j69v 7m 3647Mi am-869fdb5db9-8dg94 6m 4636Mi am-869fdb5db9-wt7sg 6m 4788Mi ds-cts-0 6m 
405Mi ds-cts-1 6m 375Mi ds-cts-2 6m 379Mi ds-idrepo-0 4173m 13810Mi ds-idrepo-1 11m 13672Mi ds-idrepo-2 11m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 3860m 3884Mi idm-65858d8c4c-pt5s9 2635m 3946Mi lodemon-66684b7694-c5c6m 2m 68Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 271m 577Mi 06:24:24 DEBUG --- stderr --- 06:24:24 DEBUG 06:24:28 INFO 06:24:28 INFO [loop_until]: kubectl --namespace=xlou top node 06:24:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:24:28 INFO [loop_until]: OK (rc = 0) 06:24:28 DEBUG --- stdout --- 06:24:28 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 59m 0% 4436Mi 7% gke-xlou-cdm-default-pool-f05840a3-jnx6 3947m 24% 4538Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 77m 0% 1000Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 68m 0% 5663Mi 9% gke-xlou-cdm-default-pool-f05840a3-tnc9 352m 2% 2580Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 58m 0% 5445Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 2791m 17% 4605Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 55m 0% 960Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 4069m 25% 14301Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 62m 0% 14106Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 55m 0% 957Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 54m 0% 985Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 61m 0% 14177Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 341m 2% 2382Mi 4% 06:24:28 DEBUG --- stderr --- 06:24:28 DEBUG 06:25:24 INFO 06:25:24 INFO [loop_until]: kubectl --namespace=xlou top pods 06:25:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:25:24 INFO [loop_until]: OK (rc = 0) 06:25:24 DEBUG --- stdout --- 06:25:24 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 7m 3657Mi am-869fdb5db9-8dg94 5m 4636Mi am-869fdb5db9-wt7sg 7m 4801Mi ds-cts-0 6m 405Mi ds-cts-1 6m 376Mi ds-cts-2 6m 380Mi ds-idrepo-0 4472m 13817Mi ds-idrepo-1 11m 13672Mi ds-idrepo-2 11m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi 
idm-65858d8c4c-d6c9h 3544m 3899Mi idm-65858d8c4c-pt5s9 3280m 3953Mi lodemon-66684b7694-c5c6m 2m 68Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 262m 577Mi 06:25:24 DEBUG --- stderr --- 06:25:24 DEBUG 06:25:28 INFO 06:25:28 INFO [loop_until]: kubectl --namespace=xlou top node 06:25:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:25:28 INFO [loop_until]: OK (rc = 0) 06:25:28 DEBUG --- stdout --- 06:25:28 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 58m 0% 4448Mi 7% gke-xlou-cdm-default-pool-f05840a3-jnx6 3621m 22% 4554Mi 7% gke-xlou-cdm-default-pool-f05840a3-jqvg 77m 0% 1002Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 62m 0% 5662Mi 9% gke-xlou-cdm-default-pool-f05840a3-tnc9 351m 2% 2580Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 60m 0% 5444Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 3337m 21% 4610Mi 7% gke-xlou-cdm-ds-32e4dcb1-02kn 54m 0% 957Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 4331m 27% 14296Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14104Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 57m 0% 954Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 54m 0% 982Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 59m 0% 14178Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 332m 2% 2383Mi 4% 06:25:28 DEBUG --- stderr --- 06:25:28 DEBUG 06:26:24 INFO 06:26:24 INFO [loop_until]: kubectl --namespace=xlou top pods 06:26:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:26:25 INFO [loop_until]: OK (rc = 0) 06:26:25 DEBUG --- stdout --- 06:26:25 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 7m 3669Mi am-869fdb5db9-8dg94 6m 4636Mi am-869fdb5db9-wt7sg 10m 4812Mi ds-cts-0 6m 405Mi ds-cts-1 5m 377Mi ds-cts-2 6m 380Mi ds-idrepo-0 4068m 13797Mi ds-idrepo-1 12m 13672Mi ds-idrepo-2 11m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 3568m 3902Mi idm-65858d8c4c-pt5s9 2815m 3954Mi lodemon-66684b7694-c5c6m 2m 68Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 
257m  578Mi
06:26:25 DEBUG --- stderr ---
06:26:25 DEBUG
06:26:28 INFO
06:26:28 INFO [loop_until]: kubectl --namespace=xlou top node
06:26:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:26:28 INFO [loop_until]: OK (rc = 0)
06:26:28 DEBUG --- stdout ---
06:26:28 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5  62m  0%  4458Mi  7%
gke-xlou-cdm-default-pool-f05840a3-jnx6  3772m  23%  4558Mi  7%
gke-xlou-cdm-default-pool-f05840a3-jqvg  77m  0%  1000Mi  1%
gke-xlou-cdm-default-pool-f05840a3-rt14  64m  0%  5673Mi  9%
gke-xlou-cdm-default-pool-f05840a3-tnc9  352m  2%  2577Mi  4%
gke-xlou-cdm-default-pool-f05840a3-vslq  60m  0%  5443Mi  9%
gke-xlou-cdm-default-pool-f05840a3-zj9v  2756m  17%  4613Mi  7%
gke-xlou-cdm-ds-32e4dcb1-02kn  52m  0%  958Mi  1%
gke-xlou-cdm-ds-32e4dcb1-7x9g  4099m  25%  14314Mi  24%
gke-xlou-cdm-ds-32e4dcb1-hbvk  63m  0%  14102Mi  24%
gke-xlou-cdm-ds-32e4dcb1-l2t2  55m  0%  955Mi  1%
gke-xlou-cdm-ds-32e4dcb1-mt7t  54m  0%  982Mi  1%
gke-xlou-cdm-ds-32e4dcb1-zmqj  62m  0%  14173Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  320m  2%  2384Mi  4%
06:26:28 DEBUG --- stderr ---
06:26:28 DEBUG
06:27:25 INFO
06:27:25 INFO [loop_until]: kubectl --namespace=xlou top pods
06:27:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:27:25 INFO [loop_until]: OK (rc = 0)
06:27:25 DEBUG --- stdout ---
06:27:25 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv  1m  4Mi
am-869fdb5db9-5j69v  7m  3678Mi
am-869fdb5db9-8dg94  5m  4636Mi
am-869fdb5db9-wt7sg  6m  4822Mi
ds-cts-0  5m  405Mi
ds-cts-1  5m  375Mi
ds-cts-2  6m  380Mi
ds-idrepo-0  4034m  13806Mi
ds-idrepo-1  11m  13672Mi
ds-idrepo-2  10m  13644Mi
end-user-ui-6845bc78c7-sqnhx  1m  4Mi
idm-65858d8c4c-d6c9h  3604m  3905Mi
idm-65858d8c4c-pt5s9  2665m  3958Mi
lodemon-66684b7694-c5c6m  1m  68Mi
login-ui-74d6fb46c-qcg59  1m  3Mi
overseer-0-788b4494cc-bdwtm  257m  578Mi
06:27:25 DEBUG --- stderr ---
06:27:25 DEBUG
06:27:28 INFO
06:27:28 INFO [loop_until]: kubectl --namespace=xlou top node
06:27:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:27:28 INFO [loop_until]: OK (rc = 0)
06:27:28 DEBUG --- stdout ---
06:27:28 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5  61m  0%  4470Mi  7%
gke-xlou-cdm-default-pool-f05840a3-jnx6  3627m  22%  4562Mi  7%
gke-xlou-cdm-default-pool-f05840a3-jqvg  75m  0%  1000Mi  1%
gke-xlou-cdm-default-pool-f05840a3-rt14  62m  0%  5684Mi  9%
gke-xlou-cdm-default-pool-f05840a3-tnc9  360m  2%  2580Mi  4%
gke-xlou-cdm-default-pool-f05840a3-vslq  60m  0%  5445Mi  9%
gke-xlou-cdm-default-pool-f05840a3-zj9v  2840m  17%  4615Mi  7%
gke-xlou-cdm-ds-32e4dcb1-02kn  54m  0%  957Mi  1%
gke-xlou-cdm-ds-32e4dcb1-7x9g  3950m  24%  14287Mi  24%
gke-xlou-cdm-ds-32e4dcb1-hbvk  62m  0%  14107Mi  24%
gke-xlou-cdm-ds-32e4dcb1-l2t2  57m  0%  956Mi  1%
gke-xlou-cdm-ds-32e4dcb1-mt7t  52m  0%  986Mi  1%
gke-xlou-cdm-ds-32e4dcb1-zmqj  61m  0%  14178Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  328m  2%  2384Mi  4%
06:27:28 DEBUG --- stderr ---
06:27:28 DEBUG
06:28:25 INFO
06:28:25 INFO [loop_until]: kubectl --namespace=xlou top pods
06:28:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:28:25 INFO [loop_until]: OK (rc = 0)
06:28:25 DEBUG --- stdout ---
06:28:25 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv  1m  4Mi
am-869fdb5db9-5j69v  7m  3690Mi
am-869fdb5db9-8dg94  5m  4636Mi
am-869fdb5db9-wt7sg  7m  4833Mi
ds-cts-0  6m  405Mi
ds-cts-1  6m  376Mi
ds-cts-2  6m  380Mi
ds-idrepo-0  4017m  13811Mi
ds-idrepo-1  11m  13672Mi
ds-idrepo-2  11m  13644Mi
end-user-ui-6845bc78c7-sqnhx  1m  4Mi
idm-65858d8c4c-d6c9h  3448m  3909Mi
idm-65858d8c4c-pt5s9  2889m  3960Mi
lodemon-66684b7694-c5c6m  2m  68Mi
login-ui-74d6fb46c-qcg59  1m  3Mi
overseer-0-788b4494cc-bdwtm  257m  578Mi
06:28:25 DEBUG --- stderr ---
06:28:25 DEBUG
06:28:28 INFO
06:28:28 INFO [loop_until]: kubectl --namespace=xlou top node
06:28:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:28:28 INFO [loop_until]: OK (rc = 0)
06:28:28 DEBUG --- stdout ---
06:28:28 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5  61m  0%  4476Mi  7%
gke-xlou-cdm-default-pool-f05840a3-jnx6  3547m  22%  4563Mi  7%
gke-xlou-cdm-default-pool-f05840a3-jqvg  76m  0%  999Mi  1%
gke-xlou-cdm-default-pool-f05840a3-rt14  63m  0%  5692Mi  9%
gke-xlou-cdm-default-pool-f05840a3-tnc9  340m  2%  2577Mi  4%
gke-xlou-cdm-default-pool-f05840a3-vslq  60m  0%  5448Mi  9%
gke-xlou-cdm-default-pool-f05840a3-zj9v  2995m  18%  4619Mi  7%
gke-xlou-cdm-ds-32e4dcb1-02kn  54m  0%  958Mi  1%
gke-xlou-cdm-ds-32e4dcb1-7x9g  4110m  25%  14313Mi  24%
gke-xlou-cdm-ds-32e4dcb1-hbvk  61m  0%  14105Mi  24%
gke-xlou-cdm-ds-32e4dcb1-l2t2  57m  0%  955Mi  1%
gke-xlou-cdm-ds-32e4dcb1-mt7t  54m  0%  982Mi  1%
gke-xlou-cdm-ds-32e4dcb1-zmqj  62m  0%  14175Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  331m  2%  2384Mi  4%
06:28:28 DEBUG --- stderr ---
06:28:28 DEBUG
06:29:25 INFO
06:29:25 INFO [loop_until]: kubectl --namespace=xlou top pods
06:29:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:29:25 INFO [loop_until]: OK (rc = 0)
06:29:25 DEBUG --- stdout ---
06:29:25 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv  1m  4Mi
am-869fdb5db9-5j69v  7m  3700Mi
am-869fdb5db9-8dg94  5m  4636Mi
am-869fdb5db9-wt7sg  7m  4842Mi
ds-cts-0  6m  405Mi
ds-cts-1  5m  376Mi
ds-cts-2  6m  380Mi
ds-idrepo-0  3953m  13803Mi
ds-idrepo-1  11m  13672Mi
ds-idrepo-2  10m  13644Mi
end-user-ui-6845bc78c7-sqnhx  1m  4Mi
idm-65858d8c4c-d6c9h  3783m  3914Mi
idm-65858d8c4c-pt5s9  2826m  3963Mi
lodemon-66684b7694-c5c6m  2m  68Mi
login-ui-74d6fb46c-qcg59  1m  3Mi
overseer-0-788b4494cc-bdwtm  266m  579Mi
06:29:25 DEBUG --- stderr ---
06:29:25 DEBUG
06:29:28 INFO
06:29:28 INFO [loop_until]: kubectl --namespace=xlou top node
06:29:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:29:28 INFO [loop_until]: OK (rc = 0)
06:29:28 DEBUG --- stdout ---
06:29:28 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5  61m  0%  4485Mi  7%
gke-xlou-cdm-default-pool-f05840a3-jnx6  3896m  24%  4556Mi  7%
gke-xlou-cdm-default-pool-f05840a3-jqvg  78m  0%  998Mi  1%
gke-xlou-cdm-default-pool-f05840a3-rt14  61m  0%  5703Mi  9%
gke-xlou-cdm-default-pool-f05840a3-tnc9  357m  2%  2580Mi  4%
gke-xlou-cdm-default-pool-f05840a3-vslq  58m  0%  5444Mi  9%
gke-xlou-cdm-default-pool-f05840a3-zj9v  3009m  18%  4620Mi  7%
gke-xlou-cdm-ds-32e4dcb1-02kn  55m  0%  958Mi  1%
gke-xlou-cdm-ds-32e4dcb1-7x9g  4218m  26%  14297Mi  24%
gke-xlou-cdm-ds-32e4dcb1-hbvk  62m  0%  14105Mi  24%
gke-xlou-cdm-ds-32e4dcb1-l2t2  57m  0%  954Mi  1%
gke-xlou-cdm-ds-32e4dcb1-mt7t  55m  0%  983Mi  1%
gke-xlou-cdm-ds-32e4dcb1-zmqj  62m  0%  14176Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  342m  2%  2385Mi  4%
06:29:28 DEBUG --- stderr ---
06:29:28 DEBUG
06:30:25 INFO
06:30:25 INFO [loop_until]: kubectl --namespace=xlou top pods
06:30:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:30:25 INFO [loop_until]: OK (rc = 0)
06:30:25 DEBUG --- stdout ---
06:30:25 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv  1m  4Mi
am-869fdb5db9-5j69v  7m  3709Mi
am-869fdb5db9-8dg94  6m  4636Mi
am-869fdb5db9-wt7sg  7m  4853Mi
ds-cts-0  6m  405Mi
ds-cts-1  6m  376Mi
ds-cts-2  6m  380Mi
ds-idrepo-0  4215m  13823Mi
ds-idrepo-1  11m  13672Mi
ds-idrepo-2  11m  13644Mi
end-user-ui-6845bc78c7-sqnhx  1m  4Mi
idm-65858d8c4c-d6c9h  3782m  3916Mi
idm-65858d8c4c-pt5s9  2960m  3967Mi
lodemon-66684b7694-c5c6m  2m  68Mi
login-ui-74d6fb46c-qcg59  1m  3Mi
overseer-0-788b4494cc-bdwtm  266m  579Mi
06:30:25 DEBUG --- stderr ---
06:30:25 DEBUG
06:30:28 INFO
06:30:28 INFO [loop_until]: kubectl --namespace=xlou top node
06:30:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:30:28 INFO [loop_until]: OK (rc = 0)
06:30:28 DEBUG --- stdout ---
06:30:28 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5  61m  0%  4497Mi  7%
gke-xlou-cdm-default-pool-f05840a3-jnx6  3990m  25%  4567Mi  7%
gke-xlou-cdm-default-pool-f05840a3-jqvg  79m  0%  995Mi  1%
gke-xlou-cdm-default-pool-f05840a3-rt14  64m  0%  5710Mi  9%
gke-xlou-cdm-default-pool-f05840a3-tnc9  353m  2%  2583Mi  4%
gke-xlou-cdm-default-pool-f05840a3-vslq  66m  0%  5457Mi  9%
gke-xlou-cdm-default-pool-f05840a3-zj9v  3182m  20%  4618Mi  7%
gke-xlou-cdm-ds-32e4dcb1-02kn  53m  0%  959Mi  1%
gke-xlou-cdm-ds-32e4dcb1-7x9g  4474m  28%  14297Mi  24%
gke-xlou-cdm-ds-32e4dcb1-hbvk  62m  0%  14106Mi  24%
gke-xlou-cdm-ds-32e4dcb1-l2t2  56m  0%  953Mi  1%
gke-xlou-cdm-ds-32e4dcb1-mt7t  53m  0%  986Mi  1%
gke-xlou-cdm-ds-32e4dcb1-zmqj  64m  0%  14179Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  347m  2%  2386Mi  4%
06:30:28 DEBUG --- stderr ---
06:30:28 DEBUG
06:31:25 INFO
06:31:25 INFO [loop_until]: kubectl --namespace=xlou top pods
06:31:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:31:25 INFO [loop_until]: OK (rc = 0)
06:31:25 DEBUG --- stdout ---
06:31:25 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv  1m  4Mi
am-869fdb5db9-5j69v  7m  3721Mi
am-869fdb5db9-8dg94  6m  4636Mi
am-869fdb5db9-wt7sg  6m  4864Mi
ds-cts-0  6m  406Mi
ds-cts-1  6m  376Mi
ds-cts-2  7m  380Mi
ds-idrepo-0  4105m  13800Mi
ds-idrepo-1  11m  13673Mi
ds-idrepo-2  11m  13644Mi
end-user-ui-6845bc78c7-sqnhx  1m  4Mi
idm-65858d8c4c-d6c9h  3595m  3920Mi
idm-65858d8c4c-pt5s9  2761m  3969Mi
lodemon-66684b7694-c5c6m  1m  68Mi
login-ui-74d6fb46c-qcg59  1m  3Mi
overseer-0-788b4494cc-bdwtm  263m  579Mi
06:31:25 DEBUG --- stderr ---
06:31:25 DEBUG
06:31:28 INFO
06:31:28 INFO [loop_until]: kubectl --namespace=xlou top node
06:31:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:31:29 INFO [loop_until]: OK (rc = 0)
06:31:29 DEBUG --- stdout ---
06:31:29 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5  61m  0%  4510Mi  7%
gke-xlou-cdm-default-pool-f05840a3-jnx6  3942m  24%  4574Mi  7%
gke-xlou-cdm-default-pool-f05840a3-jqvg  78m  0%  996Mi  1%
gke-xlou-cdm-default-pool-f05840a3-rt14  64m  0%  5722Mi  9%
gke-xlou-cdm-default-pool-f05840a3-tnc9  356m  2%  2576Mi  4%
gke-xlou-cdm-default-pool-f05840a3-vslq  58m  0%  5443Mi  9%
gke-xlou-cdm-default-pool-f05840a3-zj9v  3024m  19%  4625Mi  7%
gke-xlou-cdm-ds-32e4dcb1-02kn  53m  0%  961Mi  1%
gke-xlou-cdm-ds-32e4dcb1-7x9g  4051m  25%  14292Mi  24%
gke-xlou-cdm-ds-32e4dcb1-hbvk  61m  0%  14108Mi  24%
gke-xlou-cdm-ds-32e4dcb1-l2t2  56m  0%  952Mi  1%
gke-xlou-cdm-ds-32e4dcb1-mt7t  52m  0%  985Mi  1%
gke-xlou-cdm-ds-32e4dcb1-zmqj  62m  0%  14176Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  339m  2%  2384Mi  4%
06:31:29 DEBUG --- stderr ---
06:31:29 DEBUG
06:32:25 INFO
06:32:25 INFO [loop_until]: kubectl --namespace=xlou top pods
06:32:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:32:25 INFO [loop_until]: OK (rc = 0)
06:32:25 DEBUG --- stdout ---
06:32:25 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv  1m  4Mi
am-869fdb5db9-5j69v  8m  3733Mi
am-869fdb5db9-8dg94  6m  4636Mi
am-869fdb5db9-wt7sg  7m  4876Mi
ds-cts-0  8m  405Mi
ds-cts-1  8m  377Mi
ds-cts-2  6m  381Mi
ds-idrepo-0  4413m  13808Mi
ds-idrepo-1  11m  13673Mi
ds-idrepo-2  11m  13645Mi
end-user-ui-6845bc78c7-sqnhx  1m  4Mi
idm-65858d8c4c-d6c9h  4140m  3972Mi
idm-65858d8c4c-pt5s9  3045m  3998Mi
lodemon-66684b7694-c5c6m  1m  68Mi
login-ui-74d6fb46c-qcg59  1m  3Mi
overseer-0-788b4494cc-bdwtm  285m  662Mi
06:32:25 DEBUG --- stderr ---
06:32:25 DEBUG
06:32:29 INFO
06:32:29 INFO [loop_until]: kubectl --namespace=xlou top node
06:32:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:32:29 INFO [loop_until]: OK (rc = 0)
06:32:29 DEBUG --- stdout ---
06:32:29 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5  61m  0%  4521Mi  7%
gke-xlou-cdm-default-pool-f05840a3-jnx6  4315m  27%  4620Mi  7%
gke-xlou-cdm-default-pool-f05840a3-jqvg  77m  0%  1001Mi  1%
gke-xlou-cdm-default-pool-f05840a3-rt14  66m  0%  5736Mi  9%
gke-xlou-cdm-default-pool-f05840a3-tnc9  367m  2%  2585Mi  4%
gke-xlou-cdm-default-pool-f05840a3-vslq  62m  0%  5446Mi  9%
gke-xlou-cdm-default-pool-f05840a3-zj9v  3222m  20%  4660Mi  7%
gke-xlou-cdm-ds-32e4dcb1-02kn  54m  0%  956Mi  1%
gke-xlou-cdm-ds-32e4dcb1-7x9g  4484m  28%  14300Mi  24%
gke-xlou-cdm-ds-32e4dcb1-hbvk  63m  0%  14107Mi  24%
gke-xlou-cdm-ds-32e4dcb1-l2t2  56m  0%  953Mi  1%
gke-xlou-cdm-ds-32e4dcb1-mt7t  56m  0%  984Mi  1%
gke-xlou-cdm-ds-32e4dcb1-zmqj  62m  0%  14180Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  361m  2%  2469Mi  4%
06:32:29 DEBUG --- stderr ---
06:32:29 DEBUG
06:33:25 INFO
06:33:25 INFO [loop_until]: kubectl --namespace=xlou top pods
06:33:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:33:25 INFO [loop_until]: OK (rc = 0)
06:33:25 DEBUG --- stdout ---
06:33:25 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv  1m  4Mi
am-869fdb5db9-5j69v  7m  3741Mi
am-869fdb5db9-8dg94  5m  4636Mi
am-869fdb5db9-wt7sg  6m  4887Mi
ds-cts-0  6m  405Mi
ds-cts-1  5m  376Mi
ds-cts-2  8m  380Mi
ds-idrepo-0  3952m  13811Mi
ds-idrepo-1  11m  13672Mi
ds-idrepo-2  11m  13645Mi
end-user-ui-6845bc78c7-sqnhx  1m  4Mi
idm-65858d8c4c-d6c9h  3283m  3974Mi
idm-65858d8c4c-pt5s9  2794m  4000Mi
lodemon-66684b7694-c5c6m  2m  68Mi
login-ui-74d6fb46c-qcg59  1m  3Mi
overseer-0-788b4494cc-bdwtm  248m  662Mi
06:33:25 DEBUG --- stderr ---
06:33:25 DEBUG
06:33:29 INFO
06:33:29 INFO [loop_until]: kubectl --namespace=xlou top node
06:33:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:33:29 INFO [loop_until]: OK (rc = 0)
06:33:29 DEBUG --- stdout ---
06:33:29 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5  61m  0%  4532Mi  7%
gke-xlou-cdm-default-pool-f05840a3-jnx6  3512m  22%  4623Mi  7%
gke-xlou-cdm-default-pool-f05840a3-jqvg  73m  0%  996Mi  1%
gke-xlou-cdm-default-pool-f05840a3-rt14  59m  0%  5742Mi  9%
gke-xlou-cdm-default-pool-f05840a3-tnc9  354m  2%  2583Mi  4%
gke-xlou-cdm-default-pool-f05840a3-vslq  61m  0%  5444Mi  9%
gke-xlou-cdm-default-pool-f05840a3-zj9v  2805m  17%  4652Mi  7%
gke-xlou-cdm-ds-32e4dcb1-02kn  57m  0%  958Mi  1%
gke-xlou-cdm-ds-32e4dcb1-7x9g  4037m  25%  14319Mi  24%
gke-xlou-cdm-ds-32e4dcb1-hbvk  64m  0%  14111Mi  24%
gke-xlou-cdm-ds-32e4dcb1-l2t2  54m  0%  954Mi  1%
gke-xlou-cdm-ds-32e4dcb1-mt7t  52m  0%  985Mi  1%
gke-xlou-cdm-ds-32e4dcb1-zmqj  60m  0%  14178Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  307m  1%  2470Mi  4%
06:33:29 DEBUG --- stderr ---
06:33:29 DEBUG
06:34:25 INFO
06:34:25 INFO [loop_until]: kubectl --namespace=xlou top pods
06:34:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:34:25 INFO [loop_until]: OK (rc = 0)
06:34:25 DEBUG --- stdout ---
06:34:25 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv  1m  4Mi
am-869fdb5db9-5j69v  7m  3753Mi
am-869fdb5db9-8dg94  6m  4636Mi
am-869fdb5db9-wt7sg  6m  4895Mi
ds-cts-0  5m  405Mi
ds-cts-1  6m  376Mi
ds-cts-2  6m  380Mi
ds-idrepo-0  3730m  13798Mi
ds-idrepo-1  12m  13672Mi
ds-idrepo-2  11m  13644Mi
end-user-ui-6845bc78c7-sqnhx  1m  4Mi
idm-65858d8c4c-d6c9h  3064m  3976Mi
idm-65858d8c4c-pt5s9  2594m  4002Mi
lodemon-66684b7694-c5c6m  2m  68Mi
login-ui-74d6fb46c-qcg59  1m  3Mi
overseer-0-788b4494cc-bdwtm  232m  661Mi
06:34:25 DEBUG --- stderr ---
06:34:25 DEBUG
06:34:29 INFO
06:34:29 INFO [loop_until]: kubectl --namespace=xlou top node
06:34:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:34:29 INFO [loop_until]: OK (rc = 0)
06:34:29 DEBUG --- stdout ---
06:34:29 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5  61m  0%  4539Mi  7%
gke-xlou-cdm-default-pool-f05840a3-jnx6  3456m  21%  4627Mi  7%
gke-xlou-cdm-default-pool-f05840a3-jqvg  74m  0%  999Mi  1%
gke-xlou-cdm-default-pool-f05840a3-rt14  66m  0%  5754Mi  9%
gke-xlou-cdm-default-pool-f05840a3-tnc9  347m  2%  2582Mi  4%
gke-xlou-cdm-default-pool-f05840a3-vslq  61m  0%  5442Mi  9%
gke-xlou-cdm-default-pool-f05840a3-zj9v  2503m  15%  4656Mi  7%
gke-xlou-cdm-ds-32e4dcb1-02kn  54m  0%  958Mi  1%
gke-xlou-cdm-ds-32e4dcb1-7x9g  3780m  23%  14296Mi  24%
gke-xlou-cdm-ds-32e4dcb1-hbvk  62m  0%  14106Mi  24%
gke-xlou-cdm-ds-32e4dcb1-l2t2  55m  0%  951Mi  1%
gke-xlou-cdm-ds-32e4dcb1-mt7t  53m  0%  985Mi  1%
gke-xlou-cdm-ds-32e4dcb1-zmqj  62m  0%  14175Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  316m  1%  2470Mi  4%
06:34:29 DEBUG --- stderr ---
06:34:29 DEBUG
06:35:25 INFO
06:35:25 INFO [loop_until]: kubectl --namespace=xlou top pods
06:35:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:35:25 INFO [loop_until]: OK (rc = 0)
06:35:25 DEBUG --- stdout ---
06:35:25 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv  1m  4Mi
am-869fdb5db9-5j69v  7m  3765Mi
am-869fdb5db9-8dg94  6m  4636Mi
am-869fdb5db9-wt7sg  7m  4907Mi
ds-cts-0  5m  405Mi
ds-cts-1  5m  377Mi
ds-cts-2  6m  381Mi
ds-idrepo-0  11m  13802Mi
ds-idrepo-1  11m  13673Mi
ds-idrepo-2  10m  13645Mi
end-user-ui-6845bc78c7-sqnhx  1m  4Mi
idm-65858d8c4c-d6c9h  5m  3976Mi
idm-65858d8c4c-pt5s9  7m  4001Mi
lodemon-66684b7694-c5c6m  2m  68Mi
login-ui-74d6fb46c-qcg59  1m  3Mi
overseer-0-788b4494cc-bdwtm  21m  107Mi
06:35:25 DEBUG --- stderr ---
06:35:25 DEBUG
06:35:29 INFO
06:35:29 INFO [loop_until]: kubectl --namespace=xlou top node
06:35:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:35:29 INFO [loop_until]: OK (rc = 0)
06:35:29 DEBUG --- stdout ---
06:35:29 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5  59m  0%  4550Mi  7%
gke-xlou-cdm-default-pool-f05840a3-jnx6  81m  0%  4629Mi  7%
gke-xlou-cdm-default-pool-f05840a3-jqvg  72m  0%  997Mi  1%
gke-xlou-cdm-default-pool-f05840a3-rt14  65m  0%  5767Mi  9%
gke-xlou-cdm-default-pool-f05840a3-tnc9  108m  0%  2574Mi  4%
gke-xlou-cdm-default-pool-f05840a3-vslq  62m  0%  5442Mi  9%
gke-xlou-cdm-default-pool-f05840a3-zj9v  73m  0%  4658Mi  7%
gke-xlou-cdm-ds-32e4dcb1-02kn  65m  0%  956Mi  1%
gke-xlou-cdm-ds-32e4dcb1-7x9g  58m  0%  14296Mi  24%
gke-xlou-cdm-ds-32e4dcb1-hbvk  60m  0%  14108Mi  24%
gke-xlou-cdm-ds-32e4dcb1-l2t2  57m  0%  950Mi  1%
gke-xlou-cdm-ds-32e4dcb1-mt7t  53m  0%  983Mi  1%
gke-xlou-cdm-ds-32e4dcb1-zmqj  64m  0%  14175Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  78m  0%  1918Mi  3%
06:35:29 DEBUG --- stderr ---
06:35:29 DEBUG
06:36:25 INFO
06:36:25 INFO [loop_until]: kubectl --namespace=xlou top pods
06:36:25 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:36:26 INFO [loop_until]: OK (rc = 0)
06:36:26 DEBUG --- stdout ---
06:36:26 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv  1m  4Mi
am-869fdb5db9-5j69v  7m  3777Mi
am-869fdb5db9-8dg94  6m  4636Mi
am-869fdb5db9-wt7sg  7m  4915Mi
ds-cts-0  5m  405Mi
ds-cts-1  6m  377Mi
ds-cts-2  6m  380Mi
ds-idrepo-0  10m  13802Mi
ds-idrepo-1  11m  13672Mi
ds-idrepo-2  10m  13644Mi
end-user-ui-6845bc78c7-sqnhx  1m  4Mi
idm-65858d8c4c-d6c9h  5m  3976Mi
idm-65858d8c4c-pt5s9  6m  4000Mi
lodemon-66684b7694-c5c6m  1m  68Mi
login-ui-74d6fb46c-qcg59  1m  3Mi
overseer-0-788b4494cc-bdwtm  1m  107Mi
06:36:26 DEBUG --- stderr ---
06:36:26 DEBUG
127.0.0.1 - - [16/Aug/2023 06:36:26] "GET /monitoring/average?start_time=23-08-16_05:06:01&stop_time=23-08-16_05:34:26 HTTP/1.1" 200 -
06:36:29 INFO
06:36:29 INFO [loop_until]: kubectl --namespace=xlou top node
06:36:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:36:29 INFO [loop_until]: OK (rc = 0)
06:36:29 DEBUG --- stdout ---
06:36:29 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5  62m  0%  4557Mi  7%
gke-xlou-cdm-default-pool-f05840a3-jnx6  80m  0%  4626Mi  7%
gke-xlou-cdm-default-pool-f05840a3-jqvg  75m  0%  997Mi  1%
gke-xlou-cdm-default-pool-f05840a3-rt14  64m  0%  5778Mi  9%
gke-xlou-cdm-default-pool-f05840a3-tnc9  111m  0%  2576Mi  4%
gke-xlou-cdm-default-pool-f05840a3-vslq  60m  0%  5444Mi  9%
gke-xlou-cdm-default-pool-f05840a3-zj9v  68m  0%  4657Mi  7%
gke-xlou-cdm-ds-32e4dcb1-02kn  54m  0%  959Mi  1%
gke-xlou-cdm-ds-32e4dcb1-7x9g  57m  0%  14300Mi  24%
gke-xlou-cdm-ds-32e4dcb1-hbvk  62m  0%  14106Mi  24%
gke-xlou-cdm-ds-32e4dcb1-l2t2  56m  0%  954Mi  1%
gke-xlou-cdm-ds-32e4dcb1-mt7t  52m  0%  984Mi  1%
gke-xlou-cdm-ds-32e4dcb1-zmqj  63m  0%  14181Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  66m  0%  1920Mi  3%
06:36:29 DEBUG --- stderr ---
06:36:29 DEBUG
06:37:26 INFO
06:37:26 INFO [loop_until]: kubectl --namespace=xlou top pods
06:37:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:37:26 INFO [loop_until]: OK (rc = 0)
06:37:26 DEBUG --- stdout ---
06:37:26 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv  1m  4Mi
am-869fdb5db9-5j69v  10m  3791Mi
am-869fdb5db9-8dg94  6m  4636Mi
am-869fdb5db9-wt7sg  7m  4927Mi
ds-cts-0  7m  405Mi
ds-cts-1  7m  377Mi
ds-cts-2  6m  381Mi
ds-idrepo-0  3254m  13826Mi
ds-idrepo-1  11m  13673Mi
ds-idrepo-2  10m  13645Mi
end-user-ui-6845bc78c7-sqnhx  1m  4Mi
idm-65858d8c4c-d6c9h  1537m  3980Mi
idm-65858d8c4c-pt5s9  1545m  4005Mi
lodemon-66684b7694-c5c6m  1m  69Mi
login-ui-74d6fb46c-qcg59  1m  3Mi
overseer-0-788b4494cc-bdwtm  525m  511Mi
06:37:26 DEBUG --- stderr ---
06:37:26 DEBUG
06:37:29 INFO
06:37:29 INFO [loop_until]: kubectl --namespace=xlou top node
06:37:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:37:29 INFO [loop_until]: OK (rc = 0)
06:37:29 DEBUG --- stdout ---
06:37:29 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5  64m  0%  4574Mi  7%
gke-xlou-cdm-default-pool-f05840a3-jnx6  2490m  15%  4623Mi  7%
gke-xlou-cdm-default-pool-f05840a3-jqvg  75m  0%  996Mi  1%
gke-xlou-cdm-default-pool-f05840a3-rt14  64m  0%  5786Mi  9%
gke-xlou-cdm-default-pool-f05840a3-tnc9  266m  1%  2574Mi  4%
gke-xlou-cdm-default-pool-f05840a3-vslq  61m  0%  5444Mi  9%
gke-xlou-cdm-default-pool-f05840a3-zj9v  2600m  16%  4661Mi  7%
gke-xlou-cdm-ds-32e4dcb1-02kn  56m  0%  960Mi  1%
gke-xlou-cdm-ds-32e4dcb1-7x9g  2993m  18%  14297Mi  24%
gke-xlou-cdm-ds-32e4dcb1-hbvk  62m  0%  14109Mi  24%
gke-xlou-cdm-ds-32e4dcb1-l2t2  57m  0%  952Mi  1%
gke-xlou-cdm-ds-32e4dcb1-mt7t  55m  0%  983Mi  1%
gke-xlou-cdm-ds-32e4dcb1-zmqj  59m  0%  14179Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  667m  4%  2316Mi  3%
06:37:29 DEBUG --- stderr ---
06:37:29 DEBUG
06:38:26 INFO
06:38:26 INFO [loop_until]: kubectl --namespace=xlou top pods
06:38:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:38:26 INFO [loop_until]: OK (rc = 0)
06:38:26 DEBUG --- stdout ---
06:38:26 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv  1m  4Mi
am-869fdb5db9-5j69v  7m  3802Mi
am-869fdb5db9-8dg94  6m  4636Mi
am-869fdb5db9-wt7sg  6m  4939Mi
ds-cts-0  5m  405Mi
ds-cts-1  6m  377Mi
ds-cts-2  6m  380Mi
ds-idrepo-0  4538m  13801Mi
ds-idrepo-1  1101m  13674Mi
ds-idrepo-2  11m  13645Mi
end-user-ui-6845bc78c7-sqnhx  1m  4Mi
idm-65858d8c4c-d6c9h  5062m  4402Mi
idm-65858d8c4c-pt5s9  3584m  4017Mi
lodemon-66684b7694-c5c6m  2m  69Mi
login-ui-74d6fb46c-qcg59  1m  3Mi
overseer-0-788b4494cc-bdwtm  431m  601Mi
06:38:26 DEBUG --- stderr ---
06:38:26 DEBUG
06:38:29 INFO
06:38:29 INFO [loop_until]: kubectl --namespace=xlou top node
06:38:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:38:29 INFO [loop_until]: OK (rc = 0)
06:38:29 DEBUG --- stdout ---
06:38:29 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5  62m  0%  4587Mi  7%
gke-xlou-cdm-default-pool-f05840a3-jnx6  5024m  31%  5070Mi  8%
gke-xlou-cdm-default-pool-f05840a3-jqvg  76m  0%  996Mi  1%
gke-xlou-cdm-default-pool-f05840a3-rt14  62m  0%  5798Mi  9%
gke-xlou-cdm-default-pool-f05840a3-tnc9  386m  2%  2584Mi  4%
gke-xlou-cdm-default-pool-f05840a3-vslq  62m  0%  5441Mi  9%
gke-xlou-cdm-default-pool-f05840a3-zj9v  3653m  22%  4671Mi  7%
gke-xlou-cdm-ds-32e4dcb1-02kn  55m  0%  959Mi  1%
gke-xlou-cdm-ds-32e4dcb1-7x9g  3578m  22%  14319Mi  24%
gke-xlou-cdm-ds-32e4dcb1-hbvk  64m  0%  14107Mi  24%
gke-xlou-cdm-ds-32e4dcb1-l2t2  56m  0%  952Mi  1%
gke-xlou-cdm-ds-32e4dcb1-mt7t  53m  0%  982Mi  1%
gke-xlou-cdm-ds-32e4dcb1-zmqj  1497m  9%  14180Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  506m  3%  2405Mi  4%
06:38:29 DEBUG --- stderr ---
06:38:29 DEBUG
06:39:26 INFO
06:39:26 INFO [loop_until]: kubectl --namespace=xlou top pods
06:39:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:39:26 INFO [loop_until]: OK (rc = 0)
06:39:26 DEBUG --- stdout ---
06:39:26 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv  1m  4Mi
am-869fdb5db9-5j69v  7m  3813Mi
am-869fdb5db9-8dg94  6m  4636Mi
am-869fdb5db9-wt7sg  7m  4948Mi
ds-cts-0  6m  405Mi
ds-cts-1  5m  377Mi
ds-cts-2  7m  380Mi
ds-idrepo-0  3624m  13800Mi
ds-idrepo-1  957m  13703Mi
ds-idrepo-2  11m  13643Mi
end-user-ui-6845bc78c7-sqnhx  1m  4Mi
idm-65858d8c4c-d6c9h  4849m  4886Mi
idm-65858d8c4c-pt5s9  3956m  4036Mi
lodemon-66684b7694-c5c6m  3m  69Mi
login-ui-74d6fb46c-qcg59  1m  3Mi
overseer-0-788b4494cc-bdwtm  316m  676Mi
06:39:26 DEBUG --- stderr ---
06:39:26 DEBUG
06:39:29 INFO
06:39:29 INFO [loop_until]: kubectl --namespace=xlou top node
06:39:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:39:30 INFO [loop_until]: OK (rc = 0)
06:39:30 DEBUG --- stdout ---
06:39:30 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5  56m  0%  4601Mi  7%
gke-xlou-cdm-default-pool-f05840a3-jnx6  5018m  31%  5527Mi  9%
gke-xlou-cdm-default-pool-f05840a3-jqvg  76m  0%  995Mi  1%
gke-xlou-cdm-default-pool-f05840a3-rt14  64m  0%  5808Mi  9%
gke-xlou-cdm-default-pool-f05840a3-tnc9  384m  2%  2582Mi  4%
gke-xlou-cdm-default-pool-f05840a3-vslq  60m  0%  5445Mi  9%
gke-xlou-cdm-default-pool-f05840a3-zj9v  3750m  23%  4689Mi  7%
gke-xlou-cdm-ds-32e4dcb1-02kn  55m  0%  959Mi  1%
gke-xlou-cdm-ds-32e4dcb1-7x9g  3402m  21%  14297Mi  24%
gke-xlou-cdm-ds-32e4dcb1-hbvk  63m  0%  14109Mi  24%
gke-xlou-cdm-ds-32e4dcb1-l2t2  57m  0%  953Mi  1%
gke-xlou-cdm-ds-32e4dcb1-mt7t  53m  0%  985Mi  1%
gke-xlou-cdm-ds-32e4dcb1-zmqj  1387m  8%  14211Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  393m  2%  2481Mi  4%
06:39:30 DEBUG --- stderr ---
06:39:30 DEBUG
06:40:26 INFO
06:40:26 INFO [loop_until]: kubectl --namespace=xlou top pods
06:40:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:40:26 INFO [loop_until]: OK (rc = 0)
06:40:26 DEBUG --- stdout ---
06:40:26 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv  1m  4Mi
am-869fdb5db9-5j69v  7m  3826Mi
am-869fdb5db9-8dg94  6m  4637Mi
am-869fdb5db9-wt7sg  7m  4962Mi
ds-cts-0  5m  405Mi
ds-cts-1  5m  377Mi
ds-cts-2  6m  381Mi
ds-idrepo-0  2287m  13805Mi
ds-idrepo-1  1489m  13706Mi
ds-idrepo-2  10m  13644Mi
end-user-ui-6845bc78c7-sqnhx  1m  4Mi
idm-65858d8c4c-d6c9h  3430m  4902Mi
idm-65858d8c4c-pt5s9  3798m  4255Mi
lodemon-66684b7694-c5c6m  2m  69Mi
login-ui-74d6fb46c-qcg59  1m  3Mi
overseer-0-788b4494cc-bdwtm  303m  722Mi
06:40:26 DEBUG --- stderr ---
06:40:26 DEBUG
06:40:30 INFO
06:40:30 INFO [loop_until]: kubectl --namespace=xlou top node
06:40:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:40:30 INFO [loop_until]: OK (rc = 0)
06:40:30 DEBUG --- stdout ---
06:40:30 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5  61m  0%  4611Mi  7%
gke-xlou-cdm-default-pool-f05840a3-jnx6  3343m  21%  5543Mi  9%
gke-xlou-cdm-default-pool-f05840a3-jqvg  78m  0%  994Mi  1%
gke-xlou-cdm-default-pool-f05840a3-rt14  64m  0%  5821Mi  9%
gke-xlou-cdm-default-pool-f05840a3-tnc9  370m  2%  2608Mi  4%
gke-xlou-cdm-default-pool-f05840a3-vslq  61m  0%  5444Mi  9%
gke-xlou-cdm-default-pool-f05840a3-zj9v  4086m  25%  4862Mi  8%
gke-xlou-cdm-ds-32e4dcb1-02kn  55m  0%  961Mi  1%
gke-xlou-cdm-ds-32e4dcb1-7x9g  2531m  15%  14301Mi  24%
gke-xlou-cdm-ds-32e4dcb1-hbvk  62m  0%  14106Mi  24%
gke-xlou-cdm-ds-32e4dcb1-l2t2  56m  0%  954Mi  1%
gke-xlou-cdm-ds-32e4dcb1-mt7t  55m  0%  984Mi  1%
gke-xlou-cdm-ds-32e4dcb1-zmqj  1517m  9%  14210Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  341m  2%  2530Mi  4%
06:40:30 DEBUG --- stderr ---
06:40:30 DEBUG
06:41:26 INFO
06:41:26 INFO [loop_until]: kubectl --namespace=xlou top pods
06:41:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:41:26 INFO [loop_until]: OK (rc = 0)
06:41:26 DEBUG --- stdout ---
06:41:26 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv  1m  4Mi
am-869fdb5db9-5j69v  7m  3831Mi
am-869fdb5db9-8dg94  6m  4637Mi
am-869fdb5db9-wt7sg  7m  4974Mi
ds-cts-0  6m  405Mi
ds-cts-1  6m  377Mi
ds-cts-2  6m  381Mi
ds-idrepo-0  3017m  13825Mi
ds-idrepo-1  784m  13707Mi
ds-idrepo-2  10m  13644Mi
end-user-ui-6845bc78c7-sqnhx  1m  4Mi
idm-65858d8c4c-d6c9h  4860m  4958Mi
idm-65858d8c4c-pt5s9  3868m  5049Mi
lodemon-66684b7694-c5c6m  2m  69Mi
login-ui-74d6fb46c-qcg59  1m  3Mi
overseer-0-788b4494cc-bdwtm  300m  785Mi
06:41:26 DEBUG --- stderr ---
06:41:26 DEBUG
06:41:30 INFO
06:41:30 INFO [loop_until]: kubectl --namespace=xlou top node
06:41:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:41:30 INFO [loop_until]: OK (rc = 0)
06:41:30 DEBUG --- stdout ---
06:41:30 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5  63m  0%  4615Mi  7%
gke-xlou-cdm-default-pool-f05840a3-jnx6  4253m  26%  5602Mi  9%
gke-xlou-cdm-default-pool-f05840a3-jqvg  74m  0%  997Mi  1%
gke-xlou-cdm-default-pool-f05840a3-rt14  63m  0%  5830Mi  9%
gke-xlou-cdm-default-pool-f05840a3-tnc9  387m  2%  2635Mi  4%
gke-xlou-cdm-default-pool-f05840a3-vslq  62m  0%  5443Mi  9%
gke-xlou-cdm-default-pool-f05840a3-zj9v  4260m  26%  5699Mi  9%
gke-xlou-cdm-ds-32e4dcb1-02kn  54m  0%  961Mi  1%
gke-xlou-cdm-ds-32e4dcb1-7x9g  2572m  16%  14324Mi  24%
gke-xlou-cdm-ds-32e4dcb1-hbvk  62m  0%  14108Mi  24%
gke-xlou-cdm-ds-32e4dcb1-l2t2  57m  0%  953Mi  1%
gke-xlou-cdm-ds-32e4dcb1-mt7t  52m  0%  985Mi  1%
gke-xlou-cdm-ds-32e4dcb1-zmqj  1822m  11%  14244Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  353m  2%  2590Mi  4%
06:41:30 DEBUG --- stderr ---
06:41:30 DEBUG
06:42:26 INFO
06:42:26 INFO [loop_until]: kubectl --namespace=xlou top pods
06:42:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:42:26 INFO [loop_until]: OK (rc = 0)
06:42:26 DEBUG --- stdout ---
06:42:26 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv  1m  4Mi
am-869fdb5db9-5j69v  8m  3842Mi
am-869fdb5db9-8dg94  7m  4637Mi
am-869fdb5db9-wt7sg  7m  4985Mi
ds-cts-0  5m  405Mi
ds-cts-1  5m  377Mi
ds-cts-2  6m  381Mi
ds-idrepo-0  4581m  13808Mi
ds-idrepo-1  13m  13805Mi
ds-idrepo-2  10m  13645Mi
end-user-ui-6845bc78c7-sqnhx  1m  4Mi
idm-65858d8c4c-d6c9h  3749m  4964Mi
idm-65858d8c4c-pt5s9  3454m  5079Mi
lodemon-66684b7694-c5c6m  2m  69Mi
login-ui-74d6fb46c-qcg59  1m  3Mi
overseer-0-788b4494cc-bdwtm  284m  930Mi
06:42:26 DEBUG --- stderr ---
06:42:26 DEBUG
06:42:30 INFO
06:42:30 INFO [loop_until]: kubectl --namespace=xlou top node
06:42:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:42:30 INFO [loop_until]: OK (rc = 0)
06:42:30 DEBUG --- stdout ---
06:42:30 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5  63m  0%  4628Mi  7%
gke-xlou-cdm-default-pool-f05840a3-jnx6  3991m  25%  5608Mi  9%
gke-xlou-cdm-default-pool-f05840a3-jqvg  74m  0%  999Mi  1%
gke-xlou-cdm-default-pool-f05840a3-rt14  65m  0%  5844Mi  9%
gke-xlou-cdm-default-pool-f05840a3-tnc9  387m  2%  2603Mi  4%
gke-xlou-cdm-default-pool-f05840a3-vslq  58m  0%  5446Mi  9%
gke-xlou-cdm-default-pool-f05840a3-zj9v  4426m  27%  5727Mi  9%
gke-xlou-cdm-ds-32e4dcb1-02kn  54m  0%  959Mi  1%
gke-xlou-cdm-ds-32e4dcb1-7x9g  4784m  30%  14319Mi  24%
gke-xlou-cdm-ds-32e4dcb1-hbvk  64m  0%  14110Mi  24%
gke-xlou-cdm-ds-32e4dcb1-l2t2  56m  0%  953Mi  1%
gke-xlou-cdm-ds-32e4dcb1-mt7t  54m  0%  985Mi  1%
gke-xlou-cdm-ds-32e4dcb1-zmqj  60m  0%  14303Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  348m  2%  2733Mi  4%
06:42:30 DEBUG --- stderr ---
06:42:30 DEBUG
06:43:26 INFO
06:43:26 INFO [loop_until]: kubectl --namespace=xlou top pods
06:43:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:43:26 INFO [loop_until]: OK (rc = 0)
06:43:26 DEBUG --- stdout ---
06:43:26 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv  1m  4Mi
am-869fdb5db9-5j69v  8m  3851Mi
am-869fdb5db9-8dg94  6m  4637Mi
am-869fdb5db9-wt7sg  7m  4995Mi
ds-cts-0  6m  405Mi
ds-cts-1  5m  377Mi
ds-cts-2  5m  381Mi
ds-idrepo-0  3127m  13809Mi
ds-idrepo-1  2064m  13808Mi
ds-idrepo-2  10m  13644Mi
end-user-ui-6845bc78c7-sqnhx  1m  4Mi
idm-65858d8c4c-d6c9h  4680m  4969Mi
idm-65858d8c4c-pt5s9  3900m  5106Mi
lodemon-66684b7694-c5c6m  1m  69Mi
login-ui-74d6fb46c-qcg59  1m  3Mi
overseer-0-788b4494cc-bdwtm  269m  932Mi
06:43:26 DEBUG --- stderr ---
06:43:26 DEBUG
06:43:30 INFO
06:43:30 INFO [loop_until]: kubectl --namespace=xlou top node
06:43:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:43:30 INFO [loop_until]: OK (rc = 0)
06:43:30 DEBUG --- stdout ---
06:43:30 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5  61m  0%  4633Mi  7%
gke-xlou-cdm-default-pool-f05840a3-jnx6  5082m  31%  5625Mi  9%
gke-xlou-cdm-default-pool-f05840a3-jqvg  73m  0%  998Mi  1%
gke-xlou-cdm-default-pool-f05840a3-rt14  63m  0%  5856Mi  9%
gke-xlou-cdm-default-pool-f05840a3-tnc9  393m  2%  2632Mi  4%
gke-xlou-cdm-default-pool-f05840a3-vslq  62m  0%  5447Mi  9%
gke-xlou-cdm-default-pool-f05840a3-zj9v  4041m  25%  5750Mi  9%
gke-xlou-cdm-ds-32e4dcb1-02kn  53m  0%  961Mi  1%
gke-xlou-cdm-ds-32e4dcb1-7x9g  3534m  22%  14319Mi  24%
gke-xlou-cdm-ds-32e4dcb1-hbvk  61m  0%  14112Mi  24%
gke-xlou-cdm-ds-32e4dcb1-l2t2  59m  0%  965Mi  1%
gke-xlou-cdm-ds-32e4dcb1-mt7t  54m  0%  984Mi  1%
gke-xlou-cdm-ds-32e4dcb1-zmqj  1735m  10%  14308Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  370m  2%  2734Mi  4%
06:43:30 DEBUG --- stderr ---
06:43:30 DEBUG
06:44:26 INFO
06:44:26 INFO [loop_until]: kubectl --namespace=xlou top pods
06:44:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:44:26 INFO [loop_until]: OK (rc = 0)
06:44:26 DEBUG --- stdout ---
06:44:26 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv  1m  5Mi
am-869fdb5db9-5j69v  8m  3862Mi
am-869fdb5db9-8dg94  6m  4637Mi
am-869fdb5db9-wt7sg  7m  5005Mi
ds-cts-0  6m  405Mi
ds-cts-1  5m  377Mi
ds-cts-2  6m  380Mi
ds-idrepo-0  1796m  13803Mi
ds-idrepo-1  2021m  13811Mi
ds-idrepo-2  13m  13645Mi
end-user-ui-6845bc78c7-sqnhx  1m  4Mi
idm-65858d8c4c-d6c9h  3085m  4975Mi
idm-65858d8c4c-pt5s9  2878m  5112Mi
lodemon-66684b7694-c5c6m  2m  69Mi
login-ui-74d6fb46c-qcg59  1m  3Mi
overseer-0-788b4494cc-bdwtm  282m  930Mi
06:44:26 DEBUG --- stderr ---
06:44:26 DEBUG
06:44:30 INFO
06:44:30 INFO [loop_until]: kubectl --namespace=xlou top node
06:44:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:44:30 INFO [loop_until]: OK (rc = 0)
06:44:30 DEBUG --- stdout ---
06:44:30 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5  66m  0%  4649Mi  7%
gke-xlou-cdm-default-pool-f05840a3-jnx6  3830m  24%  5622Mi  9%
gke-xlou-cdm-default-pool-f05840a3-jqvg  82m  0%  1000Mi  1%
gke-xlou-cdm-default-pool-f05840a3-rt14  65m  0%  5868Mi  10%
gke-xlou-cdm-default-pool-f05840a3-tnc9  393m  2%  2588Mi  4%
gke-xlou-cdm-default-pool-f05840a3-vslq  66m  0%  5447Mi  9%
gke-xlou-cdm-default-pool-f05840a3-zj9v  2087m  13%  5758Mi  9%
gke-xlou-cdm-ds-32e4dcb1-02kn  60m  0%  961Mi  1%
gke-xlou-cdm-ds-32e4dcb1-7x9g  2007m  12%  14322Mi  24%
gke-xlou-cdm-ds-32e4dcb1-hbvk  69m  0%  14110Mi  24%
gke-xlou-cdm-ds-32e4dcb1-l2t2  57m  0%  958Mi  1%
gke-xlou-cdm-ds-32e4dcb1-mt7t  55m  0%  985Mi  1%
gke-xlou-cdm-ds-32e4dcb1-zmqj  1630m  10%  14306Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  364m  2%  2733Mi  4%
06:44:30 DEBUG --- stderr ---
06:44:30 DEBUG
06:45:26 INFO
06:45:26 INFO [loop_until]: kubectl --namespace=xlou top pods
06:45:26 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:45:26 INFO [loop_until]: OK (rc = 0)
06:45:26 DEBUG --- stdout ---
06:45:26 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv  1m  4Mi
am-869fdb5db9-5j69v  7m  3872Mi
am-869fdb5db9-8dg94  6m  4637Mi
am-869fdb5db9-wt7sg  7m  5016Mi
ds-cts-0  6m  405Mi
ds-cts-1  5m  377Mi
ds-cts-2  6m  381Mi
ds-idrepo-0  4239m  13805Mi
ds-idrepo-1  2198m  13804Mi
ds-idrepo-2  11m  13644Mi
end-user-ui-6845bc78c7-sqnhx  1m  4Mi
idm-65858d8c4c-d6c9h  5194m  4977Mi
idm-65858d8c4c-pt5s9  3483m  5114Mi
lodemon-66684b7694-c5c6m  1m  69Mi
login-ui-74d6fb46c-qcg59  1m  3Mi
overseer-0-788b4494cc-bdwtm  291m  931Mi
06:45:26 DEBUG --- stderr ---
06:45:26 DEBUG
06:45:30 INFO
06:45:30 INFO [loop_until]: kubectl --namespace=xlou top node
06:45:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:45:30 INFO [loop_until]: OK (rc = 0)
06:45:30 DEBUG --- stdout ---
06:45:30 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5  62m  0%  4657Mi  7%
gke-xlou-cdm-default-pool-f05840a3-jnx6  5297m  33%  5622Mi  9%
gke-xlou-cdm-default-pool-f05840a3-jqvg  79m  0%  997Mi  1%
gke-xlou-cdm-default-pool-f05840a3-rt14  66m  0%  5876Mi  10%
gke-xlou-cdm-default-pool-f05840a3-tnc9  377m  2%  2614Mi  4%
gke-xlou-cdm-default-pool-f05840a3-vslq  61m  0%  5445Mi  9%
gke-xlou-cdm-default-pool-f05840a3-zj9v  3830m  24%  5760Mi  9%
gke-xlou-cdm-ds-32e4dcb1-02kn  54m  0%  960Mi  1%
gke-xlou-cdm-ds-32e4dcb1-7x9g  2306m  14%  14305Mi  24%
gke-xlou-cdm-ds-32e4dcb1-hbvk  63m  0%  14109Mi  24%
gke-xlou-cdm-ds-32e4dcb1-l2t2  55m  0%  957Mi  1%
gke-xlou-cdm-ds-32e4dcb1-mt7t  53m  0%  985Mi  1%
gke-xlou-cdm-ds-32e4dcb1-zmqj  2269m  14%  14324Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  343m  2%  2735Mi  4%
06:45:30 DEBUG --- stderr ---
06:45:30 DEBUG
06:46:27 INFO
06:46:27 INFO [loop_until]: kubectl --namespace=xlou top pods
06:46:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:46:27 INFO [loop_until]: OK (rc = 0)
06:46:27 DEBUG --- stdout ---
06:46:27 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv  1m  4Mi
am-869fdb5db9-5j69v  8m  3885Mi
am-869fdb5db9-8dg94  6m  4637Mi
am-869fdb5db9-wt7sg  7m  5027Mi
ds-cts-0  6m  405Mi
ds-cts-1  6m  377Mi
ds-cts-2  6m  381Mi
ds-idrepo-0  2646m  13828Mi
ds-idrepo-1  1522m  13812Mi
ds-idrepo-2  10m  13644Mi
end-user-ui-6845bc78c7-sqnhx  1m  4Mi
idm-65858d8c4c-d6c9h  5247m  4980Mi
idm-65858d8c4c-pt5s9  3743m  5120Mi
lodemon-66684b7694-c5c6m  2m  69Mi
login-ui-74d6fb46c-qcg59  1m  3Mi
overseer-0-788b4494cc-bdwtm  307m  932Mi
06:46:27 DEBUG --- stderr ---
06:46:27 DEBUG
06:46:30 INFO
06:46:30 INFO [loop_until]: kubectl --namespace=xlou top node
06:46:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:46:30 INFO [loop_until]: OK (rc = 0)
06:46:30 DEBUG --- stdout ---
06:46:30 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5  62m  0%  4669Mi  7%
gke-xlou-cdm-default-pool-f05840a3-jnx6  4924m  30%  5623Mi  9%
gke-xlou-cdm-default-pool-f05840a3-jqvg  77m  0%  997Mi  1%
gke-xlou-cdm-default-pool-f05840a3-rt14  62m  0%  5888Mi  10%
gke-xlou-cdm-default-pool-f05840a3-tnc9  388m  2%  2626Mi  4%
gke-xlou-cdm-default-pool-f05840a3-vslq  62m  0%  5445Mi  9%
gke-xlou-cdm-default-pool-f05840a3-zj9v  3783m  23%  5766Mi  9%
gke-xlou-cdm-ds-32e4dcb1-02kn  54m  0%  962Mi  1%
gke-xlou-cdm-ds-32e4dcb1-7x9g  2306m  14%  14328Mi  24%
gke-xlou-cdm-ds-32e4dcb1-hbvk  63m  0%  14106Mi  24%
gke-xlou-cdm-ds-32e4dcb1-l2t2  58m  0%  954Mi  1%
gke-xlou-cdm-ds-32e4dcb1-mt7t  53m  0%  987Mi  1%
gke-xlou-cdm-ds-32e4dcb1-zmqj  1564m  9%  14314Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  346m  2%  2737Mi  4%
06:46:30 DEBUG --- stderr ---
06:46:30 DEBUG
06:47:27 INFO
06:47:27 INFO [loop_until]: kubectl --namespace=xlou top pods
06:47:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:47:27 INFO [loop_until]: OK (rc = 0)
06:47:27 DEBUG --- stdout ---
06:47:27 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv  1m  4Mi
am-869fdb5db9-5j69v  8m  3895Mi
am-869fdb5db9-8dg94  6m  4637Mi
am-869fdb5db9-wt7sg  8m  5037Mi
ds-cts-0  5m  405Mi
ds-cts-1  6m  377Mi
ds-cts-2  6m  380Mi
ds-idrepo-0  3641m  13823Mi
ds-idrepo-1  2651m  13796Mi
ds-idrepo-2  11m  13644Mi
end-user-ui-6845bc78c7-sqnhx  1m  4Mi
idm-65858d8c4c-d6c9h  4142m  4982Mi
idm-65858d8c4c-pt5s9  4122m  5132Mi
lodemon-66684b7694-c5c6m  2m  69Mi
login-ui-74d6fb46c-qcg59  1m  3Mi
overseer-0-788b4494cc-bdwtm  330m  945Mi
06:47:27 DEBUG --- stderr ---
06:47:27 DEBUG
06:47:30 INFO
06:47:30 INFO [loop_until]: kubectl --namespace=xlou top node
06:47:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:47:31 INFO [loop_until]: OK (rc = 0)
06:47:31 DEBUG --- stdout ---
06:47:31 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5  66m  0%  4680Mi  7%
gke-xlou-cdm-default-pool-f05840a3-jnx6  5335m  33%  5627Mi  9%
gke-xlou-cdm-default-pool-f05840a3-jqvg  81m  0%  996Mi  1%
gke-xlou-cdm-default-pool-f05840a3-rt14  69m  0%  5898Mi  10%
gke-xlou-cdm-default-pool-f05840a3-tnc9  425m  2%  2636Mi  4%
gke-xlou-cdm-default-pool-f05840a3-vslq  64m  0%  5448Mi  9%
gke-xlou-cdm-default-pool-f05840a3-zj9v  3970m  24%  5776Mi  9%
gke-xlou-cdm-ds-32e4dcb1-02kn  60m  0%  969Mi  1%
gke-xlou-cdm-ds-32e4dcb1-7x9g  2263m  14%  14322Mi  24%
gke-xlou-cdm-ds-32e4dcb1-hbvk  66m  0%  14107Mi  24%
gke-xlou-cdm-ds-32e4dcb1-l2t2  58m  0%  956Mi  1%
gke-xlou-cdm-ds-32e4dcb1-mt7t  56m  0%  988Mi  1%
gke-xlou-cdm-ds-32e4dcb1-zmqj  1633m  10%  14317Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  395m  2%  2739Mi  4%
06:47:31 DEBUG --- stderr ---
06:47:31 DEBUG
06:48:27 INFO
06:48:27 INFO [loop_until]: kubectl --namespace=xlou top pods
06:48:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:48:27 INFO [loop_until]: OK (rc = 0)
06:48:27 DEBUG --- stdout ---
06:48:27 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv  1m  4Mi
am-869fdb5db9-5j69v  8m  3905Mi
am-869fdb5db9-8dg94  6m  4637Mi
am-869fdb5db9-wt7sg  8m  5050Mi
ds-cts-0  5m  405Mi
ds-cts-1  6m  377Mi
ds-cts-2  5m  380Mi
ds-idrepo-0  2655m  13801Mi
ds-idrepo-1  1644m  13822Mi
ds-idrepo-2  10m  13644Mi
end-user-ui-6845bc78c7-sqnhx  1m  4Mi
idm-65858d8c4c-d6c9h  4907m  5171Mi
idm-65858d8c4c-pt5s9  4482m  5129Mi
lodemon-66684b7694-c5c6m  2m  69Mi
login-ui-74d6fb46c-qcg59  1m  3Mi
overseer-0-788b4494cc-bdwtm  281m  946Mi
06:48:27 DEBUG --- stderr ---
06:48:27 DEBUG
06:48:31 INFO
06:48:31 INFO [loop_until]: kubectl --namespace=xlou top node
06:48:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:48:31 INFO [loop_until]: OK (rc = 0)
06:48:31 DEBUG --- stdout ---
06:48:31 DEBUG NAME  CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5  62m  0%  4694Mi  8%
gke-xlou-cdm-default-pool-f05840a3-jnx6  5361m  33%  5777Mi  9%
gke-xlou-cdm-default-pool-f05840a3-jqvg  76m  0%  999Mi  1%
gke-xlou-cdm-default-pool-f05840a3-rt14  64m  0%  5907Mi  10%
gke-xlou-cdm-default-pool-f05840a3-tnc9  403m  2%  2648Mi  4%
gke-xlou-cdm-default-pool-f05840a3-vslq  61m  0%  5446Mi  9%
gke-xlou-cdm-default-pool-f05840a3-zj9v  3841m  24%  5777Mi  9%
gke-xlou-cdm-ds-32e4dcb1-02kn  55m  0%  957Mi  1%
gke-xlou-cdm-ds-32e4dcb1-7x9g  4029m  25%  14300Mi  24%
gke-xlou-cdm-ds-32e4dcb1-hbvk  61m  0%  14111Mi  24%
gke-xlou-cdm-ds-32e4dcb1-l2t2  54m  0%  957Mi  1%
gke-xlou-cdm-ds-32e4dcb1-mt7t  51m  0%  988Mi  1%
gke-xlou-cdm-ds-32e4dcb1-zmqj  1623m  10%  14288Mi  24%
gke-xlou-cdm-frontend-a8771548-k40m  366m  2%  2748Mi  4%
06:48:31 DEBUG --- stderr ---
06:48:31 DEBUG
06:49:27 INFO
06:49:27 INFO [loop_until]: kubectl --namespace=xlou top pods
06:49:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
06:49:27 INFO [loop_until]: OK (rc = 0)
06:49:27 DEBUG --- stdout ---
06:49:27 DEBUG NAME  CPU(cores)  MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 5Mi am-869fdb5db9-5j69v 8m 3918Mi am-869fdb5db9-8dg94 7m 4637Mi am-869fdb5db9-wt7sg 7m 5061Mi ds-cts-0 8m 405Mi ds-cts-1 6m 377Mi ds-cts-2 6m 380Mi ds-idrepo-0 3035m 13798Mi ds-idrepo-1 1908m 13800Mi ds-idrepo-2 10m 13645Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 4020m 4985Mi idm-65858d8c4c-pt5s9 6468m 5133Mi lodemon-66684b7694-c5c6m 2m 69Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 289m 944Mi 06:49:27 DEBUG --- stderr --- 06:49:27 DEBUG 06:49:31 INFO 06:49:31 INFO [loop_until]: kubectl --namespace=xlou top node 06:49:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:49:31 INFO [loop_until]: OK (rc = 0) 06:49:31 DEBUG --- stdout --- 06:49:31 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 64m 0% 4701Mi 8% gke-xlou-cdm-default-pool-f05840a3-jnx6 3785m 23% 5626Mi 9% gke-xlou-cdm-default-pool-f05840a3-jqvg 79m 0% 1000Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 63m 0% 5918Mi 10% gke-xlou-cdm-default-pool-f05840a3-tnc9 405m 2% 2651Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 60m 0% 5449Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 5414m 34% 5780Mi 9% gke-xlou-cdm-ds-32e4dcb1-02kn 53m 0% 959Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 3832m 24% 14296Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 62m 0% 14108Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 57m 0% 957Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 55m 0% 988Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 1742m 10% 14290Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 358m 2% 2751Mi 4% 06:49:31 DEBUG --- stderr --- 06:49:31 DEBUG 06:50:27 INFO 06:50:27 INFO [loop_until]: kubectl --namespace=xlou top pods 06:50:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:50:27 INFO [loop_until]: OK (rc = 0) 06:50:27 DEBUG --- stdout --- 06:50:27 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 7m 3927Mi am-869fdb5db9-8dg94 6m 4637Mi am-869fdb5db9-wt7sg 7m 5070Mi ds-cts-0 5m 405Mi ds-cts-1 6m 377Mi 
ds-cts-2 8m 380Mi ds-idrepo-0 3553m 13803Mi ds-idrepo-1 2666m 13797Mi ds-idrepo-2 13m 13645Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 7747m 4987Mi idm-65858d8c4c-pt5s9 2307m 5137Mi lodemon-66684b7694-c5c6m 2m 69Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 279m 946Mi 06:50:27 DEBUG --- stderr --- 06:50:27 DEBUG 06:50:31 INFO 06:50:31 INFO [loop_until]: kubectl --namespace=xlou top node 06:50:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:50:31 INFO [loop_until]: OK (rc = 0) 06:50:31 DEBUG --- stdout --- 06:50:31 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 65m 0% 4716Mi 8% gke-xlou-cdm-default-pool-f05840a3-jnx6 7475m 47% 5630Mi 9% gke-xlou-cdm-default-pool-f05840a3-jqvg 91m 0% 1000Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 66m 0% 5930Mi 10% gke-xlou-cdm-default-pool-f05840a3-tnc9 408m 2% 2647Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 62m 0% 5444Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 2513m 15% 5780Mi 9% gke-xlou-cdm-ds-32e4dcb1-02kn 56m 0% 959Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 3569m 22% 14303Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 67m 0% 14110Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 957Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 61m 0% 988Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 1764m 11% 14291Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 357m 2% 2745Mi 4% 06:50:31 DEBUG --- stderr --- 06:50:31 DEBUG 06:51:27 INFO 06:51:27 INFO [loop_until]: kubectl --namespace=xlou top pods 06:51:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:51:27 INFO [loop_until]: OK (rc = 0) 06:51:27 DEBUG --- stdout --- 06:51:27 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 8m 3939Mi am-869fdb5db9-8dg94 6m 4637Mi am-869fdb5db9-wt7sg 7m 5080Mi ds-cts-0 6m 405Mi ds-cts-1 6m 377Mi ds-cts-2 6m 380Mi ds-idrepo-0 3105m 13800Mi ds-idrepo-1 2325m 13821Mi ds-idrepo-2 13m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 4151m 
4983Mi idm-65858d8c4c-pt5s9 5889m 5138Mi lodemon-66684b7694-c5c6m 2m 69Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 292m 945Mi 06:51:27 DEBUG --- stderr --- 06:51:27 DEBUG 06:51:31 INFO 06:51:31 INFO [loop_until]: kubectl --namespace=xlou top node 06:51:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:51:31 INFO [loop_until]: OK (rc = 0) 06:51:31 DEBUG --- stdout --- 06:51:31 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 65m 0% 4725Mi 8% gke-xlou-cdm-default-pool-f05840a3-jnx6 3522m 22% 5625Mi 9% gke-xlou-cdm-default-pool-f05840a3-jqvg 79m 0% 996Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 64m 0% 5943Mi 10% gke-xlou-cdm-default-pool-f05840a3-tnc9 400m 2% 2641Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 64m 0% 5447Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 6905m 43% 5779Mi 9% gke-xlou-cdm-ds-32e4dcb1-02kn 56m 0% 960Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 3569m 22% 14300Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 67m 0% 14112Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 58m 0% 956Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 53m 0% 987Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 1751m 11% 14293Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 362m 2% 2749Mi 4% 06:51:31 DEBUG --- stderr --- 06:51:31 DEBUG 06:52:27 INFO 06:52:27 INFO [loop_until]: kubectl --namespace=xlou top pods 06:52:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:52:27 INFO [loop_until]: OK (rc = 0) 06:52:27 DEBUG --- stdout --- 06:52:27 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 9m 3948Mi am-869fdb5db9-8dg94 7m 4637Mi am-869fdb5db9-wt7sg 12m 5092Mi ds-cts-0 5m 405Mi ds-cts-1 6m 377Mi ds-cts-2 7m 380Mi ds-idrepo-0 3147m 13822Mi ds-idrepo-1 1360m 13791Mi ds-idrepo-2 10m 13645Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 4996m 4984Mi idm-65858d8c4c-pt5s9 5225m 5135Mi lodemon-66684b7694-c5c6m 2m 69Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 297m 943Mi 06:52:27 DEBUG 
--- stderr --- 06:52:27 DEBUG 06:52:31 INFO 06:52:31 INFO [loop_until]: kubectl --namespace=xlou top node 06:52:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:52:31 INFO [loop_until]: OK (rc = 0) 06:52:31 DEBUG --- stdout --- 06:52:31 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 63m 0% 4737Mi 8% gke-xlou-cdm-default-pool-f05840a3-jnx6 4817m 30% 5627Mi 9% gke-xlou-cdm-default-pool-f05840a3-jqvg 79m 0% 999Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 69m 0% 5956Mi 10% gke-xlou-cdm-default-pool-f05840a3-tnc9 401m 2% 2619Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 62m 0% 5448Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 5256m 33% 5777Mi 9% gke-xlou-cdm-ds-32e4dcb1-02kn 54m 0% 963Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 3934m 24% 14323Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 65m 0% 14113Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 59m 0% 957Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 54m 0% 987Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 1880m 11% 14291Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 369m 2% 2751Mi 4% 06:52:31 DEBUG --- stderr --- 06:52:31 DEBUG 06:53:27 INFO 06:53:27 INFO [loop_until]: kubectl --namespace=xlou top pods 06:53:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:53:28 INFO [loop_until]: OK (rc = 0) 06:53:28 DEBUG --- stdout --- 06:53:28 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 8m 3959Mi am-869fdb5db9-8dg94 5m 4637Mi am-869fdb5db9-wt7sg 7m 5102Mi ds-cts-0 5m 405Mi ds-cts-1 6m 377Mi ds-cts-2 6m 380Mi ds-idrepo-0 4751m 13822Mi ds-idrepo-1 11m 13822Mi ds-idrepo-2 11m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 3860m 4985Mi idm-65858d8c4c-pt5s9 3466m 5134Mi lodemon-66684b7694-c5c6m 1m 69Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 257m 943Mi 06:53:28 DEBUG --- stderr --- 06:53:28 DEBUG 06:53:31 INFO 06:53:31 INFO [loop_until]: kubectl --namespace=xlou top node 06:53:31 INFO [loop_until]: (max_time=180, interval=5, 
expected_rc=[0] 06:53:31 INFO [loop_until]: OK (rc = 0) 06:53:31 DEBUG --- stdout --- 06:53:31 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 62m 0% 4748Mi 8% gke-xlou-cdm-default-pool-f05840a3-jnx6 3899m 24% 5629Mi 9% gke-xlou-cdm-default-pool-f05840a3-jqvg 77m 0% 999Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 65m 0% 5964Mi 10% gke-xlou-cdm-default-pool-f05840a3-tnc9 362m 2% 2597Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 59m 0% 5446Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 3684m 23% 5778Mi 9% gke-xlou-cdm-ds-32e4dcb1-02kn 53m 0% 961Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 4830m 30% 14322Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 64m 0% 14110Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 57m 0% 959Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 53m 0% 987Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 61m 0% 14320Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 329m 2% 2751Mi 4% 06:53:31 DEBUG --- stderr --- 06:53:31 DEBUG 06:54:28 INFO 06:54:28 INFO [loop_until]: kubectl --namespace=xlou top pods 06:54:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:54:28 INFO [loop_until]: OK (rc = 0) 06:54:28 DEBUG --- stdout --- 06:54:28 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 8m 3971Mi am-869fdb5db9-8dg94 6m 4637Mi am-869fdb5db9-wt7sg 7m 5113Mi ds-cts-0 5m 405Mi ds-cts-1 5m 377Mi ds-cts-2 6m 380Mi ds-idrepo-0 1741m 13824Mi ds-idrepo-1 2836m 13799Mi ds-idrepo-2 11m 13645Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 4708m 4984Mi idm-65858d8c4c-pt5s9 3473m 5133Mi lodemon-66684b7694-c5c6m 2m 69Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 287m 943Mi 06:54:28 DEBUG --- stderr --- 06:54:28 DEBUG 06:54:31 INFO 06:54:31 INFO [loop_until]: kubectl --namespace=xlou top node 06:54:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:54:31 INFO [loop_until]: OK (rc = 0) 06:54:31 DEBUG --- stdout --- 06:54:31 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% 
gke-xlou-cdm-default-pool-f05840a3-98b5 59m 0% 4758Mi 8% gke-xlou-cdm-default-pool-f05840a3-jnx6 4828m 30% 5625Mi 9% gke-xlou-cdm-default-pool-f05840a3-jqvg 76m 0% 997Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 63m 0% 5973Mi 10% gke-xlou-cdm-default-pool-f05840a3-tnc9 398m 2% 2583Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 60m 0% 5446Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 3438m 21% 5776Mi 9% gke-xlou-cdm-ds-32e4dcb1-02kn 56m 0% 959Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 2246m 14% 14298Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 64m 0% 14113Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 55m 0% 957Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 53m 0% 983Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 2748m 17% 14295Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 352m 2% 2750Mi 4% 06:54:31 DEBUG --- stderr --- 06:54:31 DEBUG 06:55:28 INFO 06:55:28 INFO [loop_until]: kubectl --namespace=xlou top pods 06:55:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:55:28 INFO [loop_until]: OK (rc = 0) 06:55:28 DEBUG --- stdout --- 06:55:28 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 8m 3981Mi am-869fdb5db9-8dg94 6m 4637Mi am-869fdb5db9-wt7sg 7m 5125Mi ds-cts-0 5m 405Mi ds-cts-1 6m 377Mi ds-cts-2 6m 380Mi ds-idrepo-0 3504m 13794Mi ds-idrepo-1 1435m 13795Mi ds-idrepo-2 11m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 5361m 4985Mi idm-65858d8c4c-pt5s9 3911m 5135Mi lodemon-66684b7694-c5c6m 2m 69Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 292m 943Mi 06:55:28 DEBUG --- stderr --- 06:55:28 DEBUG 06:55:31 INFO 06:55:31 INFO [loop_until]: kubectl --namespace=xlou top node 06:55:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:55:31 INFO [loop_until]: OK (rc = 0) 06:55:31 DEBUG --- stdout --- 06:55:31 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 66m 0% 4780Mi 8% gke-xlou-cdm-default-pool-f05840a3-jnx6 5800m 36% 5629Mi 9% gke-xlou-cdm-default-pool-f05840a3-jqvg 77m 
0% 996Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 63m 0% 5985Mi 10% gke-xlou-cdm-default-pool-f05840a3-tnc9 411m 2% 2589Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 60m 0% 5448Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 3934m 24% 5779Mi 9% gke-xlou-cdm-ds-32e4dcb1-02kn 54m 0% 961Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 3268m 20% 14293Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 66m 0% 14110Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 56m 0% 957Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 50m 0% 985Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 2251m 14% 14292Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 367m 2% 2751Mi 4% 06:55:31 DEBUG --- stderr --- 06:55:31 DEBUG 06:56:28 INFO 06:56:28 INFO [loop_until]: kubectl --namespace=xlou top pods 06:56:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:56:28 INFO [loop_until]: OK (rc = 0) 06:56:28 DEBUG --- stdout --- 06:56:28 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 8m 3993Mi am-869fdb5db9-8dg94 7m 4637Mi am-869fdb5db9-wt7sg 7m 5137Mi ds-cts-0 6m 405Mi ds-cts-1 5m 377Mi ds-cts-2 6m 380Mi ds-idrepo-0 4589m 13801Mi ds-idrepo-1 11m 13822Mi ds-idrepo-2 12m 13646Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 4106m 4986Mi idm-65858d8c4c-pt5s9 3526m 5135Mi lodemon-66684b7694-c5c6m 4m 69Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 263m 945Mi 06:56:28 DEBUG --- stderr --- 06:56:28 DEBUG 06:56:31 INFO 06:56:31 INFO [loop_until]: kubectl --namespace=xlou top node 06:56:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:56:32 INFO [loop_until]: OK (rc = 0) 06:56:32 DEBUG --- stdout --- 06:56:32 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 62m 0% 4776Mi 8% gke-xlou-cdm-default-pool-f05840a3-jnx6 4082m 25% 5627Mi 9% gke-xlou-cdm-default-pool-f05840a3-jqvg 75m 0% 997Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 63m 0% 5995Mi 10% gke-xlou-cdm-default-pool-f05840a3-tnc9 378m 2% 2598Mi 4% 
gke-xlou-cdm-default-pool-f05840a3-vslq 57m 0% 5449Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 3763m 23% 5780Mi 9% gke-xlou-cdm-ds-32e4dcb1-02kn 54m 0% 958Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 4854m 30% 14296Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 62m 0% 14113Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 58m 0% 957Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 52m 0% 986Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 58m 0% 14320Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 326m 2% 2749Mi 4% 06:56:32 DEBUG --- stderr --- 06:56:32 DEBUG 06:57:28 INFO 06:57:28 INFO [loop_until]: kubectl --namespace=xlou top pods 06:57:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:57:28 INFO [loop_until]: OK (rc = 0) 06:57:28 DEBUG --- stdout --- 06:57:28 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 8m 4002Mi am-869fdb5db9-8dg94 12m 4638Mi am-869fdb5db9-wt7sg 6m 5146Mi ds-cts-0 5m 405Mi ds-cts-1 5m 377Mi ds-cts-2 6m 380Mi ds-idrepo-0 2282m 13801Mi ds-idrepo-1 3426m 13796Mi ds-idrepo-2 10m 13645Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 5995m 4986Mi idm-65858d8c4c-pt5s9 3728m 5135Mi lodemon-66684b7694-c5c6m 4m 69Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 305m 943Mi 06:57:28 DEBUG --- stderr --- 06:57:28 DEBUG 06:57:32 INFO 06:57:32 INFO [loop_until]: kubectl --namespace=xlou top node 06:57:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:57:32 INFO [loop_until]: OK (rc = 0) 06:57:32 DEBUG --- stdout --- 06:57:32 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 62m 0% 4787Mi 8% gke-xlou-cdm-default-pool-f05840a3-jnx6 5831m 36% 5626Mi 9% gke-xlou-cdm-default-pool-f05840a3-jqvg 74m 0% 1001Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 63m 0% 6007Mi 10% gke-xlou-cdm-default-pool-f05840a3-tnc9 407m 2% 2589Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 66m 0% 5446Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 3937m 24% 5777Mi 9% gke-xlou-cdm-ds-32e4dcb1-02kn 54m 0% 960Mi 
1% gke-xlou-cdm-ds-32e4dcb1-7x9g 2617m 16% 14304Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14110Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 55m 0% 956Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 54m 0% 985Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 2895m 18% 14294Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 372m 2% 2751Mi 4% 06:57:32 DEBUG --- stderr --- 06:57:32 DEBUG 06:58:28 INFO 06:58:28 INFO [loop_until]: kubectl --namespace=xlou top pods 06:58:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:58:28 INFO [loop_until]: OK (rc = 0) 06:58:28 DEBUG --- stdout --- 06:58:28 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 8m 4013Mi am-869fdb5db9-8dg94 6m 4637Mi am-869fdb5db9-wt7sg 7m 5155Mi ds-cts-0 5m 405Mi ds-cts-1 6m 377Mi ds-cts-2 6m 380Mi ds-idrepo-0 3469m 13807Mi ds-idrepo-1 3062m 13795Mi ds-idrepo-2 11m 13645Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 4293m 4984Mi idm-65858d8c4c-pt5s9 3684m 5134Mi lodemon-66684b7694-c5c6m 2m 69Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 279m 943Mi 06:58:28 DEBUG --- stderr --- 06:58:28 DEBUG 06:58:32 INFO 06:58:32 INFO [loop_until]: kubectl --namespace=xlou top node 06:58:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:58:32 INFO [loop_until]: OK (rc = 0) 06:58:32 DEBUG --- stdout --- 06:58:32 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 61m 0% 4799Mi 8% gke-xlou-cdm-default-pool-f05840a3-jnx6 4637m 29% 5614Mi 9% gke-xlou-cdm-default-pool-f05840a3-jqvg 75m 0% 1000Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 62m 0% 6019Mi 10% gke-xlou-cdm-default-pool-f05840a3-tnc9 387m 2% 2585Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 60m 0% 5447Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 3692m 23% 5777Mi 9% gke-xlou-cdm-ds-32e4dcb1-02kn 53m 0% 961Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 2506m 15% 14327Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14108Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 55m 0% 956Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-mt7t 52m 0% 987Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 2566m 16% 14289Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 351m 2% 2748Mi 4% 06:58:32 DEBUG --- stderr --- 06:58:32 DEBUG 06:59:28 INFO 06:59:28 INFO [loop_until]: kubectl --namespace=xlou top pods 06:59:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:59:28 INFO [loop_until]: OK (rc = 0) 06:59:28 DEBUG --- stdout --- 06:59:28 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 8m 4023Mi am-869fdb5db9-8dg94 6m 4637Mi am-869fdb5db9-wt7sg 7m 5166Mi ds-cts-0 5m 406Mi ds-cts-1 5m 377Mi ds-cts-2 6m 380Mi ds-idrepo-0 2363m 13825Mi ds-idrepo-1 2231m 13793Mi ds-idrepo-2 11m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 5954m 4990Mi idm-65858d8c4c-pt5s9 4231m 5134Mi lodemon-66684b7694-c5c6m 2m 69Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 292m 944Mi 06:59:28 DEBUG --- stderr --- 06:59:28 DEBUG 06:59:32 INFO 06:59:32 INFO [loop_until]: kubectl --namespace=xlou top node 06:59:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 06:59:32 INFO [loop_until]: OK (rc = 0) 06:59:32 DEBUG --- stdout --- 06:59:32 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 62m 0% 4809Mi 8% gke-xlou-cdm-default-pool-f05840a3-jnx6 6158m 38% 5628Mi 9% gke-xlou-cdm-default-pool-f05840a3-jqvg 76m 0% 999Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 61m 0% 6027Mi 10% gke-xlou-cdm-default-pool-f05840a3-tnc9 407m 2% 2587Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 60m 0% 5445Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 3972m 24% 5780Mi 9% gke-xlou-cdm-ds-32e4dcb1-02kn 54m 0% 963Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 3450m 21% 14326Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 62m 0% 14112Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 55m 0% 956Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 52m 0% 985Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 2443m 15% 14316Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 366m 2% 2751Mi 4% 06:59:32 
DEBUG --- stderr --- 06:59:32 DEBUG 07:00:28 INFO 07:00:28 INFO [loop_until]: kubectl --namespace=xlou top pods 07:00:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:00:28 INFO [loop_until]: OK (rc = 0) 07:00:28 DEBUG --- stdout --- 07:00:28 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 9m 4036Mi am-869fdb5db9-8dg94 6m 4637Mi am-869fdb5db9-wt7sg 7m 5178Mi ds-cts-0 5m 405Mi ds-cts-1 6m 377Mi ds-cts-2 6m 380Mi ds-idrepo-0 4925m 13802Mi ds-idrepo-1 12m 13822Mi ds-idrepo-2 10m 13645Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 3860m 4990Mi idm-65858d8c4c-pt5s9 3703m 5136Mi lodemon-66684b7694-c5c6m 2m 69Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 263m 944Mi 07:00:28 DEBUG --- stderr --- 07:00:28 DEBUG 07:00:32 INFO 07:00:32 INFO [loop_until]: kubectl --namespace=xlou top node 07:00:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:00:32 INFO [loop_until]: OK (rc = 0) 07:00:32 DEBUG --- stdout --- 07:00:32 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 64m 0% 4820Mi 8% gke-xlou-cdm-default-pool-f05840a3-jnx6 4049m 25% 5620Mi 9% gke-xlou-cdm-default-pool-f05840a3-jqvg 80m 0% 997Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 68m 0% 6051Mi 10% gke-xlou-cdm-default-pool-f05840a3-tnc9 365m 2% 2596Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 62m 0% 5445Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 3676m 23% 5781Mi 9% gke-xlou-cdm-ds-32e4dcb1-02kn 55m 0% 958Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 4903m 30% 14322Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 62m 0% 14110Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 57m 0% 954Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 54m 0% 987Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 57m 0% 14318Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 335m 2% 2748Mi 4% 07:00:32 DEBUG --- stderr --- 07:00:32 DEBUG 07:01:28 INFO 07:01:28 INFO [loop_until]: kubectl --namespace=xlou top pods 07:01:28 INFO [loop_until]: (max_time=180, 
interval=5, expected_rc=[0] 07:01:28 INFO [loop_until]: OK (rc = 0) 07:01:28 DEBUG --- stdout --- 07:01:28 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 9m 4043Mi am-869fdb5db9-8dg94 8m 4637Mi am-869fdb5db9-wt7sg 7m 5187Mi ds-cts-0 6m 405Mi ds-cts-1 5m 377Mi ds-cts-2 6m 380Mi ds-idrepo-0 3686m 13823Mi ds-idrepo-1 706m 13800Mi ds-idrepo-2 11m 13644Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 3844m 4990Mi idm-65858d8c4c-pt5s9 3229m 5136Mi lodemon-66684b7694-c5c6m 6m 69Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 276m 944Mi 07:01:28 DEBUG --- stderr --- 07:01:28 DEBUG 07:01:32 INFO 07:01:32 INFO [loop_until]: kubectl --namespace=xlou top node 07:01:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:01:32 INFO [loop_until]: OK (rc = 0) 07:01:32 DEBUG --- stdout --- 07:01:32 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 61m 0% 4827Mi 8% gke-xlou-cdm-default-pool-f05840a3-jnx6 4327m 27% 5631Mi 9% gke-xlou-cdm-default-pool-f05840a3-jqvg 78m 0% 1000Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 62m 0% 6048Mi 10% gke-xlou-cdm-default-pool-f05840a3-tnc9 378m 2% 2587Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 60m 0% 5446Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 3341m 21% 5782Mi 9% gke-xlou-cdm-ds-32e4dcb1-02kn 53m 0% 961Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 3736m 23% 14324Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 61m 0% 14111Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 56m 0% 957Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 53m 0% 985Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 1046m 6% 14298Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 348m 2% 2749Mi 4% 07:01:32 DEBUG --- stderr --- 07:01:32 DEBUG 07:02:28 INFO 07:02:28 INFO [loop_until]: kubectl --namespace=xlou top pods 07:02:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:02:28 INFO [loop_until]: OK (rc = 0) 07:02:28 DEBUG --- stdout --- 07:02:28 DEBUG NAME CPU(cores) MEMORY(bytes) 
admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 8m 4056Mi am-869fdb5db9-8dg94 11m 4641Mi am-869fdb5db9-wt7sg 7m 5198Mi ds-cts-0 5m 405Mi ds-cts-1 6m 377Mi ds-cts-2 6m 380Mi ds-idrepo-0 4777m 13822Mi ds-idrepo-1 12m 13800Mi ds-idrepo-2 10m 13645Mi end-user-ui-6845bc78c7-sqnhx 1m 4Mi idm-65858d8c4c-d6c9h 4234m 4991Mi idm-65858d8c4c-pt5s9 3586m 5137Mi lodemon-66684b7694-c5c6m 4m 69Mi login-ui-74d6fb46c-qcg59 1m 3Mi overseer-0-788b4494cc-bdwtm 267m 944Mi 07:02:28 DEBUG --- stderr --- 07:02:28 DEBUG 07:02:32 INFO 07:02:32 INFO [loop_until]: kubectl --namespace=xlou top node 07:02:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:02:32 INFO [loop_until]: OK (rc = 0) 07:02:32 DEBUG --- stdout --- 07:02:32 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-98b5 63m 0% 4842Mi 8% gke-xlou-cdm-default-pool-f05840a3-jnx6 4099m 25% 5634Mi 9% gke-xlou-cdm-default-pool-f05840a3-jqvg 79m 0% 996Mi 1% gke-xlou-cdm-default-pool-f05840a3-rt14 61m 0% 6062Mi 10% gke-xlou-cdm-default-pool-f05840a3-tnc9 375m 2% 2594Mi 4% gke-xlou-cdm-default-pool-f05840a3-vslq 66m 0% 5451Mi 9% gke-xlou-cdm-default-pool-f05840a3-zj9v 3921m 24% 5780Mi 9% gke-xlou-cdm-ds-32e4dcb1-02kn 54m 0% 959Mi 1% gke-xlou-cdm-ds-32e4dcb1-7x9g 4917m 30% 14317Mi 24% gke-xlou-cdm-ds-32e4dcb1-hbvk 61m 0% 14114Mi 24% gke-xlou-cdm-ds-32e4dcb1-l2t2 56m 0% 958Mi 1% gke-xlou-cdm-ds-32e4dcb1-mt7t 53m 0% 985Mi 1% gke-xlou-cdm-ds-32e4dcb1-zmqj 60m 0% 14296Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 341m 2% 2751Mi 4% 07:02:32 DEBUG --- stderr --- 07:02:32 DEBUG 07:03:28 INFO 07:03:28 INFO [loop_until]: kubectl --namespace=xlou top pods 07:03:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 07:03:29 INFO [loop_until]: OK (rc = 0) 07:03:29 DEBUG --- stdout --- 07:03:29 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-w2rbv 1m 4Mi am-869fdb5db9-5j69v 10m 4068Mi am-869fdb5db9-8dg94 6m 4641Mi am-869fdb5db9-wt7sg 13m 5210Mi ds-cts-0 5m 405Mi ds-cts-1 5m 377Mi 
ds-cts-2 6m 380Mi
ds-idrepo-0 3302m 13803Mi
ds-idrepo-1 1294m 13822Mi
ds-idrepo-2 11m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 3577m 4990Mi
idm-65858d8c4c-pt5s9 3266m 5136Mi
lodemon-66684b7694-c5c6m 2m 69Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 267m 944Mi
07:03:29 DEBUG --- stderr ---
07:03:29 DEBUG
07:03:33 INFO
07:03:33 INFO [loop_until]: kubectl --namespace=xlou top node
07:03:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
07:03:33 INFO [loop_until]: OK (rc = 0)
07:03:33 DEBUG --- stdout ---
07:03:33 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 62m 0% 4851Mi 8%
gke-xlou-cdm-default-pool-f05840a3-jnx6 3968m 24% 5630Mi 9%
gke-xlou-cdm-default-pool-f05840a3-jqvg 85m 0% 999Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 70m 0% 6069Mi 10%
gke-xlou-cdm-default-pool-f05840a3-tnc9 367m 2% 2580Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 60m 0% 5449Mi 9%
gke-xlou-cdm-default-pool-f05840a3-zj9v 3292m 20% 5780Mi 9%
gke-xlou-cdm-ds-32e4dcb1-02kn 54m 0% 960Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 3322m 20% 14329Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 61m 0% 14111Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 54m 0% 959Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 50m 0% 987Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 1173m 7% 14322Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 341m 2% 2748Mi 4%
07:03:33 DEBUG --- stderr ---
07:03:33 DEBUG
07:04:29 INFO
07:04:29 INFO [loop_until]: kubectl --namespace=xlou top pods
07:04:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
07:04:29 INFO [loop_until]: OK (rc = 0)
07:04:29 DEBUG --- stdout ---
07:04:29 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 5Mi
am-869fdb5db9-5j69v 8m 4078Mi
am-869fdb5db9-8dg94 6m 4641Mi
am-869fdb5db9-wt7sg 7m 5219Mi
ds-cts-0 5m 405Mi
ds-cts-1 5m 377Mi
ds-cts-2 6m 380Mi
ds-idrepo-0 2905m 13805Mi
ds-idrepo-1 1620m 13823Mi
ds-idrepo-2 10m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 5178m 4991Mi
idm-65858d8c4c-pt5s9 3774m 5136Mi
lodemon-66684b7694-c5c6m 2m 69Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 290m 944Mi
07:04:29 DEBUG --- stderr ---
07:04:29 DEBUG
07:04:33 INFO
07:04:33 INFO [loop_until]: kubectl --namespace=xlou top node
07:04:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
07:04:33 INFO [loop_until]: OK (rc = 0)
07:04:33 DEBUG --- stdout ---
07:04:33 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 62m 0% 4862Mi 8%
gke-xlou-cdm-default-pool-f05840a3-jnx6 5405m 34% 5627Mi 9%
gke-xlou-cdm-default-pool-f05840a3-jqvg 80m 0% 998Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 62m 0% 6080Mi 10%
gke-xlou-cdm-default-pool-f05840a3-tnc9 379m 2% 2582Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 59m 0% 5452Mi 9%
gke-xlou-cdm-default-pool-f05840a3-zj9v 3862m 24% 5788Mi 9%
gke-xlou-cdm-ds-32e4dcb1-02kn 54m 0% 960Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 3069m 19% 14307Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 62m 0% 14111Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 57m 0% 955Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 53m 0% 985Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 2295m 14% 14322Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 365m 2% 2748Mi 4%
07:04:33 DEBUG --- stderr ---
07:04:33 DEBUG
07:05:29 INFO
07:05:29 INFO [loop_until]: kubectl --namespace=xlou top pods
07:05:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
07:05:29 INFO [loop_until]: OK (rc = 0)
07:05:29 DEBUG --- stdout ---
07:05:29 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 7m 4088Mi
am-869fdb5db9-8dg94 6m 4641Mi
am-869fdb5db9-wt7sg 7m 5229Mi
ds-cts-0 5m 405Mi
ds-cts-1 6m 377Mi
ds-cts-2 6m 380Mi
ds-idrepo-0 4647m 13823Mi
ds-idrepo-1 11m 13797Mi
ds-idrepo-2 10m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 3974m 4991Mi
idm-65858d8c4c-pt5s9 3579m 5137Mi
lodemon-66684b7694-c5c6m 2m 69Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 261m 945Mi
07:05:29 DEBUG --- stderr ---
07:05:29 DEBUG
07:05:33 INFO
07:05:33 INFO [loop_until]: kubectl --namespace=xlou top node
07:05:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
07:05:33 INFO [loop_until]: OK (rc = 0)
07:05:33 DEBUG --- stdout ---
07:05:33 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 58m 0% 4874Mi 8%
gke-xlou-cdm-default-pool-f05840a3-jnx6 3886m 24% 5629Mi 9%
gke-xlou-cdm-default-pool-f05840a3-jqvg 77m 0% 993Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 63m 0% 6089Mi 10%
gke-xlou-cdm-default-pool-f05840a3-tnc9 372m 2% 2588Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 61m 0% 5448Mi 9%
gke-xlou-cdm-default-pool-f05840a3-zj9v 3705m 23% 5775Mi 9%
gke-xlou-cdm-ds-32e4dcb1-02kn 54m 0% 963Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 4734m 29% 14325Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14109Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 57m 0% 953Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 53m 0% 985Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 62m 0% 14296Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 337m 2% 2750Mi 4%
07:05:33 DEBUG --- stderr ---
07:05:33 DEBUG
07:06:29 INFO
07:06:29 INFO [loop_until]: kubectl --namespace=xlou top pods
07:06:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
07:06:29 INFO [loop_until]: OK (rc = 0)
07:06:29 DEBUG --- stdout ---
07:06:29 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 7m 4097Mi
am-869fdb5db9-8dg94 6m 4641Mi
am-869fdb5db9-wt7sg 6m 5239Mi
ds-cts-0 5m 405Mi
ds-cts-1 6m 377Mi
ds-cts-2 6m 380Mi
ds-idrepo-0 2977m 13822Mi
ds-idrepo-1 1603m 13796Mi
ds-idrepo-2 11m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 4121m 4991Mi
idm-65858d8c4c-pt5s9 3046m 5136Mi
lodemon-66684b7694-c5c6m 2m 69Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 273m 944Mi
07:06:29 DEBUG --- stderr ---
07:06:29 DEBUG
07:06:33 INFO
07:06:33 INFO [loop_until]: kubectl --namespace=xlou top node
07:06:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
07:06:33 INFO [loop_until]: OK (rc = 0)
07:06:33 DEBUG --- stdout ---
07:06:33 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 61m 0% 4887Mi 8%
gke-xlou-cdm-default-pool-f05840a3-jnx6 4029m 25% 5621Mi 9%
gke-xlou-cdm-default-pool-f05840a3-jqvg 75m 0% 995Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 66m 0% 6100Mi 10%
gke-xlou-cdm-default-pool-f05840a3-tnc9 372m 2% 2584Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 67m 0% 5450Mi 9%
gke-xlou-cdm-default-pool-f05840a3-zj9v 3045m 19% 5778Mi 9%
gke-xlou-cdm-ds-32e4dcb1-02kn 53m 0% 963Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 3232m 20% 14304Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 61m 0% 14111Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 57m 0% 954Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 52m 0% 986Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 1058m 6% 14295Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 325m 2% 2752Mi 4%
07:06:33 DEBUG --- stderr ---
07:06:33 DEBUG
07:07:29 INFO
07:07:29 INFO [loop_until]: kubectl --namespace=xlou top pods
07:07:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
07:07:29 INFO [loop_until]: OK (rc = 0)
07:07:29 DEBUG --- stdout ---
07:07:29 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 9m 4111Mi
am-869fdb5db9-8dg94 6m 4641Mi
am-869fdb5db9-wt7sg 7m 5251Mi
ds-cts-0 6m 405Mi
ds-cts-1 6m 377Mi
ds-cts-2 6m 380Mi
ds-idrepo-0 266m 13803Mi
ds-idrepo-1 11m 13796Mi
ds-idrepo-2 10m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 6m 4990Mi
idm-65858d8c4c-pt5s9 5m 5135Mi
lodemon-66684b7694-c5c6m 2m 69Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 99m 109Mi
07:07:29 DEBUG --- stderr ---
07:07:29 DEBUG
07:07:33 INFO
07:07:33 INFO [loop_until]: kubectl --namespace=xlou top node
07:07:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
07:07:33 INFO [loop_until]: OK (rc = 0)
07:07:33 DEBUG --- stdout ---
07:07:33 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 61m 0% 4898Mi 8%
gke-xlou-cdm-default-pool-f05840a3-jnx6 85m 0% 5632Mi 9%
gke-xlou-cdm-default-pool-f05840a3-jqvg 74m 0% 997Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 64m 0% 6111Mi 10%
gke-xlou-cdm-default-pool-f05840a3-tnc9 108m 0% 2586Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 61m 0% 5450Mi 9%
gke-xlou-cdm-default-pool-f05840a3-zj9v 71m 0% 5776Mi 9%
gke-xlou-cdm-ds-32e4dcb1-02kn 55m 0% 960Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 57m 0% 14309Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 60m 0% 14112Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 55m 0% 958Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 53m 0% 985Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 61m 0% 14291Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 120m 0% 1924Mi 3%
07:07:33 DEBUG --- stderr ---
07:07:33 DEBUG
07:08:29 INFO
07:08:29 INFO [loop_until]: kubectl --namespace=xlou top pods
07:08:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
07:08:29 INFO [loop_until]: OK (rc = 0)
07:08:29 DEBUG --- stdout ---
07:08:29 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 7m 4120Mi
am-869fdb5db9-8dg94 6m 4641Mi
am-869fdb5db9-wt7sg 7m 5261Mi
ds-cts-0 5m 405Mi
ds-cts-1 5m 377Mi
ds-cts-2 6m 380Mi
ds-idrepo-0 11m 13802Mi
ds-idrepo-1 11m 13796Mi
ds-idrepo-2 10m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 7m 4989Mi
idm-65858d8c4c-pt5s9 6m 5135Mi
lodemon-66684b7694-c5c6m 2m 69Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 1m 109Mi
07:08:29 DEBUG --- stderr ---
07:08:29 DEBUG
07:08:33 INFO
07:08:33 INFO [loop_until]: kubectl --namespace=xlou top node
07:08:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
07:08:33 INFO [loop_until]: OK (rc = 0)
07:08:33 DEBUG --- stdout ---
07:08:33 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 60m 0% 4909Mi 8%
gke-xlou-cdm-default-pool-f05840a3-jnx6 80m 0% 5632Mi 9%
gke-xlou-cdm-default-pool-f05840a3-jqvg 76m 0% 999Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 63m 0% 6121Mi 10%
gke-xlou-cdm-default-pool-f05840a3-tnc9 106m 0% 2583Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 62m 0% 5449Mi 9%
gke-xlou-cdm-default-pool-f05840a3-zj9v 69m 0% 5783Mi 9%
gke-xlou-cdm-ds-32e4dcb1-02kn 55m 0% 958Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 57m 0% 14309Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 61m 0% 14112Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 56m 0% 956Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 52m 0% 987Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 58m 0% 14296Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 67m 0% 1923Mi 3%
07:08:33 DEBUG --- stderr ---
07:08:33 DEBUG
127.0.0.1 - - [16/Aug/2023 07:08:47] "GET /monitoring/average?start_time=23-08-16_05:38:26&stop_time=23-08-16_06:06:46 HTTP/1.1" 200 -
07:09:29 INFO
07:09:29 INFO [loop_until]: kubectl --namespace=xlou top pods
07:09:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
07:09:29 INFO [loop_until]: OK (rc = 0)
07:09:29 DEBUG --- stdout ---
07:09:29 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 4Mi
am-869fdb5db9-5j69v 8m 4131Mi
am-869fdb5db9-8dg94 6m 4641Mi
am-869fdb5db9-wt7sg 7m 5269Mi
ds-cts-0 5m 405Mi
ds-cts-1 6m 377Mi
ds-cts-2 6m 380Mi
ds-idrepo-0 10m 13803Mi
ds-idrepo-1 12m 13796Mi
ds-idrepo-2 11m 13644Mi
end-user-ui-6845bc78c7-sqnhx 1m 4Mi
idm-65858d8c4c-d6c9h 6m 4989Mi
idm-65858d8c4c-pt5s9 5m 5134Mi
lodemon-66684b7694-c5c6m 5m 69Mi
login-ui-74d6fb46c-qcg59 1m 3Mi
overseer-0-788b4494cc-bdwtm 2m 109Mi
07:09:29 DEBUG --- stderr ---
07:09:29 DEBUG
07:09:33 INFO
07:09:33 INFO [loop_until]: kubectl --namespace=xlou top node
07:09:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
07:09:33 INFO [loop_until]: OK (rc = 0)
07:09:33 DEBUG --- stdout ---
07:09:33 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 63m 0% 4919Mi 8%
gke-xlou-cdm-default-pool-f05840a3-jnx6 81m 0% 5632Mi 9%
gke-xlou-cdm-default-pool-f05840a3-jqvg 80m 0% 1000Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 72m 0% 6127Mi 10%
gke-xlou-cdm-default-pool-f05840a3-tnc9 110m 0% 2581Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 67m 0% 5450Mi 9%
gke-xlou-cdm-default-pool-f05840a3-zj9v 72m 0% 5780Mi 9%
gke-xlou-cdm-ds-32e4dcb1-02kn 53m 0% 959Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 60m 0% 14309Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 70m 0% 14116Mi 24%
gke-xlou-cdm-ds-32e4dcb1-l2t2 58m 0% 956Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 54m 0% 988Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 63m 0% 14297Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 76m 0% 1925Mi 3%
07:09:33 DEBUG --- stderr ---
07:09:33 DEBUG
07:10:29 INFO
07:10:29 INFO [loop_until]: kubectl --namespace=xlou top pods
07:10:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
07:10:29 INFO [loop_until]: OK (rc = 0)
07:10:29 DEBUG --- stdout ---
07:10:29 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 6Mi
am-869fdb5db9-5j69v 8m 4144Mi
am-869fdb5db9-8dg94 6m 4641Mi
am-869fdb5db9-wt7sg 7m 5287Mi
ds-cts-0 11m 407Mi
ds-cts-1 5m 377Mi
ds-cts-2 6m 380Mi
ds-idrepo-0 259m 13753Mi
ds-idrepo-1 209m 13747Mi
ds-idrepo-2 207m 13568Mi
end-user-ui-6845bc78c7-sqnhx 1m 5Mi
idm-65858d8c4c-d6c9h 6m 4989Mi
idm-65858d8c4c-pt5s9 6m 5134Mi
lodemon-66684b7694-c5c6m 6m 69Mi
login-ui-74d6fb46c-qcg59 1m 4Mi
overseer-0-788b4494cc-bdwtm 459m 118Mi
07:10:29 DEBUG --- stderr ---
07:10:29 DEBUG
07:10:33 INFO
07:10:33 INFO [loop_until]: kubectl --namespace=xlou top node
07:10:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
07:10:34 INFO [loop_until]: OK (rc = 0)
07:10:34 DEBUG --- stdout ---
07:10:34 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 62m 0% 4929Mi 8%
gke-xlou-cdm-default-pool-f05840a3-jnx6 82m 0% 5630Mi 9%
gke-xlou-cdm-default-pool-f05840a3-jqvg 73m 0% 1005Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 68m 0% 6145Mi 10%
gke-xlou-cdm-default-pool-f05840a3-tnc9 116m 0% 2580Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 64m 0% 5448Mi 9%
gke-xlou-cdm-default-pool-f05840a3-zj9v 72m 0% 5783Mi 9%
gke-xlou-cdm-ds-32e4dcb1-02kn 53m 0% 961Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 309m 1% 14267Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 181m 1% 14043Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 56m 0% 958Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 59m 0% 987Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 183m 1% 14255Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 582m 3% 1930Mi 3%
07:10:34 DEBUG --- stderr ---
07:10:34 DEBUG
07:11:30 INFO
07:11:30 INFO [loop_until]: kubectl --namespace=xlou top pods
07:11:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
07:11:30 INFO [loop_until]: OK (rc = 0)
07:11:30 DEBUG --- stdout ---
07:11:30 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 6Mi
am-869fdb5db9-5j69v 8m 4154Mi
am-869fdb5db9-8dg94 6m 4641Mi
am-869fdb5db9-wt7sg 10m 5296Mi
ds-cts-0 5m 405Mi
ds-cts-1 6m 376Mi
ds-cts-2 10m 381Mi
ds-idrepo-0 14m 13753Mi
ds-idrepo-1 12m 13746Mi
ds-idrepo-2 11m 13569Mi
end-user-ui-6845bc78c7-sqnhx 1m 5Mi
idm-65858d8c4c-d6c9h 6m 4989Mi
idm-65858d8c4c-pt5s9 5m 5134Mi
lodemon-66684b7694-c5c6m 2m 69Mi
login-ui-74d6fb46c-qcg59 1m 4Mi
overseer-0-788b4494cc-bdwtm 562m 142Mi
07:11:30 DEBUG --- stderr ---
07:11:30 DEBUG
07:11:34 INFO
07:11:34 INFO [loop_until]: kubectl --namespace=xlou top node
07:11:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
07:11:34 INFO [loop_until]: OK (rc = 0)
07:11:34 DEBUG --- stdout ---
07:11:34 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 62m 0% 4940Mi 8%
gke-xlou-cdm-default-pool-f05840a3-jnx6 79m 0% 5630Mi 9%
gke-xlou-cdm-default-pool-f05840a3-jqvg 73m 0% 1001Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 65m 0% 6156Mi 10%
gke-xlou-cdm-default-pool-f05840a3-tnc9 112m 0% 2580Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 60m 0% 5451Mi 9%
gke-xlou-cdm-default-pool-f05840a3-zj9v 66m 0% 5779Mi 9%
gke-xlou-cdm-ds-32e4dcb1-02kn 60m 0% 959Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 59m 0% 14267Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 60m 0% 14040Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 55m 0% 956Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 54m 0% 988Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 60m 0% 14251Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 896m 5% 2156Mi 3%
07:11:34 DEBUG --- stderr ---
07:11:34 DEBUG
07:12:30 INFO
07:12:30 INFO [loop_until]: kubectl --namespace=xlou top pods
07:12:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
07:12:30 INFO [loop_until]: OK (rc = 0)
07:12:30 DEBUG --- stdout ---
07:12:30 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-w2rbv 1m 6Mi
am-869fdb5db9-5j69v 7m 4165Mi
am-869fdb5db9-8dg94 6m 4641Mi
am-869fdb5db9-wt7sg 7m 5309Mi
ds-cts-0 5m 406Mi
ds-cts-1 5m 375Mi
ds-cts-2 6m 381Mi
ds-idrepo-0 10m 13753Mi
ds-idrepo-1 12m 13748Mi
ds-idrepo-2 10m 13568Mi
end-user-ui-6845bc78c7-sqnhx 1m 5Mi
idm-65858d8c4c-d6c9h 6m 4989Mi
idm-65858d8c4c-pt5s9 6m 5134Mi
lodemon-66684b7694-c5c6m 1m 69Mi
login-ui-74d6fb46c-qcg59 1m 4Mi
overseer-0-788b4494cc-bdwtm 785m 366Mi
07:12:30 DEBUG --- stderr ---
07:12:30 DEBUG
07:12:34 INFO
07:12:34 INFO [loop_until]: kubectl --namespace=xlou top node
07:12:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
07:12:34 INFO [loop_until]: OK (rc = 0)
07:12:34 DEBUG --- stdout ---
07:12:34 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-98b5 60m 0% 4951Mi 8%
gke-xlou-cdm-default-pool-f05840a3-jnx6 79m 0% 5625Mi 9%
gke-xlou-cdm-default-pool-f05840a3-jqvg 72m 0% 1001Mi 1%
gke-xlou-cdm-default-pool-f05840a3-rt14 63m 0% 6166Mi 10%
gke-xlou-cdm-default-pool-f05840a3-tnc9 114m 0% 2580Mi 4%
gke-xlou-cdm-default-pool-f05840a3-vslq 62m 0% 5447Mi 9%
gke-xlou-cdm-default-pool-f05840a3-zj9v 71m 0% 5781Mi 9%
gke-xlou-cdm-ds-32e4dcb1-02kn 54m 0% 963Mi 1%
gke-xlou-cdm-ds-32e4dcb1-7x9g 55m 0% 14269Mi 24%
gke-xlou-cdm-ds-32e4dcb1-hbvk 63m 0% 14044Mi 23%
gke-xlou-cdm-ds-32e4dcb1-l2t2 56m 0% 955Mi 1%
gke-xlou-cdm-ds-32e4dcb1-mt7t 52m 0% 989Mi 1%
gke-xlou-cdm-ds-32e4dcb1-zmqj 58m 0% 14252Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 769m 4% 2248Mi 3%
07:12:34 DEBUG --- stderr ---
07:12:34 DEBUG
07:13:09 INFO Finished: True
07:13:09 INFO Waiting for threads to register finish flag
07:13:34 INFO Done. Have a nice day! :)
127.0.0.1 - - [16/Aug/2023 07:13:34] "GET /monitoring/stop HTTP/1.1" 200 -
07:13:38 INFO File /tmp/lodemon_data-23-08-16_02:25:25/Cpu_cores_used_per_pod.json does not exist. Skipping...
07:13:38 INFO File /tmp/lodemon_data-23-08-16_02:25:25/Memory_usage_per_pod.json does not exist. Skipping...
07:13:38 INFO File /tmp/lodemon_data-23-08-16_02:25:25/Disk_tps_read_per_pod.json does not exist. Skipping...
07:13:38 INFO File /tmp/lodemon_data-23-08-16_02:25:25/Disk_tps_writes_per_pod.json does not exist. Skipping...
07:13:38 INFO File /tmp/lodemon_data-23-08-16_02:25:25/Cpu_cores_used_per_node.json does not exist. Skipping...
07:13:38 INFO File /tmp/lodemon_data-23-08-16_02:25:25/Memory_usage_used_per_node.json does not exist. Skipping...
07:13:38 INFO File /tmp/lodemon_data-23-08-16_02:25:25/Cpu_iowait_per_node.json does not exist. Skipping...
07:13:38 INFO File /tmp/lodemon_data-23-08-16_02:25:25/Network_receive_per_node.json does not exist. Skipping...
07:13:38 INFO File /tmp/lodemon_data-23-08-16_02:25:25/Network_transmit_per_node.json does not exist. Skipping...
07:13:38 INFO File /tmp/lodemon_data-23-08-16_02:25:25/am_cts_task_count_token_session.json does not exist. Skipping...
07:13:38 INFO File /tmp/lodemon_data-23-08-16_02:25:25/am_authentication_rate.json does not exist. Skipping...
07:13:38 INFO File /tmp/lodemon_data-23-08-16_02:25:25/ds_db_cache_misses_internal_nodes(backend=amCts).json does not exist. Skipping...
07:13:38 INFO File /tmp/lodemon_data-23-08-16_02:25:25/ds_db_cache_misses_internal_nodes(backend=amIdentityStore).json does not exist. Skipping...
07:13:38 INFO File /tmp/lodemon_data-23-08-16_02:25:25/ds_db_cache_misses_internal_nodes(backend=cfgStore).json does not exist. Skipping...
07:13:38 INFO File /tmp/lodemon_data-23-08-16_02:25:25/ds_db_cache_misses_internal_nodes(backend=idmRepo).json does not exist. Skipping...
07:13:38 INFO File /tmp/lodemon_data-23-08-16_02:25:25/am_authentication_count_per_pod.json does not exist. Skipping...
07:13:38 INFO File /tmp/lodemon_data-23-08-16_02:25:25/Cts_reaper_Deletion_count.json does not exist. Skipping...
07:13:38 INFO File /tmp/lodemon_data-23-08-16_02:25:25/AM_oauth2_authorization_codes.json does not exist. Skipping...
07:13:38 INFO File /tmp/lodemon_data-23-08-16_02:25:25/ds_backend_entries_deleted_amCts.json does not exist. Skipping...
07:13:38 INFO File /tmp/lodemon_data-23-08-16_02:25:25/ds_pods_replication_delay.json does not exist. Skipping...
07:13:38 INFO File /tmp/lodemon_data-23-08-16_02:25:25/am_cts_reaper_cache_size.json does not exist. Skipping...
07:13:38 INFO File /tmp/lodemon_data-23-08-16_02:25:25/am_cts_reaper_search_seconds_total.json does not exist. Skipping...
07:13:38 INFO File /tmp/lodemon_data-23-08-16_02:25:25/ds_replication_replica_replayed_updates_conflicts_resolved.json does not exist. Skipping...
07:13:38 INFO File /tmp/lodemon_data-23-08-16_02:25:25/node_disk_read_bytes_total.json does not exist. Skipping...
07:13:38 INFO File /tmp/lodemon_data-23-08-16_02:25:25/node_disk_written_bytes_total.json does not exist. Skipping...
07:13:38 INFO File /tmp/lodemon_data-23-08-16_02:25:25/ds_backend_entry_count.json does not exist. Skipping...
07:13:38 INFO File /tmp/lodemon_data-23-08-16_02:25:25/node_disk_io_time_seconds_total.json does not exist. Skipping...
127.0.0.1 - - [16/Aug/2023 07:13:40] "GET /monitoring/process HTTP/1.1" 200 -
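Editor's note: every `[loop_until]` entry in the log above records the same pattern, a helper that re-runs a kubectl command until its exit code matches `expected_rc` or `max_time` seconds elapse, sleeping `interval` seconds between attempts. The actual helper lives inside lodemon and is not shown here; the sketch below is an illustrative reimplementation (the name `loop_until` and its signature are assumptions based only on the logged parameters).

```python
import subprocess
import time


def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,)):
    """Re-run shell command `cmd` until its return code is in `expected_rc`,
    or raise TimeoutError once `max_time` seconds have elapsed."""
    deadline = time.monotonic() + max_time
    while True:
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        if result.returncode in expected_rc:
            return result  # caller can inspect result.stdout / result.stderr
        if time.monotonic() >= deadline:
            raise TimeoutError(f"rc={result.returncode} after {max_time}s: {cmd}")
        time.sleep(interval)


# Example: a command that succeeds immediately returns on the first attempt.
res = loop_until("echo Running", max_time=10, interval=1)
print(res.returncode)  # 0
```

This matches the logged behavior where a passing check reports "Function succeeded after 0s (rc=0)" on the first attempt, while a failing command would be retried every `interval` seconds up to the deadline.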
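Editor's note: to turn the `kubectl top pods` snapshots above into per-pod time series (e.g. the `Cpu_cores_used_per_pod.json` file the run looks for at shutdown), each table has to be parsed into structured values. A minimal sketch of such a parser, assuming the exact output format shown in the log (`NAME CPU(cores) MEMORY(bytes)` header, CPU in millicores with an `m` suffix, memory in `Mi`); the function name is hypothetical, not part of lodemon:

```python
def parse_top_pods(output):
    """Parse `kubectl top pods` stdout into {pod_name: (cpu_millicores, memory_mib)}."""
    rows = {}
    for line in output.strip().splitlines()[1:]:  # skip the header row
        name, cpu, mem = line.split()
        rows[name] = (int(cpu.rstrip("m")), int(mem.rstrip("Mi")))
    return rows


# Example using two rows taken from the 07:04:29 snapshot in the log:
sample = """NAME CPU(cores) MEMORY(bytes)
lodemon-66684b7694-c5c6m 2m 69Mi
ds-idrepo-0 2905m 13805Mi"""
print(parse_top_pods(sample))
# {'lodemon-66684b7694-c5c6m': (2, 69), 'ds-idrepo-0': (2905, 13805)}
```

Appending one such parsed snapshot per polling interval, keyed by the log timestamp, is all that is needed to build the per-pod CPU and memory series.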