====================================================================================================
========================================= Pod describe =========================================
====================================================================================================
Name:             lodemon-86d6dfd886-rxdp4
Namespace:        xlou
Priority:         0
Node:             gke-xlou-cdm-default-pool-f05840a3-2nsn/10.142.0.46
Start Time:       Sat, 12 Aug 2023 14:28:59 +0000
Labels:           app=lodemon
                  app.kubernetes.io/name=lodemon
                  pod-template-hash=86d6dfd886
                  skaffold.dev/run-id=bc1482aa-3a19-425c-a59a-005670407a99
Annotations:
Status:           Running
IP:               10.106.45.71
IPs:
  IP:             10.106.45.71
Controlled By:    ReplicaSet/lodemon-86d6dfd886
Containers:
  lodemon:
    Container ID:   containerd://96ebf35e0c473df9a349b4ed9392c7ebcac7beacbbf49306650c92a0d40c281f
    Image:          gcr.io/engineeringpit/lodestar-images/lodestarbox:6c23848450de3f8e82f0a619a86abcd91fc890c6
    Image ID:       gcr.io/engineeringpit/lodestar-images/lodestarbox@sha256:f419b98ce988c016f788d178b318b601ed56b4ebb6e1a8df68b3ff2a986af79d
    Port:           8080/TCP
    Host Port:      0/TCP
    Command:
      python3
    Args:
      /lodestar/scripts/lodemon_run.py -W default
    State:          Running
      Started:      Sat, 12 Aug 2023 14:29:00 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  2Gi
    Requests:
      cpu:     1
      memory:  1Gi
    Liveness:   exec [cat /tmp/lodemon_alive] delay=20s timeout=1s period=10s #success=1 #failure=3
    Readiness:  exec [cat /tmp/lodemon_alive] delay=20s timeout=1s period=10s #success=1 #failure=3
    Environment:
      SKAFFOLD_PROFILE:  medium
    Mounts:
      /lodestar/config/config.yaml from config (rw,path="config.yaml")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d4qcb (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      lodemon-config
    Optional:  false
  kube-api-access-d4qcb:
    Type:                     Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:   3607
    ConfigMapName:            kube-root-ca.crt
    ConfigMapOptional:
    DownwardAPI:              true
QoS Class:        Burstable
Node-Selectors:
Tolerations:      node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                  node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
====================================================================================================
=========================================== Pod logs ===========================================
====================================================================================================
15:29:01 INFO
15:29:01 INFO --------------------- Get expected number of pods ---------------------
15:29:01 INFO
15:29:01 INFO [loop_until]: kubectl --namespace=xlou get deployments --selector app=am --output jsonpath={.items[*].spec.replicas}
15:29:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
15:29:01 INFO [loop_until]: OK (rc = 0)
15:29:01 DEBUG --- stdout ---
15:29:01 DEBUG 3
15:29:01 DEBUG --- stderr ---
15:29:01 DEBUG
15:29:01 INFO
15:29:01 INFO ---------------------------- Get pod list ----------------------------
15:29:01 INFO
15:29:01 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=am --output jsonpath={.items[*].metadata.name}
15:29:01 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0]
15:29:01 INFO [loop_until]: OK (rc = 0)
15:29:01 DEBUG --- stdout ---
15:29:01 DEBUG am-55f77847b7-klhnq am-55f77847b7-sgmd6 am-55f77847b7-wq5w5
15:29:01 DEBUG --- stderr ---
15:29:01 DEBUG
15:29:01 INFO
15:29:01 INFO -------------- Check pod
am-55f77847b7-klhnq is running -------------- 15:29:01 INFO 15:29:01 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-klhnq -o=jsonpath={.status.phase} | grep "Running" 15:29:01 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 15:29:01 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 15:29:01 INFO [loop_until]: OK (rc = 0) 15:29:01 DEBUG --- stdout --- 15:29:01 DEBUG Running 15:29:01 DEBUG --- stderr --- 15:29:01 DEBUG 15:29:01 INFO 15:29:01 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-klhnq -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 15:29:01 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 15:29:01 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 15:29:01 INFO [loop_until]: OK (rc = 0) 15:29:01 DEBUG --- stdout --- 15:29:01 DEBUG true 15:29:01 DEBUG --- stderr --- 15:29:01 DEBUG 15:29:01 INFO 15:29:01 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-klhnq --output jsonpath={.status.startTime} 15:29:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:01 INFO [loop_until]: OK (rc = 0) 15:29:01 DEBUG --- stdout --- 15:29:01 DEBUG 2023-08-12T14:19:37Z 15:29:01 DEBUG --- stderr --- 15:29:01 DEBUG 15:29:01 INFO 15:29:01 INFO ------- Check pod am-55f77847b7-klhnq filesystem is accessible ------- 15:29:01 INFO 15:29:01 INFO [loop_until]: kubectl --namespace=xlou exec am-55f77847b7-klhnq --container openam -- ls / | grep "bin" 15:29:01 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 15:29:01 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 15:29:01 INFO [loop_until]: OK (rc = 0) 15:29:01 DEBUG --- stdout --- 15:29:01 DEBUG bin boot dev entrypoint.sh etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var 15:29:01 DEBUG --- stderr --- 15:29:01 DEBUG 15:29:01 INFO 15:29:01 INFO ------------- Check pod am-55f77847b7-klhnq restart count ------------- 15:29:01 INFO 15:29:01 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-klhnq --output jsonpath={.status.containerStatuses[*].restartCount} 15:29:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:01 INFO [loop_until]: OK (rc = 0) 15:29:01 DEBUG --- stdout --- 15:29:01 DEBUG 0 15:29:01 DEBUG --- stderr --- 15:29:01 DEBUG 15:29:01 INFO Pod am-55f77847b7-klhnq has been restarted 0 times. 
15:29:01 INFO 15:29:01 INFO -------------- Check pod am-55f77847b7-sgmd6 is running -------------- 15:29:01 INFO 15:29:01 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-sgmd6 -o=jsonpath={.status.phase} | grep "Running" 15:29:01 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 15:29:01 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 15:29:01 INFO [loop_until]: OK (rc = 0) 15:29:01 DEBUG --- stdout --- 15:29:01 DEBUG Running 15:29:01 DEBUG --- stderr --- 15:29:01 DEBUG 15:29:01 INFO 15:29:01 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-sgmd6 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 15:29:01 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 15:29:01 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 15:29:01 INFO [loop_until]: OK (rc = 0) 15:29:01 DEBUG --- stdout --- 15:29:01 DEBUG true 15:29:01 DEBUG --- stderr --- 15:29:01 DEBUG 15:29:01 INFO 15:29:01 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-sgmd6 --output jsonpath={.status.startTime} 15:29:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:01 INFO [loop_until]: OK (rc = 0) 15:29:01 DEBUG --- stdout --- 15:29:01 DEBUG 2023-08-12T14:19:37Z 15:29:01 DEBUG --- stderr --- 15:29:01 DEBUG 15:29:01 INFO 15:29:01 INFO ------- Check pod am-55f77847b7-sgmd6 filesystem is accessible ------- 15:29:01 INFO 15:29:01 INFO [loop_until]: kubectl --namespace=xlou exec am-55f77847b7-sgmd6 --container openam -- ls / | grep "bin" 15:29:01 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 15:29:02 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 15:29:02 INFO [loop_until]: OK (rc = 0) 15:29:02 DEBUG --- stdout --- 15:29:02 DEBUG bin boot dev entrypoint.sh etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var 15:29:02 DEBUG --- stderr --- 15:29:02 DEBUG 15:29:02 INFO 15:29:02 INFO ------------- Check pod am-55f77847b7-sgmd6 restart count ------------- 15:29:02 INFO 15:29:02 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-sgmd6 --output jsonpath={.status.containerStatuses[*].restartCount} 15:29:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:02 INFO [loop_until]: OK (rc = 0) 15:29:02 DEBUG --- stdout --- 15:29:02 DEBUG 0 15:29:02 DEBUG --- stderr --- 15:29:02 DEBUG 15:29:02 INFO Pod am-55f77847b7-sgmd6 has been restarted 0 times. 
15:29:02 INFO 15:29:02 INFO -------------- Check pod am-55f77847b7-wq5w5 is running -------------- 15:29:02 INFO 15:29:02 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-wq5w5 -o=jsonpath={.status.phase} | grep "Running" 15:29:02 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 15:29:02 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 15:29:02 INFO [loop_until]: OK (rc = 0) 15:29:02 DEBUG --- stdout --- 15:29:02 DEBUG Running 15:29:02 DEBUG --- stderr --- 15:29:02 DEBUG 15:29:02 INFO 15:29:02 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-wq5w5 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 15:29:02 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 15:29:02 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 15:29:02 INFO [loop_until]: OK (rc = 0) 15:29:02 DEBUG --- stdout --- 15:29:02 DEBUG true 15:29:02 DEBUG --- stderr --- 15:29:02 DEBUG 15:29:02 INFO 15:29:02 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-wq5w5 --output jsonpath={.status.startTime} 15:29:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:02 INFO [loop_until]: OK (rc = 0) 15:29:02 DEBUG --- stdout --- 15:29:02 DEBUG 2023-08-12T14:19:37Z 15:29:02 DEBUG --- stderr --- 15:29:02 DEBUG 15:29:02 INFO 15:29:02 INFO ------- Check pod am-55f77847b7-wq5w5 filesystem is accessible ------- 15:29:02 INFO 15:29:02 INFO [loop_until]: kubectl --namespace=xlou exec am-55f77847b7-wq5w5 --container openam -- ls / | grep "bin" 15:29:02 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 15:29:02 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 15:29:02 INFO [loop_until]: OK (rc = 0) 15:29:02 DEBUG --- stdout --- 15:29:02 DEBUG bin boot dev entrypoint.sh etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var 15:29:02 DEBUG --- stderr --- 15:29:02 DEBUG 15:29:02 INFO 15:29:02 INFO ------------- Check pod am-55f77847b7-wq5w5 restart count ------------- 15:29:02 INFO 15:29:02 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-wq5w5 --output jsonpath={.status.containerStatuses[*].restartCount} 15:29:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:02 INFO [loop_until]: OK (rc = 0) 15:29:02 DEBUG --- stdout --- 15:29:02 DEBUG 0 15:29:02 DEBUG --- stderr --- 15:29:02 DEBUG 15:29:02 INFO Pod am-55f77847b7-wq5w5 has been restarted 0 times. 
15:29:02 INFO 15:29:02 INFO --------------------- Get expected number of pods --------------------- 15:29:02 INFO 15:29:02 INFO [loop_until]: kubectl --namespace=xlou get deployment --selector app=idm --output jsonpath={.items[*].spec.replicas} 15:29:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:02 INFO [loop_until]: OK (rc = 0) 15:29:02 DEBUG --- stdout --- 15:29:02 DEBUG 2 15:29:02 DEBUG --- stderr --- 15:29:02 DEBUG 15:29:02 INFO 15:29:02 INFO ---------------------------- Get pod list ---------------------------- 15:29:02 INFO 15:29:02 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=idm --output jsonpath={.items[*].metadata.name} 15:29:02 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0] 15:29:02 INFO [loop_until]: OK (rc = 0) 15:29:02 DEBUG --- stdout --- 15:29:02 DEBUG idm-65858d8c4c-5kkq9 idm-65858d8c4c-gdv6b 15:29:02 DEBUG --- stderr --- 15:29:02 DEBUG 15:29:02 INFO 15:29:02 INFO -------------- Check pod idm-65858d8c4c-5kkq9 is running -------------- 15:29:02 INFO 15:29:02 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-5kkq9 -o=jsonpath={.status.phase} | grep "Running" 15:29:02 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 15:29:02 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 15:29:02 INFO [loop_until]: OK (rc = 0) 15:29:02 DEBUG --- stdout --- 15:29:02 DEBUG Running 15:29:02 DEBUG --- stderr --- 15:29:02 DEBUG 15:29:02 INFO 15:29:02 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-5kkq9 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 15:29:02 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 15:29:02 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 15:29:02 INFO [loop_until]: OK (rc = 0) 15:29:02 DEBUG --- stdout --- 15:29:02 DEBUG true 15:29:02 DEBUG --- stderr --- 15:29:02 DEBUG 15:29:02 INFO 15:29:02 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-5kkq9 --output jsonpath={.status.startTime} 15:29:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:02 INFO [loop_until]: OK (rc = 0) 15:29:02 DEBUG --- stdout --- 15:29:02 DEBUG 2023-08-12T14:19:37Z 15:29:02 DEBUG --- stderr --- 15:29:02 DEBUG 15:29:02 INFO 15:29:02 INFO ------- Check pod idm-65858d8c4c-5kkq9 filesystem is accessible ------- 15:29:02 INFO 15:29:02 INFO [loop_until]: kubectl --namespace=xlou exec idm-65858d8c4c-5kkq9 --container openidm -- ls / | grep "bin" 15:29:02 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 15:29:02 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 15:29:02 INFO [loop_until]: OK (rc = 0) 15:29:02 DEBUG --- stdout --- 15:29:02 DEBUG Dockerfile.java-11 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 15:29:02 DEBUG --- stderr --- 15:29:02 DEBUG 15:29:02 INFO 15:29:02 INFO ------------ Check pod idm-65858d8c4c-5kkq9 restart count ------------ 15:29:02 INFO 15:29:02 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-5kkq9 --output jsonpath={.status.containerStatuses[*].restartCount} 15:29:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:02 INFO [loop_until]: OK (rc = 0) 15:29:02 DEBUG --- stdout --- 15:29:02 DEBUG 0 15:29:02 DEBUG --- stderr --- 15:29:02 DEBUG 15:29:02 INFO Pod idm-65858d8c4c-5kkq9 has been restarted 0 times. 
15:29:02 INFO 15:29:02 INFO -------------- Check pod idm-65858d8c4c-gdv6b is running -------------- 15:29:02 INFO 15:29:02 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-gdv6b -o=jsonpath={.status.phase} | grep "Running" 15:29:02 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 15:29:02 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 15:29:02 INFO [loop_until]: OK (rc = 0) 15:29:02 DEBUG --- stdout --- 15:29:02 DEBUG Running 15:29:02 DEBUG --- stderr --- 15:29:02 DEBUG 15:29:02 INFO 15:29:02 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-gdv6b -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 15:29:02 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 15:29:02 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 15:29:02 INFO [loop_until]: OK (rc = 0) 15:29:02 DEBUG --- stdout --- 15:29:02 DEBUG true 15:29:02 DEBUG --- stderr --- 15:29:02 DEBUG 15:29:02 INFO 15:29:02 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-gdv6b --output jsonpath={.status.startTime} 15:29:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:02 INFO [loop_until]: OK (rc = 0) 15:29:02 DEBUG --- stdout --- 15:29:02 DEBUG 2023-08-12T14:19:37Z 15:29:02 DEBUG --- stderr --- 15:29:02 DEBUG 15:29:02 INFO 15:29:02 INFO ------- Check pod idm-65858d8c4c-gdv6b filesystem is accessible ------- 15:29:02 INFO 15:29:02 INFO [loop_until]: kubectl --namespace=xlou exec idm-65858d8c4c-gdv6b --container openidm -- ls / | grep "bin" 15:29:02 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 15:29:03 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 15:29:03 INFO [loop_until]: OK (rc = 0) 15:29:03 DEBUG --- stdout --- 15:29:03 DEBUG Dockerfile.java-11 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 15:29:03 DEBUG --- stderr --- 15:29:03 DEBUG 15:29:03 INFO 15:29:03 INFO ------------ Check pod idm-65858d8c4c-gdv6b restart count ------------ 15:29:03 INFO 15:29:03 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-gdv6b --output jsonpath={.status.containerStatuses[*].restartCount} 15:29:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:03 INFO [loop_until]: OK (rc = 0) 15:29:03 DEBUG --- stdout --- 15:29:03 DEBUG 0 15:29:03 DEBUG --- stderr --- 15:29:03 DEBUG 15:29:03 INFO Pod idm-65858d8c4c-gdv6b has been restarted 0 times. 
15:29:03 INFO 15:29:03 INFO --------------------- Get expected number of pods --------------------- 15:29:03 INFO 15:29:03 INFO [loop_until]: kubectl --namespace=xlou get statefulsets --selector app=ds-idrepo --output jsonpath={.items[*].spec.replicas} 15:29:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:03 INFO [loop_until]: OK (rc = 0) 15:29:03 DEBUG --- stdout --- 15:29:03 DEBUG 3 15:29:03 DEBUG --- stderr --- 15:29:03 DEBUG 15:29:03 INFO 15:29:03 INFO ---------------------------- Get pod list ---------------------------- 15:29:03 INFO 15:29:03 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=ds-idrepo --output jsonpath={.items[*].metadata.name} 15:29:03 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0] 15:29:03 INFO [loop_until]: OK (rc = 0) 15:29:03 DEBUG --- stdout --- 15:29:03 DEBUG ds-idrepo-0 ds-idrepo-1 ds-idrepo-2 15:29:03 DEBUG --- stderr --- 15:29:03 DEBUG 15:29:03 INFO 15:29:03 INFO ------------------ Check pod ds-idrepo-0 is running ------------------ 15:29:03 INFO 15:29:03 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.phase} | grep "Running" 15:29:03 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 15:29:03 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 15:29:03 INFO [loop_until]: OK (rc = 0) 15:29:03 DEBUG --- stdout --- 15:29:03 DEBUG Running 15:29:03 DEBUG --- stderr --- 15:29:03 DEBUG 15:29:03 INFO 15:29:03 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 15:29:03 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 15:29:03 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 15:29:03 INFO [loop_until]: OK (rc = 0) 15:29:03 DEBUG --- stdout --- 15:29:03 DEBUG true 15:29:03 DEBUG --- stderr --- 15:29:03 DEBUG 15:29:03 INFO 15:29:03 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 --output jsonpath={.status.startTime} 15:29:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:03 INFO [loop_until]: OK (rc = 0) 15:29:03 DEBUG --- stdout --- 15:29:03 DEBUG 2023-08-12T13:46:37Z 15:29:03 DEBUG --- stderr --- 15:29:03 DEBUG 15:29:03 INFO 15:29:03 INFO ----------- Check pod ds-idrepo-0 filesystem is accessible ----------- 15:29:03 INFO 15:29:03 INFO [loop_until]: kubectl --namespace=xlou exec ds-idrepo-0 --container ds -- ls / | grep "bin" 15:29:03 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 15:29:03 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 15:29:03 INFO [loop_until]: OK (rc = 0) 15:29:03 DEBUG --- stdout --- 15:29:03 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 15:29:03 DEBUG --- stderr --- 15:29:03 DEBUG 15:29:03 INFO 15:29:03 INFO ----------------- Check pod ds-idrepo-0 restart count ----------------- 15:29:03 INFO 15:29:03 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 --output jsonpath={.status.containerStatuses[*].restartCount} 15:29:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:03 INFO [loop_until]: OK (rc = 0) 15:29:03 DEBUG --- stdout --- 15:29:03 DEBUG 0 15:29:03 DEBUG --- stderr --- 15:29:03 DEBUG 15:29:03 INFO Pod ds-idrepo-0 has been restarted 0 times. 
15:29:03 INFO 15:29:03 INFO ------------------ Check pod ds-idrepo-1 is running ------------------ 15:29:03 INFO 15:29:03 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.phase} | grep "Running" 15:29:03 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 15:29:03 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 15:29:03 INFO [loop_until]: OK (rc = 0) 15:29:03 DEBUG --- stdout --- 15:29:03 DEBUG Running 15:29:03 DEBUG --- stderr --- 15:29:03 DEBUG 15:29:03 INFO 15:29:03 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 15:29:03 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 15:29:03 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 15:29:03 INFO [loop_until]: OK (rc = 0) 15:29:03 DEBUG --- stdout --- 15:29:03 DEBUG true 15:29:03 DEBUG --- stderr --- 15:29:03 DEBUG 15:29:03 INFO 15:29:03 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 --output jsonpath={.status.startTime} 15:29:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:03 INFO [loop_until]: OK (rc = 0) 15:29:03 DEBUG --- stdout --- 15:29:03 DEBUG 2023-08-12T13:57:39Z 15:29:03 DEBUG --- stderr --- 15:29:03 DEBUG 15:29:03 INFO 15:29:03 INFO ----------- Check pod ds-idrepo-1 filesystem is accessible ----------- 15:29:03 INFO 15:29:03 INFO [loop_until]: kubectl --namespace=xlou exec ds-idrepo-1 --container ds -- ls / | grep "bin" 15:29:03 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 15:29:03 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 15:29:03 INFO [loop_until]: OK (rc = 0) 15:29:03 DEBUG --- stdout --- 15:29:03 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 15:29:03 DEBUG --- stderr --- 15:29:03 DEBUG 15:29:03 INFO 15:29:03 INFO ----------------- Check pod ds-idrepo-1 restart count ----------------- 15:29:03 INFO 15:29:03 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 --output jsonpath={.status.containerStatuses[*].restartCount} 15:29:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:03 INFO [loop_until]: OK (rc = 0) 15:29:03 DEBUG --- stdout --- 15:29:03 DEBUG 0 15:29:03 DEBUG --- stderr --- 15:29:03 DEBUG 15:29:03 INFO Pod ds-idrepo-1 has been restarted 0 times. 
15:29:03 INFO 15:29:03 INFO ------------------ Check pod ds-idrepo-2 is running ------------------ 15:29:03 INFO 15:29:03 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.phase} | grep "Running" 15:29:03 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 15:29:04 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 15:29:04 INFO [loop_until]: OK (rc = 0) 15:29:04 DEBUG --- stdout --- 15:29:04 DEBUG Running 15:29:04 DEBUG --- stderr --- 15:29:04 DEBUG 15:29:04 INFO 15:29:04 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 15:29:04 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 15:29:04 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 15:29:04 INFO [loop_until]: OK (rc = 0) 15:29:04 DEBUG --- stdout --- 15:29:04 DEBUG true 15:29:04 DEBUG --- stderr --- 15:29:04 DEBUG 15:29:04 INFO 15:29:04 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 --output jsonpath={.status.startTime} 15:29:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:04 INFO [loop_until]: OK (rc = 0) 15:29:04 DEBUG --- stdout --- 15:29:04 DEBUG 2023-08-12T14:08:34Z 15:29:04 DEBUG --- stderr --- 15:29:04 DEBUG 15:29:04 INFO 15:29:04 INFO ----------- Check pod ds-idrepo-2 filesystem is accessible ----------- 15:29:04 INFO 15:29:04 INFO [loop_until]: kubectl --namespace=xlou exec ds-idrepo-2 --container ds -- ls / | grep "bin" 15:29:04 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 15:29:04 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 15:29:04 INFO [loop_until]: OK (rc = 0) 15:29:04 DEBUG --- stdout --- 15:29:04 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 15:29:04 DEBUG --- stderr --- 15:29:04 DEBUG 15:29:04 INFO 15:29:04 INFO ----------------- Check pod ds-idrepo-2 restart count ----------------- 15:29:04 INFO 15:29:04 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 --output jsonpath={.status.containerStatuses[*].restartCount} 15:29:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:04 INFO [loop_until]: OK (rc = 0) 15:29:04 DEBUG --- stdout --- 15:29:04 DEBUG 0 15:29:04 DEBUG --- stderr --- 15:29:04 DEBUG 15:29:04 INFO Pod ds-idrepo-2 has been restarted 0 times. 
15:29:04 INFO 15:29:04 INFO --------------------- Get expected number of pods --------------------- 15:29:04 INFO 15:29:04 INFO [loop_until]: kubectl --namespace=xlou get statefulsets --selector app=ds-cts --output jsonpath={.items[*].spec.replicas} 15:29:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:04 INFO [loop_until]: OK (rc = 0) 15:29:04 DEBUG --- stdout --- 15:29:04 DEBUG 3 15:29:04 DEBUG --- stderr --- 15:29:04 DEBUG 15:29:04 INFO 15:29:04 INFO ---------------------------- Get pod list ---------------------------- 15:29:04 INFO 15:29:04 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=ds-cts --output jsonpath={.items[*].metadata.name} 15:29:04 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0] 15:29:04 INFO [loop_until]: OK (rc = 0) 15:29:04 DEBUG --- stdout --- 15:29:04 DEBUG ds-cts-0 ds-cts-1 ds-cts-2 15:29:04 DEBUG --- stderr --- 15:29:04 DEBUG 15:29:04 INFO 15:29:04 INFO -------------------- Check pod ds-cts-0 is running -------------------- 15:29:04 INFO 15:29:04 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.phase} | grep "Running" 15:29:04 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 15:29:04 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 15:29:04 INFO [loop_until]: OK (rc = 0) 15:29:04 DEBUG --- stdout --- 15:29:04 DEBUG Running 15:29:04 DEBUG --- stderr --- 15:29:04 DEBUG 15:29:04 INFO 15:29:04 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 15:29:04 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 15:29:04 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 15:29:04 INFO [loop_until]: OK (rc = 0) 15:29:04 DEBUG --- stdout --- 15:29:04 DEBUG true 15:29:04 DEBUG --- stderr --- 15:29:04 DEBUG 15:29:04 INFO 15:29:04 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-0 --output jsonpath={.status.startTime} 15:29:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:04 INFO [loop_until]: OK (rc = 0) 15:29:04 DEBUG --- stdout --- 15:29:04 DEBUG 2023-08-12T13:46:37Z 15:29:04 DEBUG --- stderr --- 15:29:04 DEBUG 15:29:04 INFO 15:29:04 INFO ------------- Check pod ds-cts-0 filesystem is accessible ------------- 15:29:04 INFO 15:29:04 INFO [loop_until]: kubectl --namespace=xlou exec ds-cts-0 --container ds -- ls / | grep "bin" 15:29:04 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 15:29:04 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 15:29:04 INFO [loop_until]: OK (rc = 0) 15:29:04 DEBUG --- stdout --- 15:29:04 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 15:29:04 DEBUG --- stderr --- 15:29:04 DEBUG 15:29:04 INFO 15:29:04 INFO ------------------ Check pod ds-cts-0 restart count ------------------ 15:29:04 INFO 15:29:04 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-0 --output jsonpath={.status.containerStatuses[*].restartCount} 15:29:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:04 INFO [loop_until]: OK (rc = 0) 15:29:04 DEBUG --- stdout --- 15:29:04 DEBUG 0 15:29:04 DEBUG --- stderr --- 15:29:04 DEBUG 15:29:04 INFO Pod ds-cts-0 has been restarted 0 times. 
15:29:04 INFO 15:29:04 INFO -------------------- Check pod ds-cts-1 is running -------------------- 15:29:04 INFO 15:29:04 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.phase} | grep "Running" 15:29:04 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 15:29:04 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 15:29:04 INFO [loop_until]: OK (rc = 0) 15:29:04 DEBUG --- stdout --- 15:29:04 DEBUG Running 15:29:04 DEBUG --- stderr --- 15:29:04 DEBUG 15:29:04 INFO 15:29:04 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 15:29:04 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 15:29:04 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 15:29:04 INFO [loop_until]: OK (rc = 0) 15:29:04 DEBUG --- stdout --- 15:29:04 DEBUG true 15:29:04 DEBUG --- stderr --- 15:29:04 DEBUG 15:29:04 INFO 15:29:04 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-1 --output jsonpath={.status.startTime} 15:29:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:04 INFO [loop_until]: OK (rc = 0) 15:29:04 DEBUG --- stdout --- 15:29:04 DEBUG 2023-08-12T13:47:03Z 15:29:04 DEBUG --- stderr --- 15:29:04 DEBUG 15:29:04 INFO 15:29:04 INFO ------------- Check pod ds-cts-1 filesystem is accessible ------------- 15:29:04 INFO 15:29:04 INFO [loop_until]: kubectl --namespace=xlou exec ds-cts-1 --container ds -- ls / | grep "bin" 15:29:04 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 15:29:04 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 15:29:04 INFO [loop_until]: OK (rc = 0) 15:29:04 DEBUG --- stdout --- 15:29:04 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 15:29:04 DEBUG --- stderr --- 15:29:04 DEBUG 15:29:04 INFO 15:29:04 INFO ------------------ Check pod ds-cts-1 restart count ------------------ 15:29:04 INFO 15:29:04 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-1 --output jsonpath={.status.containerStatuses[*].restartCount} 15:29:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:05 INFO [loop_until]: OK (rc = 0) 15:29:05 DEBUG --- stdout --- 15:29:05 DEBUG 0 15:29:05 DEBUG --- stderr --- 15:29:05 DEBUG 15:29:05 INFO Pod ds-cts-1 has been restarted 0 times. 
15:29:05 INFO 15:29:05 INFO -------------------- Check pod ds-cts-2 is running -------------------- 15:29:05 INFO 15:29:05 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.phase} | grep "Running" 15:29:05 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 15:29:05 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 15:29:05 INFO [loop_until]: OK (rc = 0) 15:29:05 DEBUG --- stdout --- 15:29:05 DEBUG Running 15:29:05 DEBUG --- stderr --- 15:29:05 DEBUG 15:29:05 INFO 15:29:05 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 15:29:05 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 15:29:05 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 15:29:05 INFO [loop_until]: OK (rc = 0) 15:29:05 DEBUG --- stdout --- 15:29:05 DEBUG true 15:29:05 DEBUG --- stderr --- 15:29:05 DEBUG 15:29:05 INFO 15:29:05 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-2 --output jsonpath={.status.startTime} 15:29:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:05 INFO [loop_until]: OK (rc = 0) 15:29:05 DEBUG --- stdout --- 15:29:05 DEBUG 2023-08-12T13:47:28Z 15:29:05 DEBUG --- stderr --- 15:29:05 DEBUG 15:29:05 INFO 15:29:05 INFO ------------- Check pod ds-cts-2 filesystem is accessible ------------- 15:29:05 INFO 15:29:05 INFO [loop_until]: kubectl --namespace=xlou exec ds-cts-2 --container ds -- ls / | grep "bin" 15:29:05 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 15:29:05 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 15:29:05 INFO [loop_until]: OK (rc = 0) 15:29:05 DEBUG --- stdout --- 15:29:05 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 15:29:05 DEBUG --- stderr --- 15:29:05 DEBUG 15:29:05 INFO 15:29:05 INFO ------------------ Check pod ds-cts-2 restart count ------------------ 15:29:05 INFO 15:29:05 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-2 --output jsonpath={.status.containerStatuses[*].restartCount} 15:29:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:05 INFO [loop_until]: OK (rc = 0) 15:29:05 DEBUG --- stdout --- 15:29:05 DEBUG 0 15:29:05 DEBUG --- stderr --- 15:29:05 DEBUG 15:29:05 INFO Pod ds-cts-2 has been restarted 0 times. * Serving Flask app 'lodemon_run' * Debug mode: off WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead. 
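Every check in the log above follows the same loop_until pattern: run a kubectl query (often piped through grep for an expected token), accept return code 0, and retry on a fixed interval until a maximum time is reached. The snippet below is a minimal sketch of that retry pattern, not the actual lodemon_run.py implementation; the helper name and its signature are assumptions, and a Python substring match stands in for the shell `| grep` pipe. The pod name in the usage example is taken from the log.

```python
import shlex
import subprocess
import time

def loop_until(cmd, pattern=None, max_time=180, interval=5, expected_rc=(0,)):
    """Run `cmd` repeatedly until its return code is expected (and, if given,
    its stdout contains `pattern`), or raise after `max_time` seconds."""
    deadline = time.monotonic() + max_time
    while True:
        proc = subprocess.run(shlex.split(cmd), capture_output=True, text=True)
        ok = proc.returncode in expected_rc and (pattern is None or pattern in proc.stdout)
        if ok:
            return proc.stdout
        if time.monotonic() >= deadline:
            raise TimeoutError(f"command did not succeed within {max_time}s: {cmd}")
        time.sleep(interval)

# Example: wait until a pod reports phase "Running", as in the checks above.
phase = loop_until(
    "kubectl --namespace=xlou get pods am-55f77847b7-klhnq -o=jsonpath={.status.phase}",
    pattern="Running", max_time=360, interval=5)
```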
* Running on all addresses (0.0.0.0) * Running on http://127.0.0.1:8080 * Running on http://10.106.45.71:8080 Press CTRL+C to quit 15:29:36 INFO 15:29:36 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 15:29:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:36 INFO [loop_until]: OK (rc = 0) 15:29:36 DEBUG --- stdout --- 15:29:36 DEBUG {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 15:29:36 DEBUG --- stderr --- 15:29:36 DEBUG 15:29:36 INFO 15:29:36 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 15:29:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:36 INFO [loop_until]: OK (rc = 0) 15:29:36 DEBUG --- stdout --- 15:29:36 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 15:29:36 DEBUG --- stderr --- 15:29:36 DEBUG 15:29:36 INFO 15:29:36 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 15:29:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:36 INFO [loop_until]: OK (rc = 0) 15:29:36 DEBUG --- stdout --- 15:29:36 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 15:29:36 DEBUG --- stderr --- 15:29:36 DEBUG 15:29:36 INFO 15:29:36 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 15:29:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:36 INFO [loop_until]: OK (rc = 0) 15:29:36 DEBUG --- stdout --- 15:29:36 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 15:29:36 DEBUG --- stderr --- 15:29:36 DEBUG 15:29:36 INFO 15:29:36 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 15:29:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:36 INFO [loop_until]: OK (rc = 0) 15:29:36 DEBUG --- stdout --- 15:29:36 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 15:29:36 DEBUG --- stderr --- 15:29:36 DEBUG 15:29:36 INFO 15:29:36 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 15:29:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:36 INFO [loop_until]: OK (rc = 0) 15:29:36 DEBUG --- stdout --- 15:29:36 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 15:29:36 DEBUG --- stderr --- 15:29:36 DEBUG 15:29:36 INFO 15:29:36 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 15:29:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:36 INFO [loop_until]: OK (rc = 0) 15:29:36 DEBUG --- stdout --- 15:29:36 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 15:29:36 DEBUG --- stderr --- 15:29:36 DEBUG 15:29:37 INFO 15:29:37 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 15:29:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:37 INFO [loop_until]: OK (rc = 0) 15:29:37 DEBUG --- stdout --- 15:29:37 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 15:29:37 DEBUG --- stderr --- 15:29:37 DEBUG 15:29:37 INFO 15:29:37 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 15:29:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:37 INFO [loop_until]: OK (rc = 0) 15:29:37 DEBUG --- stdout --- 15:29:37 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 15:29:37 DEBUG --- stderr --- 15:29:37 DEBUG 15:29:37 INFO 15:29:37 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 15:29:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:37 INFO [loop_until]: OK (rc = 0) 15:29:37 DEBUG --- stdout --- 15:29:37 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 15:29:37 DEBUG --- stderr --- 15:29:37 DEBUG 15:29:37 INFO 15:29:37 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 15:29:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:37 INFO [loop_until]: OK (rc = 0) 15:29:37 DEBUG --- stdout --- 15:29:37 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 15:29:37 DEBUG --- stderr --- 15:29:37 DEBUG 15:29:37 INFO 15:29:37 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 15:29:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:37 INFO [loop_until]: OK (rc = 0) 15:29:37 DEBUG --- stdout --- 15:29:37 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 15:29:37 DEBUG --- stderr --- 15:29:37 DEBUG 15:29:37 INFO 15:29:37 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 15:29:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:37 INFO [loop_until]: OK (rc = 0) 15:29:37 DEBUG --- stdout --- 15:29:37 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 15:29:37 DEBUG --- stderr --- 15:29:37 DEBUG 15:29:37 INFO 15:29:37 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 15:29:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:37 INFO [loop_until]: OK (rc = 0) 15:29:37 DEBUG --- stdout --- 15:29:37 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 15:29:37 DEBUG --- stderr --- 15:29:37 DEBUG 15:29:37 INFO 15:29:37 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 15:29:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:37 INFO [loop_until]: OK (rc = 0) 15:29:37 DEBUG --- stdout --- 15:29:37 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 15:29:37 DEBUG --- stderr --- 15:29:37 DEBUG 15:29:38 INFO 15:29:38 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 15:29:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:38 INFO [loop_until]: OK (rc = 0) 15:29:38 DEBUG --- stdout --- 15:29:38 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 15:29:38 DEBUG --- stderr --- 15:29:38 DEBUG 15:29:38 INFO 15:29:38 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 15:29:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:38 INFO [loop_until]: OK (rc = 0) 15:29:38 DEBUG --- stdout --- 15:29:38 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 15:29:38 DEBUG --- stderr --- 15:29:38 DEBUG 15:29:38 INFO 15:29:38 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 15:29:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:38 INFO [loop_until]: OK (rc = 0) 15:29:38 DEBUG --- stdout --- 15:29:38 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 15:29:38 DEBUG --- stderr --- 15:29:38 DEBUG 15:29:38 INFO 15:29:38 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 15:29:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:38 INFO [loop_until]: OK (rc = 0) 15:29:38 DEBUG --- stdout --- 15:29:38 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 15:29:38 DEBUG --- stderr --- 15:29:38 DEBUG 15:29:38 INFO 15:29:38 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 15:29:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:38 INFO [loop_until]: OK (rc = 0) 15:29:38 DEBUG --- stdout --- 15:29:38 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 15:29:38 DEBUG --- stderr --- 15:29:38 DEBUG 15:29:38 INFO 15:29:38 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 15:29:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:38 INFO [loop_until]: OK (rc = 0) 15:29:38 DEBUG --- stdout --- 15:29:38 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 15:29:38 DEBUG --- stderr --- 15:29:38 DEBUG 15:29:38 INFO 15:29:38 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 15:29:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:38 INFO [loop_until]: OK (rc = 0) 15:29:38 DEBUG --- stdout --- 15:29:38 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 15:29:38 DEBUG --- stderr --- 15:29:38 DEBUG 15:29:38 INFO 15:29:38 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 15:29:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:39 INFO [loop_until]: OK (rc = 0) 15:29:39 DEBUG --- stdout --- 15:29:39 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 15:29:39 DEBUG --- stderr --- 15:29:39 DEBUG 15:29:39 INFO 15:29:39 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 15:29:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:39 INFO [loop_until]: OK (rc = 0) 15:29:39 DEBUG --- stdout --- 15:29:39 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 15:29:39 DEBUG --- stderr --- 15:29:39 DEBUG 15:29:39 INFO 15:29:39 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 15:29:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:39 INFO [loop_until]: OK (rc = 0) 15:29:39 DEBUG --- stdout --- 15:29:39 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 15:29:39 DEBUG --- stderr --- 15:29:39 DEBUG 15:29:39 INFO 15:29:39 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 15:29:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:39 INFO [loop_until]: OK (rc = 0) 15:29:39 DEBUG --- stdout --- 15:29:39 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 15:29:39 DEBUG --- stderr --- 15:29:39 DEBUG 15:29:39 INFO 15:29:39 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 15:29:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:39 INFO [loop_until]: OK (rc = 0) 15:29:39 DEBUG --- stdout --- 15:29:39 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 15:29:39 DEBUG --- stderr --- 15:29:39 DEBUG 15:29:39 INFO Initializing monitoring instance threads 15:29:39 DEBUG Monitoring instance thread list: [, , , , , , , , , , , , , , , , , , , , , , , , , , , , ] 15:29:39 INFO Starting instance threads 15:29:39 INFO 15:29:39 INFO Thread started 15:29:39 INFO [loop_until]: kubectl --namespace=xlou top node 15:29:39 INFO 15:29:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:39 INFO Thread started 15:29:39 INFO [loop_until]: kubectl --namespace=xlou top pods 15:29:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:29:39 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579" 15:29:39 INFO Thread started 15:29:39 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579" 15:29:39 INFO Thread started 15:29:39 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579" 15:29:39 INFO Thread started 15:29:39 INFO [http_cmd]: curl --insecure -L --request GET 
"http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579" 15:29:39 INFO Thread started 15:29:39 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691850579" 15:29:39 INFO Thread started 15:29:39 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691850579" 15:29:39 INFO Thread started 15:29:39 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691850579" 15:29:39 INFO Thread started 15:29:39 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691850579" 15:29:39 INFO Thread started 15:29:39 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691850579" 15:29:39 INFO Thread started 15:29:39 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579" 15:29:39 INFO Thread started 15:29:39 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579" 15:29:39 INFO Thread started 15:29:39 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579" 15:29:39 INFO Thread started 15:29:39 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579" 15:29:39 INFO Thread started 15:29:39 INFO [http_cmd]: curl --insecure -L --request GET 
"http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579" 15:29:39 INFO Thread started 15:29:39 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579" 15:29:39 INFO Thread started 15:29:39 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579" 15:29:39 INFO Thread started 15:29:39 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579" 15:29:39 INFO Thread started 15:29:39 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579" 15:29:39 INFO Thread started 15:29:39 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579" 15:29:39 INFO Thread started 15:29:39 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579" 15:29:39 INFO Thread started Exception in thread Thread-23: 15:29:39 INFO Thread started Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner Exception in thread Thread-24: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner 15:29:39 INFO Thread started Exception in thread Thread-25: 15:29:39 INFO Thread started self.run() 15:29:39 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691850579" Traceback (most recent call last): 15:29:39 INFO Thread started self.run() 15:29:39 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691850579" 15:29:39 INFO Thread started File "/usr/local/lib/python3.9/threading.py", line 910, in run File 
"/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner 15:29:39 INFO Thread started File "/usr/local/lib/python3.9/threading.py", line 910, in run 15:29:39 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691850579" 15:29:39 INFO Thread started self._target(*self._args, **self._kwargs) 15:29:39 INFO All threads has been started self.run() self._target(*self._args, **self._kwargs) 127.0.0.1 - - [12/Aug/2023 15:29:39] "GET /monitoring/start HTTP/1.1" 200 - File "/usr/local/lib/python3.9/threading.py", line 910, in run File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run Exception in thread Thread-28: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner if self.prom_data['functions']: self._target(*self._args, **self._kwargs) if self.prom_data['functions']: self.run() KeyError: 'functions' KeyError: 'functions' File "/usr/local/lib/python3.9/threading.py", line 910, in run File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() 15:29:39 INFO [loop_until]: OK (rc = 0) File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run 15:29:39 DEBUG --- stdout --- instance.run() 15:29:39 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 11m 2352Mi am-55f77847b7-sgmd6 16m 2711Mi am-55f77847b7-wq5w5 15m 4388Mi ds-cts-0 10m 343Mi ds-cts-1 8m 358Mi ds-cts-2 6m 354Mi ds-idrepo-0 21m 10330Mi ds-idrepo-1 18m 10337Mi ds-idrepo-2 35m 10306Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 9m 1431Mi idm-65858d8c4c-gdv6b 7m 1565Mi lodemon-86d6dfd886-rxdp4 684m 60Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 1m 15Mi 15:29:39 DEBUG --- stderr --- 15:29:39 DEBUG if self.prom_data['functions']: File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run KeyError: 'functions' if self.prom_data['functions']: KeyError: 'functions' 15:29:39 INFO [loop_until]: OK (rc = 0) 15:29:39 DEBUG --- stdout --- 15:29:39 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 342m 2% 1355Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 72m 0% 3378Mi 5% gke-xlou-cdm-default-pool-f05840a3-976h 76m 0% 5522Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 77m 0% 3876Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 78m 0% 2892Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 134m 0% 2108Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 76m 0% 2698Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1074Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 71m 0% 10999Mi 18% gke-xlou-cdm-ds-32e4dcb1-b374 66m 0% 10980Mi 
18% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 86m 0% 10939Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 64m 0% 1628Mi 2% 15:29:39 DEBUG --- stderr --- 15:29:39 DEBUG 15:29:40 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:29:40 WARNING Response is NONE 15:29:40 DEBUG Exception is preset. Setting retry_loop to true 15:29:40 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:29:42 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:29:42 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:29:42 WARNING Response is NONE 15:29:42 WARNING Response is NONE 15:29:42 DEBUG Exception is preset. Setting retry_loop to true 15:29:42 DEBUG Exception is preset. Setting retry_loop to true 15:29:42 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:29:42 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:29:46 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:29:46 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:29:46 WARNING Response is NONE 15:29:46 WARNING Response is NONE 15:29:46 DEBUG Exception is preset. Setting retry_loop to true 15:29:46 DEBUG Exception is preset. Setting retry_loop to true 15:29:46 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:29:46 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
15:29:46 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:29:46 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:29:46 WARNING Response is NONE 15:29:46 WARNING Response is NONE 15:29:46 DEBUG Exception is preset. Setting retry_loop to true 15:29:46 DEBUG Exception is preset. Setting retry_loop to true 15:29:46 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:29:46 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:29:51 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:29:51 WARNING Response is NONE 15:29:51 DEBUG Exception is preset. Setting retry_loop to true 15:29:51 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:29:53 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:29:53 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:29:53 WARNING Response is NONE 15:29:53 WARNING Response is NONE 15:29:53 DEBUG Exception is preset. Setting retry_loop to true 15:29:53 DEBUG Exception is preset. Setting retry_loop to true 15:29:53 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:29:53 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
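The [http_cmd] records and the warnings around them all hit the same Prometheus endpoint, /api/v1/query on prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090, passing a URL-encoded PromQL expression plus a Unix timestamp. A rough equivalent of one of those curl calls using the Python requests library is sketched below; the endpoint and query are taken from the log, everything else is illustrative.

    # Sketch: the same instant query the log issues via curl, using requests.
    import requests

    PROM = "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090"
    promql = "sum(rate(container_cpu_usage_seconds_total{namespace='xlou'}[60s])) by (pod)"

    resp = requests.get(
        f"{PROM}/api/v1/query",
        params={"query": promql, "time": 1691850579},  # requests URL-encodes the PromQL
        timeout=10,
    )
    resp.raise_for_status()
    body = resp.json()
    if body.get("status") == "success":
        for sample in body["data"]["result"]:          # one sample per pod
            print(sample["metric"].get("pod"), sample["value"][1])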
15:29:55 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:29:55 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:29:55 WARNING Response is NONE 15:29:55 WARNING Response is NONE 15:29:55 DEBUG Exception is preset. Setting retry_loop to true 15:29:55 DEBUG Exception is preset. Setting retry_loop to true 15:29:55 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:29:55 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:29:57 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:29:57 WARNING Response is NONE 15:29:57 DEBUG Exception is preset. Setting retry_loop to true 15:29:57 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:29:59 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:29:59 WARNING Response is NONE 15:29:59 DEBUG Exception is preset. Setting retry_loop to true 15:29:59 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:29:59 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:29:59 WARNING Response is NONE 15:29:59 DEBUG Exception is preset. Setting retry_loop to true 15:29:59 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
15:30:02 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:30:02 WARNING Response is NONE 15:30:02 DEBUG Exception is preset. Setting retry_loop to true 15:30:02 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:30:04 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:30:04 WARNING Response is NONE 15:30:04 DEBUG Exception is preset. Setting retry_loop to true 15:30:04 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:30:06 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:30:06 WARNING Response is NONE 15:30:06 DEBUG Exception is preset. Setting retry_loop to true 15:30:06 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:30:06 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:30:06 WARNING Response is NONE 15:30:06 DEBUG Exception is preset. Setting retry_loop to true 15:30:06 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:30:08 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:30:08 WARNING Response is NONE 15:30:08 DEBUG Exception is preset. Setting retry_loop to true 15:30:08 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
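Note: [Errno 111] Connection refused means the Prometheus service name resolves but nothing is accepting connections on port 9090, so the problem is with the Prometheus endpoint in the monitoring namespace rather than with the queries themselves. A minimal in-cluster probe (hypothetical, not part of lodemon) that separates "refused" from "timed out" could look like this:

    # Hypothetical connectivity probe, run from inside the cluster.
    import socket

    HOST = "prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local"
    PORT = 9090

    try:
        with socket.create_connection((HOST, PORT), timeout=5):
            print("TCP connect OK - Prometheus is listening")
    except ConnectionRefusedError as exc:   # [Errno 111], as in the warnings above
        print(f"refused: {exc}")
    except socket.timeout as exc:           # [Errno 110], seen later in this log
        print(f"timed out: {exc}")
    except OSError as exc:
        print(f"other failure: {exc}")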
15:30:10 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:30:10 WARNING Response is NONE 15:30:10 DEBUG Exception is preset. Setting retry_loop to true 15:30:10 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:30:12 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:30:12 WARNING Response is NONE 15:30:12 DEBUG Exception is preset. Setting retry_loop to true 15:30:12 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:30:13 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:30:13 WARNING Response is NONE 15:30:13 DEBUG Exception is preset. Setting retry_loop to true 15:30:13 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:30:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:30:15 WARNING Response is NONE 15:30:15 DEBUG Exception is preset. Setting retry_loop to true 15:30:15 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:30:17 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:30:17 WARNING Response is NONE 15:30:17 DEBUG Exception is preset. Setting retry_loop to true 15:30:17 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
15:30:19 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:30:19 WARNING Response is NONE 15:30:19 DEBUG Exception is preset. Setting retry_loop to true 15:30:19 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:30:20 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:30:20 WARNING Response is NONE 15:30:20 DEBUG Exception is preset. Setting retry_loop to true 15:30:20 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:30:21 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:30:21 WARNING Response is NONE 15:30:21 DEBUG Exception is preset. Setting retry_loop to true 15:30:21 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:30:23 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:30:23 WARNING Response is NONE 15:30:23 DEBUG Exception is preset. Setting retry_loop to true 15:30:23 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:30:24 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:30:24 WARNING Response is NONE 15:30:24 DEBUG Exception is preset. Setting retry_loop to true 15:30:24 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
15:30:26 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:30:26 WARNING Response is NONE 15:30:26 DEBUG Exception is preset. Setting retry_loop to true 15:30:26 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:30:28 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:30:28 WARNING Response is NONE 15:30:28 DEBUG Exception is preset. Setting retry_loop to true 15:30:28 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:30:31 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:30:31 WARNING Response is NONE 15:30:31 DEBUG Exception is preset. Setting retry_loop to true 15:30:31 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:30:32 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:30:32 WARNING Response is NONE 15:30:32 DEBUG Exception is preset. Setting retry_loop to true 15:30:32 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:30:34 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:30:34 WARNING Response is NONE 15:30:34 DEBUG Exception is preset. Setting retry_loop to true 15:30:34 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
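Note: the repeating "sleeping for 10 secs before retry" / "Hit retry pattern for a 5 time" messages come from a bounded retry loop: the tracebacks further down show the call is http_cmd.get(url=url_encoded, retries=5), so each query is attempted up to five times with a 10-second pause before the still-empty response is checked and FailException is raised. A minimal sketch of that behaviour, with assumed names since HttpCmd itself is not included in this log:

    # Sketch of the retry behaviour implied by the messages above; fetch,
    # MAX_RETRIES and RETRY_SLEEP are assumptions, not the real HttpCmd code.
    import time

    MAX_RETRIES = 5    # matches http_cmd.get(url=url_encoded, retries=5)
    RETRY_SLEEP = 10   # "sleeping for 10 secs before retry..."

    def get_with_retries(fetch, url):
        response = None
        for attempt in range(1, MAX_RETRIES + 1):
            try:
                response = fetch(url)
            except ConnectionError as exc:
                print(f"WARNING Got connection reset error: {exc}. Checking if error is transient one")
            if response is not None:
                return response
            if attempt < MAX_RETRIES:
                print("WARNING We received known exception. Trying to recover, "
                      f"sleeping for {RETRY_SLEEP} secs before retry...")
                time.sleep(RETRY_SLEEP)
        print(f"WARNING Hit retry pattern for a {MAX_RETRIES} time. Proceeding to check response anyway.")
        if response is None:
            raise RuntimeError('Failed to obtain response from server...')
        return response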
15:30:35 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:30:35 WARNING Response is NONE 15:30:35 DEBUG Exception is preset. Setting retry_loop to true 15:30:35 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-4: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 15:30:37 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:30:37 WARNING Response is NONE 15:30:37 DEBUG Exception is preset. Setting retry_loop to true 15:30:37 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-5: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
15:30:39 INFO
15:30:39 INFO [loop_until]: kubectl --namespace=xlou top node
15:30:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
15:30:39 INFO
15:30:39 INFO [loop_until]: kubectl --namespace=xlou top pods
15:30:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
15:30:39 INFO [loop_until]: OK (rc = 0)
15:30:39 DEBUG --- stdout ---
15:30:39 DEBUG NAME                           CPU(cores)   MEMORY(bytes)
               admin-ui-587fc66dd5-s976l      1m           4Mi
               am-55f77847b7-klhnq            6m           2352Mi
               am-55f77847b7-sgmd6            21m          2710Mi
               am-55f77847b7-wq5w5            15m          4389Mi
               ds-cts-0                       63m          347Mi
               ds-cts-1                       8m           365Mi
               ds-cts-2                       146m         355Mi
               ds-idrepo-0                    1029m        10345Mi
               ds-idrepo-1                    96m          10355Mi
               ds-idrepo-2                    129m         10316Mi
               end-user-ui-6845bc78c7-9zthp   1m           4Mi
               idm-65858d8c4c-5kkq9           11m          1432Mi
               idm-65858d8c4c-gdv6b           10m          1577Mi
               lodemon-86d6dfd886-rxdp4       3m           65Mi
               login-ui-74d6fb46c-2hbvv       1m           3Mi
               overseer-0-64c9959746-2jz9t    191m         48Mi
15:30:39 DEBUG --- stderr ---
15:30:39 DEBUG
15:30:39 INFO [loop_until]: OK (rc = 0)
15:30:39 DEBUG --- stdout ---
15:30:39 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
               gke-xlou-cdm-default-pool-f05840a3-2nsn   72m          0%     1360Mi          2%
               gke-xlou-cdm-default-pool-f05840a3-5pbc   65m          0%     3382Mi          5%
               gke-xlou-cdm-default-pool-f05840a3-976h   72m          0%     5523Mi          9%
               gke-xlou-cdm-default-pool-f05840a3-9p4b   76m          0%     3875Mi          6%
               gke-xlou-cdm-default-pool-f05840a3-bf2g   76m          0%     2904Mi          4%
               gke-xlou-cdm-default-pool-f05840a3-h81k   123m         0%     2114Mi          3%
               gke-xlou-cdm-default-pool-f05840a3-tnc9   72m          0%     2696Mi          4%
               gke-xlou-cdm-ds-32e4dcb1-1l6p             131m         0%     1081Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-4z9d             61m          0%     1093Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-8bsn             903m         5%     11015Mi         18%
               gke-xlou-cdm-ds-32e4dcb1-b374             83m          0%     10997Mi         18%
               gke-xlou-cdm-ds-32e4dcb1-n920             165m         1%     1085Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-x4wx             97m          0%     10947Mi         18%
               gke-xlou-cdm-frontend-a8771548-k40m       261m         1%     1630Mi          2%
15:30:39 DEBUG --- stderr ---
15:30:39 DEBUG
15:30:40 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
15:30:40 WARNING Response is NONE
15:30:40 DEBUG Exception is preset. Setting retry_loop to true
15:30:40 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-7: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 15:30:41 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:30:41 WARNING Response is NONE 15:30:41 DEBUG Exception is preset. Setting retry_loop to true 15:30:41 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:30:42 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:30:42 WARNING Response is NONE 15:30:42 DEBUG Exception is preset. Setting retry_loop to true 15:30:42 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:30:43 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:30:43 WARNING Response is NONE 15:30:43 DEBUG Exception is preset. Setting retry_loop to true 15:30:43 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
15:30:45 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:30:45 WARNING Response is NONE 15:30:45 DEBUG Exception is preset. Setting retry_loop to true 15:30:45 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:30:46 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:30:46 WARNING Response is NONE 15:30:46 DEBUG Exception is preset. Setting retry_loop to true 15:30:46 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-8: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 15:30:47 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:30:47 WARNING Response is NONE 15:30:47 DEBUG Exception is preset. Setting retry_loop to true 15:30:47 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
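Note: each of these worker threads ultimately dies with TypeError: 'LodestarLogger' object is not callable, a secondary bug in the error handler at monitoring.py line 315: the except branch calls the logger object itself, self.logger(...), instead of one of its logging methods, so the original FailException is never reported cleanly. The same failure mode can be reproduced with the standard logging module; LodestarLogger's real API is not visible in this log, so the fix shown is an assumption.

    # Reproduces the secondary failure: calling a logger object instead of a logging method.
    # logging.Logger stands in for LodestarLogger, which is not available here.
    import logging

    logging.basicConfig(level=logging.WARNING)
    logger = logging.getLogger("lodemon-demo")

    query, e = "sum(rate(...))by(pod)", "Failed to obtain response from server..."

    try:
        logger(f'Query: {query} failed with: {e}')         # same pattern as monitoring.py:315
    except TypeError as exc:
        print(exc)                                          # "'Logger' object is not callable"

    logger.warning('Query: %s failed with: %s', query, e)   # assumed fix: call a logging method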
15:30:52 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:30:52 WARNING Response is NONE 15:30:52 DEBUG Exception is preset. Setting retry_loop to true 15:30:52 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:30:53 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:30:53 WARNING Response is NONE 15:30:53 DEBUG Exception is preset. Setting retry_loop to true 15:30:53 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:30:54 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:30:54 WARNING Response is NONE 15:30:54 DEBUG Exception is preset. Setting retry_loop to true 15:30:54 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:30:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:30:56 WARNING Response is NONE 15:30:56 DEBUG Exception is preset. Setting retry_loop to true 15:30:56 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-16: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 15:30:58 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:30:58 WARNING Response is NONE 15:30:58 DEBUG Exception is preset. Setting retry_loop to true 15:30:58 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:31:01 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:31:01 WARNING Response is NONE 15:31:01 DEBUG Exception is preset. Setting retry_loop to true 15:31:01 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:31:03 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:31:03 WARNING Response is NONE 15:31:03 DEBUG Exception is preset. Setting retry_loop to true 15:31:03 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:31:05 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:31:05 WARNING Response is NONE 15:31:05 DEBUG Exception is preset. Setting retry_loop to true 15:31:05 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
Exception in thread Thread-11: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 15:31:05 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:31:05 WARNING Response is NONE 15:31:05 DEBUG Exception is preset. Setting retry_loop to true 15:31:05 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:31:10 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:31:10 WARNING Response is NONE 15:31:10 DEBUG Exception is preset. Setting retry_loop to true 15:31:10 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:31:12 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:31:12 WARNING Response is NONE 15:31:12 DEBUG Exception is preset. Setting retry_loop to true 15:31:12 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
Exception in thread Thread-3: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 15:31:14 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:31:14 WARNING Response is NONE 15:31:14 DEBUG Exception is preset. Setting retry_loop to true 15:31:14 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-14: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 15:31:17 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 15:31:17 WARNING Response is NONE 15:31:17 DEBUG Exception is preset. Setting retry_loop to true 15:31:17 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:31:21 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:31:21 WARNING Response is NONE 15:31:21 DEBUG Exception is preset. Setting retry_loop to true 15:31:21 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-20: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 15:31:28 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:31:28 WARNING Response is NONE 15:31:28 DEBUG Exception is preset. Setting retry_loop to true 15:31:28 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:31:39 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:31:39 WARNING Response is NONE 15:31:39 DEBUG Exception is preset. Setting retry_loop to true 15:31:39 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
Exception in thread Thread-17:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
15:31:39 INFO
15:31:39 INFO [loop_until]: kubectl --namespace=xlou top pods
15:31:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
15:31:39 INFO
15:31:39 INFO [loop_until]: kubectl --namespace=xlou top node
15:31:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
15:31:39 INFO [loop_until]: OK (rc = 0)
15:31:39 DEBUG --- stdout ---
15:31:39 DEBUG NAME                           CPU(cores)   MEMORY(bytes)
               admin-ui-587fc66dd5-s976l      1m           4Mi
               am-55f77847b7-klhnq            6m           2352Mi
               am-55f77847b7-sgmd6            13m          2710Mi
               am-55f77847b7-wq5w5            13m          4389Mi
               ds-cts-0                       10m          348Mi
               ds-cts-1                       10m          365Mi
               ds-cts-2                       6m           355Mi
               ds-idrepo-0                    20m          10348Mi
               ds-idrepo-1                    18m          10355Mi
               ds-idrepo-2                    29m          10317Mi
               end-user-ui-6845bc78c7-9zthp   1m           4Mi
               idm-65858d8c4c-5kkq9           10m          1435Mi
               idm-65858d8c4c-gdv6b           7m           1589Mi
               lodemon-86d6dfd886-rxdp4       3m           65Mi
               login-ui-74d6fb46c-2hbvv       1m           3Mi
               overseer-0-64c9959746-2jz9t    1m           48Mi
15:31:39 DEBUG --- stderr ---
15:31:39 DEBUG
15:31:39 INFO [loop_until]: OK (rc = 0)
15:31:39 DEBUG --- stdout ---
15:31:39 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
               gke-xlou-cdm-default-pool-f05840a3-2nsn   74m          0%     1363Mi          2%
               gke-xlou-cdm-default-pool-f05840a3-5pbc   62m          0%     3381Mi          5%
               gke-xlou-cdm-default-pool-f05840a3-976h   75m          0%     5522Mi          9%
               gke-xlou-cdm-default-pool-f05840a3-9p4b   68m          0%     3872Mi          6%
               gke-xlou-cdm-default-pool-f05840a3-bf2g   72m          0%     2916Mi          4%
               gke-xlou-cdm-default-pool-f05840a3-h81k   125m         0%     2115Mi          3%
               gke-xlou-cdm-default-pool-f05840a3-tnc9   72m          0%     2697Mi          4%
               gke-xlou-cdm-ds-32e4dcb1-1l6p             58m          0%     1081Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-4z9d             60m          0%     1091Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-8bsn             68m          0%     11015Mi         18%
               gke-xlou-cdm-ds-32e4dcb1-b374             68m          0%     10996Mi         18%
               gke-xlou-cdm-ds-32e4dcb1-n920             53m          0%     1087Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-x4wx             82m          0%     10950Mi         18%
               gke-xlou-cdm-frontend-a8771548-k40m       67m          0%     1631Mi          2%
15:31:39 DEBUG --- stderr ---
15:31:39 DEBUG
15:31:50 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')).
Checking if error is transient one 15:31:50 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 15:31:50 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 15:31:50 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 15:31:50 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 15:31:50 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 15:31:50 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). 
Checking if error is transient one 15:31:50 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 15:31:50 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 15:31:50 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 15:31:50 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 15:31:50 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:31:50 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 15:31:50 WARNING Response is NONE 15:31:50 WARNING Response is NONE 15:31:50 WARNING Response is NONE 15:31:50 WARNING Response is NONE 15:31:50 WARNING Response is NONE 15:31:50 WARNING Response is NONE 15:31:50 WARNING Response is NONE 15:31:50 WARNING Response is NONE 15:31:50 WARNING Response is NONE 15:31:50 WARNING Response is NONE 15:31:50 WARNING Response is NONE 15:31:50 WARNING Response is NONE 15:31:50 WARNING Response is NONE 15:31:50 DEBUG Exception is preset. Setting retry_loop to true 15:31:50 DEBUG Exception is preset. Setting retry_loop to true 15:31:50 DEBUG Exception is preset. 
Setting retry_loop to true 15:31:50 DEBUG Exception is preset. Setting retry_loop to true 15:31:50 DEBUG Exception is preset. Setting retry_loop to true 15:31:50 DEBUG Exception is preset. Setting retry_loop to true 15:31:50 DEBUG Exception is preset. Setting retry_loop to true 15:31:50 DEBUG Exception is preset. Setting retry_loop to true 15:31:50 DEBUG Exception is preset. Setting retry_loop to true 15:31:50 DEBUG Exception is preset. Setting retry_loop to true 15:31:50 DEBUG Exception is preset. Setting retry_loop to true 15:31:50 DEBUG Exception is preset. Setting retry_loop to true 15:31:50 DEBUG Exception is preset. Setting retry_loop to true 15:31:50 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:31:50 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:31:50 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:31:50 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:31:50 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:31:50 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:31:50 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:31:50 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:31:50 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:31:50 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:31:50 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:31:50 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:31:50 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:32:01 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:32:01 WARNING Response is NONE 15:32:01 DEBUG Exception is preset. Setting retry_loop to true 15:32:01 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:32:03 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 15:32:03 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:32:03 WARNING Response is NONE 15:32:03 WARNING Response is NONE 15:32:03 DEBUG Exception is preset. Setting retry_loop to true 15:32:03 DEBUG Exception is preset. Setting retry_loop to true 15:32:03 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:32:03 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:32:08 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:32:08 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:32:08 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:32:08 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:32:08 WARNING Response is NONE 15:32:08 WARNING Response is NONE 15:32:08 WARNING Response is NONE 15:32:08 WARNING Response is NONE 15:32:08 DEBUG Exception is preset. Setting retry_loop to true 15:32:08 DEBUG Exception is preset. Setting retry_loop to true 15:32:08 DEBUG Exception is preset. Setting retry_loop to true 15:32:08 DEBUG Exception is preset. Setting retry_loop to true 15:32:08 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:32:08 WARNING We received known exception. 
Trying to recover, sleeping for 10 secs before retry... 15:32:08 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:32:08 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:32:13 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:32:13 WARNING Response is NONE 15:32:13 DEBUG Exception is preset. Setting retry_loop to true 15:32:13 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:32:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:32:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:32:15 WARNING Response is NONE 15:32:15 WARNING Response is NONE 15:32:15 DEBUG Exception is preset. Setting retry_loop to true 15:32:15 DEBUG Exception is preset. Setting retry_loop to true 15:32:15 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:32:15 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:32:16 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:32:16 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:32:16 WARNING Response is NONE 15:32:16 WARNING Response is NONE 15:32:16 DEBUG Exception is preset. Setting retry_loop to true 15:32:16 DEBUG Exception is preset. Setting retry_loop to true 15:32:16 WARNING We received known exception. 
Trying to recover, sleeping for 10 secs before retry... 15:32:16 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:32:19 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:32:19 WARNING Response is NONE 15:32:19 DEBUG Exception is preset. Setting retry_loop to true 15:32:19 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:32:21 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:32:21 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:32:21 WARNING Response is NONE 15:32:21 WARNING Response is NONE 15:32:21 DEBUG Exception is preset. Setting retry_loop to true 15:32:21 DEBUG Exception is preset. Setting retry_loop to true 15:32:21 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:32:21 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:32:24 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:32:24 WARNING Response is NONE 15:32:24 DEBUG Exception is preset. Setting retry_loop to true 15:32:24 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:32:26 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:32:26 WARNING Response is NONE 15:32:26 DEBUG Exception is preset. 
Setting retry_loop to true 15:32:26 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:32:27 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:32:27 WARNING Response is NONE 15:32:27 DEBUG Exception is preset. Setting retry_loop to true 15:32:27 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:32:28 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:32:28 WARNING Response is NONE 15:32:28 DEBUG Exception is preset. Setting retry_loop to true 15:32:28 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:32:30 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:32:30 WARNING Response is NONE 15:32:30 DEBUG Exception is preset. Setting retry_loop to true 15:32:30 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:32:32 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:32:32 WARNING Response is NONE 15:32:32 DEBUG Exception is preset. Setting retry_loop to true 15:32:32 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:32:33 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 15:32:33 WARNING Response is NONE 15:32:33 DEBUG Exception is preset. Setting retry_loop to true 15:32:33 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:32:35 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:32:35 WARNING Response is NONE 15:32:35 DEBUG Exception is preset. Setting retry_loop to true 15:32:35 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:32:37 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:32:37 WARNING Response is NONE 15:32:37 DEBUG Exception is preset. Setting retry_loop to true 15:32:37 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:32:39 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:32:39 WARNING Response is NONE 15:32:39 DEBUG Exception is preset. Setting retry_loop to true 15:32:39 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
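The block of WARNING entries above shows the lodemon monitoring threads repeatedly failing to reach the Prometheus API at prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090 (connection refused), then sleeping 10 seconds and retrying. The query strings are URL-encoded PromQL; the am_authentication_count query, for example, decodes to sum(rate(am_authentication_count{namespace='xlou'}[60s]))by(pod). The sketch below shows roughly how such an instant query would be issued and retried against the /api/v1/query endpoint; it uses the requests library and an illustrative query_prometheus helper, and is not the Lodestar HttpCmd code.

    # Illustrative sketch only -- not the Lodestar HttpCmd implementation.
    # Assumes the `requests` package; function and variable names are hypothetical.
    import time
    import requests

    PROMETHEUS = "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090"

    def query_prometheus(promql, ts, retries=5, sleep_secs=10):
        """Run an instant query against /api/v1/query, retrying transient connection errors."""
        for attempt in range(1, retries + 1):
            try:
                # `params` is URL-encoded by requests, producing query strings like the
                # %28...%29 forms seen in the log above.
                resp = requests.get(f"{PROMETHEUS}/api/v1/query",
                                    params={"query": promql, "time": ts},
                                    timeout=30)
                resp.raise_for_status()
                return resp.json()
            except requests.ConnectionError:
                if attempt == retries:
                    raise                    # give up after the configured number of attempts
                time.sleep(sleep_secs)       # matches the 10 s back-off visible in the log

    # The encoded query from the log, decoded back to PromQL:
    result = query_prometheus("sum(rate(am_authentication_count{namespace='xlou'}[60s]))by(pod)",
                              ts=1691850579)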
15:32:39 INFO
15:32:39 INFO [loop_until]: kubectl --namespace=xlou top pods
15:32:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
15:32:39 INFO [loop_until]: OK (rc = 0)
15:32:39 DEBUG --- stdout ---
15:32:39 DEBUG
NAME                           CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-s976l      1m           4Mi
am-55f77847b7-klhnq            7m           2353Mi
am-55f77847b7-sgmd6            13m          2710Mi
am-55f77847b7-wq5w5            11m          4389Mi
ds-cts-0                       13m          348Mi
ds-cts-1                       8m           365Mi
ds-cts-2                       6m           356Mi
ds-idrepo-0                    19m          10348Mi
ds-idrepo-1                    27m          10356Mi
ds-idrepo-2                    38m          10315Mi
end-user-ui-6845bc78c7-9zthp   1m           4Mi
idm-65858d8c4c-5kkq9           8m           1435Mi
idm-65858d8c4c-gdv6b           6m           1600Mi
lodemon-86d6dfd886-rxdp4       6m           65Mi
login-ui-74d6fb46c-2hbvv       1m           3Mi
overseer-0-64c9959746-2jz9t    2m           48Mi
15:32:39 DEBUG --- stderr ---
15:32:39 DEBUG
15:32:39 INFO
15:32:39 INFO [loop_until]: kubectl --namespace=xlou top node
15:32:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
15:32:39 INFO [loop_until]: OK (rc = 0)
15:32:39 DEBUG --- stdout ---
15:32:39 DEBUG
NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   74m          0%     1361Mi          2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   67m          0%     3376Mi          5%
gke-xlou-cdm-default-pool-f05840a3-976h   71m          0%     5520Mi          9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   72m          0%     3886Mi          6%
gke-xlou-cdm-default-pool-f05840a3-bf2g   74m          0%     2921Mi          4%
gke-xlou-cdm-default-pool-f05840a3-h81k   125m         0%     2118Mi          3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   71m          0%     2698Mi          4%
gke-xlou-cdm-ds-32e4dcb1-1l6p             63m          0%     1084Mi          1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             58m          0%     1086Mi          1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             69m          0%     11017Mi         18%
gke-xlou-cdm-ds-32e4dcb1-b374             76m          0%     10994Mi         18%
gke-xlou-cdm-ds-32e4dcb1-n920             50m          0%     1089Mi          1%
gke-xlou-cdm-ds-32e4dcb1-x4wx             88m          0%     10948Mi         18%
gke-xlou-cdm-frontend-a8771548-k40m       165m         1%     1733Mi          2%
15:32:39 DEBUG --- stderr ---
15:32:39 DEBUG
15:32:41 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
15:32:41 WARNING Response is NONE
15:32:41 DEBUG Exception is preset. Setting retry_loop to true
15:32:41 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
15:32:42 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
15:32:42 WARNING Response is NONE
15:32:42 DEBUG Exception is preset. Setting retry_loop to true
15:32:42 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
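The [loop_until] entries above wrap a shell command (here kubectl --namespace=xlou top pods / top node) and re-run it until the return code is in the expected set or max_time expires, checking every interval seconds. A rough, assumed approximation of that wrapper is sketched below; the real Lodestar helper is not shown in this log, so only the name and parameters are taken from the output above.

    # Rough approximation of the [loop_until] wrapper seen in the log; the real
    # Lodestar helper is not included here, so treat this as an illustrative sketch.
    import subprocess
    import time

    def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,)):
        """Re-run `cmd` until its return code is in `expected_rc` or `max_time` seconds pass."""
        deadline = time.monotonic() + max_time
        while True:
            proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
            if proc.returncode in expected_rc:
                return proc.returncode, proc.stdout, proc.stderr   # the "OK (rc = 0)" case
            if time.monotonic() >= deadline:
                return proc.returncode, proc.stdout, proc.stderr   # timed out, return last result
            time.sleep(interval)

    rc, out, err = loop_until("kubectl --namespace=xlou top pods")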
15:32:43 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:32:43 WARNING Response is NONE 15:32:43 DEBUG Exception is preset. Setting retry_loop to true 15:32:43 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:32:44 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:32:44 WARNING Response is NONE 15:32:44 DEBUG Exception is preset. Setting retry_loop to true 15:32:44 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:32:46 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:32:46 WARNING Response is NONE 15:32:46 DEBUG Exception is preset. Setting retry_loop to true 15:32:46 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-15: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 15:32:48 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:32:48 WARNING Response is NONE 15:32:48 DEBUG Exception is preset. Setting retry_loop to true 15:32:48 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-21: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 15:32:50 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:32:50 WARNING Response is NONE 15:32:50 DEBUG Exception is preset. Setting retry_loop to true 15:32:50 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
Exception in thread Thread-18: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 15:32:53 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:32:53 WARNING Response is NONE 15:32:53 DEBUG Exception is preset. Setting retry_loop to true 15:32:53 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:32:54 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:32:54 WARNING Response is NONE 15:32:54 DEBUG Exception is preset. Setting retry_loop to true 15:32:54 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-13: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 15:32:55 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:32:55 WARNING Response is NONE 15:32:55 DEBUG Exception is preset. Setting retry_loop to true 15:32:55 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:32:58 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:32:58 WARNING Response is NONE 15:32:58 DEBUG Exception is preset. Setting retry_loop to true 15:32:58 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-29: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 15:33:03 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:33:03 WARNING Response is NONE 15:33:03 DEBUG Exception is preset. Setting retry_loop to true 15:33:03 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:33:04 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:33:04 WARNING Response is NONE 15:33:04 DEBUG Exception is preset. Setting retry_loop to true 15:33:04 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:33:05 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:33:05 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:33:05 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 15:33:05 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:33:05 WARNING Response is NONE 15:33:05 WARNING Response is NONE 15:33:05 WARNING Response is NONE 15:33:05 WARNING Response is NONE 15:33:05 DEBUG Exception is preset. Setting retry_loop to true 15:33:05 DEBUG Exception is preset. Setting retry_loop to true 15:33:05 DEBUG Exception is preset. Setting retry_loop to true 15:33:05 DEBUG Exception is preset. Setting retry_loop to true 15:33:05 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:33:05 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:33:05 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:33:05 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:33:06 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:33:06 WARNING Response is NONE 15:33:06 DEBUG Exception is preset. Setting retry_loop to true 15:33:06 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-26: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 15:33:09 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:33:09 WARNING Response is NONE 15:33:09 DEBUG Exception is preset. Setting retry_loop to true 15:33:09 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:33:14 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:33:14 WARNING Response is NONE 15:33:14 DEBUG Exception is preset. Setting retry_loop to true 15:33:14 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:33:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:33:15 WARNING Response is NONE 15:33:15 DEBUG Exception is preset. Setting retry_loop to true 15:33:15 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-10: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 15:33:16 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:33:16 WARNING Response is NONE 15:33:16 DEBUG Exception is preset. Setting retry_loop to true 15:33:16 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:33:18 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:33:18 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:33:18 WARNING Response is NONE 15:33:18 WARNING Response is NONE 15:33:18 DEBUG Exception is preset. Setting retry_loop to true 15:33:18 DEBUG Exception is preset. Setting retry_loop to true 15:33:18 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:33:18 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:33:20 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:33:20 WARNING Response is NONE 15:33:20 DEBUG Exception is preset. Setting retry_loop to true 15:33:20 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
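Each "Exception in thread Thread-N" block in this log records the same two-stage failure: HttpCmd.request_cmd gives up after its retries and raises FailException('Failed to obtain response from server...'), and the handler at monitoring.py line 315 then calls self.logger(...) as if it were a function, which raises TypeError: 'LodestarLogger' object is not callable and kills that monitoring thread. Assuming LodestarLogger simply wraps a standard logger and does not define __call__ (an inference from the traceback, not the actual Lodestar source), the sketch below reproduces the error and shows the kind of one-line fix, i.e. calling a logging method rather than the object itself.

    # Minimal reproduction of the TypeError in the tracebacks above, assuming
    # LodestarLogger is a plain wrapper around logging.Logger (an inference,
    # not the actual Lodestar class).
    import logging

    logging.basicConfig(level=logging.WARNING)

    class LodestarLogger:
        def __init__(self, name="lodemon"):
            self._log = logging.getLogger(name)

        def warning(self, msg):
            self._log.warning(msg)

    logger = LodestarLogger()

    try:
        logger("Query: ... failed with: connection refused")   # what monitoring.py line 315 does
    except TypeError as e:
        print(e)                      # "'LodestarLogger' object is not callable"

    logger.warning("Query: ... failed with: connection refused")  # likely intended call
    # An alternative fix would be to give LodestarLogger a __call__ method that
    # forwards to one of its logging methods.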
15:33:22 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:33:22 WARNING Response is NONE 15:33:22 DEBUG Exception is preset. Setting retry_loop to true 15:33:22 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:33:25 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:33:25 WARNING Response is NONE 15:33:25 DEBUG Exception is preset. Setting retry_loop to true 15:33:25 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-9: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 15:33:27 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:33:27 WARNING Response is NONE 15:33:27 DEBUG Exception is preset. Setting retry_loop to true 15:33:27 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
15:33:29 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:33:29 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:33:29 WARNING Response is NONE 15:33:29 WARNING Response is NONE 15:33:29 DEBUG Exception is preset. Setting retry_loop to true 15:33:29 DEBUG Exception is preset. Setting retry_loop to true 15:33:29 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:33:29 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:33:31 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:33:31 WARNING Response is NONE 15:33:31 DEBUG Exception is preset. Setting retry_loop to true 15:33:31 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-6: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 15:33:33 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:33:33 WARNING Response is NONE 15:33:33 DEBUG Exception is preset. Setting retry_loop to true 15:33:33 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:33:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:33:38 WARNING Response is NONE 15:33:38 DEBUG Exception is preset. Setting retry_loop to true 15:33:38 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
15:33:39 INFO
15:33:39 INFO [loop_until]: kubectl --namespace=xlou top pods
15:33:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
15:33:39 INFO
15:33:39 INFO [loop_until]: kubectl --namespace=xlou top node
15:33:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
15:33:40 INFO [loop_until]: OK (rc = 0)
15:33:40 DEBUG --- stdout ---
15:33:40 DEBUG
NAME                           CPU(cores)   MEMORY(bytes)
admin-ui-587fc66dd5-s976l      1m           4Mi
am-55f77847b7-klhnq            14m          2353Mi
am-55f77847b7-sgmd6            10m          2721Mi
am-55f77847b7-wq5w5            31m          4390Mi
ds-cts-0                       7m           348Mi
ds-cts-1                       6m           366Mi
ds-cts-2                       8m           356Mi
ds-idrepo-0                    16m          10350Mi
ds-idrepo-1                    26m          10359Mi
ds-idrepo-2                    26m          10317Mi
end-user-ui-6845bc78c7-9zthp   1m           4Mi
idm-65858d8c4c-5kkq9           7m           1437Mi
idm-65858d8c4c-gdv6b           7m           1611Mi
lodemon-86d6dfd886-rxdp4       3m           65Mi
login-ui-74d6fb46c-2hbvv       1m           3Mi
overseer-0-64c9959746-2jz9t    1m           98Mi
15:33:40 DEBUG --- stderr ---
15:33:40 DEBUG
15:33:40 INFO [loop_until]: OK (rc = 0)
15:33:40 DEBUG --- stdout ---
15:33:40 DEBUG
NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn   76m          0%     1361Mi          2%
gke-xlou-cdm-default-pool-f05840a3-5pbc   70m          0%     3380Mi          5%
gke-xlou-cdm-default-pool-f05840a3-976h   87m          0%     5525Mi          9%
gke-xlou-cdm-default-pool-f05840a3-9p4b   68m          0%     3886Mi          6%
gke-xlou-cdm-default-pool-f05840a3-bf2g   67m          0%     2934Mi          5%
gke-xlou-cdm-default-pool-f05840a3-h81k   126m         0%     2122Mi          3%
gke-xlou-cdm-default-pool-f05840a3-tnc9   70m          0%     2702Mi          4%
gke-xlou-cdm-ds-32e4dcb1-1l6p             58m          0%     1084Mi          1%
gke-xlou-cdm-ds-32e4dcb1-4z9d             53m          0%     1088Mi          1%
gke-xlou-cdm-ds-32e4dcb1-8bsn             68m          0%     11020Mi         18%
gke-xlou-cdm-ds-32e4dcb1-b374             74m          0%     10998Mi         18%
gke-xlou-cdm-ds-32e4dcb1-n920             55m          0%     1087Mi          1%
gke-xlou-cdm-ds-32e4dcb1-x4wx             78m          0%     10949Mi         18%
gke-xlou-cdm-frontend-a8771548-k40m       65m          0%     1627Mi          2%
15:33:40 DEBUG --- stderr ---
15:33:40 DEBUG
15:33:40 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
15:33:40 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
15:33:40 WARNING Response is NONE
15:33:40 WARNING Response is NONE
15:33:40 DEBUG Exception is preset. Setting retry_loop to true
15:33:40 DEBUG Exception is preset. Setting retry_loop to true
15:33:40 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
15:33:40 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
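For reading the kubectl top snapshots in this log: pod and node CPU is reported in millicores (1000m = one core), memory in MiB, and the node percentages are relative to each node's allocatable capacity. As a back-of-the-envelope check (the allocatable figure is inferred from the reported percentage, not printed by kubectl), a ds node showing 11020Mi at 18% implies roughly 60 GiB of allocatable memory:

    # Rough unit check for the node table above; allocatable capacity is inferred
    # from the reported percentage (an assumption), not read from the cluster.
    used_mib = 11020            # gke-xlou-cdm-ds-32e4dcb1-8bsn, MEMORY(bytes) column
    reported_pct = 18           # MEMORY% column
    allocatable_gib = used_mib / (reported_pct / 100) / 1024
    print(f"~{allocatable_gib:.0f} GiB allocatable")   # prints ~60 GiB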
15:33:44 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:33:44 WARNING Response is NONE 15:33:44 DEBUG Exception is preset. Setting retry_loop to true 15:33:44 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 15:33:49 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:33:49 WARNING Response is NONE 15:33:49 DEBUG Exception is preset. Setting retry_loop to true 15:33:49 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-27: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 15:33:51 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 15:33:51 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one
15:33:51 WARNING Response is NONE
15:33:51 WARNING Response is NONE
15:33:51 DEBUG Exception is preset. Setting retry_loop to true
15:33:51 DEBUG Exception is preset. Setting retry_loop to true
15:33:51 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
15:33:51 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-12:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
Exception in thread Thread-19:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
15:33:55 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691850579 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
15:33:55 WARNING Response is NONE
15:33:55 DEBUG Exception is preset. Setting retry_loop to true
15:33:55 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-22: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 15:34:40 INFO 15:34:40 INFO [loop_until]: kubectl --namespace=xlou top pods 15:34:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:34:40 INFO 15:34:40 INFO [loop_until]: kubectl --namespace=xlou top node 15:34:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:34:40 INFO [loop_until]: OK (rc = 0) 15:34:40 DEBUG --- stdout --- 15:34:40 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 6m 2353Mi am-55f77847b7-sgmd6 12m 2722Mi am-55f77847b7-wq5w5 8m 4391Mi ds-cts-0 78m 349Mi ds-cts-1 7m 365Mi ds-cts-2 8m 356Mi ds-idrepo-0 303m 10354Mi ds-idrepo-1 22m 10362Mi ds-idrepo-2 23m 10317Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 53m 1509Mi idm-65858d8c4c-gdv6b 7m 1624Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 1m 98Mi 15:34:40 DEBUG --- stderr --- 15:34:40 DEBUG 15:34:40 INFO [loop_until]: OK (rc = 0) 15:34:40 DEBUG --- stdout --- 15:34:40 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1363Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 3380Mi 5% gke-xlou-cdm-default-pool-f05840a3-976h 79m 0% 5532Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 196m 1% 3914Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 69m 0% 2943Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2119Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 73m 0% 2701Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1090Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 74m 0% 11022Mi 18% gke-xlou-cdm-ds-32e4dcb1-b374 133m 0% 11005Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1091Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 75m 0% 10949Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 69m 0% 1630Mi 2% 15:34:40 DEBUG --- stderr --- 15:34:40 DEBUG 15:35:40 INFO 15:35:40 INFO [loop_until]: kubectl --namespace=xlou top pods 15:35:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:35:40 INFO 15:35:40 INFO [loop_until]: kubectl --namespace=xlou top node 15:35:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:35:40 INFO [loop_until]: OK (rc = 0) 15:35:40 DEBUG --- stdout --- 15:35:40 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 21m 2447Mi am-55f77847b7-sgmd6 20m 2745Mi am-55f77847b7-wq5w5 10m 4399Mi ds-cts-0 367m 350Mi ds-cts-1 
215m 367Mi ds-cts-2 186m 359Mi ds-idrepo-0 2921m 12738Mi ds-idrepo-1 84m 10365Mi ds-idrepo-2 172m 10330Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 8m 1484Mi idm-65858d8c4c-gdv6b 11m 1681Mi lodemon-86d6dfd886-rxdp4 1m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 1097m 372Mi 15:35:40 DEBUG --- stderr --- 15:35:40 DEBUG 15:35:40 INFO [loop_until]: OK (rc = 0) 15:35:40 DEBUG --- stdout --- 15:35:40 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1362Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 80m 0% 3473Mi 5% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 5529Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 70m 0% 3911Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 2999Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 130m 0% 2119Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 2749Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 485m 3% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 225m 1% 1092Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 3071m 19% 13316Mi 22% gke-xlou-cdm-ds-32e4dcb1-b374 142m 0% 11010Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 195m 1% 1090Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 225m 1% 10960Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1146m 7% 1899Mi 3% 15:35:40 DEBUG --- stderr --- 15:35:40 DEBUG 15:36:40 INFO 15:36:40 INFO [loop_until]: kubectl --namespace=xlou top pods 15:36:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:36:40 INFO 15:36:40 INFO [loop_until]: kubectl --namespace=xlou top node 15:36:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:36:40 INFO [loop_until]: OK (rc = 0) 15:36:40 DEBUG --- stdout --- 15:36:40 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 27m 2458Mi am-55f77847b7-sgmd6 11m 2745Mi am-55f77847b7-wq5w5 11m 4399Mi ds-cts-0 9m 353Mi ds-cts-1 7m 369Mi ds-cts-2 6m 358Mi ds-idrepo-0 2649m 13375Mi ds-idrepo-1 31m 10365Mi ds-idrepo-2 31m 10330Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 9m 1487Mi idm-65858d8c4c-gdv6b 7m 1681Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 1056m 372Mi 15:36:40 DEBUG --- stderr --- 15:36:40 DEBUG 15:36:40 INFO [loop_until]: OK (rc = 0) 15:36:40 DEBUG --- stdout --- 15:36:40 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1362Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 87m 0% 3484Mi 5% gke-xlou-cdm-default-pool-f05840a3-976h 71m 0% 5534Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 3910Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 3002Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2125Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 2751Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1092Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2732m 17% 13951Mi 23% gke-xlou-cdm-ds-32e4dcb1-b374 80m 0% 11008Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 51m 0% 1090Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 82m 0% 10963Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1134m 7% 1899Mi 3% 15:36:40 DEBUG --- stderr --- 15:36:40 DEBUG 15:37:40 INFO 15:37:40 INFO [loop_until]: kubectl --namespace=xlou top pods 15:37:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:37:40 INFO 15:37:40 INFO [loop_until]: kubectl --namespace=xlou top node 15:37:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:37:40 INFO [loop_until]: OK (rc = 0) 15:37:40 DEBUG --- stdout --- 15:37:40 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi 
am-55f77847b7-klhnq 12m 2468Mi am-55f77847b7-sgmd6 16m 2751Mi am-55f77847b7-wq5w5 10m 4399Mi ds-cts-0 10m 350Mi ds-cts-1 7m 368Mi ds-cts-2 7m 359Mi ds-idrepo-0 2727m 13365Mi ds-idrepo-1 11m 10373Mi ds-idrepo-2 24m 10330Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 10m 1487Mi idm-65858d8c4c-gdv6b 7m 1681Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 1171m 373Mi 15:37:40 DEBUG --- stderr --- 15:37:40 DEBUG 15:37:40 INFO [loop_until]: OK (rc = 0) 15:37:40 DEBUG --- stdout --- 15:37:40 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1360Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 70m 0% 3495Mi 5% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 5535Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 70m 0% 3916Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 3002Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 125m 0% 2123Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 2751Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1090Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 2715m 17% 13946Mi 23% gke-xlou-cdm-ds-32e4dcb1-b374 61m 0% 11018Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 75m 0% 10965Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1228m 7% 1898Mi 3% 15:37:40 DEBUG --- stderr --- 15:37:40 DEBUG 15:38:40 INFO 15:38:40 INFO [loop_until]: kubectl --namespace=xlou top pods 15:38:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:38:40 INFO 15:38:40 INFO [loop_until]: kubectl --namespace=xlou top node 15:38:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:38:40 INFO [loop_until]: OK (rc = 0) 15:38:40 DEBUG --- stdout --- 15:38:40 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 15m 2482Mi am-55f77847b7-sgmd6 12m 2751Mi am-55f77847b7-wq5w5 10m 4399Mi ds-cts-0 7m 350Mi ds-cts-1 6m 368Mi ds-cts-2 5m 359Mi ds-idrepo-0 3067m 13546Mi ds-idrepo-1 23m 10374Mi ds-idrepo-2 24m 10332Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 10m 1487Mi idm-65858d8c4c-gdv6b 8m 1681Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 1209m 373Mi 15:38:40 DEBUG --- stderr --- 15:38:40 DEBUG 15:38:41 INFO [loop_until]: OK (rc = 0) 15:38:41 DEBUG --- stdout --- 15:38:41 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1362Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 73m 0% 3509Mi 5% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 5532Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 70m 0% 3918Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 2999Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 122m 0% 2125Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 2751Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 3102m 19% 14120Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 70m 0% 11016Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 74m 0% 10964Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1280m 8% 1901Mi 3% 15:38:41 DEBUG --- stderr --- 15:38:41 DEBUG 15:39:41 INFO 15:39:41 INFO [loop_until]: kubectl --namespace=xlou top pods 15:39:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:39:41 INFO [loop_until]: OK (rc = 0) 15:39:41 DEBUG --- stdout --- 15:39:41 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 12m 2492Mi 
am-55f77847b7-sgmd6 12m 2751Mi am-55f77847b7-wq5w5 11m 4399Mi ds-cts-0 7m 353Mi ds-cts-1 8m 369Mi ds-cts-2 7m 359Mi ds-idrepo-0 2930m 13595Mi ds-idrepo-1 15m 10375Mi ds-idrepo-2 19m 10333Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 10m 1488Mi idm-65858d8c4c-gdv6b 9m 1681Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 1207m 373Mi 15:39:41 DEBUG --- stderr --- 15:39:41 DEBUG 15:39:41 INFO 15:39:41 INFO [loop_until]: kubectl --namespace=xlou top node 15:39:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:39:41 INFO [loop_until]: OK (rc = 0) 15:39:41 DEBUG --- stdout --- 15:39:41 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1360Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 79m 0% 3517Mi 5% gke-xlou-cdm-default-pool-f05840a3-976h 70m 0% 5529Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 72m 0% 3929Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 3003Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 124m 0% 2122Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 75m 0% 2749Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1094Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 3037m 19% 14165Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 62m 0% 11017Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 72m 0% 10966Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1313m 8% 1902Mi 3% 15:39:41 DEBUG --- stderr --- 15:39:41 DEBUG 15:40:41 INFO 15:40:41 INFO [loop_until]: kubectl --namespace=xlou top pods 15:40:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:40:41 INFO [loop_until]: OK (rc = 0) 15:40:41 DEBUG --- stdout --- 15:40:41 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 22m 2500Mi am-55f77847b7-sgmd6 10m 2751Mi am-55f77847b7-wq5w5 9m 4399Mi ds-cts-0 7m 352Mi ds-cts-1 7m 369Mi ds-cts-2 5m 359Mi ds-idrepo-0 13m 13593Mi ds-idrepo-1 16m 10376Mi ds-idrepo-2 24m 10339Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 7m 1488Mi idm-65858d8c4c-gdv6b 9m 1681Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 1m 98Mi 15:40:41 DEBUG --- stderr --- 15:40:41 DEBUG 15:40:41 INFO 15:40:41 INFO [loop_until]: kubectl --namespace=xlou top node 15:40:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:40:41 INFO [loop_until]: OK (rc = 0) 15:40:41 DEBUG --- stdout --- 15:40:41 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1362Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 82m 0% 3530Mi 6% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 5534Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 3915Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 3002Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 124m 0% 2120Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 75m 0% 2756Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1094Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 62m 0% 14166Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 66m 0% 11021Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 51m 0% 1092Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 80m 0% 10974Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 69m 0% 1629Mi 2% 15:40:41 DEBUG --- stderr --- 15:40:41 DEBUG 15:41:41 INFO 15:41:41 INFO [loop_until]: kubectl --namespace=xlou top pods 15:41:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:41:41 INFO [loop_until]: OK (rc = 0) 15:41:41 DEBUG --- stdout --- 15:41:41 DEBUG 
NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 12m 2512Mi am-55f77847b7-sgmd6 9m 2752Mi am-55f77847b7-wq5w5 19m 4400Mi ds-cts-0 8m 352Mi ds-cts-1 9m 369Mi ds-cts-2 8m 359Mi ds-idrepo-0 18m 13594Mi ds-idrepo-1 2524m 11950Mi ds-idrepo-2 21m 10340Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 9m 1487Mi idm-65858d8c4c-gdv6b 6m 1685Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 910m 386Mi 15:41:41 DEBUG --- stderr --- 15:41:41 DEBUG 15:41:41 INFO 15:41:41 INFO [loop_until]: kubectl --namespace=xlou top node 15:41:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:41:41 INFO [loop_until]: OK (rc = 0) 15:41:41 DEBUG --- stdout --- 15:41:41 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1361Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 3538Mi 6% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 5532Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 3916Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 69m 0% 3007Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 132m 0% 2124Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 73m 0% 2748Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1093Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 70m 0% 14166Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2761m 17% 13185Mi 22% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 72m 0% 10976Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1096m 6% 1913Mi 3% 15:41:41 DEBUG --- stderr --- 15:41:41 DEBUG 15:42:41 INFO 15:42:41 INFO [loop_until]: kubectl --namespace=xlou top pods 15:42:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:42:41 INFO [loop_until]: OK (rc = 0) 15:42:41 DEBUG --- stdout --- 15:42:41 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 16m 2524Mi am-55f77847b7-sgmd6 11m 2752Mi am-55f77847b7-wq5w5 11m 4400Mi ds-cts-0 8m 352Mi ds-cts-1 7m 369Mi ds-cts-2 8m 359Mi ds-idrepo-0 14m 13594Mi ds-idrepo-1 2713m 13381Mi ds-idrepo-2 21m 10339Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 7m 1487Mi idm-65858d8c4c-gdv6b 6m 1686Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 1128m 387Mi 15:42:41 DEBUG --- stderr --- 15:42:41 DEBUG 15:42:41 INFO 15:42:41 INFO [loop_until]: kubectl --namespace=xlou top node 15:42:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:42:41 INFO [loop_until]: OK (rc = 0) 15:42:41 DEBUG --- stdout --- 15:42:41 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1362Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 74m 0% 3550Mi 6% gke-xlou-cdm-default-pool-f05840a3-976h 70m 0% 5532Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 3918Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 3006Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 127m 0% 2120Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 2749Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1093Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 62m 0% 14164Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2712m 17% 13943Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1095Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 75m 0% 10977Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1201m 7% 1912Mi 3% 15:42:41 DEBUG --- stderr --- 15:42:41 DEBUG 15:43:41 INFO 15:43:41 INFO [loop_until]: kubectl --namespace=xlou top pods 15:43:41 INFO [loop_until]: (max_time=180, 
interval=5, expected_rc=[0] 15:43:41 INFO [loop_until]: OK (rc = 0) 15:43:41 DEBUG --- stdout --- 15:43:41 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 12m 2536Mi am-55f77847b7-sgmd6 20m 2752Mi am-55f77847b7-wq5w5 11m 4400Mi ds-cts-0 11m 353Mi ds-cts-1 6m 369Mi ds-cts-2 7m 359Mi ds-idrepo-0 14m 13594Mi ds-idrepo-1 2823m 13346Mi ds-idrepo-2 34m 10336Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 7m 1488Mi idm-65858d8c4c-gdv6b 14m 1686Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 1219m 390Mi 15:43:41 DEBUG --- stderr --- 15:43:41 DEBUG 15:43:41 INFO 15:43:41 INFO [loop_until]: kubectl --namespace=xlou top node 15:43:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:43:41 INFO [loop_until]: OK (rc = 0) 15:43:41 DEBUG --- stdout --- 15:43:41 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1360Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 70m 0% 3561Mi 6% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 5535Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 78m 0% 3917Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 81m 0% 3007Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 125m 0% 2119Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 66m 0% 2751Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1093Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 64m 0% 14164Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2781m 17% 13924Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1091Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 91m 0% 10970Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1252m 7% 1915Mi 3% 15:43:41 DEBUG --- stderr --- 15:43:41 DEBUG 15:44:41 INFO 15:44:41 INFO [loop_until]: kubectl --namespace=xlou top pods 15:44:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:44:41 INFO [loop_until]: OK (rc = 0) 15:44:41 DEBUG --- stdout --- 15:44:41 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 12m 2546Mi am-55f77847b7-sgmd6 9m 2752Mi am-55f77847b7-wq5w5 8m 4400Mi ds-cts-0 8m 353Mi ds-cts-1 6m 369Mi ds-cts-2 11m 359Mi ds-idrepo-0 14m 13593Mi ds-idrepo-1 2996m 13436Mi ds-idrepo-2 21m 10338Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 12m 1489Mi idm-65858d8c4c-gdv6b 10m 1686Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 1283m 390Mi 15:44:41 DEBUG --- stderr --- 15:44:41 DEBUG 15:44:41 INFO 15:44:41 INFO [loop_until]: kubectl --namespace=xlou top node 15:44:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:44:41 INFO [loop_until]: OK (rc = 0) 15:44:41 DEBUG --- stdout --- 15:44:41 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1361Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 70m 0% 3573Mi 6% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 5536Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 3916Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 82m 0% 3005Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2123Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 76m 0% 2751Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1091Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 63m 0% 14164Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2969m 18% 13936Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1094Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 72m 0% 10974Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1372m 8% 1916Mi 3% 15:44:41 DEBUG --- stderr --- 15:44:41 DEBUG 15:45:41 
INFO 15:45:41 INFO [loop_until]: kubectl --namespace=xlou top pods 15:45:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:45:41 INFO [loop_until]: OK (rc = 0) 15:45:41 DEBUG --- stdout --- 15:45:41 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 16m 2557Mi am-55f77847b7-sgmd6 10m 2752Mi am-55f77847b7-wq5w5 9m 4401Mi ds-cts-0 6m 352Mi ds-cts-1 7m 369Mi ds-cts-2 10m 363Mi ds-idrepo-0 14m 13594Mi ds-idrepo-1 2853m 13598Mi ds-idrepo-2 27m 10338Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 6m 1488Mi idm-65858d8c4c-gdv6b 6m 1687Mi lodemon-86d6dfd886-rxdp4 1m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 1275m 390Mi 15:45:41 DEBUG --- stderr --- 15:45:41 DEBUG 15:45:41 INFO 15:45:41 INFO [loop_until]: kubectl --namespace=xlou top node 15:45:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:45:41 INFO [loop_until]: OK (rc = 0) 15:45:41 DEBUG --- stdout --- 15:45:41 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1363Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 71m 0% 3584Mi 6% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 5532Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 3913Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 70m 0% 3009Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 129m 0% 2126Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 2751Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1091Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 14167Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3029m 19% 14160Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1093Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 80m 0% 10976Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1403m 8% 1916Mi 3% 15:45:41 DEBUG --- stderr --- 15:45:41 DEBUG 15:46:41 INFO 15:46:41 INFO [loop_until]: kubectl --namespace=xlou top pods 15:46:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:46:41 INFO [loop_until]: OK (rc = 0) 15:46:41 DEBUG --- stdout --- 15:46:41 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 10m 2568Mi am-55f77847b7-sgmd6 9m 2752Mi am-55f77847b7-wq5w5 8m 4401Mi ds-cts-0 7m 352Mi ds-cts-1 8m 369Mi ds-cts-2 6m 363Mi ds-idrepo-0 14m 13594Mi ds-idrepo-1 30m 13608Mi ds-idrepo-2 18m 10340Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 7m 1489Mi idm-65858d8c4c-gdv6b 6m 1686Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 1m 98Mi 15:46:41 DEBUG --- stderr --- 15:46:41 DEBUG 15:46:41 INFO 15:46:41 INFO [loop_until]: kubectl --namespace=xlou top node 15:46:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:46:41 INFO [loop_until]: OK (rc = 0) 15:46:41 DEBUG --- stdout --- 15:46:41 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 68m 0% 1362Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 3593Mi 6% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 5533Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 3917Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 3008Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 125m 0% 2124Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 72m 0% 2751Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1092Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 14169Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 69m 0% 14160Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1092Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 71m 0% 10974Mi 18% 
gke-xlou-cdm-frontend-a8771548-k40m 67m 0% 1624Mi 2% 15:46:41 DEBUG --- stderr --- 15:46:41 DEBUG 15:47:41 INFO 15:47:41 INFO [loop_until]: kubectl --namespace=xlou top pods 15:47:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:47:41 INFO [loop_until]: OK (rc = 0) 15:47:41 DEBUG --- stdout --- 15:47:41 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 11m 2580Mi am-55f77847b7-sgmd6 12m 2752Mi am-55f77847b7-wq5w5 9m 4401Mi ds-cts-0 8m 353Mi ds-cts-1 7m 369Mi ds-cts-2 5m 363Mi ds-idrepo-0 23m 13594Mi ds-idrepo-1 11m 13608Mi ds-idrepo-2 2441m 11736Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 9m 1489Mi idm-65858d8c4c-gdv6b 5m 1687Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 1051m 354Mi 15:47:41 DEBUG --- stderr --- 15:47:41 DEBUG 15:47:42 INFO 15:47:42 INFO [loop_until]: kubectl --namespace=xlou top node 15:47:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:47:42 INFO [loop_until]: OK (rc = 0) 15:47:42 DEBUG --- stdout --- 15:47:42 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 70m 0% 1364Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 3605Mi 6% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 5533Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 3919Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 70m 0% 3008Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 125m 0% 2123Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 2749Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1094Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 68m 0% 14169Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 61m 0% 14163Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1092Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2525m 15% 12304Mi 20% gke-xlou-cdm-frontend-a8771548-k40m 1342m 8% 1880Mi 3% 15:47:42 DEBUG --- stderr --- 15:47:42 DEBUG 15:48:41 INFO 15:48:41 INFO [loop_until]: kubectl --namespace=xlou top pods 15:48:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:48:41 INFO [loop_until]: OK (rc = 0) 15:48:41 DEBUG --- stdout --- 15:48:41 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 10m 2592Mi am-55f77847b7-sgmd6 8m 2752Mi am-55f77847b7-wq5w5 8m 4402Mi ds-cts-0 7m 353Mi ds-cts-1 6m 369Mi ds-cts-2 5m 363Mi ds-idrepo-0 13m 13595Mi ds-idrepo-1 10m 13608Mi ds-idrepo-2 2792m 13377Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 7m 1489Mi idm-65858d8c4c-gdv6b 5m 1687Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 1045m 354Mi 15:48:41 DEBUG --- stderr --- 15:48:41 DEBUG 15:48:42 INFO 15:48:42 INFO [loop_until]: kubectl --namespace=xlou top node 15:48:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:48:42 INFO [loop_until]: OK (rc = 0) 15:48:42 DEBUG --- stdout --- 15:48:42 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1363Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 3615Mi 6% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 5537Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 3919Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 3006Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 128m 0% 2121Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 72m 0% 2751Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1094Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 65m 0% 14169Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 58m 0% 14161Mi 
24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1096Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2820m 17% 13926Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1122m 7% 1876Mi 3% 15:48:42 DEBUG --- stderr --- 15:48:42 DEBUG 15:49:41 INFO 15:49:41 INFO [loop_until]: kubectl --namespace=xlou top pods 15:49:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:49:41 INFO [loop_until]: OK (rc = 0) 15:49:41 DEBUG --- stdout --- 15:49:41 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 14m 2601Mi am-55f77847b7-sgmd6 11m 2752Mi am-55f77847b7-wq5w5 9m 4408Mi ds-cts-0 8m 354Mi ds-cts-1 5m 369Mi ds-cts-2 6m 365Mi ds-idrepo-0 13m 13594Mi ds-idrepo-1 26m 13609Mi ds-idrepo-2 2852m 13373Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 11m 1489Mi idm-65858d8c4c-gdv6b 5m 1687Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 1108m 355Mi 15:49:41 DEBUG --- stderr --- 15:49:41 DEBUG 15:49:42 INFO 15:49:42 INFO [loop_until]: kubectl --namespace=xlou top node 15:49:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:49:42 INFO [loop_until]: OK (rc = 0) 15:49:42 DEBUG --- stdout --- 15:49:42 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 71m 0% 1363Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 77m 0% 3627Mi 6% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 5544Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 68m 0% 3916Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 3007Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 124m 0% 2123Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 74m 0% 2750Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1095Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 14171Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 74m 0% 14165Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 51m 0% 1095Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3038m 19% 13925Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1155m 7% 1880Mi 3% 15:49:42 DEBUG --- stderr --- 15:49:42 DEBUG 15:50:42 INFO 15:50:42 INFO [loop_until]: kubectl --namespace=xlou top pods 15:50:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:50:42 INFO [loop_until]: OK (rc = 0) 15:50:42 DEBUG --- stdout --- 15:50:42 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 12m 2613Mi am-55f77847b7-sgmd6 8m 2752Mi am-55f77847b7-wq5w5 27m 4411Mi ds-cts-0 24m 365Mi ds-cts-1 6m 369Mi ds-cts-2 11m 363Mi ds-idrepo-0 13m 13595Mi ds-idrepo-1 9m 13608Mi ds-idrepo-2 2823m 13411Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 8m 1489Mi idm-65858d8c4c-gdv6b 6m 1687Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 1189m 355Mi 15:50:42 DEBUG --- stderr --- 15:50:42 DEBUG 15:50:42 INFO 15:50:42 INFO [loop_until]: kubectl --namespace=xlou top node 15:50:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:50:42 INFO [loop_until]: OK (rc = 0) 15:50:42 DEBUG --- stdout --- 15:50:42 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1364Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 74m 0% 3638Mi 6% gke-xlou-cdm-default-pool-f05840a3-976h 83m 0% 5544Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 3917Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 3012Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 121m 0% 2127Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 2752Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 73m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 73m 
0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 14171Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 59m 0% 14161Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1094Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2972m 18% 13960Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1275m 8% 1881Mi 3% 15:50:42 DEBUG --- stderr --- 15:50:42 DEBUG 15:51:42 INFO 15:51:42 INFO [loop_until]: kubectl --namespace=xlou top pods 15:51:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:51:42 INFO [loop_until]: OK (rc = 0) 15:51:42 DEBUG --- stdout --- 15:51:42 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 11m 2627Mi am-55f77847b7-sgmd6 11m 2752Mi am-55f77847b7-wq5w5 11m 4411Mi ds-cts-0 8m 365Mi ds-cts-1 7m 371Mi ds-cts-2 6m 364Mi ds-idrepo-0 13m 13595Mi ds-idrepo-1 10m 13608Mi ds-idrepo-2 3181m 13626Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 6m 1490Mi idm-65858d8c4c-gdv6b 7m 1687Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 1205m 355Mi 15:51:42 DEBUG --- stderr --- 15:51:42 DEBUG 15:51:42 INFO 15:51:42 INFO [loop_until]: kubectl --namespace=xlou top node 15:51:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:51:42 INFO [loop_until]: OK (rc = 0) 15:51:42 DEBUG --- stdout --- 15:51:42 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1360Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 71m 0% 3650Mi 6% gke-xlou-cdm-default-pool-f05840a3-976h 70m 0% 5540Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 68m 0% 3920Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 70m 0% 3008Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2127Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 2753Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1095Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 14172Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 58m 0% 14165Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1094Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3206m 20% 14164Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1308m 8% 1882Mi 3% 15:51:42 DEBUG --- stderr --- 15:51:42 DEBUG 15:52:42 INFO 15:52:42 INFO [loop_until]: kubectl --namespace=xlou top pods 15:52:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:52:42 INFO [loop_until]: OK (rc = 0) 15:52:42 DEBUG --- stdout --- 15:52:42 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 10m 2637Mi am-55f77847b7-sgmd6 8m 2758Mi am-55f77847b7-wq5w5 8m 4411Mi ds-cts-0 7m 365Mi ds-cts-1 7m 372Mi ds-cts-2 5m 364Mi ds-idrepo-0 13m 13595Mi ds-idrepo-1 12m 13609Mi ds-idrepo-2 1034m 13673Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 9m 1490Mi idm-65858d8c4c-gdv6b 7m 1688Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 1m 98Mi 15:52:42 DEBUG --- stderr --- 15:52:42 DEBUG 15:52:42 INFO 15:52:42 INFO [loop_until]: kubectl --namespace=xlou top node 15:52:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:52:42 INFO [loop_until]: OK (rc = 0) 15:52:42 DEBUG --- stdout --- 15:52:42 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1361Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 70m 0% 3662Mi 6% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 5544Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 3925Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 3007Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 124m 0% 2126Mi 3% 
gke-xlou-cdm-default-pool-f05840a3-tnc9 72m 0% 2756Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1094Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 62m 0% 14172Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 61m 0% 14164Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1097Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 253m 1% 14211Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 66m 0% 1629Mi 2% 15:52:42 DEBUG --- stderr --- 15:52:42 DEBUG 15:53:42 INFO 15:53:42 INFO [loop_until]: kubectl --namespace=xlou top pods 15:53:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:53:42 INFO [loop_until]: OK (rc = 0) 15:53:42 DEBUG --- stdout --- 15:53:42 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 9m 2647Mi am-55f77847b7-sgmd6 9m 2759Mi am-55f77847b7-wq5w5 8m 4411Mi ds-cts-0 6m 365Mi ds-cts-1 7m 372Mi ds-cts-2 5m 364Mi ds-idrepo-0 14m 13595Mi ds-idrepo-1 15m 13608Mi ds-idrepo-2 19m 13674Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 12m 1502Mi idm-65858d8c4c-gdv6b 6m 1688Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 2224m 389Mi 15:53:42 DEBUG --- stderr --- 15:53:42 DEBUG 15:53:42 INFO 15:53:42 INFO [loop_until]: kubectl --namespace=xlou top node 15:53:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:53:42 INFO [loop_until]: OK (rc = 0) 15:53:42 DEBUG --- stdout --- 15:53:42 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1364Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 71m 0% 3675Mi 6% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 5546Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 3923Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 3008Mi 5% gke-xlou-cdm-default-pool-f05840a3-h81k 134m 0% 2127Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 77m 0% 2765Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 14173Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 64m 0% 14164Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 50m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 72m 0% 14209Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1827m 11% 1916Mi 3% 15:53:42 DEBUG --- stderr --- 15:53:42 DEBUG 15:54:42 INFO 15:54:42 INFO [loop_until]: kubectl --namespace=xlou top pods 15:54:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:54:42 INFO [loop_until]: OK (rc = 0) 15:54:42 DEBUG --- stdout --- 15:54:42 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 94m 3468Mi am-55f77847b7-sgmd6 78m 3328Mi am-55f77847b7-wq5w5 71m 4445Mi ds-cts-0 14m 369Mi ds-cts-1 5m 374Mi ds-cts-2 6m 365Mi ds-idrepo-0 4536m 13598Mi ds-idrepo-1 981m 13615Mi ds-idrepo-2 1002m 13686Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 5324m 3739Mi idm-65858d8c4c-gdv6b 5396m 3798Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 828m 520Mi 15:54:42 DEBUG --- stderr --- 15:54:42 DEBUG 15:54:42 INFO 15:54:42 INFO [loop_until]: kubectl --namespace=xlou top node 15:54:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:54:42 INFO [loop_until]: OK (rc = 0) 15:54:42 DEBUG --- stdout --- 15:54:42 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1362Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 141m 0% 4587Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 132m 0% 5576Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 137m 0% 4678Mi 7% 
gke-xlou-cdm-default-pool-f05840a3-bf2g 5568m 35% 5109Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 1392m 8% 2159Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 5280m 33% 4995Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 4538m 28% 14167Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1060m 6% 14164Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1097Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1077m 6% 14220Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 793m 4% 2042Mi 3% 15:54:42 DEBUG --- stderr --- 15:54:42 DEBUG 15:55:42 INFO 15:55:42 INFO [loop_until]: kubectl --namespace=xlou top pods 15:55:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:55:42 INFO [loop_until]: OK (rc = 0) 15:55:42 DEBUG --- stdout --- 15:55:42 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 80m 4523Mi am-55f77847b7-sgmd6 75m 4606Mi am-55f77847b7-wq5w5 101m 4606Mi ds-cts-0 6m 369Mi ds-cts-1 7m 373Mi ds-cts-2 7m 365Mi ds-idrepo-0 5008m 13744Mi ds-idrepo-1 1353m 13811Mi ds-idrepo-2 1379m 13697Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4578m 3747Mi idm-65858d8c4c-gdv6b 4754m 3810Mi lodemon-86d6dfd886-rxdp4 1m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 663m 521Mi 15:55:42 DEBUG --- stderr --- 15:55:42 DEBUG 15:55:42 INFO 15:55:42 INFO [loop_until]: kubectl --namespace=xlou top node 15:55:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:55:42 INFO [loop_until]: OK (rc = 0) 15:55:42 DEBUG --- stdout --- 15:55:42 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1363Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 141m 0% 5613Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 157m 0% 5735Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 136m 0% 5703Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 5014m 31% 5124Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 1312m 8% 2158Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4772m 30% 5003Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 4916m 30% 14304Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1543m 9% 14354Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1094Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1470m 9% 14348Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 746m 4% 2043Mi 3% 15:55:42 DEBUG --- stderr --- 15:55:42 DEBUG 15:56:42 INFO 15:56:42 INFO [loop_until]: kubectl --namespace=xlou top pods 15:56:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:56:42 INFO [loop_until]: OK (rc = 0) 15:56:42 DEBUG --- stdout --- 15:56:42 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 80m 5507Mi am-55f77847b7-sgmd6 80m 5380Mi am-55f77847b7-wq5w5 60m 4606Mi ds-cts-0 8m 370Mi ds-cts-1 7m 373Mi ds-cts-2 6m 366Mi ds-idrepo-0 4857m 13818Mi ds-idrepo-1 942m 13755Mi ds-idrepo-2 1163m 13745Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4646m 3759Mi idm-65858d8c4c-gdv6b 4722m 3817Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 708m 521Mi 15:56:42 DEBUG --- stderr --- 15:56:42 DEBUG 15:56:43 INFO 15:56:43 INFO [loop_until]: kubectl --namespace=xlou top node 15:56:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:56:43 INFO [loop_until]: OK (rc = 0) 15:56:43 DEBUG --- stdout --- 15:56:43 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1362Mi 2% 
gke-xlou-cdm-default-pool-f05840a3-5pbc 142m 0% 6650Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 116m 0% 5739Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 133m 0% 6736Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5084m 31% 5133Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 1364m 8% 2154Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4892m 30% 5017Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5050m 31% 14371Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1080m 6% 14293Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1095Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1129m 7% 14268Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 734m 4% 2044Mi 3% 15:56:43 DEBUG --- stderr --- 15:56:43 DEBUG 15:57:42 INFO 15:57:42 INFO [loop_until]: kubectl --namespace=xlou top pods 15:57:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:57:42 INFO [loop_until]: OK (rc = 0) 15:57:42 DEBUG --- stdout --- 15:57:42 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 66m 5720Mi am-55f77847b7-sgmd6 63m 5659Mi am-55f77847b7-wq5w5 64m 4607Mi ds-cts-0 8m 369Mi ds-cts-1 7m 373Mi ds-cts-2 7m 365Mi ds-idrepo-0 5045m 13822Mi ds-idrepo-1 1020m 13823Mi ds-idrepo-2 1089m 13747Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4608m 3772Mi idm-65858d8c4c-gdv6b 5025m 3823Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 673m 522Mi 15:57:42 DEBUG --- stderr --- 15:57:42 DEBUG 15:57:43 INFO 15:57:43 INFO [loop_until]: kubectl --namespace=xlou top node 15:57:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:57:43 INFO [loop_until]: OK (rc = 0) 15:57:43 DEBUG --- stdout --- 15:57:43 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 71m 0% 1363Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 126m 0% 6736Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 126m 0% 5738Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 116m 0% 6820Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5107m 32% 5136Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 1318m 8% 2155Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4871m 30% 5030Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5078m 31% 14374Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1064m 6% 14358Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1097Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1150m 7% 14272Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 760m 4% 2046Mi 3% 15:57:43 DEBUG --- stderr --- 15:57:43 DEBUG 15:58:42 INFO 15:58:42 INFO [loop_until]: kubectl --namespace=xlou top pods 15:58:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:58:42 INFO [loop_until]: OK (rc = 0) 15:58:42 DEBUG --- stdout --- 15:58:42 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 68m 5737Mi am-55f77847b7-sgmd6 81m 5660Mi am-55f77847b7-wq5w5 64m 4631Mi ds-cts-0 7m 369Mi ds-cts-1 12m 374Mi ds-cts-2 7m 365Mi ds-idrepo-0 5580m 13819Mi ds-idrepo-1 1707m 13815Mi ds-idrepo-2 1686m 13803Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4496m 3785Mi idm-65858d8c4c-gdv6b 4937m 3830Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 676m 522Mi 15:58:42 DEBUG --- stderr --- 15:58:42 DEBUG 15:58:43 INFO 15:58:43 INFO [loop_until]: kubectl --namespace=xlou top node 15:58:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:58:43 INFO [loop_until]: OK 
(rc = 0) 15:58:43 DEBUG --- stdout --- 15:58:43 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1362Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 129m 0% 6756Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 122m 0% 5741Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 135m 0% 6822Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5110m 32% 5138Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 1356m 8% 2159Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4785m 30% 5040Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 64m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5628m 35% 14386Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1818m 11% 14356Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1740m 10% 14345Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 759m 4% 2047Mi 3% 15:58:43 DEBUG --- stderr --- 15:58:43 DEBUG 15:59:42 INFO 15:59:42 INFO [loop_until]: kubectl --namespace=xlou top pods 15:59:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:59:42 INFO [loop_until]: OK (rc = 0) 15:59:42 DEBUG --- stdout --- 15:59:42 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 67m 5740Mi am-55f77847b7-sgmd6 67m 5665Mi am-55f77847b7-wq5w5 87m 5611Mi ds-cts-0 7m 369Mi ds-cts-1 6m 373Mi ds-cts-2 6m 365Mi ds-idrepo-0 5311m 13822Mi ds-idrepo-1 1243m 13823Mi ds-idrepo-2 1299m 13821Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4766m 3796Mi idm-65858d8c4c-gdv6b 4949m 3836Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 698m 521Mi 15:59:42 DEBUG --- stderr --- 15:59:42 DEBUG 15:59:43 INFO 15:59:43 INFO [loop_until]: kubectl --namespace=xlou top node 15:59:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 15:59:43 INFO [loop_until]: OK (rc = 0) 15:59:43 DEBUG --- stdout --- 15:59:43 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1362Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 125m 0% 6772Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 147m 0% 6718Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 125m 0% 6824Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5065m 31% 5146Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 1329m 8% 2157Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4996m 31% 5048Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5437m 34% 14378Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1314m 8% 14356Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1094Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1335m 8% 14338Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 756m 4% 2046Mi 3% 15:59:43 DEBUG --- stderr --- 15:59:43 DEBUG 16:00:43 INFO 16:00:43 INFO [loop_until]: kubectl --namespace=xlou top pods 16:00:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:00:43 INFO [loop_until]: OK (rc = 0) 16:00:43 DEBUG --- stdout --- 16:00:43 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 66m 5740Mi am-55f77847b7-sgmd6 67m 5666Mi am-55f77847b7-wq5w5 62m 5707Mi ds-cts-0 7m 370Mi ds-cts-1 9m 374Mi ds-cts-2 9m 365Mi ds-idrepo-0 5561m 13821Mi ds-idrepo-1 1467m 13822Mi ds-idrepo-2 1925m 13828Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4541m 3810Mi idm-65858d8c4c-gdv6b 4875m 3843Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 688m 522Mi 16:00:43 DEBUG --- stderr --- 16:00:43 DEBUG 16:00:43 INFO 
16:00:43 INFO [loop_until]: kubectl --namespace=xlou top node 16:00:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:00:43 INFO [loop_until]: OK (rc = 0) 16:00:43 DEBUG --- stdout --- 16:00:43 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1364Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 130m 0% 6762Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 122m 0% 6835Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 125m 0% 6827Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5062m 31% 5152Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 1344m 8% 2160Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4780m 30% 5062Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6009m 37% 14358Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1754m 11% 14340Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1852m 11% 14335Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 762m 4% 2044Mi 3% 16:00:43 DEBUG --- stderr --- 16:00:43 DEBUG 16:01:43 INFO 16:01:43 INFO [loop_until]: kubectl --namespace=xlou top pods 16:01:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:01:43 INFO [loop_until]: OK (rc = 0) 16:01:43 DEBUG --- stdout --- 16:01:43 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 79m 5777Mi am-55f77847b7-sgmd6 63m 5703Mi am-55f77847b7-wq5w5 61m 5707Mi ds-cts-0 7m 369Mi ds-cts-1 5m 374Mi ds-cts-2 9m 366Mi ds-idrepo-0 5471m 13833Mi ds-idrepo-1 1196m 13794Mi ds-idrepo-2 1323m 13801Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4638m 3823Mi idm-65858d8c4c-gdv6b 4751m 3858Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 710m 523Mi 16:01:43 DEBUG --- stderr --- 16:01:43 DEBUG 16:01:43 INFO 16:01:43 INFO [loop_until]: kubectl --namespace=xlou top node 16:01:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:01:43 INFO [loop_until]: OK (rc = 0) 16:01:43 DEBUG --- stdout --- 16:01:43 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1360Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 134m 0% 6796Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 120m 0% 6840Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 116m 0% 6863Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5057m 31% 5170Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 1310m 8% 2160Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4944m 31% 5085Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5536m 34% 14373Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1301m 8% 14331Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1359m 8% 14329Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 818m 5% 2046Mi 3% 16:01:43 DEBUG --- stderr --- 16:01:43 DEBUG 16:02:43 INFO 16:02:43 INFO [loop_until]: kubectl --namespace=xlou top pods 16:02:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:02:43 INFO [loop_until]: OK (rc = 0) 16:02:43 DEBUG --- stdout --- 16:02:43 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 62m 5777Mi am-55f77847b7-sgmd6 63m 5704Mi am-55f77847b7-wq5w5 79m 5707Mi ds-cts-0 6m 369Mi ds-cts-1 6m 374Mi ds-cts-2 7m 366Mi ds-idrepo-0 5320m 13825Mi ds-idrepo-1 1316m 13829Mi ds-idrepo-2 1645m 13823Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4601m 3835Mi idm-65858d8c4c-gdv6b 4840m 3870Mi 
lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 720m 523Mi 16:02:43 DEBUG --- stderr --- 16:02:43 DEBUG 16:02:43 INFO 16:02:43 INFO [loop_until]: kubectl --namespace=xlou top node 16:02:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:02:43 INFO [loop_until]: OK (rc = 0) 16:02:43 DEBUG --- stdout --- 16:02:43 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1360Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 122m 0% 6797Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 136m 0% 6841Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 117m 0% 6865Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5073m 31% 5186Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 1343m 8% 2159Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4858m 30% 5086Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5823m 36% 14368Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1344m 8% 14383Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1096Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1724m 10% 14340Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 765m 4% 2046Mi 3% 16:02:43 DEBUG --- stderr --- 16:02:43 DEBUG 16:03:43 INFO 16:03:43 INFO [loop_until]: kubectl --namespace=xlou top pods 16:03:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:03:43 INFO [loop_until]: OK (rc = 0) 16:03:43 DEBUG --- stdout --- 16:03:43 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 67m 5777Mi am-55f77847b7-sgmd6 63m 5704Mi am-55f77847b7-wq5w5 65m 5712Mi ds-cts-0 8m 369Mi ds-cts-1 7m 374Mi ds-cts-2 6m 366Mi ds-idrepo-0 5661m 13822Mi ds-idrepo-1 1496m 13829Mi ds-idrepo-2 1483m 13832Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4656m 3846Mi idm-65858d8c4c-gdv6b 4901m 3883Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 716m 524Mi 16:03:43 DEBUG --- stderr --- 16:03:43 DEBUG 16:03:43 INFO 16:03:43 INFO [loop_until]: kubectl --namespace=xlou top node 16:03:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:03:43 INFO [loop_until]: OK (rc = 0) 16:03:43 DEBUG --- stdout --- 16:03:43 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1361Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 126m 0% 6796Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 124m 0% 6854Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 120m 0% 6865Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5192m 32% 5199Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 1337m 8% 2175Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4865m 30% 5097Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5910m 37% 14363Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1624m 10% 14360Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1548m 9% 14340Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 770m 4% 2044Mi 3% 16:03:43 DEBUG --- stderr --- 16:03:43 DEBUG 16:04:43 INFO 16:04:43 INFO [loop_until]: kubectl --namespace=xlou top pods 16:04:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:04:43 INFO [loop_until]: OK (rc = 0) 16:04:43 DEBUG --- stdout --- 16:04:43 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 69m 5777Mi am-55f77847b7-sgmd6 68m 5710Mi am-55f77847b7-wq5w5 64m 5717Mi ds-cts-0 8m 369Mi ds-cts-1 9m 374Mi ds-cts-2 8m 365Mi ds-idrepo-0 5544m 
13821Mi ds-idrepo-1 1246m 13843Mi ds-idrepo-2 1485m 13843Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4555m 3856Mi idm-65858d8c4c-gdv6b 4898m 3902Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 671m 524Mi 16:04:43 DEBUG --- stderr --- 16:04:43 DEBUG 16:04:43 INFO 16:04:43 INFO [loop_until]: kubectl --namespace=xlou top node 16:04:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:04:44 INFO [loop_until]: OK (rc = 0) 16:04:44 DEBUG --- stdout --- 16:04:44 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1359Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 130m 0% 6796Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 125m 0% 6848Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 126m 0% 6869Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4942m 31% 5216Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 1363m 8% 2167Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4651m 29% 5114Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5857m 36% 14382Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1344m 8% 14372Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1525m 9% 14332Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 753m 4% 2047Mi 3% 16:04:44 DEBUG --- stderr --- 16:04:44 DEBUG 16:05:43 INFO 16:05:43 INFO [loop_until]: kubectl --namespace=xlou top pods 16:05:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:05:43 INFO [loop_until]: OK (rc = 0) 16:05:43 DEBUG --- stdout --- 16:05:43 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 83m 5786Mi am-55f77847b7-sgmd6 67m 5710Mi am-55f77847b7-wq5w5 65m 5717Mi ds-cts-0 7m 370Mi ds-cts-1 8m 375Mi ds-cts-2 6m 365Mi ds-idrepo-0 5605m 13834Mi ds-idrepo-1 1329m 13834Mi ds-idrepo-2 1330m 13838Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4534m 3872Mi idm-65858d8c4c-gdv6b 5179m 3913Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 790m 524Mi 16:05:43 DEBUG --- stderr --- 16:05:43 DEBUG 16:05:44 INFO 16:05:44 INFO [loop_until]: kubectl --namespace=xlou top node 16:05:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:05:44 INFO [loop_until]: OK (rc = 0) 16:05:44 DEBUG --- stdout --- 16:05:44 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1361Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 145m 0% 6805Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 123m 0% 6848Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 132m 0% 6871Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5105m 32% 5231Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 1312m 8% 2163Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4793m 30% 5129Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5560m 34% 14374Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1533m 9% 14382Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1569m 9% 14359Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 846m 5% 2045Mi 3% 16:05:44 DEBUG --- stderr --- 16:05:44 DEBUG 16:06:43 INFO 16:06:43 INFO [loop_until]: kubectl --namespace=xlou top pods 16:06:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:06:43 INFO [loop_until]: OK (rc = 0) 16:06:43 DEBUG --- stdout --- 16:06:43 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi 
am-55f77847b7-klhnq 62m 5786Mi am-55f77847b7-sgmd6 63m 5714Mi am-55f77847b7-wq5w5 65m 5717Mi ds-cts-0 8m 370Mi ds-cts-1 7m 375Mi ds-cts-2 6m 365Mi ds-idrepo-0 6289m 13807Mi ds-idrepo-1 1897m 13853Mi ds-idrepo-2 2015m 13839Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4675m 3890Mi idm-65858d8c4c-gdv6b 4957m 3930Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 828m 524Mi 16:06:43 DEBUG --- stderr --- 16:06:43 DEBUG 16:06:44 INFO 16:06:44 INFO [loop_until]: kubectl --namespace=xlou top node 16:06:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:06:44 INFO [loop_until]: OK (rc = 0) 16:06:44 DEBUG --- stdout --- 16:06:44 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1362Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 123m 0% 6806Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 126m 0% 6850Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 122m 0% 6873Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5195m 32% 5243Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 1373m 8% 2161Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4914m 30% 5144Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6378m 40% 14362Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2076m 13% 14351Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1095Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2423m 15% 14354Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 925m 5% 2044Mi 3% 16:06:44 DEBUG --- stderr --- 16:06:44 DEBUG 16:07:43 INFO 16:07:43 INFO [loop_until]: kubectl --namespace=xlou top pods 16:07:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:07:43 INFO [loop_until]: OK (rc = 0) 16:07:43 DEBUG --- stdout --- 16:07:43 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 66m 5791Mi am-55f77847b7-sgmd6 64m 5715Mi am-55f77847b7-wq5w5 65m 5717Mi ds-cts-0 7m 370Mi ds-cts-1 13m 377Mi ds-cts-2 7m 366Mi ds-idrepo-0 5759m 13822Mi ds-idrepo-1 1366m 13857Mi ds-idrepo-2 1700m 13822Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4575m 3903Mi idm-65858d8c4c-gdv6b 4892m 3944Mi lodemon-86d6dfd886-rxdp4 1m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 844m 525Mi 16:07:43 DEBUG --- stderr --- 16:07:43 DEBUG 16:07:44 INFO 16:07:44 INFO [loop_until]: kubectl --namespace=xlou top node 16:07:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:07:44 INFO [loop_until]: OK (rc = 0) 16:07:44 DEBUG --- stdout --- 16:07:44 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 71m 0% 1363Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 126m 0% 6810Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 124m 0% 6852Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 122m 0% 6876Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5095m 32% 5255Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 1328m 8% 2166Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4918m 30% 5153Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 65m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5820m 36% 14368Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1489m 9% 14364Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1741m 10% 14325Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 893m 5% 2047Mi 3% 16:07:44 DEBUG --- stderr --- 16:07:44 DEBUG 16:08:43 INFO 16:08:43 INFO [loop_until]: kubectl --namespace=xlou top pods 16:08:43 INFO [loop_until]: (max_time=180, interval=5, 
expected_rc=[0] 16:08:43 INFO [loop_until]: OK (rc = 0) 16:08:43 DEBUG --- stdout --- 16:08:43 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 71m 5796Mi am-55f77847b7-sgmd6 65m 5716Mi am-55f77847b7-wq5w5 65m 5717Mi ds-cts-0 7m 370Mi ds-cts-1 11m 378Mi ds-cts-2 6m 366Mi ds-idrepo-0 5943m 13815Mi ds-idrepo-1 1838m 13855Mi ds-idrepo-2 1570m 13823Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4548m 3914Mi idm-65858d8c4c-gdv6b 4944m 3954Mi lodemon-86d6dfd886-rxdp4 6m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 847m 526Mi 16:08:43 DEBUG --- stderr --- 16:08:43 DEBUG 16:08:44 INFO 16:08:44 INFO [loop_until]: kubectl --namespace=xlou top node 16:08:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:08:44 INFO [loop_until]: OK (rc = 0) 16:08:44 DEBUG --- stdout --- 16:08:44 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1358Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 131m 0% 6817Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 125m 0% 6851Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 121m 0% 6877Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5154m 32% 5271Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 1369m 8% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4792m 30% 5168Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6194m 38% 14368Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1823m 11% 14344Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1685m 10% 14362Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 888m 5% 2046Mi 3% 16:08:44 DEBUG --- stderr --- 16:08:44 DEBUG 16:09:43 INFO 16:09:43 INFO [loop_until]: kubectl --namespace=xlou top pods 16:09:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:09:43 INFO [loop_until]: OK (rc = 0) 16:09:43 DEBUG --- stdout --- 16:09:43 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 70m 5797Mi am-55f77847b7-sgmd6 72m 5716Mi am-55f77847b7-wq5w5 58m 5719Mi ds-cts-0 7m 370Mi ds-cts-1 7m 378Mi ds-cts-2 6m 365Mi ds-idrepo-0 5925m 13822Mi ds-idrepo-1 1685m 13829Mi ds-idrepo-2 1560m 13822Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4661m 3925Mi idm-65858d8c4c-gdv6b 4953m 3969Mi lodemon-86d6dfd886-rxdp4 7m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 832m 526Mi 16:09:43 DEBUG --- stderr --- 16:09:43 DEBUG 16:09:44 INFO 16:09:44 INFO [loop_until]: kubectl --namespace=xlou top node 16:09:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:09:44 INFO [loop_until]: OK (rc = 0) 16:09:44 DEBUG --- stdout --- 16:09:44 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1360Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 124m 0% 6819Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 119m 0% 6851Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 128m 0% 6877Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5093m 32% 5286Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1320m 8% 2166Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4883m 30% 5180Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5984m 37% 14366Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1636m 10% 14336Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1524m 9% 14319Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 910m 5% 2047Mi 3% 16:09:44 DEBUG 
--- stderr --- 16:09:44 DEBUG 16:10:44 INFO 16:10:44 INFO [loop_until]: kubectl --namespace=xlou top pods 16:10:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:10:44 INFO [loop_until]: OK (rc = 0) 16:10:44 DEBUG --- stdout --- 16:10:44 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 63m 5799Mi am-55f77847b7-sgmd6 61m 5717Mi am-55f77847b7-wq5w5 64m 5719Mi ds-cts-0 7m 370Mi ds-cts-1 7m 378Mi ds-cts-2 7m 366Mi ds-idrepo-0 5878m 13803Mi ds-idrepo-1 1679m 13866Mi ds-idrepo-2 2131m 13854Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4498m 3939Mi idm-65858d8c4c-gdv6b 4754m 3982Mi lodemon-86d6dfd886-rxdp4 1m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 805m 526Mi 16:10:44 DEBUG --- stderr --- 16:10:44 DEBUG 16:10:44 INFO 16:10:44 INFO [loop_until]: kubectl --namespace=xlou top node 16:10:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:10:44 INFO [loop_until]: OK (rc = 0) 16:10:44 DEBUG --- stdout --- 16:10:44 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1361Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 121m 0% 6821Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 124m 0% 6853Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 117m 0% 6879Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5106m 32% 5298Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1354m 8% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4787m 30% 5193Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6163m 38% 14359Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1655m 10% 14359Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2049m 12% 14345Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 885m 5% 2046Mi 3% 16:10:44 DEBUG --- stderr --- 16:10:44 DEBUG 16:11:44 INFO 16:11:44 INFO [loop_until]: kubectl --namespace=xlou top pods 16:11:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:11:44 INFO [loop_until]: OK (rc = 0) 16:11:44 DEBUG --- stdout --- 16:11:44 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 61m 5799Mi am-55f77847b7-sgmd6 61m 5718Mi am-55f77847b7-wq5w5 60m 5719Mi ds-cts-0 6m 370Mi ds-cts-1 9m 378Mi ds-cts-2 6m 365Mi ds-idrepo-0 5719m 13808Mi ds-idrepo-1 1505m 13816Mi ds-idrepo-2 1586m 13841Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4585m 3950Mi idm-65858d8c4c-gdv6b 4871m 3999Mi lodemon-86d6dfd886-rxdp4 5m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 817m 527Mi 16:11:44 DEBUG --- stderr --- 16:11:44 DEBUG 16:11:44 INFO 16:11:44 INFO [loop_until]: kubectl --namespace=xlou top node 16:11:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:11:44 INFO [loop_until]: OK (rc = 0) 16:11:44 DEBUG --- stdout --- 16:11:44 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1364Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 129m 0% 6821Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 122m 0% 6854Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 121m 0% 6879Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5004m 31% 5315Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1358m 8% 2159Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4614m 29% 5208Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6002m 37% 14370Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1530m 9% 14350Mi 24% 
gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1686m 10% 14338Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 865m 5% 2046Mi 3% 16:11:44 DEBUG --- stderr --- 16:11:44 DEBUG 16:12:44 INFO 16:12:44 INFO [loop_until]: kubectl --namespace=xlou top pods 16:12:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:12:44 INFO [loop_until]: OK (rc = 0) 16:12:44 DEBUG --- stdout --- 16:12:44 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 67m 5799Mi am-55f77847b7-sgmd6 68m 5718Mi am-55f77847b7-wq5w5 63m 5719Mi ds-cts-0 8m 370Mi ds-cts-1 11m 376Mi ds-cts-2 6m 366Mi ds-idrepo-0 5655m 13823Mi ds-idrepo-1 1295m 13844Mi ds-idrepo-2 1343m 13846Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4529m 3964Mi idm-65858d8c4c-gdv6b 4762m 4012Mi lodemon-86d6dfd886-rxdp4 6m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 826m 527Mi 16:12:44 DEBUG --- stderr --- 16:12:44 DEBUG 16:12:45 INFO 16:12:45 INFO [loop_until]: kubectl --namespace=xlou top node 16:12:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:12:45 INFO [loop_until]: OK (rc = 0) 16:12:45 DEBUG --- stdout --- 16:12:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1364Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 126m 0% 6820Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 121m 0% 6851Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 122m 0% 6879Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5096m 32% 5323Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1344m 8% 2163Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4757m 29% 5222Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5562m 35% 14378Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1276m 8% 14392Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1407m 8% 14366Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 903m 5% 2049Mi 3% 16:12:45 DEBUG --- stderr --- 16:12:45 DEBUG 16:13:44 INFO 16:13:44 INFO [loop_until]: kubectl --namespace=xlou top pods 16:13:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:13:44 INFO [loop_until]: OK (rc = 0) 16:13:44 DEBUG --- stdout --- 16:13:44 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 66m 5799Mi am-55f77847b7-sgmd6 66m 5718Mi am-55f77847b7-wq5w5 60m 5719Mi ds-cts-0 7m 370Mi ds-cts-1 6m 375Mi ds-cts-2 7m 366Mi ds-idrepo-0 5606m 13827Mi ds-idrepo-1 1519m 13852Mi ds-idrepo-2 1430m 13836Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4585m 3974Mi idm-65858d8c4c-gdv6b 5216m 4022Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 804m 527Mi 16:13:44 DEBUG --- stderr --- 16:13:44 DEBUG 16:13:45 INFO 16:13:45 INFO [loop_until]: kubectl --namespace=xlou top node 16:13:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:13:45 INFO [loop_until]: OK (rc = 0) 16:13:45 DEBUG --- stdout --- 16:13:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1364Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 131m 0% 6820Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 121m 0% 6850Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 124m 0% 6877Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5268m 33% 5339Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1371m 8% 2159Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4801m 30% 5232Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 
1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5705m 35% 14379Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1457m 9% 14372Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1095Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1675m 10% 14363Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 897m 5% 2045Mi 3% 16:13:45 DEBUG --- stderr --- 16:13:45 DEBUG 16:14:44 INFO 16:14:44 INFO [loop_until]: kubectl --namespace=xlou top pods 16:14:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:14:44 INFO [loop_until]: OK (rc = 0) 16:14:44 DEBUG --- stdout --- 16:14:44 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 67m 5799Mi am-55f77847b7-sgmd6 63m 5718Mi am-55f77847b7-wq5w5 60m 5719Mi ds-cts-0 7m 370Mi ds-cts-1 5m 375Mi ds-cts-2 8m 367Mi ds-idrepo-0 6824m 13815Mi ds-idrepo-1 1935m 13827Mi ds-idrepo-2 1893m 13813Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4353m 3987Mi idm-65858d8c4c-gdv6b 4957m 4038Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 807m 528Mi 16:14:44 DEBUG --- stderr --- 16:14:44 DEBUG 16:14:45 INFO 16:14:45 INFO [loop_until]: kubectl --namespace=xlou top node 16:14:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:14:45 INFO [loop_until]: OK (rc = 0) 16:14:45 DEBUG --- stdout --- 16:14:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1366Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 127m 0% 6821Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 116m 0% 6853Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 120m 0% 6877Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4937m 31% 5346Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1353m 8% 2162Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4826m 30% 5244Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6834m 43% 14367Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1909m 12% 14373Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1096Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1777m 11% 14352Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 858m 5% 2047Mi 3% 16:14:45 DEBUG --- stderr --- 16:14:45 DEBUG 16:15:44 INFO 16:15:44 INFO [loop_until]: kubectl --namespace=xlou top pods 16:15:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:15:44 INFO [loop_until]: OK (rc = 0) 16:15:44 DEBUG --- stdout --- 16:15:44 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 65m 5799Mi am-55f77847b7-sgmd6 63m 5718Mi am-55f77847b7-wq5w5 60m 5719Mi ds-cts-0 8m 370Mi ds-cts-1 6m 375Mi ds-cts-2 6m 366Mi ds-idrepo-0 5406m 13825Mi ds-idrepo-1 1461m 13826Mi ds-idrepo-2 1249m 13830Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4631m 4003Mi idm-65858d8c4c-gdv6b 4740m 4054Mi lodemon-86d6dfd886-rxdp4 6m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 697m 528Mi 16:15:44 DEBUG --- stderr --- 16:15:44 DEBUG 16:15:45 INFO 16:15:45 INFO [loop_until]: kubectl --namespace=xlou top node 16:15:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:15:45 INFO [loop_until]: OK (rc = 0) 16:15:45 DEBUG --- stdout --- 16:15:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1367Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 127m 0% 6819Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 119m 0% 6854Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 122m 0% 6879Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5075m 31% 5363Mi 
9% gke-xlou-cdm-default-pool-f05840a3-h81k 1352m 8% 2152Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4778m 30% 5254Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5326m 33% 14414Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1423m 8% 14367Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1097Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1473m 9% 14355Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 759m 4% 2047Mi 3% 16:15:45 DEBUG --- stderr --- 16:15:45 DEBUG 16:16:44 INFO 16:16:44 INFO [loop_until]: kubectl --namespace=xlou top pods 16:16:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:16:44 INFO [loop_until]: OK (rc = 0) 16:16:44 DEBUG --- stdout --- 16:16:44 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 65m 5799Mi am-55f77847b7-sgmd6 59m 5718Mi am-55f77847b7-wq5w5 63m 5719Mi ds-cts-0 7m 370Mi ds-cts-1 7m 376Mi ds-cts-2 7m 366Mi ds-idrepo-0 6449m 13814Mi ds-idrepo-1 1494m 13830Mi ds-idrepo-2 1820m 13817Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4561m 4018Mi idm-65858d8c4c-gdv6b 4884m 4067Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 747m 528Mi 16:16:44 DEBUG --- stderr --- 16:16:44 DEBUG 16:16:45 INFO 16:16:45 INFO [loop_until]: kubectl --namespace=xlou top node 16:16:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:16:45 INFO [loop_until]: OK (rc = 0) 16:16:45 DEBUG --- stdout --- 16:16:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1364Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 124m 0% 6820Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 123m 0% 6854Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 123m 0% 6880Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5188m 32% 5377Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1353m 8% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4804m 30% 5272Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6409m 40% 14416Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1487m 9% 14363Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1928m 12% 14336Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 778m 4% 2045Mi 3% 16:16:45 DEBUG --- stderr --- 16:16:45 DEBUG 16:17:44 INFO 16:17:44 INFO [loop_until]: kubectl --namespace=xlou top pods 16:17:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:17:44 INFO [loop_until]: OK (rc = 0) 16:17:44 DEBUG --- stdout --- 16:17:44 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 67m 5799Mi am-55f77847b7-sgmd6 64m 5718Mi am-55f77847b7-wq5w5 67m 5721Mi ds-cts-0 7m 371Mi ds-cts-1 10m 375Mi ds-cts-2 8m 367Mi ds-idrepo-0 5772m 13825Mi ds-idrepo-1 2036m 13854Mi ds-idrepo-2 1335m 13844Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4639m 4028Mi idm-65858d8c4c-gdv6b 4885m 4082Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 703m 528Mi 16:17:44 DEBUG --- stderr --- 16:17:44 DEBUG 16:17:45 INFO 16:17:45 INFO [loop_until]: kubectl --namespace=xlou top node 16:17:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:17:45 INFO [loop_until]: OK (rc = 0) 16:17:45 DEBUG --- stdout --- 16:17:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1361Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 122m 0% 6819Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-976h 126m 0% 6854Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 123m 0% 6879Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5238m 32% 5403Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1376m 8% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4908m 30% 5284Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5910m 37% 14393Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1784m 11% 14371Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1426m 8% 14373Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 755m 4% 2047Mi 3% 16:17:45 DEBUG --- stderr --- 16:17:45 DEBUG 16:18:44 INFO 16:18:44 INFO [loop_until]: kubectl --namespace=xlou top pods 16:18:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:18:44 INFO [loop_until]: OK (rc = 0) 16:18:44 DEBUG --- stdout --- 16:18:44 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 67m 5799Mi am-55f77847b7-sgmd6 62m 5718Mi am-55f77847b7-wq5w5 62m 5720Mi ds-cts-0 7m 371Mi ds-cts-1 5m 375Mi ds-cts-2 7m 367Mi ds-idrepo-0 6025m 13819Mi ds-idrepo-1 1686m 13822Mi ds-idrepo-2 1489m 13844Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4640m 4040Mi idm-65858d8c4c-gdv6b 5013m 4095Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 687m 529Mi 16:18:44 DEBUG --- stderr --- 16:18:44 DEBUG 16:18:45 INFO 16:18:45 INFO [loop_until]: kubectl --namespace=xlou top node 16:18:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:18:45 INFO [loop_until]: OK (rc = 0) 16:18:45 DEBUG --- stdout --- 16:18:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1362Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 125m 0% 6821Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 120m 0% 6855Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 119m 0% 6878Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5191m 32% 5405Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1375m 8% 2161Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4664m 29% 5295Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 52m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5826m 36% 14407Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1792m 11% 14361Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1550m 9% 14366Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 763m 4% 2058Mi 3% 16:18:45 DEBUG --- stderr --- 16:18:45 DEBUG 16:19:44 INFO 16:19:44 INFO [loop_until]: kubectl --namespace=xlou top pods 16:19:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:19:45 INFO [loop_until]: OK (rc = 0) 16:19:45 DEBUG --- stdout --- 16:19:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 67m 5799Mi am-55f77847b7-sgmd6 63m 5719Mi am-55f77847b7-wq5w5 62m 5720Mi ds-cts-0 7m 371Mi ds-cts-1 5m 375Mi ds-cts-2 6m 366Mi ds-idrepo-0 6434m 13817Mi ds-idrepo-1 1349m 13850Mi ds-idrepo-2 1736m 13812Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4594m 4055Mi idm-65858d8c4c-gdv6b 5018m 4111Mi lodemon-86d6dfd886-rxdp4 1m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 683m 529Mi 16:19:45 DEBUG --- stderr --- 16:19:45 DEBUG 16:19:45 INFO 16:19:45 INFO [loop_until]: kubectl --namespace=xlou top node 16:19:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:19:45 INFO [loop_until]: OK (rc = 0) 16:19:45 DEBUG --- stdout --- 16:19:45 DEBUG 
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1364Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 125m 0% 6823Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 122m 0% 6854Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 123m 0% 6878Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5119m 32% 5421Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1346m 8% 2158Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4780m 30% 5307Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6357m 40% 14419Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1460m 9% 14386Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2152m 13% 14342Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 774m 4% 2048Mi 3% 16:19:45 DEBUG --- stderr --- 16:19:45 DEBUG 16:20:45 INFO 16:20:45 INFO [loop_until]: kubectl --namespace=xlou top pods 16:20:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:20:45 INFO [loop_until]: OK (rc = 0) 16:20:45 DEBUG --- stdout --- 16:20:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 66m 5799Mi am-55f77847b7-sgmd6 66m 5719Mi am-55f77847b7-wq5w5 64m 5721Mi ds-cts-0 7m 370Mi ds-cts-1 6m 376Mi ds-cts-2 6m 366Mi ds-idrepo-0 5784m 13801Mi ds-idrepo-1 1934m 13812Mi ds-idrepo-2 1430m 13813Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4607m 4067Mi idm-65858d8c4c-gdv6b 4926m 4131Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 688m 529Mi 16:20:45 DEBUG --- stderr --- 16:20:45 DEBUG 16:20:45 INFO 16:20:45 INFO [loop_until]: kubectl --namespace=xlou top node 16:20:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:20:46 INFO [loop_until]: OK (rc = 0) 16:20:46 DEBUG --- stdout --- 16:20:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1364Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 125m 0% 6821Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 127m 0% 6853Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 123m 0% 6880Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5157m 32% 5441Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1358m 8% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4893m 30% 5324Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5930m 37% 14370Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1633m 10% 14361Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1600m 10% 14328Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 773m 4% 2047Mi 3% 16:20:46 DEBUG --- stderr --- 16:20:46 DEBUG 16:21:45 INFO 16:21:45 INFO [loop_until]: kubectl --namespace=xlou top pods 16:21:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:21:45 INFO [loop_until]: OK (rc = 0) 16:21:45 DEBUG --- stdout --- 16:21:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 66m 5800Mi am-55f77847b7-sgmd6 62m 5719Mi am-55f77847b7-wq5w5 63m 5721Mi ds-cts-0 6m 371Mi ds-cts-1 6m 375Mi ds-cts-2 6m 367Mi ds-idrepo-0 6385m 13846Mi ds-idrepo-1 1437m 13861Mi ds-idrepo-2 2105m 13810Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4689m 4077Mi idm-65858d8c4c-gdv6b 4839m 4143Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 677m 530Mi 16:21:45 DEBUG --- stderr --- 16:21:45 DEBUG 16:21:46 INFO 16:21:46 INFO [loop_until]: kubectl --namespace=xlou top 
node 16:21:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:21:46 INFO [loop_until]: OK (rc = 0) 16:21:46 DEBUG --- stdout --- 16:21:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1361Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 124m 0% 6820Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 121m 0% 6854Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 121m 0% 6879Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4976m 31% 5458Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1369m 8% 2166Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4776m 30% 5333Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6130m 38% 14392Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1361m 8% 14397Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1096Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2370m 14% 14388Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 767m 4% 2045Mi 3% 16:21:46 DEBUG --- stderr --- 16:21:46 DEBUG 16:22:45 INFO 16:22:45 INFO [loop_until]: kubectl --namespace=xlou top pods 16:22:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:22:45 INFO [loop_until]: OK (rc = 0) 16:22:45 DEBUG --- stdout --- 16:22:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 67m 5799Mi am-55f77847b7-sgmd6 64m 5720Mi am-55f77847b7-wq5w5 58m 5721Mi ds-cts-0 11m 370Mi ds-cts-1 8m 375Mi ds-cts-2 6m 366Mi ds-idrepo-0 5625m 13823Mi ds-idrepo-1 1436m 13861Mi ds-idrepo-2 1337m 13810Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4495m 4089Mi idm-65858d8c4c-gdv6b 4840m 4157Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 702m 530Mi 16:22:45 DEBUG --- stderr --- 16:22:45 DEBUG 16:22:46 INFO 16:22:46 INFO [loop_until]: kubectl --namespace=xlou top node 16:22:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:22:46 INFO [loop_until]: OK (rc = 0) 16:22:46 DEBUG --- stdout --- 16:22:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1363Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 128m 0% 6824Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 118m 0% 6856Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 123m 0% 6880Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5102m 32% 5470Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1351m 8% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4790m 30% 5347Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5729m 36% 14385Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1344m 8% 14398Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1593m 10% 14338Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 767m 4% 2047Mi 3% 16:22:46 DEBUG --- stderr --- 16:22:46 DEBUG 16:23:45 INFO 16:23:45 INFO [loop_until]: kubectl --namespace=xlou top pods 16:23:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:23:45 INFO [loop_until]: OK (rc = 0) 16:23:45 DEBUG --- stdout --- 16:23:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 67m 5801Mi am-55f77847b7-sgmd6 53m 5720Mi am-55f77847b7-wq5w5 46m 5721Mi ds-cts-0 9m 370Mi ds-cts-1 5m 375Mi ds-cts-2 6m 366Mi ds-idrepo-0 5380m 13823Mi ds-idrepo-1 1339m 13856Mi ds-idrepo-2 1002m 13853Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 3495m 4099Mi idm-65858d8c4c-gdv6b 3451m 4167Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 
3Mi overseer-0-64c9959746-2jz9t 618m 530Mi 16:23:45 DEBUG --- stderr --- 16:23:45 DEBUG 16:23:46 INFO 16:23:46 INFO [loop_until]: kubectl --namespace=xlou top node 16:23:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:23:46 INFO [loop_until]: OK (rc = 0) 16:23:46 DEBUG --- stdout --- 16:23:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1363Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 114m 0% 6826Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 119m 0% 6856Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 116m 0% 6881Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4210m 26% 5481Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1168m 7% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3443m 21% 5357Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 4759m 29% 14425Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1348m 8% 14397Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1097Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1124m 7% 14389Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 600m 3% 2048Mi 3% 16:23:46 DEBUG --- stderr --- 16:23:46 DEBUG 16:24:45 INFO 16:24:45 INFO [loop_until]: kubectl --namespace=xlou top pods 16:24:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:24:45 INFO [loop_until]: OK (rc = 0) 16:24:45 DEBUG --- stdout --- 16:24:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 12m 5801Mi am-55f77847b7-sgmd6 6m 5720Mi am-55f77847b7-wq5w5 7m 5721Mi ds-cts-0 8m 371Mi ds-cts-1 5m 375Mi ds-cts-2 5m 366Mi ds-idrepo-0 14m 13847Mi ds-idrepo-1 35m 13832Mi ds-idrepo-2 11m 13845Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 7m 4099Mi idm-65858d8c4c-gdv6b 6m 4167Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 1m 107Mi 16:24:45 DEBUG --- stderr --- 16:24:45 DEBUG 16:24:46 INFO 16:24:46 INFO [loop_until]: kubectl --namespace=xlou top node 16:24:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:24:46 INFO [loop_until]: OK (rc = 0) 16:24:46 DEBUG --- stdout --- 16:24:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1361Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 6825Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 6857Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 6883Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 5483Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 133m 0% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 68m 0% 5356Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 52m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 63m 0% 14421Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 80m 0% 14372Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 51m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 62m 0% 14383Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 67m 0% 1634Mi 2% 16:24:46 DEBUG --- stderr --- 16:24:46 DEBUG 127.0.0.1 - - [12/Aug/2023 16:25:28] "GET /monitoring/average?start_time=23-08-12_14:54:58&stop_time=23-08-12_15:23:28 HTTP/1.1" 200 - 16:25:45 INFO 16:25:45 INFO [loop_until]: kubectl --namespace=xlou top pods 16:25:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:25:45 INFO [loop_until]: OK (rc = 0) 16:25:45 DEBUG --- stdout --- 16:25:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 10m 5801Mi am-55f77847b7-sgmd6 6m 5720Mi am-55f77847b7-wq5w5 6m 5721Mi ds-cts-0 8m 371Mi ds-cts-1 7m 375Mi 
ds-cts-2 8m 367Mi ds-idrepo-0 12m 13847Mi ds-idrepo-1 10m 13832Mi ds-idrepo-2 13m 13845Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 5m 4098Mi idm-65858d8c4c-gdv6b 5m 4166Mi lodemon-86d6dfd886-rxdp4 2m 65Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 1m 107Mi 16:25:45 DEBUG --- stderr --- 16:25:45 DEBUG 16:25:46 INFO 16:25:46 INFO [loop_until]: kubectl --namespace=xlou top node 16:25:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:25:46 INFO [loop_until]: OK (rc = 0) 16:25:46 DEBUG --- stdout --- 16:25:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1362Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 70m 0% 6823Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 6855Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 6882Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 5482Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 127m 0% 2163Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 63m 0% 5356Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 14421Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 58m 0% 14372Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 51m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 62m 0% 14383Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 69m 0% 1630Mi 2% 16:25:46 DEBUG --- stderr --- 16:25:46 DEBUG 16:26:45 INFO 16:26:45 INFO [loop_until]: kubectl --namespace=xlou top pods 16:26:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:26:45 INFO [loop_until]: OK (rc = 0) 16:26:45 DEBUG --- stdout --- 16:26:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 44m 5802Mi am-55f77847b7-sgmd6 43m 5722Mi am-55f77847b7-wq5w5 21m 5721Mi ds-cts-0 9m 371Mi ds-cts-1 13m 375Mi ds-cts-2 10m 367Mi ds-idrepo-0 3745m 13838Mi ds-idrepo-1 758m 13864Mi ds-idrepo-2 696m 13851Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 2507m 4129Mi idm-65858d8c4c-gdv6b 2113m 4181Mi lodemon-86d6dfd886-rxdp4 5m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 1072m 519Mi 16:26:45 DEBUG --- stderr --- 16:26:45 DEBUG 16:26:46 INFO 16:26:46 INFO [loop_until]: kubectl --namespace=xlou top node 16:26:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:26:46 INFO [loop_until]: OK (rc = 0) 16:26:46 DEBUG --- stdout --- 16:26:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1363Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 107m 0% 6823Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 97m 0% 6854Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 107m 0% 6881Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3127m 19% 5502Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 846m 5% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3485m 21% 5385Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 3609m 22% 14412Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 629m 3% 14398Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 767m 4% 14389Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1219m 7% 2052Mi 3% 16:26:46 DEBUG --- stderr --- 16:26:46 DEBUG 16:27:45 INFO 16:27:45 INFO [loop_until]: kubectl --namespace=xlou top pods 16:27:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:27:45 INFO [loop_until]: OK (rc = 0) 16:27:45 DEBUG --- stdout --- 16:27:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi 
am-55f77847b7-klhnq 69m 5801Mi am-55f77847b7-sgmd6 67m 5722Mi am-55f77847b7-wq5w5 67m 5721Mi ds-cts-0 7m 371Mi ds-cts-1 6m 375Mi ds-cts-2 8m 366Mi ds-idrepo-0 6104m 13833Mi ds-idrepo-1 1337m 13864Mi ds-idrepo-2 1537m 13847Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4637m 4144Mi idm-65858d8c4c-gdv6b 4968m 4206Mi lodemon-86d6dfd886-rxdp4 1m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 831m 533Mi 16:27:45 DEBUG --- stderr --- 16:27:45 DEBUG 16:27:46 INFO 16:27:46 INFO [loop_until]: kubectl --namespace=xlou top node 16:27:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:27:46 INFO [loop_until]: OK (rc = 0) 16:27:46 DEBUG --- stdout --- 16:27:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1371Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 130m 0% 6821Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 129m 0% 6855Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 125m 0% 6878Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5223m 32% 5521Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1494m 9% 2160Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4899m 30% 5402Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6369m 40% 14412Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1300m 8% 14404Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1821m 11% 14387Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 912m 5% 2051Mi 3% 16:27:46 DEBUG --- stderr --- 16:27:46 DEBUG 16:28:46 INFO 16:28:46 INFO [loop_until]: kubectl --namespace=xlou top pods 16:28:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:28:46 INFO [loop_until]: OK (rc = 0) 16:28:46 DEBUG --- stdout --- 16:28:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 75m 5802Mi am-55f77847b7-sgmd6 67m 5722Mi am-55f77847b7-wq5w5 72m 5721Mi ds-cts-0 8m 371Mi ds-cts-1 8m 375Mi ds-cts-2 9m 367Mi ds-idrepo-0 5904m 13852Mi ds-idrepo-1 1920m 13850Mi ds-idrepo-2 1387m 13855Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4736m 4158Mi idm-65858d8c4c-gdv6b 4753m 4220Mi lodemon-86d6dfd886-rxdp4 2m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 810m 535Mi 16:28:46 DEBUG --- stderr --- 16:28:46 DEBUG 16:28:46 INFO 16:28:46 INFO [loop_until]: kubectl --namespace=xlou top node 16:28:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:28:47 INFO [loop_until]: OK (rc = 0) 16:28:47 DEBUG --- stdout --- 16:28:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1360Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 128m 0% 6825Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 131m 0% 6857Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 126m 0% 6882Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5130m 32% 5536Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1448m 9% 2166Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4899m 30% 5410Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5570m 35% 14430Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2176m 13% 14377Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1469m 9% 14388Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 910m 5% 2057Mi 3% 16:28:47 DEBUG --- stderr --- 16:28:47 DEBUG 16:29:46 INFO 16:29:46 INFO [loop_until]: kubectl --namespace=xlou top pods 16:29:46 INFO [loop_until]: (max_time=180, interval=5, 
expected_rc=[0] 16:29:46 INFO [loop_until]: OK (rc = 0) 16:29:46 DEBUG --- stdout --- 16:29:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 72m 5801Mi am-55f77847b7-sgmd6 71m 5722Mi am-55f77847b7-wq5w5 65m 5722Mi ds-cts-0 9m 371Mi ds-cts-1 5m 375Mi ds-cts-2 6m 367Mi ds-idrepo-0 6838m 13822Mi ds-idrepo-1 1316m 13550Mi ds-idrepo-2 2228m 13797Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4652m 4174Mi idm-65858d8c4c-gdv6b 5007m 4236Mi lodemon-86d6dfd886-rxdp4 4m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 823m 540Mi 16:29:46 DEBUG --- stderr --- 16:29:46 DEBUG 16:29:47 INFO 16:29:47 INFO [loop_until]: kubectl --namespace=xlou top node 16:29:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:29:47 INFO [loop_until]: OK (rc = 0) 16:29:47 DEBUG --- stdout --- 16:29:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1360Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 132m 0% 6821Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 129m 0% 6854Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 127m 0% 6879Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5221m 32% 5555Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1494m 9% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4981m 31% 5430Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 7331m 46% 14404Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1794m 11% 14108Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2223m 13% 14339Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 852m 5% 2063Mi 3% 16:29:47 DEBUG --- stderr --- 16:29:47 DEBUG 16:30:46 INFO 16:30:46 INFO [loop_until]: kubectl --namespace=xlou top pods 16:30:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:30:46 INFO [loop_until]: OK (rc = 0) 16:30:46 DEBUG --- stdout --- 16:30:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 76m 5802Mi am-55f77847b7-sgmd6 68m 5722Mi am-55f77847b7-wq5w5 67m 5722Mi ds-cts-0 7m 372Mi ds-cts-1 5m 375Mi ds-cts-2 6m 368Mi ds-idrepo-0 5433m 13824Mi ds-idrepo-1 1480m 13371Mi ds-idrepo-2 1421m 13795Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4701m 4184Mi idm-65858d8c4c-gdv6b 5063m 4252Mi lodemon-86d6dfd886-rxdp4 5m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 817m 543Mi 16:30:46 DEBUG --- stderr --- 16:30:46 DEBUG 16:30:47 INFO 16:30:47 INFO [loop_until]: kubectl --namespace=xlou top node 16:30:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:30:47 INFO [loop_until]: OK (rc = 0) 16:30:47 DEBUG --- stdout --- 16:30:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1363Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 134m 0% 6824Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 128m 0% 6851Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 126m 0% 6882Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5208m 32% 5569Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1448m 9% 2159Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4881m 30% 5440Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5709m 35% 14408Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1497m 9% 13912Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1328m 8% 14322Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 907m 5% 2067Mi 3% 16:30:47 DEBUG --- 
stderr --- 16:30:47 DEBUG 16:31:46 INFO 16:31:46 INFO [loop_until]: kubectl --namespace=xlou top pods 16:31:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:31:46 INFO [loop_until]: OK (rc = 0) 16:31:46 DEBUG --- stdout --- 16:31:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 67m 5802Mi am-55f77847b7-sgmd6 67m 5722Mi am-55f77847b7-wq5w5 67m 5722Mi ds-cts-0 7m 371Mi ds-cts-1 5m 375Mi ds-cts-2 6m 367Mi ds-idrepo-0 6726m 13690Mi ds-idrepo-1 1465m 13456Mi ds-idrepo-2 1702m 13831Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4622m 4197Mi idm-65858d8c4c-gdv6b 5002m 4265Mi lodemon-86d6dfd886-rxdp4 6m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 787m 547Mi 16:31:46 DEBUG --- stderr --- 16:31:46 DEBUG 16:31:47 INFO 16:31:47 INFO [loop_until]: kubectl --namespace=xlou top node 16:31:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:31:47 INFO [loop_until]: OK (rc = 0) 16:31:47 DEBUG --- stdout --- 16:31:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1361Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 129m 0% 6825Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 128m 0% 6855Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 125m 0% 6882Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5008m 31% 5582Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1481m 9% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4904m 30% 5450Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6576m 41% 14285Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1383m 8% 13985Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1648m 10% 14394Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 868m 5% 2070Mi 3% 16:31:47 DEBUG --- stderr --- 16:31:47 DEBUG 16:32:46 INFO 16:32:46 INFO [loop_until]: kubectl --namespace=xlou top pods 16:32:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:32:46 INFO [loop_until]: OK (rc = 0) 16:32:46 DEBUG --- stdout --- 16:32:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 69m 5802Mi am-55f77847b7-sgmd6 65m 5722Mi am-55f77847b7-wq5w5 69m 5722Mi ds-cts-0 6m 372Mi ds-cts-1 7m 376Mi ds-cts-2 8m 367Mi ds-idrepo-0 5413m 13778Mi ds-idrepo-1 1192m 13556Mi ds-idrepo-2 1873m 13646Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4589m 4211Mi idm-65858d8c4c-gdv6b 4794m 4279Mi lodemon-86d6dfd886-rxdp4 2m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 804m 549Mi 16:32:46 DEBUG --- stderr --- 16:32:46 DEBUG 16:32:47 INFO 16:32:47 INFO [loop_until]: kubectl --namespace=xlou top node 16:32:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:32:47 INFO [loop_until]: OK (rc = 0) 16:32:47 DEBUG --- stdout --- 16:32:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1359Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 130m 0% 6822Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 127m 0% 6856Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 123m 0% 6883Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5132m 32% 5593Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1455m 9% 2161Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4551m 28% 5465Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5561m 34% 14371Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1327m 8% 14107Mi 24% 
gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1836m 11% 14197Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 851m 5% 2072Mi 3% 16:32:47 DEBUG --- stderr --- 16:32:47 DEBUG 16:33:46 INFO 16:33:46 INFO [loop_until]: kubectl --namespace=xlou top pods 16:33:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:33:46 INFO [loop_until]: OK (rc = 0) 16:33:46 DEBUG --- stdout --- 16:33:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 71m 5802Mi am-55f77847b7-sgmd6 66m 5722Mi am-55f77847b7-wq5w5 68m 5723Mi ds-cts-0 7m 372Mi ds-cts-1 7m 375Mi ds-cts-2 10m 367Mi ds-idrepo-0 6338m 13841Mi ds-idrepo-1 2088m 13488Mi ds-idrepo-2 1975m 13696Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4746m 4225Mi idm-65858d8c4c-gdv6b 4994m 4293Mi lodemon-86d6dfd886-rxdp4 6m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 799m 553Mi 16:33:46 DEBUG --- stderr --- 16:33:46 DEBUG 16:33:47 INFO 16:33:47 INFO [loop_until]: kubectl --namespace=xlou top node 16:33:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:33:47 INFO [loop_until]: OK (rc = 0) 16:33:47 DEBUG --- stdout --- 16:33:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1362Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 132m 0% 6827Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 126m 0% 6857Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 127m 0% 6883Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5257m 33% 5611Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1504m 9% 2162Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4893m 30% 5475Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6449m 40% 14402Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2038m 12% 13987Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1740m 10% 14225Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 882m 5% 2074Mi 3% 16:33:47 DEBUG --- stderr --- 16:33:47 DEBUG 16:34:46 INFO 16:34:46 INFO [loop_until]: kubectl --namespace=xlou top pods 16:34:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:34:46 INFO [loop_until]: OK (rc = 0) 16:34:46 DEBUG --- stdout --- 16:34:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 72m 5802Mi am-55f77847b7-sgmd6 70m 5724Mi am-55f77847b7-wq5w5 68m 5723Mi ds-cts-0 9m 371Mi ds-cts-1 9m 375Mi ds-cts-2 7m 368Mi ds-idrepo-0 5619m 13859Mi ds-idrepo-1 1332m 13500Mi ds-idrepo-2 1379m 13732Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4645m 4234Mi idm-65858d8c4c-gdv6b 5070m 4305Mi lodemon-86d6dfd886-rxdp4 4m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 766m 556Mi 16:34:46 DEBUG --- stderr --- 16:34:46 DEBUG 16:34:47 INFO 16:34:47 INFO [loop_until]: kubectl --namespace=xlou top node 16:34:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:34:47 INFO [loop_until]: OK (rc = 0) 16:34:47 DEBUG --- stdout --- 16:34:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1364Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 132m 0% 6826Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 129m 0% 6858Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 122m 0% 6885Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5084m 31% 5622Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1495m 9% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4833m 30% 5491Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 
1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5682m 35% 14450Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1326m 8% 14067Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1494m 9% 14301Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 860m 5% 2076Mi 3% 16:34:47 DEBUG --- stderr --- 16:34:47 DEBUG 16:35:46 INFO 16:35:46 INFO [loop_until]: kubectl --namespace=xlou top pods 16:35:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:35:46 INFO [loop_until]: OK (rc = 0) 16:35:46 DEBUG --- stdout --- 16:35:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 71m 5802Mi am-55f77847b7-sgmd6 65m 5724Mi am-55f77847b7-wq5w5 69m 5723Mi ds-cts-0 8m 372Mi ds-cts-1 5m 376Mi ds-cts-2 6m 367Mi ds-idrepo-0 6672m 13760Mi ds-idrepo-1 1434m 13348Mi ds-idrepo-2 1780m 13721Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4646m 4252Mi idm-65858d8c4c-gdv6b 4951m 4324Mi lodemon-86d6dfd886-rxdp4 2m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 812m 558Mi 16:35:46 DEBUG --- stderr --- 16:35:46 DEBUG 16:35:47 INFO 16:35:47 INFO [loop_until]: kubectl --namespace=xlou top node 16:35:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:35:47 INFO [loop_until]: OK (rc = 0) 16:35:47 DEBUG --- stdout --- 16:35:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1360Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 125m 0% 6825Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 130m 0% 6857Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 124m 0% 6885Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5226m 32% 5640Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1490m 9% 2162Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4686m 29% 5506Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6400m 40% 14356Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1324m 8% 13894Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1814m 11% 14286Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 877m 5% 2081Mi 3% 16:35:47 DEBUG --- stderr --- 16:35:47 DEBUG 16:36:46 INFO 16:36:46 INFO [loop_until]: kubectl --namespace=xlou top pods 16:36:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:36:46 INFO [loop_until]: OK (rc = 0) 16:36:46 DEBUG --- stdout --- 16:36:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 71m 5802Mi am-55f77847b7-sgmd6 70m 5724Mi am-55f77847b7-wq5w5 71m 5723Mi ds-cts-0 7m 371Mi ds-cts-1 5m 376Mi ds-cts-2 7m 368Mi ds-idrepo-0 6000m 13842Mi ds-idrepo-1 1419m 13205Mi ds-idrepo-2 1797m 13776Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4677m 4264Mi idm-65858d8c4c-gdv6b 4956m 4337Mi lodemon-86d6dfd886-rxdp4 7m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 830m 562Mi 16:36:46 DEBUG --- stderr --- 16:36:46 DEBUG 16:36:47 INFO 16:36:47 INFO [loop_until]: kubectl --namespace=xlou top node 16:36:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:36:47 INFO [loop_until]: OK (rc = 0) 16:36:47 DEBUG --- stdout --- 16:36:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1363Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 136m 0% 6822Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 130m 0% 6858Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 126m 0% 6887Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5154m 32% 5652Mi 9% 
gke-xlou-cdm-default-pool-f05840a3-h81k 1494m 9% 2160Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4995m 31% 5532Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6443m 40% 14432Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1405m 8% 13761Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1097Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1971m 12% 14184Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 895m 5% 2082Mi 3% 16:36:47 DEBUG --- stderr --- 16:36:47 DEBUG 16:37:46 INFO 16:37:46 INFO [loop_until]: kubectl --namespace=xlou top pods 16:37:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:37:47 INFO [loop_until]: OK (rc = 0) 16:37:47 DEBUG --- stdout --- 16:37:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 72m 5802Mi am-55f77847b7-sgmd6 69m 5724Mi am-55f77847b7-wq5w5 70m 5723Mi ds-cts-0 13m 371Mi ds-cts-1 5m 376Mi ds-cts-2 6m 367Mi ds-idrepo-0 5407m 13713Mi ds-idrepo-1 1310m 13326Mi ds-idrepo-2 1224m 13689Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4783m 4278Mi idm-65858d8c4c-gdv6b 4930m 4351Mi lodemon-86d6dfd886-rxdp4 4m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 852m 566Mi 16:37:47 DEBUG --- stderr --- 16:37:47 DEBUG 16:37:48 INFO 16:37:48 INFO [loop_until]: kubectl --namespace=xlou top node 16:37:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:37:48 INFO [loop_until]: OK (rc = 0) 16:37:48 DEBUG --- stdout --- 16:37:48 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 71m 0% 1363Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 133m 0% 6821Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 129m 0% 6854Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 126m 0% 6887Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5169m 32% 5667Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1500m 9% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4933m 31% 5535Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5601m 35% 14310Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1370m 8% 13883Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1179m 7% 14252Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 911m 5% 2086Mi 3% 16:37:48 DEBUG --- stderr --- 16:37:48 DEBUG 16:38:47 INFO 16:38:47 INFO [loop_until]: kubectl --namespace=xlou top pods 16:38:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:38:47 INFO [loop_until]: OK (rc = 0) 16:38:47 DEBUG --- stdout --- 16:38:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 73m 5802Mi am-55f77847b7-sgmd6 66m 5724Mi am-55f77847b7-wq5w5 69m 5723Mi ds-cts-0 7m 371Mi ds-cts-1 7m 376Mi ds-cts-2 10m 367Mi ds-idrepo-0 5929m 13748Mi ds-idrepo-1 1585m 13317Mi ds-idrepo-2 1751m 13800Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4624m 4292Mi idm-65858d8c4c-gdv6b 5010m 4364Mi lodemon-86d6dfd886-rxdp4 6m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 823m 569Mi 16:38:47 DEBUG --- stderr --- 16:38:47 DEBUG 16:38:48 INFO 16:38:48 INFO [loop_until]: kubectl --namespace=xlou top node 16:38:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:38:48 INFO [loop_until]: OK (rc = 0) 16:38:48 DEBUG --- stdout --- 16:38:48 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1363Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 131m 0% 6822Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-976h 129m 0% 6856Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 126m 0% 6885Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5177m 32% 5677Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1487m 9% 2158Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4989m 31% 5546Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5870m 36% 14354Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1674m 10% 13885Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1532m 9% 14361Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 875m 5% 2091Mi 3% 16:38:48 DEBUG --- stderr --- 16:38:48 DEBUG 16:39:47 INFO 16:39:47 INFO [loop_until]: kubectl --namespace=xlou top pods 16:39:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:39:47 INFO [loop_until]: OK (rc = 0) 16:39:47 DEBUG --- stdout --- 16:39:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 68m 5803Mi am-55f77847b7-sgmd6 69m 5724Mi am-55f77847b7-wq5w5 66m 5723Mi ds-cts-0 6m 371Mi ds-cts-1 8m 376Mi ds-cts-2 6m 367Mi ds-idrepo-0 5557m 13829Mi ds-idrepo-1 1468m 13257Mi ds-idrepo-2 1652m 13761Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4664m 4303Mi idm-65858d8c4c-gdv6b 4802m 4376Mi lodemon-86d6dfd886-rxdp4 6m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 784m 571Mi 16:39:47 DEBUG --- stderr --- 16:39:47 DEBUG 16:39:48 INFO 16:39:48 INFO [loop_until]: kubectl --namespace=xlou top node 16:39:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:39:48 INFO [loop_until]: OK (rc = 0) 16:39:48 DEBUG --- stdout --- 16:39:48 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1365Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 129m 0% 6824Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 127m 0% 6865Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 122m 0% 6886Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5142m 32% 5693Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1479m 9% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4857m 30% 5556Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6018m 37% 14437Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1580m 9% 13823Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1955m 12% 14336Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 881m 5% 2094Mi 3% 16:39:48 DEBUG --- stderr --- 16:39:48 DEBUG 16:40:47 INFO 16:40:47 INFO [loop_until]: kubectl --namespace=xlou top pods 16:40:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:40:47 INFO [loop_until]: OK (rc = 0) 16:40:47 DEBUG --- stdout --- 16:40:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 67m 5803Mi am-55f77847b7-sgmd6 69m 5724Mi am-55f77847b7-wq5w5 71m 5723Mi ds-cts-0 6m 372Mi ds-cts-1 8m 376Mi ds-cts-2 12m 365Mi ds-idrepo-0 5490m 13860Mi ds-idrepo-1 1354m 13310Mi ds-idrepo-2 1244m 13827Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4564m 4314Mi idm-65858d8c4c-gdv6b 4818m 4390Mi lodemon-86d6dfd886-rxdp4 1m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 789m 575Mi 16:40:47 DEBUG --- stderr --- 16:40:47 DEBUG 16:40:48 INFO 16:40:48 INFO [loop_until]: kubectl --namespace=xlou top node 16:40:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:40:48 INFO [loop_until]: OK (rc = 0) 16:40:48 DEBUG --- stdout --- 16:40:48 DEBUG 
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1364Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 129m 0% 6821Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 129m 0% 6855Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 126m 0% 6886Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5024m 31% 5703Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1469m 9% 2160Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4828m 30% 5566Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5310m 33% 14461Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1240m 7% 13895Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1095Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1357m 8% 14391Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 880m 5% 2096Mi 3% 16:40:48 DEBUG --- stderr --- 16:40:48 DEBUG 16:41:47 INFO 16:41:47 INFO [loop_until]: kubectl --namespace=xlou top pods 16:41:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:41:47 INFO [loop_until]: OK (rc = 0) 16:41:47 DEBUG --- stdout --- 16:41:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 78m 5803Mi am-55f77847b7-sgmd6 77m 5723Mi am-55f77847b7-wq5w5 70m 5723Mi ds-cts-0 6m 371Mi ds-cts-1 9m 376Mi ds-cts-2 6m 366Mi ds-idrepo-0 5589m 13859Mi ds-idrepo-1 1423m 13369Mi ds-idrepo-2 1815m 13856Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4700m 4325Mi idm-65858d8c4c-gdv6b 4870m 4404Mi lodemon-86d6dfd886-rxdp4 2m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 808m 578Mi 16:41:47 DEBUG --- stderr --- 16:41:47 DEBUG 16:41:48 INFO 16:41:48 INFO [loop_until]: kubectl --namespace=xlou top node 16:41:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:41:48 INFO [loop_until]: OK (rc = 0) 16:41:48 DEBUG --- stdout --- 16:41:48 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1366Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 133m 0% 6822Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 132m 0% 6856Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 124m 0% 6880Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5196m 32% 5716Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1491m 9% 2162Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4937m 31% 5581Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5906m 37% 14477Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1382m 8% 13933Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1689m 10% 14422Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 899m 5% 2100Mi 3% 16:41:48 DEBUG --- stderr --- 16:41:48 DEBUG 16:42:47 INFO 16:42:47 INFO [loop_until]: kubectl --namespace=xlou top pods 16:42:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:42:47 INFO [loop_until]: OK (rc = 0) 16:42:47 DEBUG --- stdout --- 16:42:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 72m 5803Mi am-55f77847b7-sgmd6 69m 5723Mi am-55f77847b7-wq5w5 66m 5723Mi ds-cts-0 7m 371Mi ds-cts-1 12m 376Mi ds-cts-2 6m 366Mi ds-idrepo-0 5689m 13861Mi ds-idrepo-1 1225m 13421Mi ds-idrepo-2 1341m 13857Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4699m 4338Mi idm-65858d8c4c-gdv6b 5100m 4418Mi lodemon-86d6dfd886-rxdp4 7m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 827m 581Mi 16:42:47 DEBUG --- stderr --- 16:42:47 DEBUG 16:42:48 INFO 16:42:48 INFO [loop_until]: kubectl --namespace=xlou top 
node 16:42:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:42:48 INFO [loop_until]: OK (rc = 0) 16:42:48 DEBUG --- stdout --- 16:42:48 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1361Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 132m 0% 6825Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 126m 0% 6856Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 127m 0% 6883Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5223m 32% 5729Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1492m 9% 2161Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4923m 30% 5591Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5671m 35% 14465Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1334m 8% 13990Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1346m 8% 14419Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 905m 5% 2102Mi 3% 16:42:48 DEBUG --- stderr --- 16:42:48 DEBUG 16:43:47 INFO 16:43:47 INFO [loop_until]: kubectl --namespace=xlou top pods 16:43:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:43:47 INFO [loop_until]: OK (rc = 0) 16:43:47 DEBUG --- stdout --- 16:43:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 66m 5803Mi am-55f77847b7-sgmd6 65m 5724Mi am-55f77847b7-wq5w5 67m 5723Mi ds-cts-0 10m 371Mi ds-cts-1 10m 376Mi ds-cts-2 6m 367Mi ds-idrepo-0 5889m 13860Mi ds-idrepo-1 1555m 13472Mi ds-idrepo-2 1741m 13857Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4607m 4352Mi idm-65858d8c4c-gdv6b 5149m 4427Mi lodemon-86d6dfd886-rxdp4 2m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 827m 585Mi 16:43:47 DEBUG --- stderr --- 16:43:47 DEBUG 16:43:48 INFO 16:43:48 INFO [loop_until]: kubectl --namespace=xlou top node 16:43:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:43:48 INFO [loop_until]: OK (rc = 0) 16:43:48 DEBUG --- stdout --- 16:43:48 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1361Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 127m 0% 6821Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 129m 0% 6855Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 120m 0% 6883Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5279m 33% 5742Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1514m 9% 2161Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4923m 30% 5607Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6079m 38% 14467Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1484m 9% 14046Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1485m 9% 14427Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 895m 5% 2106Mi 3% 16:43:48 DEBUG --- stderr --- 16:43:48 DEBUG 16:44:47 INFO 16:44:47 INFO [loop_until]: kubectl --namespace=xlou top pods 16:44:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:44:47 INFO [loop_until]: OK (rc = 0) 16:44:47 DEBUG --- stdout --- 16:44:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 69m 5803Mi am-55f77847b7-sgmd6 65m 5723Mi am-55f77847b7-wq5w5 76m 5723Mi ds-cts-0 6m 371Mi ds-cts-1 6m 376Mi ds-cts-2 6m 366Mi ds-idrepo-0 5346m 13861Mi ds-idrepo-1 1697m 13513Mi ds-idrepo-2 1783m 13825Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4637m 4361Mi idm-65858d8c4c-gdv6b 4875m 4444Mi lodemon-86d6dfd886-rxdp4 6m 66Mi login-ui-74d6fb46c-2hbvv 1m 
3Mi overseer-0-64c9959746-2jz9t 795m 588Mi 16:44:47 DEBUG --- stderr --- 16:44:47 DEBUG 16:44:48 INFO 16:44:48 INFO [loop_until]: kubectl --namespace=xlou top node 16:44:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:44:49 INFO [loop_until]: OK (rc = 0) 16:44:49 DEBUG --- stdout --- 16:44:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1362Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 127m 0% 6824Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 137m 0% 6859Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 125m 0% 6884Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4943m 31% 5754Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1498m 9% 2163Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4929m 31% 5621Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5563m 35% 14468Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1406m 8% 14094Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2018m 12% 14398Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 887m 5% 2107Mi 3% 16:44:49 DEBUG --- stderr --- 16:44:49 DEBUG 16:45:47 INFO 16:45:47 INFO [loop_until]: kubectl --namespace=xlou top pods 16:45:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:45:47 INFO [loop_until]: OK (rc = 0) 16:45:47 DEBUG --- stdout --- 16:45:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 73m 5803Mi am-55f77847b7-sgmd6 66m 5724Mi am-55f77847b7-wq5w5 69m 5724Mi ds-cts-0 6m 371Mi ds-cts-1 6m 376Mi ds-cts-2 9m 366Mi ds-idrepo-0 6424m 13851Mi ds-idrepo-1 1531m 13627Mi ds-idrepo-2 1285m 13848Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4693m 4383Mi idm-65858d8c4c-gdv6b 4790m 4458Mi lodemon-86d6dfd886-rxdp4 2m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 820m 591Mi 16:45:47 DEBUG --- stderr --- 16:45:47 DEBUG 16:45:49 INFO 16:45:49 INFO [loop_until]: kubectl --namespace=xlou top node 16:45:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:45:49 INFO [loop_until]: OK (rc = 0) 16:45:49 DEBUG --- stdout --- 16:45:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1361Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 130m 0% 6824Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 129m 0% 6854Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 120m 0% 6882Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5176m 32% 5773Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1449m 9% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4943m 31% 5638Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6531m 41% 14470Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1664m 10% 14206Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1351m 8% 14423Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 900m 5% 2111Mi 3% 16:45:49 DEBUG --- stderr --- 16:45:49 DEBUG 16:46:47 INFO 16:46:47 INFO [loop_until]: kubectl --namespace=xlou top pods 16:46:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:46:48 INFO [loop_until]: OK (rc = 0) 16:46:48 DEBUG --- stdout --- 16:46:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 76m 5803Mi am-55f77847b7-sgmd6 70m 5724Mi am-55f77847b7-wq5w5 68m 5724Mi ds-cts-0 7m 371Mi ds-cts-1 6m 376Mi ds-cts-2 5m 366Mi ds-idrepo-0 5404m 13861Mi ds-idrepo-1 1354m 13678Mi ds-idrepo-2 1474m 13853Mi 
end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4669m 4394Mi idm-65858d8c4c-gdv6b 4961m 4471Mi lodemon-86d6dfd886-rxdp4 7m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 809m 594Mi 16:46:48 DEBUG --- stderr --- 16:46:48 DEBUG 16:46:49 INFO 16:46:49 INFO [loop_until]: kubectl --namespace=xlou top node 16:46:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:46:49 INFO [loop_until]: OK (rc = 0) 16:46:49 DEBUG --- stdout --- 16:46:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1363Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 128m 0% 6826Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 123m 0% 6860Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 129m 0% 6882Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5240m 32% 5785Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1500m 9% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4738m 29% 5648Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5586m 35% 14471Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1386m 8% 14254Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1881m 11% 14431Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 869m 5% 2113Mi 3% 16:46:49 DEBUG --- stderr --- 16:46:49 DEBUG 16:47:48 INFO 16:47:48 INFO [loop_until]: kubectl --namespace=xlou top pods 16:47:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:47:48 INFO [loop_until]: OK (rc = 0) 16:47:48 DEBUG --- stdout --- 16:47:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 68m 5803Mi am-55f77847b7-sgmd6 64m 5724Mi am-55f77847b7-wq5w5 68m 5724Mi ds-cts-0 7m 371Mi ds-cts-1 5m 377Mi ds-cts-2 5m 365Mi ds-idrepo-0 5310m 13860Mi ds-idrepo-1 1323m 13710Mi ds-idrepo-2 1110m 13858Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4552m 4403Mi idm-65858d8c4c-gdv6b 4939m 4484Mi lodemon-86d6dfd886-rxdp4 6m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 743m 598Mi 16:47:48 DEBUG --- stderr --- 16:47:48 DEBUG 16:47:49 INFO 16:47:49 INFO [loop_until]: kubectl --namespace=xlou top node 16:47:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:47:49 INFO [loop_until]: OK (rc = 0) 16:47:49 DEBUG --- stdout --- 16:47:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1362Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 126m 0% 6823Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 129m 0% 6860Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 123m 0% 6883Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5192m 32% 5795Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1442m 9% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4840m 30% 5657Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5469m 34% 14477Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1326m 8% 14296Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1248m 7% 14430Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 829m 5% 2118Mi 3% 16:47:49 DEBUG --- stderr --- 16:47:49 DEBUG 16:48:48 INFO 16:48:48 INFO [loop_until]: kubectl --namespace=xlou top pods 16:48:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:48:48 INFO [loop_until]: OK (rc = 0) 16:48:48 DEBUG --- stdout --- 16:48:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 70m 5803Mi am-55f77847b7-sgmd6 66m 5724Mi 
am-55f77847b7-wq5w5 67m 5724Mi ds-cts-0 6m 371Mi ds-cts-1 5m 376Mi ds-cts-2 10m 365Mi ds-idrepo-0 5652m 13860Mi ds-idrepo-1 1460m 13751Mi ds-idrepo-2 1867m 13841Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4780m 4417Mi idm-65858d8c4c-gdv6b 4815m 4495Mi lodemon-86d6dfd886-rxdp4 5m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 756m 600Mi 16:48:48 DEBUG --- stderr --- 16:48:48 DEBUG 16:48:49 INFO 16:48:49 INFO [loop_until]: kubectl --namespace=xlou top node 16:48:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:48:49 INFO [loop_until]: OK (rc = 0) 16:48:49 DEBUG --- stdout --- 16:48:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1363Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 131m 0% 6825Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 126m 0% 6859Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 122m 0% 6886Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5020m 31% 5811Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1483m 9% 2162Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4938m 31% 5671Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5608m 35% 14476Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1534m 9% 14387Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1714m 10% 14422Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 817m 5% 2118Mi 3% 16:48:49 DEBUG --- stderr --- 16:48:49 DEBUG 16:49:48 INFO 16:49:48 INFO [loop_until]: kubectl --namespace=xlou top pods 16:49:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:49:48 INFO [loop_until]: OK (rc = 0) 16:49:48 DEBUG --- stdout --- 16:49:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 72m 5803Mi am-55f77847b7-sgmd6 70m 5724Mi am-55f77847b7-wq5w5 68m 5724Mi ds-cts-0 7m 371Mi ds-cts-1 5m 376Mi ds-cts-2 6m 365Mi ds-idrepo-0 5335m 13660Mi ds-idrepo-1 1208m 13847Mi ds-idrepo-2 1103m 13857Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4526m 4426Mi idm-65858d8c4c-gdv6b 4769m 4508Mi lodemon-86d6dfd886-rxdp4 7m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 763m 604Mi 16:49:48 DEBUG --- stderr --- 16:49:48 DEBUG 16:49:49 INFO 16:49:49 INFO [loop_until]: kubectl --namespace=xlou top node 16:49:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:49:49 INFO [loop_until]: OK (rc = 0) 16:49:49 DEBUG --- stdout --- 16:49:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1362Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 135m 0% 6824Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 135m 0% 6855Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 126m 0% 6885Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5096m 32% 5816Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1437m 9% 2158Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4864m 30% 5681Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5358m 33% 14270Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1318m 8% 14436Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1348m 8% 14430Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 826m 5% 2131Mi 3% 16:49:49 DEBUG --- stderr --- 16:49:49 DEBUG 16:50:48 INFO 16:50:48 INFO [loop_until]: kubectl --namespace=xlou top pods 16:50:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:50:48 INFO [loop_until]: OK (rc = 0) 16:50:48 
DEBUG --- stdout --- 16:50:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 70m 5803Mi am-55f77847b7-sgmd6 65m 5724Mi am-55f77847b7-wq5w5 63m 5724Mi ds-cts-0 6m 371Mi ds-cts-1 6m 376Mi ds-cts-2 7m 366Mi ds-idrepo-0 6135m 13722Mi ds-idrepo-1 1714m 13733Mi ds-idrepo-2 1421m 13788Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4486m 4439Mi idm-65858d8c4c-gdv6b 4856m 4523Mi lodemon-86d6dfd886-rxdp4 2m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 730m 607Mi 16:50:48 DEBUG --- stderr --- 16:50:48 DEBUG 16:50:49 INFO 16:50:49 INFO [loop_until]: kubectl --namespace=xlou top node 16:50:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:50:49 INFO [loop_until]: OK (rc = 0) 16:50:49 DEBUG --- stdout --- 16:50:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1364Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 130m 0% 6826Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 133m 0% 6859Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 123m 0% 6884Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5079m 31% 5848Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1472m 9% 2163Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4814m 30% 5693Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5993m 37% 14334Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1492m 9% 14328Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1766m 11% 14368Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 804m 5% 2124Mi 3% 16:50:49 DEBUG --- stderr --- 16:50:49 DEBUG 16:51:48 INFO 16:51:48 INFO [loop_until]: kubectl --namespace=xlou top pods 16:51:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:51:48 INFO [loop_until]: OK (rc = 0) 16:51:48 DEBUG --- stdout --- 16:51:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 67m 5803Mi am-55f77847b7-sgmd6 66m 5724Mi am-55f77847b7-wq5w5 73m 5724Mi ds-cts-0 6m 371Mi ds-cts-1 6m 376Mi ds-cts-2 6m 365Mi ds-idrepo-0 5419m 13775Mi ds-idrepo-1 1194m 13768Mi ds-idrepo-2 1159m 13822Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4650m 4452Mi idm-65858d8c4c-gdv6b 4906m 4538Mi lodemon-86d6dfd886-rxdp4 7m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 747m 610Mi 16:51:48 DEBUG --- stderr --- 16:51:48 DEBUG 16:51:49 INFO 16:51:49 INFO [loop_until]: kubectl --namespace=xlou top node 16:51:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:51:49 INFO [loop_until]: OK (rc = 0) 16:51:49 DEBUG --- stdout --- 16:51:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1364Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 127m 0% 6825Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 139m 0% 6857Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 127m 0% 6881Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5250m 33% 5846Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1436m 9% 2161Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4811m 30% 5705Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5628m 35% 14392Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1390m 8% 14365Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1109m 6% 14403Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 832m 5% 2129Mi 3% 16:51:49 DEBUG --- stderr --- 16:51:49 DEBUG 16:52:48 INFO 16:52:48 INFO 
[loop_until]: kubectl --namespace=xlou top pods 16:52:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:52:48 INFO [loop_until]: OK (rc = 0) 16:52:48 DEBUG --- stdout --- 16:52:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 71m 5803Mi am-55f77847b7-sgmd6 67m 5724Mi am-55f77847b7-wq5w5 68m 5724Mi ds-cts-0 20m 371Mi ds-cts-1 24m 377Mi ds-cts-2 5m 365Mi ds-idrepo-0 6001m 13835Mi ds-idrepo-1 1356m 13815Mi ds-idrepo-2 1662m 13836Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4451m 4464Mi idm-65858d8c4c-gdv6b 5026m 4547Mi lodemon-86d6dfd886-rxdp4 2m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 763m 613Mi 16:52:48 DEBUG --- stderr --- 16:52:48 DEBUG 16:52:49 INFO 16:52:49 INFO [loop_until]: kubectl --namespace=xlou top node 16:52:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:52:50 INFO [loop_until]: OK (rc = 0) 16:52:50 DEBUG --- stdout --- 16:52:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1362Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 129m 0% 6828Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 143m 0% 6861Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 124m 0% 6895Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5170m 32% 5863Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1478m 9% 2170Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4823m 30% 5719Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 68m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 64m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6132m 38% 14450Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1367m 8% 14405Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1793m 11% 14418Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 827m 5% 2131Mi 3% 16:52:50 DEBUG --- stderr --- 16:52:50 DEBUG 16:53:48 INFO 16:53:48 INFO [loop_until]: kubectl --namespace=xlou top pods 16:53:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:53:48 INFO [loop_until]: OK (rc = 0) 16:53:48 DEBUG --- stdout --- 16:53:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 71m 5803Mi am-55f77847b7-sgmd6 64m 5724Mi am-55f77847b7-wq5w5 70m 5724Mi ds-cts-0 6m 372Mi ds-cts-1 5m 377Mi ds-cts-2 6m 366Mi ds-idrepo-0 5652m 13861Mi ds-idrepo-1 1775m 13823Mi ds-idrepo-2 1838m 13857Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4670m 4476Mi idm-65858d8c4c-gdv6b 4894m 4562Mi lodemon-86d6dfd886-rxdp4 2m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 748m 615Mi 16:53:48 DEBUG --- stderr --- 16:53:48 DEBUG 16:53:50 INFO 16:53:50 INFO [loop_until]: kubectl --namespace=xlou top node 16:53:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:53:50 INFO [loop_until]: OK (rc = 0) 16:53:50 DEBUG --- stdout --- 16:53:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1364Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 129m 0% 6828Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 129m 0% 6861Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 125m 0% 6884Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5142m 32% 5875Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1443m 9% 2169Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4751m 29% 5733Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5788m 36% 14485Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1908m 12% 14421Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1097Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-x4wx 1988m 12% 14451Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 819m 5% 2133Mi 3% 16:53:50 DEBUG --- stderr --- 16:53:50 DEBUG 16:54:48 INFO 16:54:48 INFO [loop_until]: kubectl --namespace=xlou top pods 16:54:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:54:48 INFO [loop_until]: OK (rc = 0) 16:54:48 DEBUG --- stdout --- 16:54:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 72m 5803Mi am-55f77847b7-sgmd6 67m 5724Mi am-55f77847b7-wq5w5 69m 5724Mi ds-cts-0 6m 372Mi ds-cts-1 5m 377Mi ds-cts-2 6m 367Mi ds-idrepo-0 5466m 13820Mi ds-idrepo-1 1325m 13823Mi ds-idrepo-2 1389m 13801Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4473m 4489Mi idm-65858d8c4c-gdv6b 4945m 4577Mi lodemon-86d6dfd886-rxdp4 6m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 730m 616Mi 16:54:48 DEBUG --- stderr --- 16:54:48 DEBUG 16:54:50 INFO 16:54:50 INFO [loop_until]: kubectl --namespace=xlou top node 16:54:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:54:50 INFO [loop_until]: OK (rc = 0) 16:54:50 DEBUG --- stdout --- 16:54:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1364Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 132m 0% 6827Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 126m 0% 6860Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 127m 0% 6883Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 4939m 31% 5888Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1478m 9% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4843m 30% 5744Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5655m 35% 14455Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1389m 8% 14427Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1443m 9% 14397Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 815m 5% 2133Mi 3% 16:54:50 DEBUG --- stderr --- 16:54:50 DEBUG 16:55:48 INFO 16:55:48 INFO [loop_until]: kubectl --namespace=xlou top pods 16:55:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:55:48 INFO [loop_until]: OK (rc = 0) 16:55:48 DEBUG --- stdout --- 16:55:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 68m 5803Mi am-55f77847b7-sgmd6 67m 5724Mi am-55f77847b7-wq5w5 66m 5724Mi ds-cts-0 7m 372Mi ds-cts-1 15m 380Mi ds-cts-2 6m 367Mi ds-idrepo-0 5474m 13823Mi ds-idrepo-1 1802m 13808Mi ds-idrepo-2 1390m 13823Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 4574m 4502Mi idm-65858d8c4c-gdv6b 4955m 4588Mi lodemon-86d6dfd886-rxdp4 6m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 744m 616Mi 16:55:48 DEBUG --- stderr --- 16:55:48 DEBUG 16:55:50 INFO 16:55:50 INFO [loop_until]: kubectl --namespace=xlou top node 16:55:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:55:50 INFO [loop_until]: OK (rc = 0) 16:55:50 DEBUG --- stdout --- 16:55:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1362Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 129m 0% 6825Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 132m 0% 6858Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 123m 0% 6885Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5152m 32% 5904Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1431m 9% 2166Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 4895m 30% 5758Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 
1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5509m 34% 14455Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1567m 9% 14425Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1237m 7% 14425Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 814m 5% 2134Mi 3% 16:55:50 DEBUG --- stderr --- 16:55:50 DEBUG 16:56:48 INFO 16:56:48 INFO [loop_until]: kubectl --namespace=xlou top pods 16:56:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:56:49 INFO [loop_until]: OK (rc = 0) 16:56:49 DEBUG --- stdout --- 16:56:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 7m 5803Mi am-55f77847b7-sgmd6 7m 5724Mi am-55f77847b7-wq5w5 20m 5724Mi ds-cts-0 7m 372Mi ds-cts-1 9m 380Mi ds-cts-2 5m 366Mi ds-idrepo-0 147m 13803Mi ds-idrepo-1 313m 13822Mi ds-idrepo-2 406m 13789Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 67m 4507Mi idm-65858d8c4c-gdv6b 590m 4595Mi lodemon-86d6dfd886-rxdp4 2m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 161m 616Mi 16:56:49 DEBUG --- stderr --- 16:56:49 DEBUG 16:56:50 INFO 16:56:50 INFO [loop_until]: kubectl --namespace=xlou top node 16:56:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:56:50 INFO [loop_until]: OK (rc = 0) 16:56:50 DEBUG --- stdout --- 16:56:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1362Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 6825Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 6860Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 6885Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 77m 0% 5911Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 128m 0% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 67m 0% 5764Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 220m 1% 14442Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 362m 2% 14431Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 447m 2% 14403Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 195m 1% 1713Mi 2% 16:56:50 DEBUG --- stderr --- 16:56:50 DEBUG 16:57:49 INFO 16:57:49 INFO [loop_until]: kubectl --namespace=xlou top pods 16:57:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:57:49 INFO [loop_until]: OK (rc = 0) 16:57:49 DEBUG --- stdout --- 16:57:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 6m 5803Mi am-55f77847b7-sgmd6 6m 5724Mi am-55f77847b7-wq5w5 8m 5724Mi ds-cts-0 6m 372Mi ds-cts-1 6m 379Mi ds-cts-2 5m 367Mi ds-idrepo-0 11m 13803Mi ds-idrepo-1 8m 13823Mi ds-idrepo-2 12m 13788Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 8m 4506Mi idm-65858d8c4c-gdv6b 8m 4595Mi lodemon-86d6dfd886-rxdp4 6m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 1m 189Mi 16:57:49 DEBUG --- stderr --- 16:57:49 DEBUG 16:57:50 INFO 16:57:50 INFO [loop_until]: kubectl --namespace=xlou top node 16:57:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:57:50 INFO [loop_until]: OK (rc = 0) 16:57:50 DEBUG --- stdout --- 16:57:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1360Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 6822Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 6858Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 6886Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 77m 0% 5913Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 125m 0% 2164Mi 3% 
gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 5765Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 72m 0% 14442Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 55m 0% 14430Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 63m 0% 14398Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 66m 0% 1713Mi 2% 16:57:50 DEBUG --- stderr --- 16:57:50 DEBUG 127.0.0.1 - - [12/Aug/2023 16:58:00] "GET /monitoring/average?start_time=23-08-12_15:27:29&stop_time=23-08-12_15:55:59 HTTP/1.1" 200 - 16:58:49 INFO 16:58:49 INFO [loop_until]: kubectl --namespace=xlou top pods 16:58:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:58:49 INFO [loop_until]: OK (rc = 0) 16:58:49 DEBUG --- stdout --- 16:58:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 8m 5804Mi am-55f77847b7-sgmd6 7m 5724Mi am-55f77847b7-wq5w5 8m 5724Mi ds-cts-0 6m 372Mi ds-cts-1 5m 379Mi ds-cts-2 4m 367Mi ds-idrepo-0 13m 13802Mi ds-idrepo-1 9m 13822Mi ds-idrepo-2 11m 13791Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 8m 4506Mi idm-65858d8c4c-gdv6b 8m 4595Mi lodemon-86d6dfd886-rxdp4 3m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 1569m 377Mi 16:58:49 DEBUG --- stderr --- 16:58:49 DEBUG 16:58:50 INFO 16:58:50 INFO [loop_until]: kubectl --namespace=xlou top node 16:58:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:58:50 INFO [loop_until]: OK (rc = 0) 16:58:50 DEBUG --- stdout --- 16:58:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1361Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 6827Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 6858Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 6886Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 5914Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 130m 0% 2166Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 5765Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 14443Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 54m 0% 14431Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 49m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 64m 0% 14398Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1874m 11% 2014Mi 3% 16:58:50 DEBUG --- stderr --- 16:58:50 DEBUG 16:59:49 INFO 16:59:49 INFO [loop_until]: kubectl --namespace=xlou top pods 16:59:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:59:49 INFO [loop_until]: OK (rc = 0) 16:59:49 DEBUG --- stdout --- 16:59:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 80m 5803Mi am-55f77847b7-sgmd6 77m 5726Mi am-55f77847b7-wq5w5 81m 5725Mi ds-cts-0 6m 372Mi ds-cts-1 5m 380Mi ds-cts-2 6m 366Mi ds-idrepo-0 4352m 13810Mi ds-idrepo-1 3223m 13823Mi ds-idrepo-2 2789m 13810Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 2275m 4569Mi idm-65858d8c4c-gdv6b 2286m 4735Mi lodemon-86d6dfd886-rxdp4 6m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 704m 755Mi 16:59:49 DEBUG --- stderr --- 16:59:49 DEBUG 16:59:50 INFO 16:59:50 INFO [loop_until]: kubectl --namespace=xlou top node 16:59:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 16:59:50 INFO [loop_until]: OK (rc = 0) 16:59:50 DEBUG --- stdout --- 16:59:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1362Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 140m 
0% 6824Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 142m 0% 6856Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 139m 0% 6884Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2494m 15% 6047Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1137m 7% 2325Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2548m 16% 5851Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 4325m 27% 14468Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3917m 24% 14430Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2909m 18% 14449Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 774m 4% 2261Mi 3% 16:59:50 DEBUG --- stderr --- 16:59:50 DEBUG 17:00:49 INFO 17:00:49 INFO [loop_until]: kubectl --namespace=xlou top pods 17:00:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:00:49 INFO [loop_until]: OK (rc = 0) 17:00:49 DEBUG --- stdout --- 17:00:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 90m 5803Mi am-55f77847b7-sgmd6 90m 5726Mi am-55f77847b7-wq5w5 87m 5725Mi ds-cts-0 6m 372Mi ds-cts-1 6m 381Mi ds-cts-2 6m 367Mi ds-idrepo-0 5344m 13814Mi ds-idrepo-1 4166m 13838Mi ds-idrepo-2 3993m 13808Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 2560m 4641Mi idm-65858d8c4c-gdv6b 2542m 4803Mi lodemon-86d6dfd886-rxdp4 6m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 581m 854Mi 17:00:49 DEBUG --- stderr --- 17:00:49 DEBUG 17:00:50 INFO 17:00:50 INFO [loop_until]: kubectl --namespace=xlou top node 17:00:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:00:51 INFO [loop_until]: OK (rc = 0) 17:00:51 DEBUG --- stdout --- 17:00:51 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1364Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 154m 0% 6823Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 150m 0% 6860Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 153m 0% 6886Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2837m 17% 6117Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1213m 7% 2434Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 2777m 17% 5913Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5661m 35% 14469Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 4269m 26% 14433Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4023m 25% 14417Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 901m 5% 2415Mi 4% 17:00:51 DEBUG --- stderr --- 17:00:51 DEBUG 17:01:49 INFO 17:01:49 INFO [loop_until]: kubectl --namespace=xlou top pods 17:01:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:01:49 INFO [loop_until]: OK (rc = 0) 17:01:49 DEBUG --- stdout --- 17:01:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 94m 5804Mi am-55f77847b7-sgmd6 95m 5727Mi am-55f77847b7-wq5w5 92m 5725Mi ds-cts-0 7m 373Mi ds-cts-1 6m 380Mi ds-cts-2 6m 366Mi ds-idrepo-0 6251m 13823Mi ds-idrepo-1 4492m 13823Mi ds-idrepo-2 4935m 13807Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 2624m 4674Mi idm-65858d8c4c-gdv6b 2777m 4849Mi lodemon-86d6dfd886-rxdp4 2m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 577m 896Mi 17:01:49 DEBUG --- stderr --- 17:01:49 DEBUG 17:01:51 INFO 17:01:51 INFO [loop_until]: kubectl --namespace=xlou top node 17:01:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:01:51 INFO [loop_until]: OK (rc = 0) 17:01:51 DEBUG --- stdout 
--- 17:01:51 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1367Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 153m 0% 6827Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 152m 0% 6857Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 151m 0% 6885Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3031m 19% 6162Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1133m 7% 2177Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2840m 17% 5928Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6234m 39% 14473Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 4846m 30% 14438Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4739m 29% 14444Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 654m 4% 2412Mi 4% 17:01:51 DEBUG --- stderr --- 17:01:51 DEBUG 17:02:49 INFO 17:02:49 INFO [loop_until]: kubectl --namespace=xlou top pods 17:02:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:02:49 INFO [loop_until]: OK (rc = 0) 17:02:49 DEBUG --- stdout --- 17:02:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 97m 5803Mi am-55f77847b7-sgmd6 89m 5727Mi am-55f77847b7-wq5w5 97m 5725Mi ds-cts-0 6m 372Mi ds-cts-1 7m 376Mi ds-cts-2 7m 366Mi ds-idrepo-0 6625m 13823Mi ds-idrepo-1 5414m 13823Mi ds-idrepo-2 5325m 13818Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 2662m 4684Mi idm-65858d8c4c-gdv6b 2798m 4856Mi lodemon-86d6dfd886-rxdp4 7m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 559m 897Mi 17:02:49 DEBUG --- stderr --- 17:02:49 DEBUG 17:02:51 INFO 17:02:51 INFO [loop_until]: kubectl --namespace=xlou top node 17:02:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:02:51 INFO [loop_until]: OK (rc = 0) 17:02:51 DEBUG --- stdout --- 17:02:51 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1366Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 156m 0% 6826Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 158m 0% 6860Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 148m 0% 6886Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3004m 18% 6172Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1127m 7% 2172Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2854m 17% 5939Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 7027m 44% 14481Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5478m 34% 14463Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5425m 34% 14440Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 632m 3% 2407Mi 4% 17:02:51 DEBUG --- stderr --- 17:02:51 DEBUG 17:03:49 INFO 17:03:49 INFO [loop_until]: kubectl --namespace=xlou top pods 17:03:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:03:49 INFO [loop_until]: OK (rc = 0) 17:03:49 DEBUG --- stdout --- 17:03:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 95m 5803Mi am-55f77847b7-sgmd6 89m 5727Mi am-55f77847b7-wq5w5 95m 5727Mi ds-cts-0 6m 372Mi ds-cts-1 6m 377Mi ds-cts-2 6m 366Mi ds-idrepo-0 6444m 13823Mi ds-idrepo-1 3618m 13822Mi ds-idrepo-2 3781m 13812Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 2635m 4693Mi idm-65858d8c4c-gdv6b 2758m 4866Mi lodemon-86d6dfd886-rxdp4 8m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 559m 898Mi 17:03:49 DEBUG --- stderr --- 17:03:49 DEBUG 17:03:51 INFO 17:03:51 INFO [loop_until]: kubectl 
--namespace=xlou top node 17:03:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:03:51 INFO [loop_until]: OK (rc = 0) 17:03:51 DEBUG --- stdout --- 17:03:51 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1364Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 155m 0% 6827Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 156m 0% 6860Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 142m 0% 6886Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2985m 18% 6182Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1130m 7% 2170Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2728m 17% 5943Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6642m 41% 14466Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3881m 24% 14443Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4110m 25% 14403Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 620m 3% 2411Mi 4% 17:03:51 DEBUG --- stderr --- 17:03:51 DEBUG 17:04:49 INFO 17:04:49 INFO [loop_until]: kubectl --namespace=xlou top pods 17:04:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:04:49 INFO [loop_until]: OK (rc = 0) 17:04:49 DEBUG --- stdout --- 17:04:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 92m 5804Mi am-55f77847b7-sgmd6 96m 5728Mi am-55f77847b7-wq5w5 89m 5727Mi ds-cts-0 6m 372Mi ds-cts-1 5m 377Mi ds-cts-2 6m 366Mi ds-idrepo-0 5715m 13735Mi ds-idrepo-1 4317m 13793Mi ds-idrepo-2 4132m 13828Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 2570m 4700Mi idm-65858d8c4c-gdv6b 2717m 4875Mi lodemon-86d6dfd886-rxdp4 7m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 570m 898Mi 17:04:49 DEBUG --- stderr --- 17:04:49 DEBUG 17:04:51 INFO 17:04:51 INFO [loop_until]: kubectl --namespace=xlou top node 17:04:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:04:51 INFO [loop_until]: OK (rc = 0) 17:04:51 DEBUG --- stdout --- 17:04:51 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1363Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 149m 0% 6825Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 148m 0% 6859Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 155m 0% 6887Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2983m 18% 6188Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1133m 7% 2172Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2772m 17% 5955Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5890m 37% 14367Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3842m 24% 14443Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4466m 28% 14440Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 630m 3% 2412Mi 4% 17:04:51 DEBUG --- stderr --- 17:04:51 DEBUG 17:05:49 INFO 17:05:49 INFO [loop_until]: kubectl --namespace=xlou top pods 17:05:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:05:49 INFO [loop_until]: OK (rc = 0) 17:05:49 DEBUG --- stdout --- 17:05:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 93m 5804Mi am-55f77847b7-sgmd6 88m 5728Mi am-55f77847b7-wq5w5 92m 5727Mi ds-cts-0 6m 372Mi ds-cts-1 5m 377Mi ds-cts-2 6m 366Mi ds-idrepo-0 7157m 13866Mi ds-idrepo-1 3886m 13850Mi ds-idrepo-2 5264m 13869Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 2595m 4711Mi idm-65858d8c4c-gdv6b 2648m 4883Mi lodemon-86d6dfd886-rxdp4 6m 66Mi 
login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 556m 898Mi 17:05:49 DEBUG --- stderr --- 17:05:49 DEBUG 17:05:51 INFO 17:05:51 INFO [loop_until]: kubectl --namespace=xlou top node 17:05:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:05:51 INFO [loop_until]: OK (rc = 0) 17:05:51 DEBUG --- stdout --- 17:05:51 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1363Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 151m 0% 6822Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 153m 0% 6857Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 148m 0% 6890Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2930m 18% 6195Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1109m 6% 2175Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2826m 17% 5961Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 7275m 45% 14505Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 4190m 26% 14463Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5097m 32% 14359Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 623m 3% 2413Mi 4% 17:05:51 DEBUG --- stderr --- 17:05:51 DEBUG 17:06:49 INFO 17:06:49 INFO [loop_until]: kubectl --namespace=xlou top pods 17:06:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:06:49 INFO [loop_until]: OK (rc = 0) 17:06:49 DEBUG --- stdout --- 17:06:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 103m 5804Mi am-55f77847b7-sgmd6 92m 5728Mi am-55f77847b7-wq5w5 91m 5727Mi ds-cts-0 6m 372Mi ds-cts-1 5m 377Mi ds-cts-2 7m 366Mi ds-idrepo-0 5463m 13788Mi ds-idrepo-1 3681m 13826Mi ds-idrepo-2 3503m 13846Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 2494m 4717Mi idm-65858d8c4c-gdv6b 2709m 4892Mi lodemon-86d6dfd886-rxdp4 7m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 538m 899Mi 17:06:49 DEBUG --- stderr --- 17:06:49 DEBUG 17:06:51 INFO 17:06:51 INFO [loop_until]: kubectl --namespace=xlou top node 17:06:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:06:51 INFO [loop_until]: OK (rc = 0) 17:06:51 DEBUG --- stdout --- 17:06:51 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1363Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 154m 0% 6825Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 153m 0% 6863Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 149m 0% 6891Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2958m 18% 6207Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1106m 6% 2171Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2765m 17% 5970Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 52m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5680m 35% 14464Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3588m 22% 14443Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3429m 21% 14442Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 599m 3% 2413Mi 4% 17:06:51 DEBUG --- stderr --- 17:06:51 DEBUG 17:07:49 INFO 17:07:49 INFO [loop_until]: kubectl --namespace=xlou top pods 17:07:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:07:50 INFO [loop_until]: OK (rc = 0) 17:07:50 DEBUG --- stdout --- 17:07:50 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 92m 5805Mi am-55f77847b7-sgmd6 89m 5728Mi am-55f77847b7-wq5w5 89m 5727Mi ds-cts-0 7m 373Mi ds-cts-1 6m 377Mi ds-cts-2 7m 366Mi ds-idrepo-0 4650m 13823Mi ds-idrepo-1 4546m 
13822Mi ds-idrepo-2 3601m 13870Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 2605m 4727Mi idm-65858d8c4c-gdv6b 2969m 4902Mi lodemon-86d6dfd886-rxdp4 2m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 560m 901Mi 17:07:50 DEBUG --- stderr --- 17:07:50 DEBUG 17:07:51 INFO 17:07:51 INFO [loop_until]: kubectl --namespace=xlou top node 17:07:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:07:51 INFO [loop_until]: OK (rc = 0) 17:07:51 DEBUG --- stdout --- 17:07:51 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1362Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 150m 0% 6826Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 151m 0% 6860Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 146m 0% 6890Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3145m 19% 6214Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1110m 6% 2169Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2804m 17% 5977Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5098m 32% 14467Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 4263m 26% 14441Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3511m 22% 14435Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 643m 4% 2414Mi 4% 17:07:51 DEBUG --- stderr --- 17:07:51 DEBUG 17:08:50 INFO 17:08:50 INFO [loop_until]: kubectl --namespace=xlou top pods 17:08:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:08:50 INFO [loop_until]: OK (rc = 0) 17:08:50 DEBUG --- stdout --- 17:08:50 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 96m 5805Mi am-55f77847b7-sgmd6 90m 5728Mi am-55f77847b7-wq5w5 91m 5727Mi ds-cts-0 11m 373Mi ds-cts-1 6m 377Mi ds-cts-2 7m 366Mi ds-idrepo-0 5564m 13807Mi ds-idrepo-1 4946m 13661Mi ds-idrepo-2 3521m 13833Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 2623m 4735Mi idm-65858d8c4c-gdv6b 2788m 4909Mi lodemon-86d6dfd886-rxdp4 2m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 547m 902Mi 17:08:50 DEBUG --- stderr --- 17:08:50 DEBUG 17:08:51 INFO 17:08:51 INFO [loop_until]: kubectl --namespace=xlou top node 17:08:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:08:51 INFO [loop_until]: OK (rc = 0) 17:08:51 DEBUG --- stdout --- 17:08:51 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 70m 0% 1364Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 156m 0% 6822Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 152m 0% 6861Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 147m 0% 6890Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3014m 18% 6221Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1121m 7% 2163Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2846m 17% 5987Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5483m 34% 14466Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 4563m 28% 14469Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3760m 23% 14472Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 615m 3% 2416Mi 4% 17:08:51 DEBUG --- stderr --- 17:08:51 DEBUG 17:09:50 INFO 17:09:50 INFO [loop_until]: kubectl --namespace=xlou top pods 17:09:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:09:50 INFO [loop_until]: OK (rc = 0) 17:09:50 DEBUG --- stdout --- 17:09:50 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 99m 
5805Mi am-55f77847b7-sgmd6 92m 5728Mi am-55f77847b7-wq5w5 89m 5727Mi ds-cts-0 6m 374Mi ds-cts-1 6m 377Mi ds-cts-2 6m 366Mi ds-idrepo-0 5603m 13867Mi ds-idrepo-1 4230m 13862Mi ds-idrepo-2 3316m 13823Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 2633m 4745Mi idm-65858d8c4c-gdv6b 2775m 4920Mi lodemon-86d6dfd886-rxdp4 6m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 540m 902Mi 17:09:50 DEBUG --- stderr --- 17:09:50 DEBUG 17:09:51 INFO 17:09:51 INFO [loop_until]: kubectl --namespace=xlou top node 17:09:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:09:52 INFO [loop_until]: OK (rc = 0) 17:09:52 DEBUG --- stdout --- 17:09:52 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1363Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 160m 1% 6824Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 152m 0% 6862Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 150m 0% 6890Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2919m 18% 6233Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1117m 7% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2800m 17% 5996Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5606m 35% 14447Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 4257m 26% 14440Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3505m 22% 14433Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 607m 3% 2414Mi 4% 17:09:52 DEBUG --- stderr --- 17:09:52 DEBUG 17:10:50 INFO 17:10:50 INFO [loop_until]: kubectl --namespace=xlou top pods 17:10:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:10:50 INFO [loop_until]: OK (rc = 0) 17:10:50 DEBUG --- stdout --- 17:10:50 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 97m 5805Mi am-55f77847b7-sgmd6 85m 5728Mi am-55f77847b7-wq5w5 98m 5727Mi ds-cts-0 10m 372Mi ds-cts-1 5m 377Mi ds-cts-2 7m 366Mi ds-idrepo-0 5478m 13679Mi ds-idrepo-1 3605m 13833Mi ds-idrepo-2 4299m 13790Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 2624m 4755Mi idm-65858d8c4c-gdv6b 2610m 4928Mi lodemon-86d6dfd886-rxdp4 6m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 560m 902Mi 17:10:50 DEBUG --- stderr --- 17:10:50 DEBUG 17:10:52 INFO 17:10:52 INFO [loop_until]: kubectl --namespace=xlou top node 17:10:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:10:52 INFO [loop_until]: OK (rc = 0) 17:10:52 DEBUG --- stdout --- 17:10:52 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 71m 0% 1365Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 156m 0% 6826Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 152m 0% 6864Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 141m 0% 6889Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2924m 18% 6244Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1107m 6% 2173Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2739m 17% 6004Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 52m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5543m 34% 14356Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3540m 22% 14442Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4357m 27% 14438Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 618m 3% 2414Mi 4% 17:10:52 DEBUG --- stderr --- 17:10:52 DEBUG 17:11:50 INFO 17:11:50 INFO [loop_until]: kubectl --namespace=xlou top pods 17:11:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 
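The samples in this capture are collected by repeatedly shelling out to kubectl under a [loop_until] wrapper (max_time=180, interval=5, expected_rc=[0]) and reading the CPU(cores)/MEMORY(bytes) columns of `kubectl --namespace=xlou top pods`. A minimal Python sketch of that polling-and-parsing pattern follows; loop_until, parse_quantity and top_pods are illustrative names only, not the actual lodemon implementation.

    # Hypothetical sketch (not the actual lodemon code): retry a command until it
    # returns an expected rc, then parse the `kubectl top pods` columns seen above.
    import subprocess
    import time

    def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,)):
        """Re-run `cmd` every `interval` seconds until its return code is in
        `expected_rc` or `max_time` seconds elapse. Returns (rc, stdout)."""
        deadline = time.monotonic() + max_time
        while True:
            proc = subprocess.run(cmd, capture_output=True, text=True)
            if proc.returncode in expected_rc:
                return proc.returncode, proc.stdout
            if time.monotonic() >= deadline:
                raise TimeoutError(f"{' '.join(cmd)} did not succeed within {max_time}s")
            time.sleep(interval)

    def parse_quantity(value):
        """Convert kubectl quantities such as '71m' (millicores) or '5802Mi' (MiB)
        into numbers: CPU in millicores, memory in MiB."""
        if value.endswith("m"):
            return int(value[:-1])           # already millicores
        if value.endswith("Mi"):
            return int(value[:-2])           # mebibytes
        if value.endswith("Gi"):
            return int(value[:-2]) * 1024    # gibibytes -> MiB
        return int(value) * 1000             # whole cores -> millicores

    def top_pods(namespace="xlou"):
        """Take one sample, mirroring `kubectl --namespace=xlou top pods`."""
        _, out = loop_until(["kubectl", f"--namespace={namespace}", "top", "pods"])
        samples = {}
        for line in out.splitlines()[1:]:    # skip the NAME CPU(cores) MEMORY(bytes) header
            name, cpu, mem = line.split()
            samples[name] = (parse_quantity(cpu), parse_quantity(mem))
        return samples

    if __name__ == "__main__":
        for pod, (cpu_m, mem_mi) in sorted(top_pods().items()):
            print(f"{pod:40s} {cpu_m:6d}m {mem_mi:8d}Mi")

The averages over a sampling window are then served over HTTP: the access-log entry earlier in this capture (16:58:00) records a GET /monitoring/average request with start_time and stop_time parameters returning 200. A hypothetical client call is sketched below; the host and port are assumptions, while the endpoint path and the timestamp format match the logged request.

    # Hypothetical query of the monitor's averaging endpoint, assuming it listens
    # on localhost:8080; parameter values are the ones shown in the access log.
    import requests

    resp = requests.get(
        "http://localhost:8080/monitoring/average",
        params={"start_time": "23-08-12_15:27:29", "stop_time": "23-08-12_15:55:59"},
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.text)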
17:11:50 INFO [loop_until]: OK (rc = 0) 17:11:50 DEBUG --- stdout --- 17:11:50 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 92m 5805Mi am-55f77847b7-sgmd6 88m 5728Mi am-55f77847b7-wq5w5 91m 5727Mi ds-cts-0 8m 372Mi ds-cts-1 5m 377Mi ds-cts-2 11m 367Mi ds-idrepo-0 6487m 13854Mi ds-idrepo-1 3199m 13886Mi ds-idrepo-2 4656m 13841Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 2446m 4764Mi idm-65858d8c4c-gdv6b 2839m 4940Mi lodemon-86d6dfd886-rxdp4 7m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 556m 902Mi 17:11:50 DEBUG --- stderr --- 17:11:50 DEBUG 17:11:52 INFO 17:11:52 INFO [loop_until]: kubectl --namespace=xlou top node 17:11:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:11:52 INFO [loop_until]: OK (rc = 0) 17:11:52 DEBUG --- stdout --- 17:11:52 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1366Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 149m 0% 6826Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 150m 0% 6862Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 147m 0% 6889Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3050m 19% 6255Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1110m 6% 2170Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2737m 17% 6012Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6051m 38% 14479Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3456m 21% 14478Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4067m 25% 14455Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 617m 3% 2415Mi 4% 17:11:52 DEBUG --- stderr --- 17:11:52 DEBUG 17:12:50 INFO 17:12:50 INFO [loop_until]: kubectl --namespace=xlou top pods 17:12:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:12:50 INFO [loop_until]: OK (rc = 0) 17:12:50 DEBUG --- stdout --- 17:12:50 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 98m 5805Mi am-55f77847b7-sgmd6 93m 5728Mi am-55f77847b7-wq5w5 92m 5728Mi ds-cts-0 8m 372Mi ds-cts-1 5m 377Mi ds-cts-2 6m 367Mi ds-idrepo-0 5249m 13831Mi ds-idrepo-1 2722m 13822Mi ds-idrepo-2 3962m 13671Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 2610m 4771Mi idm-65858d8c4c-gdv6b 2733m 4947Mi lodemon-86d6dfd886-rxdp4 6m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 541m 902Mi 17:12:50 DEBUG --- stderr --- 17:12:50 DEBUG 17:12:52 INFO 17:12:52 INFO [loop_until]: kubectl --namespace=xlou top node 17:12:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:12:52 INFO [loop_until]: OK (rc = 0) 17:12:52 DEBUG --- stdout --- 17:12:52 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1364Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 161m 1% 6825Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 152m 0% 6863Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 149m 0% 6892Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2930m 18% 6264Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1115m 7% 2171Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2752m 17% 6019Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5771m 36% 14481Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3196m 20% 14441Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3644m 22% 14409Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 623m 3% 2416Mi 4% 17:12:52 DEBUG --- stderr 
--- 17:12:52 DEBUG 17:13:50 INFO 17:13:50 INFO [loop_until]: kubectl --namespace=xlou top pods 17:13:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:13:50 INFO [loop_until]: OK (rc = 0) 17:13:50 DEBUG --- stdout --- 17:13:50 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 91m 5805Mi am-55f77847b7-sgmd6 89m 5728Mi am-55f77847b7-wq5w5 94m 5728Mi ds-cts-0 7m 372Mi ds-cts-1 6m 377Mi ds-cts-2 10m 368Mi ds-idrepo-0 5267m 13768Mi ds-idrepo-1 2989m 13823Mi ds-idrepo-2 3359m 13811Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 2546m 4779Mi idm-65858d8c4c-gdv6b 2654m 4957Mi lodemon-86d6dfd886-rxdp4 2m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 543m 902Mi 17:13:50 DEBUG --- stderr --- 17:13:50 DEBUG 17:13:52 INFO 17:13:52 INFO [loop_until]: kubectl --namespace=xlou top node 17:13:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:13:52 INFO [loop_until]: OK (rc = 0) 17:13:52 DEBUG --- stdout --- 17:13:52 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1367Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 154m 0% 6828Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 150m 0% 6864Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 148m 0% 6888Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2883m 18% 6267Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1126m 7% 2172Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2686m 16% 6030Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5307m 33% 14432Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3398m 21% 14434Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4272m 26% 14423Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 611m 3% 2415Mi 4% 17:13:52 DEBUG --- stderr --- 17:13:52 DEBUG 17:14:50 INFO 17:14:50 INFO [loop_until]: kubectl --namespace=xlou top pods 17:14:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:14:50 INFO [loop_until]: OK (rc = 0) 17:14:50 DEBUG --- stdout --- 17:14:50 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 94m 5805Mi am-55f77847b7-sgmd6 93m 5728Mi am-55f77847b7-wq5w5 98m 5728Mi ds-cts-0 6m 373Mi ds-cts-1 6m 378Mi ds-cts-2 8m 367Mi ds-idrepo-0 6684m 13811Mi ds-idrepo-1 3805m 13829Mi ds-idrepo-2 3604m 13824Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 2589m 4785Mi idm-65858d8c4c-gdv6b 2755m 4966Mi lodemon-86d6dfd886-rxdp4 5m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 525m 902Mi 17:14:50 DEBUG --- stderr --- 17:14:50 DEBUG 17:14:52 INFO 17:14:52 INFO [loop_until]: kubectl --namespace=xlou top node 17:14:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:14:52 INFO [loop_until]: OK (rc = 0) 17:14:52 DEBUG --- stdout --- 17:14:52 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1364Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 158m 0% 6827Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 159m 1% 6863Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 144m 0% 6888Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2975m 18% 6275Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1120m 7% 2167Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2749m 17% 6041Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6610m 41% 14466Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3882m 24% 14437Mi 24% 
gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3042m 19% 14489Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 610m 3% 2414Mi 4% 17:14:52 DEBUG --- stderr --- 17:14:52 DEBUG 17:15:50 INFO 17:15:50 INFO [loop_until]: kubectl --namespace=xlou top pods 17:15:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:15:50 INFO [loop_until]: OK (rc = 0) 17:15:50 DEBUG --- stdout --- 17:15:50 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 97m 5805Mi am-55f77847b7-sgmd6 94m 5728Mi am-55f77847b7-wq5w5 90m 5728Mi ds-cts-0 6m 372Mi ds-cts-1 6m 377Mi ds-cts-2 8m 367Mi ds-idrepo-0 7460m 13681Mi ds-idrepo-1 3857m 13809Mi ds-idrepo-2 5210m 13826Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 2566m 4798Mi idm-65858d8c4c-gdv6b 2755m 4976Mi lodemon-86d6dfd886-rxdp4 6m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 545m 903Mi 17:15:50 DEBUG --- stderr --- 17:15:50 DEBUG 17:15:52 INFO 17:15:52 INFO [loop_until]: kubectl --namespace=xlou top node 17:15:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:15:52 INFO [loop_until]: OK (rc = 0) 17:15:52 DEBUG --- stdout --- 17:15:52 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1365Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 155m 0% 6824Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 155m 0% 6859Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 148m 0% 6889Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2959m 18% 6289Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1124m 7% 2168Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2853m 17% 6051Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6725m 42% 14312Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3811m 23% 14426Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5080m 31% 14439Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 606m 3% 2414Mi 4% 17:15:52 DEBUG --- stderr --- 17:15:52 DEBUG 17:16:50 INFO 17:16:50 INFO [loop_until]: kubectl --namespace=xlou top pods 17:16:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:16:51 INFO [loop_until]: OK (rc = 0) 17:16:51 DEBUG --- stdout --- 17:16:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 90m 5805Mi am-55f77847b7-sgmd6 88m 5729Mi am-55f77847b7-wq5w5 94m 5728Mi ds-cts-0 5m 374Mi ds-cts-1 5m 378Mi ds-cts-2 6m 367Mi ds-idrepo-0 6076m 13825Mi ds-idrepo-1 2783m 13829Mi ds-idrepo-2 3261m 13768Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 2570m 4807Mi idm-65858d8c4c-gdv6b 2640m 4982Mi lodemon-86d6dfd886-rxdp4 7m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 534m 903Mi 17:16:51 DEBUG --- stderr --- 17:16:51 DEBUG 17:16:52 INFO 17:16:52 INFO [loop_until]: kubectl --namespace=xlou top node 17:16:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:16:52 INFO [loop_until]: OK (rc = 0) 17:16:52 DEBUG --- stdout --- 17:16:52 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1365Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 150m 0% 6825Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 152m 0% 6861Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 146m 0% 6890Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2771m 17% 6296Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1120m 7% 2171Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2818m 17% 6058Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 
0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6346m 39% 14464Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2959m 18% 14451Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3363m 21% 14430Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 603m 3% 2415Mi 4% 17:16:52 DEBUG --- stderr --- 17:16:52 DEBUG 17:17:51 INFO 17:17:51 INFO [loop_until]: kubectl --namespace=xlou top pods 17:17:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:17:51 INFO [loop_until]: OK (rc = 0) 17:17:51 DEBUG --- stdout --- 17:17:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 94m 5805Mi am-55f77847b7-sgmd6 89m 5730Mi am-55f77847b7-wq5w5 94m 5728Mi ds-cts-0 6m 373Mi ds-cts-1 5m 379Mi ds-cts-2 10m 369Mi ds-idrepo-0 5757m 13826Mi ds-idrepo-1 2939m 13820Mi ds-idrepo-2 3805m 13796Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 2627m 4815Mi idm-65858d8c4c-gdv6b 2735m 4994Mi lodemon-86d6dfd886-rxdp4 7m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 534m 903Mi 17:17:51 DEBUG --- stderr --- 17:17:51 DEBUG 17:17:53 INFO 17:17:53 INFO [loop_until]: kubectl --namespace=xlou top node 17:17:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:17:53 INFO [loop_until]: OK (rc = 0) 17:17:53 DEBUG --- stdout --- 17:17:53 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1362Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 155m 0% 6827Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 154m 0% 6861Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 151m 0% 6891Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2906m 18% 6310Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1140m 7% 2170Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2802m 17% 6063Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5812m 36% 14471Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2855m 17% 14441Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3843m 24% 14444Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 591m 3% 2415Mi 4% 17:17:53 DEBUG --- stderr --- 17:17:53 DEBUG 17:18:51 INFO 17:18:51 INFO [loop_until]: kubectl --namespace=xlou top pods 17:18:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:18:51 INFO [loop_until]: OK (rc = 0) 17:18:51 DEBUG --- stdout --- 17:18:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 97m 5805Mi am-55f77847b7-sgmd6 92m 5730Mi am-55f77847b7-wq5w5 91m 5728Mi ds-cts-0 5m 373Mi ds-cts-1 5m 378Mi ds-cts-2 6m 369Mi ds-idrepo-0 5705m 13825Mi ds-idrepo-1 3258m 13840Mi ds-idrepo-2 4895m 13794Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 2692m 4821Mi idm-65858d8c4c-gdv6b 2671m 5000Mi lodemon-86d6dfd886-rxdp4 2m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 549m 904Mi 17:18:51 DEBUG --- stderr --- 17:18:51 DEBUG 17:18:53 INFO 17:18:53 INFO [loop_until]: kubectl --namespace=xlou top node 17:18:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:18:53 INFO [loop_until]: OK (rc = 0) 17:18:53 DEBUG --- stdout --- 17:18:53 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1365Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 152m 0% 6826Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 144m 0% 6860Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 150m 0% 6892Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2955m 18% 
6312Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1082m 6% 2170Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2760m 17% 6074Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 52m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6029m 37% 14462Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3287m 20% 14471Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4973m 31% 14423Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 600m 3% 2415Mi 4% 17:18:53 DEBUG --- stderr --- 17:18:53 DEBUG 17:19:51 INFO 17:19:51 INFO [loop_until]: kubectl --namespace=xlou top pods 17:19:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:19:51 INFO [loop_until]: OK (rc = 0) 17:19:51 DEBUG --- stdout --- 17:19:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 93m 5805Mi am-55f77847b7-sgmd6 91m 5730Mi am-55f77847b7-wq5w5 89m 5728Mi ds-cts-0 6m 373Mi ds-cts-1 6m 378Mi ds-cts-2 6m 369Mi ds-idrepo-0 5031m 13678Mi ds-idrepo-1 3315m 13591Mi ds-idrepo-2 4320m 13794Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 2572m 4832Mi idm-65858d8c4c-gdv6b 2657m 5012Mi lodemon-86d6dfd886-rxdp4 6m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 540m 904Mi 17:19:51 DEBUG --- stderr --- 17:19:51 DEBUG 17:19:53 INFO 17:19:53 INFO [loop_until]: kubectl --namespace=xlou top node 17:19:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:19:53 INFO [loop_until]: OK (rc = 0) 17:19:53 DEBUG --- stdout --- 17:19:53 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 83m 0% 1366Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 158m 0% 6822Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 153m 0% 6863Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 147m 0% 6892Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2831m 17% 6328Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1108m 6% 2175Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2799m 17% 6084Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5658m 35% 14389Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3289m 20% 14232Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4598m 28% 14413Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 607m 3% 2415Mi 4% 17:19:53 DEBUG --- stderr --- 17:19:53 DEBUG 17:20:51 INFO 17:20:51 INFO [loop_until]: kubectl --namespace=xlou top pods 17:20:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:20:51 INFO [loop_until]: OK (rc = 0) 17:20:51 DEBUG --- stdout --- 17:20:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 102m 5806Mi am-55f77847b7-sgmd6 100m 5731Mi am-55f77847b7-wq5w5 92m 5728Mi ds-cts-0 6m 372Mi ds-cts-1 6m 377Mi ds-cts-2 6m 369Mi ds-idrepo-0 4982m 13823Mi ds-idrepo-1 3233m 13822Mi ds-idrepo-2 4213m 13785Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 2544m 4840Mi idm-65858d8c4c-gdv6b 2850m 5019Mi lodemon-86d6dfd886-rxdp4 4m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 543m 904Mi 17:20:51 DEBUG --- stderr --- 17:20:51 DEBUG 17:20:53 INFO 17:20:53 INFO [loop_until]: kubectl --namespace=xlou top node 17:20:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:20:53 INFO [loop_until]: OK (rc = 0) 17:20:53 DEBUG --- stdout --- 17:20:53 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1364Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 162m 1% 6825Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-976h 153m 0% 6864Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 158m 0% 6894Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2980m 18% 6335Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1078m 6% 2173Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2669m 16% 6092Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 4921m 30% 14466Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3237m 20% 14428Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4312m 27% 14445Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 612m 3% 2413Mi 4% 17:20:53 DEBUG --- stderr --- 17:20:53 DEBUG 17:21:51 INFO 17:21:51 INFO [loop_until]: kubectl --namespace=xlou top pods 17:21:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:21:51 INFO [loop_until]: OK (rc = 0) 17:21:51 DEBUG --- stdout --- 17:21:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 95m 5806Mi am-55f77847b7-sgmd6 93m 5730Mi am-55f77847b7-wq5w5 89m 5728Mi ds-cts-0 6m 372Mi ds-cts-1 6m 377Mi ds-cts-2 6m 369Mi ds-idrepo-0 4717m 13850Mi ds-idrepo-1 3190m 13855Mi ds-idrepo-2 3387m 13828Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 2533m 4850Mi idm-65858d8c4c-gdv6b 2737m 5029Mi lodemon-86d6dfd886-rxdp4 1m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 524m 904Mi 17:21:51 DEBUG --- stderr --- 17:21:51 DEBUG 17:21:53 INFO 17:21:53 INFO [loop_until]: kubectl --namespace=xlou top node 17:21:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:21:53 INFO [loop_until]: OK (rc = 0) 17:21:53 DEBUG --- stdout --- 17:21:53 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1363Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 156m 0% 6828Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 153m 0% 6864Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 144m 0% 6892Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2925m 18% 6347Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1098m 6% 2171Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2750m 17% 6099Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 4904m 30% 14519Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2956m 18% 14467Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3559m 22% 14448Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 583m 3% 2417Mi 4% 17:21:53 DEBUG --- stderr --- 17:21:53 DEBUG 17:22:51 INFO 17:22:51 INFO [loop_until]: kubectl --namespace=xlou top pods 17:22:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:22:51 INFO [loop_until]: OK (rc = 0) 17:22:51 DEBUG --- stdout --- 17:22:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 90m 5806Mi am-55f77847b7-sgmd6 89m 5731Mi am-55f77847b7-wq5w5 95m 5728Mi ds-cts-0 6m 373Mi ds-cts-1 5m 377Mi ds-cts-2 6m 369Mi ds-idrepo-0 6434m 13834Mi ds-idrepo-1 5258m 13826Mi ds-idrepo-2 5711m 13850Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 2657m 4859Mi idm-65858d8c4c-gdv6b 2727m 5038Mi lodemon-86d6dfd886-rxdp4 7m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 541m 904Mi 17:22:51 DEBUG --- stderr --- 17:22:51 DEBUG 17:22:53 INFO 17:22:53 INFO [loop_until]: kubectl --namespace=xlou top node 17:22:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:22:53 INFO [loop_until]: OK (rc = 0) 17:22:53 DEBUG --- stdout --- 17:22:53 
DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1364Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 152m 0% 6828Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 149m 0% 6864Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 148m 0% 6892Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2905m 18% 6348Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1072m 6% 2173Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2782m 17% 6110Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 7308m 45% 14490Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 4840m 30% 14447Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5573m 35% 14333Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 614m 3% 2415Mi 4% 17:22:53 DEBUG --- stderr --- 17:22:53 DEBUG 17:23:51 INFO 17:23:51 INFO [loop_until]: kubectl --namespace=xlou top pods 17:23:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:23:51 INFO [loop_until]: OK (rc = 0) 17:23:51 DEBUG --- stdout --- 17:23:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 95m 5806Mi am-55f77847b7-sgmd6 95m 5731Mi am-55f77847b7-wq5w5 98m 5728Mi ds-cts-0 6m 373Mi ds-cts-1 7m 378Mi ds-cts-2 6m 369Mi ds-idrepo-0 4958m 13753Mi ds-idrepo-1 2987m 13823Mi ds-idrepo-2 3215m 13739Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 2625m 4869Mi idm-65858d8c4c-gdv6b 2661m 5051Mi lodemon-86d6dfd886-rxdp4 7m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 525m 905Mi 17:23:51 DEBUG --- stderr --- 17:23:51 DEBUG 17:23:53 INFO 17:23:53 INFO [loop_until]: kubectl --namespace=xlou top node 17:23:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:23:53 INFO [loop_until]: OK (rc = 0) 17:23:53 DEBUG --- stdout --- 17:23:53 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1366Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 153m 0% 6829Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 162m 1% 6864Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 157m 0% 6893Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2936m 18% 6362Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1113m 7% 2172Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2704m 17% 6121Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 4889m 30% 14444Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3053m 19% 14449Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3073m 19% 14427Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 606m 3% 2418Mi 4% 17:23:53 DEBUG --- stderr --- 17:23:53 DEBUG 17:24:51 INFO 17:24:51 INFO [loop_until]: kubectl --namespace=xlou top pods 17:24:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:24:51 INFO [loop_until]: OK (rc = 0) 17:24:51 DEBUG --- stdout --- 17:24:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 94m 5806Mi am-55f77847b7-sgmd6 94m 5732Mi am-55f77847b7-wq5w5 89m 5729Mi ds-cts-0 6m 374Mi ds-cts-1 5m 378Mi ds-cts-2 6m 369Mi ds-idrepo-0 6806m 13811Mi ds-idrepo-1 3147m 13853Mi ds-idrepo-2 4943m 13822Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 2492m 4878Mi idm-65858d8c4c-gdv6b 2765m 5062Mi lodemon-86d6dfd886-rxdp4 6m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 526m 905Mi 17:24:51 DEBUG --- stderr --- 17:24:51 DEBUG 17:24:53 INFO 17:24:53 INFO [loop_until]: kubectl 
--namespace=xlou top node 17:24:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:24:53 INFO [loop_until]: OK (rc = 0) 17:24:53 DEBUG --- stdout --- 17:24:53 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1366Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 158m 0% 6830Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 149m 0% 6860Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 147m 0% 6892Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2964m 18% 6372Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1071m 6% 2173Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2726m 17% 6129Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 7126m 44% 14447Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 4052m 25% 14359Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4620m 29% 14453Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 576m 3% 2415Mi 4% 17:24:53 DEBUG --- stderr --- 17:24:53 DEBUG 17:25:51 INFO 17:25:51 INFO [loop_until]: kubectl --namespace=xlou top pods 17:25:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:25:51 INFO [loop_until]: OK (rc = 0) 17:25:51 DEBUG --- stdout --- 17:25:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 93m 5807Mi am-55f77847b7-sgmd6 88m 5731Mi am-55f77847b7-wq5w5 91m 5729Mi ds-cts-0 5m 373Mi ds-cts-1 5m 377Mi ds-cts-2 6m 370Mi ds-idrepo-0 5814m 13743Mi ds-idrepo-1 3011m 13825Mi ds-idrepo-2 4007m 13787Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 2642m 4886Mi idm-65858d8c4c-gdv6b 2781m 5069Mi lodemon-86d6dfd886-rxdp4 2m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 544m 906Mi 17:25:51 DEBUG --- stderr --- 17:25:51 DEBUG 17:25:53 INFO 17:25:53 INFO [loop_until]: kubectl --namespace=xlou top node 17:25:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:25:54 INFO [loop_until]: OK (rc = 0) 17:25:54 DEBUG --- stdout --- 17:25:54 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1368Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 154m 0% 6830Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 154m 0% 6864Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 146m 0% 6894Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2824m 17% 6381Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1120m 7% 2173Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2814m 17% 6136Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5678m 35% 14390Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3046m 19% 14452Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3908m 24% 14383Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 607m 3% 2417Mi 4% 17:25:54 DEBUG --- stderr --- 17:25:54 DEBUG 17:26:52 INFO 17:26:52 INFO [loop_until]: kubectl --namespace=xlou top pods 17:26:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:26:52 INFO [loop_until]: OK (rc = 0) 17:26:52 DEBUG --- stdout --- 17:26:52 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 94m 5807Mi am-55f77847b7-sgmd6 93m 5732Mi am-55f77847b7-wq5w5 95m 5729Mi ds-cts-0 6m 373Mi ds-cts-1 6m 378Mi ds-cts-2 6m 369Mi ds-idrepo-0 6966m 13717Mi ds-idrepo-1 3429m 13805Mi ds-idrepo-2 4549m 13829Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 2607m 4897Mi idm-65858d8c4c-gdv6b 2846m 5078Mi lodemon-86d6dfd886-rxdp4 7m 66Mi 
login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 520m 906Mi 17:26:52 DEBUG --- stderr --- 17:26:52 DEBUG 17:26:54 INFO 17:26:54 INFO [loop_until]: kubectl --namespace=xlou top node 17:26:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:26:54 INFO [loop_until]: OK (rc = 0) 17:26:54 DEBUG --- stdout --- 17:26:54 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1367Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 153m 0% 6831Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 153m 0% 6865Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 151m 0% 6894Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3003m 18% 6390Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1114m 7% 2180Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2767m 17% 6147Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6855m 43% 14387Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 4502m 28% 14413Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4688m 29% 14478Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 595m 3% 2417Mi 4% 17:26:54 DEBUG --- stderr --- 17:26:54 DEBUG 17:27:52 INFO 17:27:52 INFO [loop_until]: kubectl --namespace=xlou top pods 17:27:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:27:52 INFO [loop_until]: OK (rc = 0) 17:27:52 DEBUG --- stdout --- 17:27:52 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 101m 5807Mi am-55f77847b7-sgmd6 92m 5732Mi am-55f77847b7-wq5w5 90m 5729Mi ds-cts-0 7m 373Mi ds-cts-1 6m 378Mi ds-cts-2 7m 370Mi ds-idrepo-0 6025m 13815Mi ds-idrepo-1 3966m 13810Mi ds-idrepo-2 3822m 13760Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 2622m 4905Mi idm-65858d8c4c-gdv6b 2729m 5088Mi lodemon-86d6dfd886-rxdp4 7m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 532m 906Mi 17:27:52 DEBUG --- stderr --- 17:27:52 DEBUG 17:27:54 INFO 17:27:54 INFO [loop_until]: kubectl --namespace=xlou top node 17:27:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:27:54 INFO [loop_until]: OK (rc = 0) 17:27:54 DEBUG --- stdout --- 17:27:54 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1366Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 161m 1% 6831Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 151m 0% 6863Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 151m 0% 6894Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2909m 18% 6403Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1122m 7% 2168Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2849m 17% 6159Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6029m 37% 14468Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 4492m 28% 14466Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4475m 28% 14374Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 590m 3% 2417Mi 4% 17:27:54 DEBUG --- stderr --- 17:27:54 DEBUG 17:28:52 INFO 17:28:52 INFO [loop_until]: kubectl --namespace=xlou top pods 17:28:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:28:52 INFO [loop_until]: OK (rc = 0) 17:28:52 DEBUG --- stdout --- 17:28:52 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 92m 5807Mi am-55f77847b7-sgmd6 77m 5732Mi am-55f77847b7-wq5w5 88m 5729Mi ds-cts-0 6m 372Mi ds-cts-1 5m 379Mi ds-cts-2 7m 369Mi ds-idrepo-0 5642m 13848Mi ds-idrepo-1 3658m 
13788Mi ds-idrepo-2 4175m 13831Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 2325m 4911Mi idm-65858d8c4c-gdv6b 2244m 5096Mi lodemon-86d6dfd886-rxdp4 7m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 545m 906Mi 17:28:52 DEBUG --- stderr --- 17:28:52 DEBUG 17:28:54 INFO 17:28:54 INFO [loop_until]: kubectl --namespace=xlou top node 17:28:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:28:54 INFO [loop_until]: OK (rc = 0) 17:28:54 DEBUG --- stdout --- 17:28:54 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1367Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 144m 0% 6831Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 129m 0% 6861Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 148m 0% 6895Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2506m 15% 6407Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 945m 5% 2170Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2268m 14% 6162Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6337m 39% 14491Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3486m 21% 14373Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4118m 25% 14455Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 509m 3% 2417Mi 4% 17:28:54 DEBUG --- stderr --- 17:28:54 DEBUG 17:29:52 INFO 17:29:52 INFO [loop_until]: kubectl --namespace=xlou top pods 17:29:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:29:52 INFO [loop_until]: OK (rc = 0) 17:29:52 DEBUG --- stdout --- 17:29:52 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 9m 5807Mi am-55f77847b7-sgmd6 8m 5732Mi am-55f77847b7-wq5w5 8m 5729Mi ds-cts-0 6m 373Mi ds-cts-1 5m 379Mi ds-cts-2 4m 369Mi ds-idrepo-0 866m 13795Mi ds-idrepo-1 9m 13621Mi ds-idrepo-2 2510m 13638Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 8m 4911Mi idm-65858d8c4c-gdv6b 6m 5096Mi lodemon-86d6dfd886-rxdp4 7m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 1m 198Mi 17:29:52 DEBUG --- stderr --- 17:29:52 DEBUG 17:29:54 INFO 17:29:54 INFO [loop_until]: kubectl --namespace=xlou top node 17:29:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:29:54 INFO [loop_until]: OK (rc = 0) 17:29:54 DEBUG --- stdout --- 17:29:54 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1368Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 6831Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 6859Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 68m 0% 6895Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 6412Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 127m 0% 2169Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 6164Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 863m 5% 14449Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 58m 0% 14240Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 50m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1689m 10% 14260Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 66m 0% 1715Mi 2% 17:29:54 DEBUG --- stderr --- 17:29:54 DEBUG 127.0.0.1 - - [12/Aug/2023 17:30:34] "GET /monitoring/average?start_time=23-08-12_16:00:00&stop_time=23-08-12_16:28:33 HTTP/1.1" 200 - 17:30:52 INFO 17:30:52 INFO [loop_until]: kubectl --namespace=xlou top pods 17:30:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:30:52 INFO [loop_until]: OK (rc = 0) 17:30:52 DEBUG --- stdout --- 
17:30:52 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 9m 5807Mi am-55f77847b7-sgmd6 6m 5731Mi am-55f77847b7-wq5w5 8m 5729Mi ds-cts-0 6m 373Mi ds-cts-1 6m 378Mi ds-cts-2 4m 369Mi ds-idrepo-0 12m 13795Mi ds-idrepo-1 8m 13621Mi ds-idrepo-2 18m 13635Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 8m 4911Mi idm-65858d8c4c-gdv6b 6m 5096Mi lodemon-86d6dfd886-rxdp4 2m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 1m 198Mi 17:30:52 DEBUG --- stderr --- 17:30:52 DEBUG 17:30:54 INFO 17:30:54 INFO [loop_until]: kubectl --namespace=xlou top node 17:30:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:30:54 INFO [loop_until]: OK (rc = 0) 17:30:54 DEBUG --- stdout --- 17:30:54 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1375Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 6829Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 6862Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 6894Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 70m 0% 6411Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 127m 0% 2168Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 67m 0% 6165Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 14452Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 55m 0% 14241Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 48m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 67m 0% 14256Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 67m 0% 1716Mi 2% 17:30:54 DEBUG --- stderr --- 17:30:54 DEBUG 17:31:52 INFO 17:31:52 INFO [loop_until]: kubectl --namespace=xlou top pods 17:31:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:31:52 INFO [loop_until]: OK (rc = 0) 17:31:52 DEBUG --- stdout --- 17:31:52 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 158m 5816Mi am-55f77847b7-sgmd6 185m 5740Mi am-55f77847b7-wq5w5 87m 5730Mi ds-cts-0 8m 373Mi ds-cts-1 7m 378Mi ds-cts-2 7m 369Mi ds-idrepo-0 2230m 13876Mi ds-idrepo-1 1209m 13648Mi ds-idrepo-2 331m 13662Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 933m 4923Mi idm-65858d8c4c-gdv6b 1090m 5102Mi lodemon-86d6dfd886-rxdp4 6m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 736m 688Mi 17:31:52 DEBUG --- stderr --- 17:31:52 DEBUG 17:31:54 INFO 17:31:54 INFO [loop_until]: kubectl --namespace=xlou top node 17:31:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:31:54 INFO [loop_until]: OK (rc = 0) 17:31:54 DEBUG --- stdout --- 17:31:54 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1366Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 257m 1% 6839Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 291m 1% 6865Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 223m 1% 6905Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1042m 6% 6417Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 558m 3% 2166Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1153m 7% 6175Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 1816m 11% 14519Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1776m 11% 14416Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1055m 6% 14335Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 899m 5% 2218Mi 3% 17:31:54 DEBUG --- stderr --- 17:31:54 DEBUG 17:32:52 INFO 17:32:52 INFO [loop_until]: kubectl --namespace=xlou top pods 17:32:52 INFO 
[loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:32:52 INFO [loop_until]: OK (rc = 0) 17:32:52 DEBUG --- stdout --- 17:32:52 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 238m 5816Mi am-55f77847b7-sgmd6 226m 5740Mi am-55f77847b7-wq5w5 245m 5731Mi ds-cts-0 6m 373Mi ds-cts-1 6m 378Mi ds-cts-2 8m 371Mi ds-idrepo-0 7678m 13816Mi ds-idrepo-1 3414m 13863Mi ds-idrepo-2 3860m 13823Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 1880m 4944Mi idm-65858d8c4c-gdv6b 1874m 5120Mi lodemon-86d6dfd886-rxdp4 6m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 611m 727Mi 17:32:52 DEBUG --- stderr --- 17:32:52 DEBUG 17:32:54 INFO 17:32:54 INFO [loop_until]: kubectl --namespace=xlou top node 17:32:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:32:54 INFO [loop_until]: OK (rc = 0) 17:32:54 DEBUG --- stdout --- 17:32:54 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1367Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 295m 1% 6840Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 305m 1% 6866Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 280m 1% 6902Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2044m 12% 6437Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1113m 7% 2170Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2015m 12% 6198Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 7410m 46% 14442Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3168m 19% 14484Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4319m 27% 14464Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 684m 4% 2233Mi 3% 17:32:54 DEBUG --- stderr --- 17:32:54 DEBUG 17:33:52 INFO 17:33:52 INFO [loop_until]: kubectl --namespace=xlou top pods 17:33:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:33:52 INFO [loop_until]: OK (rc = 0) 17:33:52 DEBUG --- stdout --- 17:33:52 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 285m 5818Mi am-55f77847b7-sgmd6 252m 5743Mi am-55f77847b7-wq5w5 233m 5731Mi ds-cts-0 6m 373Mi ds-cts-1 5m 378Mi ds-cts-2 6m 369Mi ds-idrepo-0 7882m 13838Mi ds-idrepo-1 2802m 13822Mi ds-idrepo-2 4819m 13725Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 1816m 4950Mi idm-65858d8c4c-gdv6b 1913m 5129Mi lodemon-86d6dfd886-rxdp4 2m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 589m 728Mi 17:33:52 DEBUG --- stderr --- 17:33:52 DEBUG 17:33:54 INFO 17:33:54 INFO [loop_until]: kubectl --namespace=xlou top node 17:33:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:33:55 INFO [loop_until]: OK (rc = 0) 17:33:55 DEBUG --- stdout --- 17:33:55 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1367Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 325m 2% 6841Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 293m 1% 6862Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 320m 2% 6904Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2119m 13% 6442Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1117m 7% 2167Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2009m 12% 6206Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 56m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8578m 53% 14510Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3238m 20% 14484Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4777m 30% 14347Mi 24% 
gke-xlou-cdm-frontend-a8771548-k40m 688m 4% 2237Mi 3% 17:33:55 DEBUG --- stderr --- 17:33:55 DEBUG 17:34:52 INFO 17:34:52 INFO [loop_until]: kubectl --namespace=xlou top pods 17:34:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:34:52 INFO [loop_until]: OK (rc = 0) 17:34:52 DEBUG --- stdout --- 17:34:52 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 225m 5819Mi am-55f77847b7-sgmd6 225m 5743Mi am-55f77847b7-wq5w5 230m 5731Mi ds-cts-0 6m 373Mi ds-cts-1 5m 378Mi ds-cts-2 7m 369Mi ds-idrepo-0 7302m 13800Mi ds-idrepo-1 3837m 13814Mi ds-idrepo-2 3793m 13841Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 1821m 4956Mi idm-65858d8c4c-gdv6b 1907m 5134Mi lodemon-86d6dfd886-rxdp4 9m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 588m 729Mi 17:34:52 DEBUG --- stderr --- 17:34:52 DEBUG 17:34:55 INFO 17:34:55 INFO [loop_until]: kubectl --namespace=xlou top node 17:34:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:34:55 INFO [loop_until]: OK (rc = 0) 17:34:55 DEBUG --- stdout --- 17:34:55 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1366Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 292m 1% 6840Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 291m 1% 6866Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 283m 1% 6906Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2079m 13% 6448Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1109m 6% 2168Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2009m 12% 6212Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 52m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 7027m 44% 14430Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 4069m 25% 14478Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3837m 24% 14488Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 629m 3% 2237Mi 3% 17:34:55 DEBUG --- stderr --- 17:34:55 DEBUG 17:35:52 INFO 17:35:52 INFO [loop_until]: kubectl --namespace=xlou top pods 17:35:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:35:53 INFO [loop_until]: OK (rc = 0) 17:35:53 DEBUG --- stdout --- 17:35:53 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 234m 5819Mi am-55f77847b7-sgmd6 231m 5743Mi am-55f77847b7-wq5w5 263m 5746Mi ds-cts-0 6m 373Mi ds-cts-1 6m 378Mi ds-cts-2 6m 369Mi ds-idrepo-0 6069m 13823Mi ds-idrepo-1 2654m 13824Mi ds-idrepo-2 3203m 13845Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 1876m 4963Mi idm-65858d8c4c-gdv6b 1903m 5141Mi lodemon-86d6dfd886-rxdp4 5m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 583m 730Mi 17:35:53 DEBUG --- stderr --- 17:35:53 DEBUG 17:35:55 INFO 17:35:55 INFO [loop_until]: kubectl --namespace=xlou top node 17:35:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:35:55 INFO [loop_until]: OK (rc = 0) 17:35:55 DEBUG --- stdout --- 17:35:55 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1368Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 299m 1% 6837Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 342m 2% 6877Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 292m 1% 6906Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2085m 13% 6453Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1121m 7% 2172Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2043m 12% 6218Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 52m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 51m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 
6540m 41% 14501Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3049m 19% 14468Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3227m 20% 14488Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 648m 4% 2239Mi 3% 17:35:55 DEBUG --- stderr --- 17:35:55 DEBUG 17:36:53 INFO 17:36:53 INFO [loop_until]: kubectl --namespace=xlou top pods 17:36:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:36:53 INFO [loop_until]: OK (rc = 0) 17:36:53 DEBUG --- stdout --- 17:36:53 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 286m 5822Mi am-55f77847b7-sgmd6 269m 5746Mi am-55f77847b7-wq5w5 229m 5745Mi ds-cts-0 6m 373Mi ds-cts-1 6m 378Mi ds-cts-2 6m 370Mi ds-idrepo-0 7654m 13823Mi ds-idrepo-1 4410m 13823Mi ds-idrepo-2 3863m 13834Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 1814m 4972Mi idm-65858d8c4c-gdv6b 1850m 5150Mi lodemon-86d6dfd886-rxdp4 6m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 587m 731Mi 17:36:53 DEBUG --- stderr --- 17:36:53 DEBUG 17:36:55 INFO 17:36:55 INFO [loop_until]: kubectl --namespace=xlou top node 17:36:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:36:55 INFO [loop_until]: OK (rc = 0) 17:36:55 DEBUG --- stdout --- 17:36:55 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1366Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 327m 2% 6845Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 280m 1% 6880Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 339m 2% 6909Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2114m 13% 6459Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1121m 7% 2168Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2045m 12% 6223Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 7969m 50% 14499Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 4322m 27% 14482Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4183m 26% 14506Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 671m 4% 2238Mi 3% 17:36:55 DEBUG --- stderr --- 17:36:55 DEBUG 17:37:53 INFO 17:37:53 INFO [loop_until]: kubectl --namespace=xlou top pods 17:37:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:37:53 INFO [loop_until]: OK (rc = 0) 17:37:53 DEBUG --- stdout --- 17:37:53 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 235m 5822Mi am-55f77847b7-sgmd6 228m 5746Mi am-55f77847b7-wq5w5 229m 5745Mi ds-cts-0 6m 373Mi ds-cts-1 6m 378Mi ds-cts-2 6m 369Mi ds-idrepo-0 7195m 13772Mi ds-idrepo-1 2854m 13827Mi ds-idrepo-2 5897m 13820Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 1845m 4977Mi idm-65858d8c4c-gdv6b 1951m 5156Mi lodemon-86d6dfd886-rxdp4 7m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 577m 731Mi 17:37:53 DEBUG --- stderr --- 17:37:53 DEBUG 17:37:55 INFO 17:37:55 INFO [loop_until]: kubectl --namespace=xlou top node 17:37:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:37:55 INFO [loop_until]: OK (rc = 0) 17:37:55 DEBUG --- stdout --- 17:37:55 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1366Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 290m 1% 6843Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 289m 1% 6877Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 291m 1% 6909Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2119m 13% 6469Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1116m 7% 2167Mi 3% 
gke-xlou-cdm-default-pool-f05840a3-tnc9 1937m 12% 6233Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 7344m 46% 14486Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2715m 17% 14485Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 5596m 35% 14475Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 643m 4% 2240Mi 3% 17:37:55 DEBUG --- stderr --- 17:37:55 DEBUG 17:38:53 INFO 17:38:53 INFO [loop_until]: kubectl --namespace=xlou top pods 17:38:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:38:53 INFO [loop_until]: OK (rc = 0) 17:38:53 DEBUG --- stdout --- 17:38:53 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 236m 5822Mi am-55f77847b7-sgmd6 231m 5747Mi am-55f77847b7-wq5w5 275m 5748Mi ds-cts-0 6m 373Mi ds-cts-1 6m 378Mi ds-cts-2 6m 369Mi ds-idrepo-0 7881m 13550Mi ds-idrepo-1 3475m 13814Mi ds-idrepo-2 3047m 13821Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 1804m 4986Mi idm-65858d8c4c-gdv6b 1881m 5164Mi lodemon-86d6dfd886-rxdp4 7m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 572m 731Mi 17:38:53 DEBUG --- stderr --- 17:38:53 DEBUG 17:38:55 INFO 17:38:55 INFO [loop_until]: kubectl --namespace=xlou top node 17:38:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:38:55 INFO [loop_until]: OK (rc = 0) 17:38:55 DEBUG --- stdout --- 17:38:55 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1367Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 301m 1% 6844Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 334m 2% 6882Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 295m 1% 6907Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2077m 13% 6477Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1119m 7% 2190Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2083m 13% 6242Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 7696m 48% 14255Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3857m 24% 14481Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3131m 19% 14469Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 650m 4% 2241Mi 3% 17:38:55 DEBUG --- stderr --- 17:38:55 DEBUG 17:39:53 INFO 17:39:53 INFO [loop_until]: kubectl --namespace=xlou top pods 17:39:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:39:53 INFO [loop_until]: OK (rc = 0) 17:39:53 DEBUG --- stdout --- 17:39:53 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 290m 5824Mi am-55f77847b7-sgmd6 270m 5749Mi am-55f77847b7-wq5w5 231m 5748Mi ds-cts-0 5m 373Mi ds-cts-1 5m 378Mi ds-cts-2 6m 370Mi ds-idrepo-0 6078m 13829Mi ds-idrepo-1 3602m 13824Mi ds-idrepo-2 2843m 13819Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 1777m 5005Mi idm-65858d8c4c-gdv6b 1907m 5185Mi lodemon-86d6dfd886-rxdp4 6m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 583m 744Mi 17:39:53 DEBUG --- stderr --- 17:39:53 DEBUG 17:39:55 INFO 17:39:55 INFO [loop_until]: kubectl --namespace=xlou top node 17:39:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:39:56 INFO [loop_until]: OK (rc = 0) 17:39:56 DEBUG --- stdout --- 17:39:56 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1365Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 342m 2% 6847Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 295m 1% 6883Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-9p4b 336m 2% 6910Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2101m 13% 6498Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1082m 6% 2193Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1977m 12% 6255Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 51m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6245m 39% 14502Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3387m 21% 14480Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2979m 18% 14483Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 653m 4% 2253Mi 3% 17:39:56 DEBUG --- stderr --- 17:39:56 DEBUG 17:40:53 INFO 17:40:53 INFO [loop_until]: kubectl --namespace=xlou top pods 17:40:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:40:53 INFO [loop_until]: OK (rc = 0) 17:40:53 DEBUG --- stdout --- 17:40:53 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 239m 5824Mi am-55f77847b7-sgmd6 231m 5749Mi am-55f77847b7-wq5w5 233m 5748Mi ds-cts-0 5m 374Mi ds-cts-1 5m 378Mi ds-cts-2 6m 369Mi ds-idrepo-0 6971m 13720Mi ds-idrepo-1 4140m 13683Mi ds-idrepo-2 4147m 13821Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 1842m 5011Mi idm-65858d8c4c-gdv6b 1874m 5190Mi lodemon-86d6dfd886-rxdp4 12m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 570m 744Mi 17:40:53 DEBUG --- stderr --- 17:40:53 DEBUG 17:40:56 INFO 17:40:56 INFO [loop_until]: kubectl --namespace=xlou top node 17:40:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:40:56 INFO [loop_until]: OK (rc = 0) 17:40:56 DEBUG --- stdout --- 17:40:56 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1367Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 293m 1% 6849Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 294m 1% 6883Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 292m 1% 6911Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2051m 12% 6506Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1132m 7% 2181Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1980m 12% 6267Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5856m 36% 14513Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 4015m 25% 14478Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2598m 16% 14484Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 679m 4% 2254Mi 3% 17:40:56 DEBUG --- stderr --- 17:40:56 DEBUG 17:41:53 INFO 17:41:53 INFO [loop_until]: kubectl --namespace=xlou top pods 17:41:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:41:53 INFO [loop_until]: OK (rc = 0) 17:41:53 DEBUG --- stdout --- 17:41:53 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 234m 5824Mi am-55f77847b7-sgmd6 225m 5749Mi am-55f77847b7-wq5w5 282m 5751Mi ds-cts-0 6m 374Mi ds-cts-1 5m 378Mi ds-cts-2 9m 369Mi ds-idrepo-0 6642m 13822Mi ds-idrepo-1 4509m 13666Mi ds-idrepo-2 5243m 13704Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 1822m 5018Mi idm-65858d8c4c-gdv6b 1900m 5195Mi lodemon-86d6dfd886-rxdp4 7m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 594m 744Mi 17:41:53 DEBUG --- stderr --- 17:41:53 DEBUG 17:41:56 INFO 17:41:56 INFO [loop_until]: kubectl --namespace=xlou top node 17:41:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:41:56 INFO [loop_until]: OK (rc = 0) 17:41:56 DEBUG --- stdout --- 17:41:56 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% 
gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1364Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 296m 1% 6848Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 282m 1% 6884Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 288m 1% 6912Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2110m 13% 6518Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1130m 7% 2181Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2061m 12% 6273Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 9033m 56% 14448Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2802m 17% 14504Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4035m 25% 14366Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 660m 4% 2252Mi 3% 17:41:56 DEBUG --- stderr --- 17:41:56 DEBUG 17:42:53 INFO 17:42:53 INFO [loop_until]: kubectl --namespace=xlou top pods 17:42:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:42:53 INFO [loop_until]: OK (rc = 0) 17:42:53 DEBUG --- stdout --- 17:42:53 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 297m 5826Mi am-55f77847b7-sgmd6 265m 5751Mi am-55f77847b7-wq5w5 220m 5751Mi ds-cts-0 6m 373Mi ds-cts-1 6m 378Mi ds-cts-2 9m 369Mi ds-idrepo-0 5665m 13825Mi ds-idrepo-1 3401m 13826Mi ds-idrepo-2 3311m 13749Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 1808m 5026Mi idm-65858d8c4c-gdv6b 1910m 5206Mi lodemon-86d6dfd886-rxdp4 5m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 588m 745Mi 17:42:53 DEBUG --- stderr --- 17:42:53 DEBUG 17:42:56 INFO 17:42:56 INFO [loop_until]: kubectl --namespace=xlou top node 17:42:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:42:56 INFO [loop_until]: OK (rc = 0) 17:42:56 DEBUG --- stdout --- 17:42:56 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 83m 0% 1364Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 290m 1% 6850Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 277m 1% 6885Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 278m 1% 6914Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2051m 12% 6521Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1103m 6% 2173Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2001m 12% 6280Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 67m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 7009m 44% 14524Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3655m 23% 14495Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3255m 20% 14300Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 682m 4% 2254Mi 3% 17:42:56 DEBUG --- stderr --- 17:42:56 DEBUG 17:43:53 INFO 17:43:53 INFO [loop_until]: kubectl --namespace=xlou top pods 17:43:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:43:53 INFO [loop_until]: OK (rc = 0) 17:43:53 DEBUG --- stdout --- 17:43:53 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 238m 5826Mi am-55f77847b7-sgmd6 234m 5751Mi am-55f77847b7-wq5w5 233m 5751Mi ds-cts-0 7m 373Mi ds-cts-1 6m 378Mi ds-cts-2 12m 373Mi ds-idrepo-0 5971m 13824Mi ds-idrepo-1 3193m 13805Mi ds-idrepo-2 4251m 13706Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 1832m 5032Mi idm-65858d8c4c-gdv6b 1895m 5212Mi lodemon-86d6dfd886-rxdp4 9m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 616m 745Mi 17:43:53 DEBUG --- stderr --- 17:43:53 DEBUG 17:43:56 INFO 17:43:56 INFO [loop_until]: kubectl --namespace=xlou top node 17:43:56 INFO [loop_until]: 
(max_time=180, interval=5, expected_rc=[0] 17:43:56 INFO [loop_until]: OK (rc = 0) 17:43:56 DEBUG --- stdout --- 17:43:56 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1360Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 297m 1% 6847Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 288m 1% 6885Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 291m 1% 6913Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2055m 12% 6528Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1124m 7% 2171Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1952m 12% 6299Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 7705m 48% 14378Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3771m 23% 14503Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3907m 24% 14499Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 671m 4% 2254Mi 3% 17:43:56 DEBUG --- stderr --- 17:43:56 DEBUG 17:44:53 INFO 17:44:53 INFO [loop_until]: kubectl --namespace=xlou top pods 17:44:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:44:53 INFO [loop_until]: OK (rc = 0) 17:44:53 DEBUG --- stdout --- 17:44:53 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 236m 5826Mi am-55f77847b7-sgmd6 228m 5751Mi am-55f77847b7-wq5w5 294m 5754Mi ds-cts-0 7m 373Mi ds-cts-1 6m 378Mi ds-cts-2 6m 367Mi ds-idrepo-0 5724m 13809Mi ds-idrepo-1 4621m 13716Mi ds-idrepo-2 3839m 13832Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 1822m 5038Mi idm-65858d8c4c-gdv6b 1919m 5222Mi lodemon-86d6dfd886-rxdp4 7m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 567m 745Mi 17:44:53 DEBUG --- stderr --- 17:44:53 DEBUG 17:44:56 INFO 17:44:56 INFO [loop_until]: kubectl --namespace=xlou top node 17:44:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:44:56 INFO [loop_until]: OK (rc = 0) 17:44:56 DEBUG --- stdout --- 17:44:56 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1364Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 290m 1% 6848Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 282m 1% 6889Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 291m 1% 6913Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2099m 13% 6533Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1122m 7% 2169Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2003m 12% 6296Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8038m 50% 14406Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3141m 19% 14504Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3589m 22% 14491Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 663m 4% 2252Mi 3% 17:44:56 DEBUG --- stderr --- 17:44:56 DEBUG 17:45:53 INFO 17:45:53 INFO [loop_until]: kubectl --namespace=xlou top pods 17:45:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:45:54 INFO [loop_until]: OK (rc = 0) 17:45:54 DEBUG --- stdout --- 17:45:54 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 264m 5828Mi am-55f77847b7-sgmd6 281m 5753Mi am-55f77847b7-wq5w5 236m 5754Mi ds-cts-0 6m 373Mi ds-cts-1 5m 378Mi ds-cts-2 6m 367Mi ds-idrepo-0 9037m 13558Mi ds-idrepo-1 3471m 13753Mi ds-idrepo-2 4241m 13768Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 1879m 5047Mi idm-65858d8c4c-gdv6b 1874m 5230Mi lodemon-86d6dfd886-rxdp4 6m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi 
overseer-0-64c9959746-2jz9t 564m 745Mi 17:45:54 DEBUG --- stderr --- 17:45:54 DEBUG 17:45:56 INFO 17:45:56 INFO [loop_until]: kubectl --namespace=xlou top node 17:45:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:45:56 INFO [loop_until]: OK (rc = 0) 17:45:56 DEBUG --- stdout --- 17:45:56 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1361Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 291m 1% 6848Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 288m 1% 6887Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 292m 1% 6917Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2121m 13% 6542Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1115m 7% 2171Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2004m 12% 6305Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6089m 38% 14398Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2681m 16% 14512Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3500m 22% 14486Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 668m 4% 2256Mi 3% 17:45:56 DEBUG --- stderr --- 17:45:56 DEBUG 17:46:54 INFO 17:46:54 INFO [loop_until]: kubectl --namespace=xlou top pods 17:46:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:46:54 INFO [loop_until]: OK (rc = 0) 17:46:54 DEBUG --- stdout --- 17:46:54 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 232m 5828Mi am-55f77847b7-sgmd6 226m 5748Mi am-55f77847b7-wq5w5 232m 5754Mi ds-cts-0 6m 373Mi ds-cts-1 5m 378Mi ds-cts-2 6m 367Mi ds-idrepo-0 6808m 13818Mi ds-idrepo-1 3246m 13818Mi ds-idrepo-2 3583m 13843Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 1773m 5053Mi idm-65858d8c4c-gdv6b 1855m 5237Mi lodemon-86d6dfd886-rxdp4 7m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 572m 746Mi 17:46:54 DEBUG --- stderr --- 17:46:54 DEBUG 17:46:56 INFO 17:46:56 INFO [loop_until]: kubectl --namespace=xlou top node 17:46:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:46:57 INFO [loop_until]: OK (rc = 0) 17:46:57 DEBUG --- stdout --- 17:46:57 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1366Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 285m 1% 6849Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 289m 1% 6888Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 291m 1% 6908Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2029m 12% 6549Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1114m 7% 2169Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1948m 12% 6309Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6053m 38% 14516Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 6095m 38% 14503Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2340m 14% 14493Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 683m 4% 2256Mi 3% 17:46:57 DEBUG --- stderr --- 17:46:57 DEBUG 17:47:54 INFO 17:47:54 INFO [loop_until]: kubectl --namespace=xlou top pods 17:47:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:47:54 INFO [loop_until]: OK (rc = 0) 17:47:54 DEBUG --- stdout --- 17:47:54 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 241m 5828Mi am-55f77847b7-sgmd6 230m 5752Mi am-55f77847b7-wq5w5 252m 5755Mi ds-cts-0 6m 374Mi ds-cts-1 5m 378Mi ds-cts-2 6m 367Mi ds-idrepo-0 6576m 13775Mi ds-idrepo-1 2239m 13825Mi ds-idrepo-2 3333m 13703Mi 
end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 1825m 5058Mi idm-65858d8c4c-gdv6b 1936m 5244Mi lodemon-86d6dfd886-rxdp4 8m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 585m 746Mi 17:47:54 DEBUG --- stderr --- 17:47:54 DEBUG 17:47:57 INFO 17:47:57 INFO [loop_until]: kubectl --namespace=xlou top node 17:47:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:47:57 INFO [loop_until]: OK (rc = 0) 17:47:57 DEBUG --- stdout --- 17:47:57 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1370Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 295m 1% 6847Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 286m 1% 6890Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 292m 1% 6913Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2107m 13% 6557Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1127m 7% 2168Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1963m 12% 6316Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5558m 34% 14521Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3366m 21% 14494Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3010m 18% 14471Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 677m 4% 2255Mi 3% 17:47:57 DEBUG --- stderr --- 17:47:57 DEBUG 17:48:54 INFO 17:48:54 INFO [loop_until]: kubectl --namespace=xlou top pods 17:48:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:48:54 INFO [loop_until]: OK (rc = 0) 17:48:54 DEBUG --- stdout --- 17:48:54 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 295m 5830Mi am-55f77847b7-sgmd6 262m 5754Mi am-55f77847b7-wq5w5 225m 5755Mi ds-cts-0 5m 374Mi ds-cts-1 6m 378Mi ds-cts-2 7m 367Mi ds-idrepo-0 7596m 13639Mi ds-idrepo-1 3694m 13715Mi ds-idrepo-2 3452m 13847Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 1812m 5065Mi idm-65858d8c4c-gdv6b 1822m 5250Mi lodemon-86d6dfd886-rxdp4 5m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 530m 746Mi 17:48:54 DEBUG --- stderr --- 17:48:54 DEBUG 17:48:57 INFO 17:48:57 INFO [loop_until]: kubectl --namespace=xlou top node 17:48:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:48:57 INFO [loop_until]: OK (rc = 0) 17:48:57 DEBUG --- stdout --- 17:48:57 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1367Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 294m 1% 6855Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 293m 1% 6893Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 292m 1% 6916Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2086m 13% 6560Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1119m 7% 2170Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1974m 12% 6322Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 5700m 35% 14522Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 4462m 28% 14497Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1097Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3466m 21% 14515Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 572m 3% 2256Mi 3% 17:48:57 DEBUG --- stderr --- 17:48:57 DEBUG 17:49:54 INFO 17:49:54 INFO [loop_until]: kubectl --namespace=xlou top pods 17:49:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:49:54 INFO [loop_until]: OK (rc = 0) 17:49:54 DEBUG --- stdout --- 17:49:54 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 233m 5830Mi am-55f77847b7-sgmd6 225m 
5754Mi am-55f77847b7-wq5w5 230m 5755Mi ds-cts-0 7m 373Mi ds-cts-1 6m 378Mi ds-cts-2 6m 367Mi ds-idrepo-0 5541m 13822Mi ds-idrepo-1 3675m 13697Mi ds-idrepo-2 3579m 13614Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 1867m 5070Mi idm-65858d8c4c-gdv6b 1821m 5255Mi lodemon-86d6dfd886-rxdp4 7m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 531m 747Mi 17:49:54 DEBUG --- stderr --- 17:49:54 DEBUG 17:49:57 INFO 17:49:57 INFO [loop_until]: kubectl --namespace=xlou top node 17:49:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:49:57 INFO [loop_until]: OK (rc = 0) 17:49:57 DEBUG --- stdout --- 17:49:57 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1370Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 295m 1% 6852Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 277m 1% 6898Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 288m 1% 6916Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2033m 12% 6569Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1100m 6% 2173Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1965m 12% 6325Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 7029m 44% 14438Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5002m 31% 14431Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2379m 14% 14453Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 590m 3% 2254Mi 3% 17:49:57 DEBUG --- stderr --- 17:49:57 DEBUG 17:50:54 INFO 17:50:54 INFO [loop_until]: kubectl --namespace=xlou top pods 17:50:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:50:54 INFO [loop_until]: OK (rc = 0) 17:50:54 DEBUG --- stdout --- 17:50:54 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 211m 5830Mi am-55f77847b7-sgmd6 224m 5754Mi am-55f77847b7-wq5w5 268m 5757Mi ds-cts-0 5m 373Mi ds-cts-1 6m 378Mi ds-cts-2 6m 369Mi ds-idrepo-0 5775m 13824Mi ds-idrepo-1 3266m 13819Mi ds-idrepo-2 5489m 13816Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 1776m 5078Mi idm-65858d8c4c-gdv6b 1801m 5262Mi lodemon-86d6dfd886-rxdp4 7m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 509m 747Mi 17:50:54 DEBUG --- stderr --- 17:50:54 DEBUG 17:50:57 INFO 17:50:57 INFO [loop_until]: kubectl --namespace=xlou top node 17:50:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:50:57 INFO [loop_until]: OK (rc = 0) 17:50:57 DEBUG --- stdout --- 17:50:57 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1368Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 286m 1% 6852Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 275m 1% 6890Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 281m 1% 6916Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2091m 13% 6584Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1122m 7% 2175Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2063m 12% 6338Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 7693m 48% 14330Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2707m 17% 14517Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3211m 20% 14500Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 606m 3% 2257Mi 3% 17:50:57 DEBUG --- stderr --- 17:50:57 DEBUG 17:51:54 INFO 17:51:54 INFO [loop_until]: kubectl --namespace=xlou top pods 17:51:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:51:54 INFO [loop_until]: OK (rc 
= 0) 17:51:54 DEBUG --- stdout --- 17:51:54 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 305m 5831Mi am-55f77847b7-sgmd6 330m 5756Mi am-55f77847b7-wq5w5 225m 5757Mi ds-cts-0 6m 374Mi ds-cts-1 5m 378Mi ds-cts-2 6m 368Mi ds-idrepo-0 6736m 13575Mi ds-idrepo-1 3607m 13789Mi ds-idrepo-2 3340m 13825Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 1781m 5092Mi idm-65858d8c4c-gdv6b 1949m 5281Mi lodemon-86d6dfd886-rxdp4 8m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 537m 747Mi 17:51:54 DEBUG --- stderr --- 17:51:54 DEBUG 17:51:57 INFO 17:51:57 INFO [loop_until]: kubectl --namespace=xlou top node 17:51:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:51:57 INFO [loop_until]: OK (rc = 0) 17:51:57 DEBUG --- stdout --- 17:51:57 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1369Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 296m 1% 6856Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 289m 1% 6892Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 289m 1% 6917Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2122m 13% 6593Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1118m 7% 2169Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2034m 12% 6346Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6745m 42% 14467Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3142m 19% 14496Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4020m 25% 14504Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 594m 3% 2255Mi 3% 17:51:57 DEBUG --- stderr --- 17:51:57 DEBUG 17:52:54 INFO 17:52:54 INFO [loop_until]: kubectl --namespace=xlou top pods 17:52:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:52:54 INFO [loop_until]: OK (rc = 0) 17:52:54 DEBUG --- stdout --- 17:52:54 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 228m 5831Mi am-55f77847b7-sgmd6 229m 5756Mi am-55f77847b7-wq5w5 224m 5757Mi ds-cts-0 7m 374Mi ds-cts-1 5m 378Mi ds-cts-2 7m 367Mi ds-idrepo-0 5627m 13824Mi ds-idrepo-1 2475m 13824Mi ds-idrepo-2 2657m 13816Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 1771m 5098Mi idm-65858d8c4c-gdv6b 1890m 5285Mi lodemon-86d6dfd886-rxdp4 7m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 535m 747Mi 17:52:54 DEBUG --- stderr --- 17:52:54 DEBUG 17:52:57 INFO 17:52:57 INFO [loop_until]: kubectl --namespace=xlou top node 17:52:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:52:57 INFO [loop_until]: OK (rc = 0) 17:52:57 DEBUG --- stdout --- 17:52:57 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1366Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 291m 1% 6855Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 289m 1% 6891Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 288m 1% 6919Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2139m 13% 6611Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1114m 7% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1991m 12% 6352Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 8217m 51% 14283Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 5108m 32% 14512Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 70m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4648m 29% 14292Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 612m 3% 2257Mi 3% 17:52:57 DEBUG --- stderr --- 17:52:57 DEBUG 17:53:54 
INFO 17:53:54 INFO [loop_until]: kubectl --namespace=xlou top pods 17:53:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:53:54 INFO [loop_until]: OK (rc = 0) 17:53:54 DEBUG --- stdout --- 17:53:54 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 232m 5831Mi am-55f77847b7-sgmd6 228m 5756Mi am-55f77847b7-wq5w5 259m 5759Mi ds-cts-0 6m 373Mi ds-cts-1 5m 378Mi ds-cts-2 9m 368Mi ds-idrepo-0 8253m 13641Mi ds-idrepo-1 3675m 13811Mi ds-idrepo-2 4216m 13719Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 1869m 5106Mi idm-65858d8c4c-gdv6b 1928m 5295Mi lodemon-86d6dfd886-rxdp4 6m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 537m 747Mi 17:53:54 DEBUG --- stderr --- 17:53:54 DEBUG 17:53:57 INFO 17:53:57 INFO [loop_until]: kubectl --namespace=xlou top node 17:53:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:53:57 INFO [loop_until]: OK (rc = 0) 17:53:57 DEBUG --- stdout --- 17:53:57 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1369Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 290m 1% 6852Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 290m 1% 6896Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 291m 1% 6916Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2119m 13% 6606Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1119m 7% 2172Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2022m 12% 6361Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6688m 42% 14522Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3872m 24% 14504Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3566m 22% 14431Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 607m 3% 2257Mi 3% 17:53:57 DEBUG --- stderr --- 17:53:57 DEBUG 17:54:54 INFO 17:54:54 INFO [loop_until]: kubectl --namespace=xlou top pods 17:54:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:54:54 INFO [loop_until]: OK (rc = 0) 17:54:54 DEBUG --- stdout --- 17:54:54 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 272m 5833Mi am-55f77847b7-sgmd6 266m 5757Mi am-55f77847b7-wq5w5 229m 5759Mi ds-cts-0 6m 374Mi ds-cts-1 6m 379Mi ds-cts-2 6m 368Mi ds-idrepo-0 6699m 13759Mi ds-idrepo-1 4599m 13524Mi ds-idrepo-2 3024m 13841Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 1842m 5112Mi idm-65858d8c4c-gdv6b 1876m 5295Mi lodemon-86d6dfd886-rxdp4 7m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 552m 747Mi 17:54:54 DEBUG --- stderr --- 17:54:54 DEBUG 17:54:57 INFO 17:54:57 INFO [loop_until]: kubectl --namespace=xlou top node 17:54:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:54:58 INFO [loop_until]: OK (rc = 0) 17:54:58 DEBUG --- stdout --- 17:54:58 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1368Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 293m 1% 6856Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 284m 1% 6893Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 291m 1% 6921Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2046m 12% 6606Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1113m 7% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2040m 12% 6368Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6365m 40% 14525Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3229m 20% 14298Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 
1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3699m 23% 14513Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 575m 3% 2257Mi 3% 17:54:58 DEBUG --- stderr --- 17:54:58 DEBUG 17:55:55 INFO 17:55:55 INFO [loop_until]: kubectl --namespace=xlou top pods 17:55:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:55:55 INFO [loop_until]: OK (rc = 0) 17:55:55 DEBUG --- stdout --- 17:55:55 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 231m 5832Mi am-55f77847b7-sgmd6 232m 5757Mi am-55f77847b7-wq5w5 233m 5759Mi ds-cts-0 7m 373Mi ds-cts-1 6m 378Mi ds-cts-2 6m 368Mi ds-idrepo-0 6876m 13775Mi ds-idrepo-1 3024m 13795Mi ds-idrepo-2 3400m 13830Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 1831m 5117Mi idm-65858d8c4c-gdv6b 1875m 5295Mi lodemon-86d6dfd886-rxdp4 7m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 534m 748Mi 17:55:55 DEBUG --- stderr --- 17:55:55 DEBUG 17:55:58 INFO 17:55:58 INFO [loop_until]: kubectl --namespace=xlou top node 17:55:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:55:58 INFO [loop_until]: OK (rc = 0) 17:55:58 DEBUG --- stdout --- 17:55:58 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1371Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 296m 1% 6854Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 288m 1% 6890Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 292m 1% 6916Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2103m 13% 6605Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1124m 7% 2167Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2028m 12% 6373Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6613m 41% 14410Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 2471m 15% 14528Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2818m 17% 14508Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 602m 3% 2258Mi 3% 17:55:58 DEBUG --- stderr --- 17:55:58 DEBUG 17:56:55 INFO 17:56:55 INFO [loop_until]: kubectl --namespace=xlou top pods 17:56:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:56:55 INFO [loop_until]: OK (rc = 0) 17:56:55 DEBUG --- stdout --- 17:56:55 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 237m 5833Mi am-55f77847b7-sgmd6 232m 5757Mi am-55f77847b7-wq5w5 266m 5760Mi ds-cts-0 6m 374Mi ds-cts-1 6m 378Mi ds-cts-2 6m 368Mi ds-idrepo-0 7936m 13775Mi ds-idrepo-1 3988m 13823Mi ds-idrepo-2 3187m 13696Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 1778m 5122Mi idm-65858d8c4c-gdv6b 1853m 5295Mi lodemon-86d6dfd886-rxdp4 7m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 518m 748Mi 17:56:55 DEBUG --- stderr --- 17:56:55 DEBUG 17:56:58 INFO 17:56:58 INFO [loop_until]: kubectl --namespace=xlou top node 17:56:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:56:58 INFO [loop_until]: OK (rc = 0) 17:56:58 DEBUG --- stdout --- 17:56:58 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1366Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 299m 1% 6856Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 278m 1% 6895Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 286m 1% 6916Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2097m 13% 6604Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1133m 7% 2169Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1996m 12% 6379Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1112Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6610m 41% 14535Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3428m 21% 14505Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2512m 15% 14513Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 618m 3% 2257Mi 3% 17:56:58 DEBUG --- stderr --- 17:56:58 DEBUG 17:57:55 INFO 17:57:55 INFO [loop_until]: kubectl --namespace=xlou top pods 17:57:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:57:55 INFO [loop_until]: OK (rc = 0) 17:57:55 DEBUG --- stdout --- 17:57:55 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 288m 5833Mi am-55f77847b7-sgmd6 268m 5758Mi am-55f77847b7-wq5w5 231m 5760Mi ds-cts-0 6m 375Mi ds-cts-1 5m 378Mi ds-cts-2 6m 369Mi ds-idrepo-0 6896m 13677Mi ds-idrepo-1 2461m 13847Mi ds-idrepo-2 3694m 13836Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 1839m 5129Mi idm-65858d8c4c-gdv6b 1925m 5295Mi lodemon-86d6dfd886-rxdp4 6m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 530m 748Mi 17:57:55 DEBUG --- stderr --- 17:57:55 DEBUG 17:57:58 INFO 17:57:58 INFO [loop_until]: kubectl --namespace=xlou top node 17:57:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:57:58 INFO [loop_until]: OK (rc = 0) 17:57:58 DEBUG --- stdout --- 17:57:58 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1366Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 295m 1% 6856Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 285m 1% 6893Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 285m 1% 6917Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2082m 13% 6607Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1108m 6% 2159Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1990m 12% 6387Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 6640m 41% 14541Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3612m 22% 14474Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2311m 14% 14508Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 592m 3% 2255Mi 3% 17:57:58 DEBUG --- stderr --- 17:57:58 DEBUG 17:58:55 INFO 17:58:55 INFO [loop_until]: kubectl --namespace=xlou top pods 17:58:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:58:55 INFO [loop_until]: OK (rc = 0) 17:58:55 DEBUG --- stdout --- 17:58:55 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 227m 5834Mi am-55f77847b7-sgmd6 227m 5758Mi am-55f77847b7-wq5w5 222m 5760Mi ds-cts-0 5m 376Mi ds-cts-1 5m 379Mi ds-cts-2 7m 368Mi ds-idrepo-0 5875m 13825Mi ds-idrepo-1 2703m 13822Mi ds-idrepo-2 4603m 13864Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 1847m 5135Mi idm-65858d8c4c-gdv6b 1884m 5295Mi lodemon-86d6dfd886-rxdp4 7m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 530m 748Mi 17:58:55 DEBUG --- stderr --- 17:58:55 DEBUG 17:58:58 INFO 17:58:58 INFO [loop_until]: kubectl --namespace=xlou top node 17:58:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:58:58 INFO [loop_until]: OK (rc = 0) 17:58:58 DEBUG --- stdout --- 17:58:58 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1370Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 298m 1% 6857Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 289m 1% 6894Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 291m 1% 6919Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2105m 13% 6606Mi 
11% gke-xlou-cdm-default-pool-f05840a3-h81k 1136m 7% 2167Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2041m 12% 6393Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 7827m 49% 14540Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3337m 21% 14371Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3684m 23% 14504Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 619m 3% 2258Mi 3% 17:58:58 DEBUG --- stderr --- 17:58:58 DEBUG 17:59:55 INFO 17:59:55 INFO [loop_until]: kubectl --namespace=xlou top pods 17:59:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:59:55 INFO [loop_until]: OK (rc = 0) 17:59:55 DEBUG --- stdout --- 17:59:55 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 243m 5833Mi am-55f77847b7-sgmd6 226m 5758Mi am-55f77847b7-wq5w5 272m 5761Mi ds-cts-0 5m 375Mi ds-cts-1 6m 379Mi ds-cts-2 6m 368Mi ds-idrepo-0 7676m 13697Mi ds-idrepo-1 3362m 13843Mi ds-idrepo-2 4917m 13511Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 1849m 5143Mi idm-65858d8c4c-gdv6b 1922m 5295Mi lodemon-86d6dfd886-rxdp4 6m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 546m 748Mi 17:59:55 DEBUG --- stderr --- 17:59:55 DEBUG 17:59:58 INFO 17:59:58 INFO [loop_until]: kubectl --namespace=xlou top node 17:59:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 17:59:58 INFO [loop_until]: OK (rc = 0) 17:59:58 DEBUG --- stdout --- 17:59:58 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1365Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 294m 1% 6855Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 287m 1% 6898Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 289m 1% 6919Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2026m 12% 6608Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1115m 7% 2172Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1943m 12% 6400Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 7222m 45% 14482Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 3047m 19% 14505Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3142m 19% 14426Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 615m 3% 2258Mi 3% 17:59:58 DEBUG --- stderr --- 17:59:58 DEBUG 18:00:55 INFO 18:00:55 INFO [loop_until]: kubectl --namespace=xlou top pods 18:00:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:00:55 INFO [loop_until]: OK (rc = 0) 18:00:55 DEBUG --- stdout --- 18:00:55 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 66m 5834Mi am-55f77847b7-sgmd6 85m 5759Mi am-55f77847b7-wq5w5 102m 5761Mi ds-cts-0 6m 376Mi ds-cts-1 6m 379Mi ds-cts-2 5m 368Mi ds-idrepo-0 2155m 13644Mi ds-idrepo-1 693m 13838Mi ds-idrepo-2 1826m 13811Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 861m 5152Mi idm-65858d8c4c-gdv6b 837m 5295Mi lodemon-86d6dfd886-rxdp4 2m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 550m 749Mi 18:00:55 DEBUG --- stderr --- 18:00:55 DEBUG 18:00:58 INFO 18:00:58 INFO [loop_until]: kubectl --namespace=xlou top node 18:00:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:00:58 INFO [loop_until]: OK (rc = 0) 18:00:58 DEBUG --- stdout --- 18:00:58 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1366Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 131m 0% 6859Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-976h 99m 0% 6897Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 85m 0% 6921Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 898m 5% 6608Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1114m 7% 2170Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 786m 4% 6408Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 1259m 7% 14362Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 1018m 6% 14525Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1170m 7% 14508Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 541m 3% 2253Mi 3% 18:00:58 DEBUG --- stderr --- 18:00:58 DEBUG 18:01:55 INFO 18:01:55 INFO [loop_until]: kubectl --namespace=xlou top pods 18:01:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:01:55 INFO [loop_until]: OK (rc = 0) 18:01:55 DEBUG --- stdout --- 18:01:55 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 6m 5834Mi am-55f77847b7-sgmd6 6m 5759Mi am-55f77847b7-wq5w5 7m 5760Mi ds-cts-0 6m 375Mi ds-cts-1 7m 379Mi ds-cts-2 4m 369Mi ds-idrepo-0 282m 13541Mi ds-idrepo-1 155m 13732Mi ds-idrepo-2 8m 13633Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 8m 5151Mi idm-65858d8c4c-gdv6b 7m 5294Mi lodemon-86d6dfd886-rxdp4 2m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 26m 203Mi 18:01:55 DEBUG --- stderr --- 18:01:55 DEBUG 18:01:58 INFO 18:01:58 INFO [loop_until]: kubectl --namespace=xlou top node 18:01:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:01:58 INFO [loop_until]: OK (rc = 0) 18:01:58 DEBUG --- stdout --- 18:01:58 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1366Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 6857Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 6897Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 6921Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 6609Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2169Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 6408Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 332m 2% 14254Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 603m 3% 14431Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 58m 0% 14323Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 68m 0% 1717Mi 2% 18:01:58 DEBUG --- stderr --- 18:01:58 DEBUG 18:02:56 INFO 18:02:56 INFO [loop_until]: kubectl --namespace=xlou top pods 18:02:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:02:56 INFO [loop_until]: OK (rc = 0) 18:02:56 DEBUG --- stdout --- 18:02:56 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 4Mi am-55f77847b7-klhnq 6m 5834Mi am-55f77847b7-sgmd6 6m 5759Mi am-55f77847b7-wq5w5 7m 5760Mi ds-cts-0 6m 375Mi ds-cts-1 5m 379Mi ds-cts-2 6m 370Mi ds-idrepo-0 12m 13541Mi ds-idrepo-1 14m 13565Mi ds-idrepo-2 15m 13633Mi end-user-ui-6845bc78c7-9zthp 1m 4Mi idm-65858d8c4c-5kkq9 8m 5151Mi idm-65858d8c4c-gdv6b 7m 5294Mi lodemon-86d6dfd886-rxdp4 1m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 1m 203Mi 18:02:56 DEBUG --- stderr --- 18:02:56 DEBUG 18:02:58 INFO 18:02:58 INFO [loop_until]: kubectl --namespace=xlou top node 18:02:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:02:59 INFO [loop_until]: OK (rc = 0) 18:02:59 DEBUG --- stdout --- 18:02:59 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% 
gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1361Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 6855Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 6897Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 6922Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 6608Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 125m 0% 2168Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 72m 0% 6407Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 68m 0% 14252Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 63m 0% 14252Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 65m 0% 14324Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 68m 0% 1720Mi 2% 18:02:59 DEBUG --- stderr --- 18:02:59 DEBUG 127.0.0.1 - - [12/Aug/2023 18:03:05] "GET /monitoring/average?start_time=23-08-12_16:32:34&stop_time=23-08-12_17:01:04 HTTP/1.1" 200 - 18:03:56 INFO 18:03:56 INFO [loop_until]: kubectl --namespace=xlou top pods 18:03:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:03:56 INFO [loop_until]: OK (rc = 0) 18:03:56 DEBUG --- stdout --- 18:03:56 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 5Mi am-55f77847b7-klhnq 7m 5834Mi am-55f77847b7-sgmd6 13m 5759Mi am-55f77847b7-wq5w5 12m 5760Mi ds-cts-0 143m 375Mi ds-cts-1 60m 379Mi ds-cts-2 54m 371Mi ds-idrepo-0 365m 13572Mi ds-idrepo-1 183m 13566Mi ds-idrepo-2 248m 13635Mi end-user-ui-6845bc78c7-9zthp 1m 5Mi idm-65858d8c4c-5kkq9 8m 5151Mi idm-65858d8c4c-gdv6b 7m 5294Mi lodemon-86d6dfd886-rxdp4 2m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 569m 495Mi 18:03:56 DEBUG --- stderr --- 18:03:56 DEBUG 18:03:59 INFO 18:03:59 INFO [loop_until]: kubectl --namespace=xlou top node 18:03:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:03:59 INFO [loop_until]: OK (rc = 0) 18:03:59 DEBUG --- stdout --- 18:03:59 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 92m 0% 1367Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 6856Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 71m 0% 6893Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 73m 0% 6922Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 79m 0% 6611Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 143m 0% 2161Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 81m 0% 6405Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 139m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 123m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 450m 2% 14254Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 314m 1% 14253Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 126m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 279m 1% 14370Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 806m 5% 2000Mi 3% 18:03:59 DEBUG --- stderr --- 18:03:59 DEBUG 18:04:56 INFO 18:04:56 INFO [loop_until]: kubectl --namespace=xlou top pods 18:04:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:04:56 INFO [loop_until]: OK (rc = 0) 18:04:56 DEBUG --- stdout --- 18:04:56 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 5Mi am-55f77847b7-klhnq 6m 5834Mi am-55f77847b7-sgmd6 7m 5760Mi am-55f77847b7-wq5w5 7m 5761Mi ds-cts-0 7m 375Mi ds-cts-1 5m 379Mi ds-cts-2 5m 369Mi ds-idrepo-0 74m 13541Mi ds-idrepo-1 10m 13566Mi ds-idrepo-2 9m 13634Mi end-user-ui-6845bc78c7-9zthp 1m 5Mi idm-65858d8c4c-5kkq9 7m 5150Mi idm-65858d8c4c-gdv6b 7m 5294Mi lodemon-86d6dfd886-rxdp4 2m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 611m 384Mi 18:04:56 DEBUG --- stderr --- 18:04:56 DEBUG 18:04:59 INFO 18:04:59 INFO 
[loop_until]: kubectl --namespace=xlou top node 18:04:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:04:59 INFO [loop_until]: OK (rc = 0) 18:04:59 DEBUG --- stdout --- 18:04:59 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1366Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 61m 0% 6856Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 6892Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 6933Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 6611Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 124m 0% 2169Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 6406Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 127m 0% 14260Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 59m 0% 14258Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 61m 0% 14323Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 782m 4% 1901Mi 3% 18:04:59 DEBUG --- stderr --- 18:04:59 DEBUG 18:05:56 INFO 18:05:56 INFO [loop_until]: kubectl --namespace=xlou top pods 18:05:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:05:56 INFO [loop_until]: OK (rc = 0) 18:05:56 DEBUG --- stdout --- 18:05:56 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 5Mi am-55f77847b7-klhnq 7m 5834Mi am-55f77847b7-sgmd6 6m 5759Mi am-55f77847b7-wq5w5 8m 5760Mi ds-cts-0 6m 375Mi ds-cts-1 6m 380Mi ds-cts-2 4m 370Mi ds-idrepo-0 12m 13542Mi ds-idrepo-1 9m 13565Mi ds-idrepo-2 8m 13634Mi end-user-ui-6845bc78c7-9zthp 1m 5Mi idm-65858d8c4c-5kkq9 8m 5150Mi idm-65858d8c4c-gdv6b 10m 5294Mi lodemon-86d6dfd886-rxdp4 2m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 1016m 840Mi 18:05:56 DEBUG --- stderr --- 18:05:56 DEBUG 18:05:59 INFO 18:05:59 INFO [loop_until]: kubectl --namespace=xlou top node 18:05:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:05:59 INFO [loop_until]: OK (rc = 0) 18:05:59 DEBUG --- stdout --- 18:05:59 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1366Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 60m 0% 6858Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 6894Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 60m 0% 6922Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 77m 0% 6610Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2168Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 6406Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 64m 0% 14260Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 57m 0% 14260Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 59m 0% 14327Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1076m 6% 2334Mi 3% 18:05:59 DEBUG --- stderr --- 18:05:59 DEBUG 18:06:56 INFO 18:06:56 INFO [loop_until]: kubectl --namespace=xlou top pods 18:06:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:06:56 INFO [loop_until]: OK (rc = 0) 18:06:56 DEBUG --- stdout --- 18:06:56 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 5Mi am-55f77847b7-klhnq 7m 5834Mi am-55f77847b7-sgmd6 6m 5759Mi am-55f77847b7-wq5w5 8m 5761Mi ds-cts-0 5m 375Mi ds-cts-1 6m 380Mi ds-cts-2 5m 371Mi ds-idrepo-0 12m 13541Mi ds-idrepo-1 42m 13557Mi ds-idrepo-2 8m 13634Mi end-user-ui-6845bc78c7-9zthp 1m 5Mi idm-65858d8c4c-5kkq9 8m 5150Mi idm-65858d8c4c-gdv6b 7m 5293Mi lodemon-86d6dfd886-rxdp4 2m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi 
overseer-0-64c9959746-2jz9t 1144m 967Mi 18:06:56 DEBUG --- stderr --- 18:06:56 DEBUG 18:06:59 INFO 18:06:59 INFO [loop_until]: kubectl --namespace=xlou top node 18:06:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:06:59 INFO [loop_until]: OK (rc = 0) 18:06:59 DEBUG --- stdout --- 18:06:59 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1372Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 6858Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 6893Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 58m 0% 6921Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 6609Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2169Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 6406Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 64m 0% 14259Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 87m 0% 14248Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 59m 0% 14326Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1312m 8% 2462Mi 4% 18:06:59 DEBUG --- stderr --- 18:06:59 DEBUG 18:07:56 INFO 18:07:56 INFO [loop_until]: kubectl --namespace=xlou top pods 18:07:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:07:56 INFO [loop_until]: OK (rc = 0) 18:07:56 DEBUG --- stdout --- 18:07:56 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-s976l 1m 5Mi am-55f77847b7-klhnq 7m 5834Mi am-55f77847b7-sgmd6 6m 5759Mi am-55f77847b7-wq5w5 8m 5760Mi ds-cts-0 5m 375Mi ds-cts-1 7m 379Mi ds-cts-2 4m 370Mi ds-idrepo-0 11m 13541Mi ds-idrepo-1 9m 13558Mi ds-idrepo-2 8m 13634Mi end-user-ui-6845bc78c7-9zthp 1m 5Mi idm-65858d8c4c-5kkq9 7m 5150Mi idm-65858d8c4c-gdv6b 7m 5293Mi lodemon-86d6dfd886-rxdp4 2m 66Mi login-ui-74d6fb46c-2hbvv 1m 3Mi overseer-0-64c9959746-2jz9t 1274m 1087Mi 18:07:56 DEBUG --- stderr --- 18:07:56 DEBUG 18:07:59 INFO 18:07:59 INFO [loop_until]: kubectl --namespace=xlou top node 18:07:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:07:59 INFO [loop_until]: OK (rc = 0) 18:07:59 DEBUG --- stdout --- 18:07:59 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1370Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 62m 0% 6859Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 6896Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 60m 0% 6922Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 6612Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 125m 0% 2170Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 6405Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 65m 0% 14259Mi 24% gke-xlou-cdm-ds-32e4dcb1-b374 61m 0% 14250Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 59m 0% 14323Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1157m 7% 2597Mi 4% 18:07:59 DEBUG --- stderr --- 18:07:59 DEBUG 18:08:25 INFO Finished: True 18:08:25 INFO Waiting for threads to register finish flag 18:08:59 INFO Done. Have a nice day! :) 127.0.0.1 - - [12/Aug/2023 18:08:59] "GET /monitoring/stop HTTP/1.1" 200 - 18:09:02 INFO File /tmp/lodemon_data-23-08-12_15:29:01/Cpu_cores_used_per_pod.json does not exist. Skipping... 18:09:02 INFO File /tmp/lodemon_data-23-08-12_15:29:01/Memory_usage_per_pod.json does not exist. Skipping... 18:09:02 INFO File /tmp/lodemon_data-23-08-12_15:29:01/Disk_tps_read_per_pod.json does not exist. Skipping... 
18:09:02 INFO File /tmp/lodemon_data-23-08-12_15:29:01/Disk_tps_writes_per_pod.json does not exist. Skipping...
18:09:02 INFO File /tmp/lodemon_data-23-08-12_15:29:01/Cpu_cores_used_per_node.json does not exist. Skipping...
18:09:02 INFO File /tmp/lodemon_data-23-08-12_15:29:01/Memory_usage_used_per_node.json does not exist. Skipping...
18:09:02 INFO File /tmp/lodemon_data-23-08-12_15:29:01/Cpu_iowait_per_node.json does not exist. Skipping...
18:09:02 INFO File /tmp/lodemon_data-23-08-12_15:29:01/Network_receive_per_node.json does not exist. Skipping...
18:09:02 INFO File /tmp/lodemon_data-23-08-12_15:29:01/Network_transmit_per_node.json does not exist. Skipping...
18:09:02 INFO File /tmp/lodemon_data-23-08-12_15:29:01/am_cts_task_count_token_session.json does not exist. Skipping...
18:09:02 INFO File /tmp/lodemon_data-23-08-12_15:29:01/am_authentication_rate.json does not exist. Skipping...
18:09:02 INFO File /tmp/lodemon_data-23-08-12_15:29:01/ds_db_cache_misses_internal_nodes(backend=amCts).json does not exist. Skipping...
18:09:02 INFO File /tmp/lodemon_data-23-08-12_15:29:01/ds_db_cache_misses_internal_nodes(backend=amIdentityStore).json does not exist. Skipping...
18:09:02 INFO File /tmp/lodemon_data-23-08-12_15:29:01/ds_db_cache_misses_internal_nodes(backend=cfgStore).json does not exist. Skipping...
18:09:02 INFO File /tmp/lodemon_data-23-08-12_15:29:01/ds_db_cache_misses_internal_nodes(backend=idmRepo).json does not exist. Skipping...
18:09:02 INFO File /tmp/lodemon_data-23-08-12_15:29:01/am_authentication_count_per_pod.json does not exist. Skipping...
18:09:02 INFO File /tmp/lodemon_data-23-08-12_15:29:01/Cts_reaper_Deletion_count.json does not exist. Skipping...
18:09:02 INFO File /tmp/lodemon_data-23-08-12_15:29:01/AM_oauth2_authorization_codes.json does not exist. Skipping...
18:09:02 INFO File /tmp/lodemon_data-23-08-12_15:29:01/ds_backend_entries_deleted_amCts.json does not exist. Skipping...
18:09:02 INFO File /tmp/lodemon_data-23-08-12_15:29:01/ds_pods_replication_delay.json does not exist. Skipping...
18:09:02 INFO File /tmp/lodemon_data-23-08-12_15:29:01/am_cts_reaper_cache_size.json does not exist. Skipping...
18:09:02 INFO File /tmp/lodemon_data-23-08-12_15:29:01/am_cts_reaper_search_seconds_total.json does not exist. Skipping...
18:09:02 INFO File /tmp/lodemon_data-23-08-12_15:29:01/ds_replication_replica_replayed_updates_conflicts_resolved.json does not exist. Skipping...
18:09:02 INFO File /tmp/lodemon_data-23-08-12_15:29:01/node_disk_read_bytes_total.json does not exist. Skipping...
18:09:02 INFO File /tmp/lodemon_data-23-08-12_15:29:01/node_disk_written_bytes_total.json does not exist. Skipping...
18:09:02 INFO File /tmp/lodemon_data-23-08-12_15:29:01/ds_backend_entry_count.json does not exist. Skipping...
18:09:02 INFO File /tmp/lodemon_data-23-08-12_15:29:01/node_disk_io_time_seconds_total.json does not exist. Skipping...
127.0.0.1 - - [12/Aug/2023 18:09:04] "GET /monitoring/process HTTP/1.1" 200 -
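
Note: every sample in the log above comes from the same one-minute polling cycle of "kubectl --namespace=xlou top pods" and "top node". A minimal sketch of how one such pod sample could be collected and converted to numbers (millicores / MiB) is given below; it assumes Python 3 with kubectl on PATH and pointed at the same cluster, reuses the xlou namespace from this log, and uses illustrative helper names that are not part of lodemon itself.

import re
import subprocess
from datetime import datetime

NAMESPACE = "xlou"  # namespace used throughout this log


def parse_cpu(value: str) -> int:
    # kubectl top prints CPU as millicores ("297m") or whole cores ("2").
    return int(value[:-1]) if value.endswith("m") else int(float(value) * 1000)


def parse_memory(value: str) -> int:
    # kubectl top prints memory with a binary suffix ("5826Mi", "1Gi", ...).
    match = re.fullmatch(r"(\d+)(Ki|Mi|Gi)", value)
    if not match:
        raise ValueError(f"unexpected memory value: {value}")
    number, unit = int(match.group(1)), match.group(2)
    factor = {"Ki": 1 / 1024, "Mi": 1, "Gi": 1024}[unit]
    return int(number * factor)


def sample_top_pods() -> dict:
    # Run 'kubectl top pods' once and return {pod_name: (millicores, MiB)}.
    out = subprocess.run(
        ["kubectl", f"--namespace={NAMESPACE}", "top", "pods"],
        check=True, capture_output=True, text=True,
    ).stdout
    sample = {}
    for line in out.splitlines()[1:]:  # skip the NAME CPU(cores) MEMORY(bytes) header
        name, cpu, mem = line.split()
        sample[name] = (parse_cpu(cpu), parse_memory(mem))
    return sample


if __name__ == "__main__":
    # One poll, analogous to a single "top pods" snapshot in the log above.
    print(datetime.now().isoformat(), sample_top_pods())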