====================================================================================================
========================================= Pod describe =========================================
====================================================================================================
Name:             lodemon-9c5f9bf5b-bl4rx
Namespace:        xlou
Priority:         0
Node:             gke-xlou-cdm-default-pool-f05840a3-2nsn/10.142.0.46
Start Time:       Sat, 12 Aug 2023 11:01:51 +0000
Labels:           app=lodemon
                  app.kubernetes.io/name=lodemon
                  pod-template-hash=9c5f9bf5b
                  skaffold.dev/run-id=1c4b5d13-f000-4c58-8475-10823934f209
Annotations:
Status:           Running
IP:               10.106.45.62
IPs:
  IP:             10.106.45.62
Controlled By:    ReplicaSet/lodemon-9c5f9bf5b
Containers:
  lodemon:
    Container ID:   containerd://8c80bb428e8696ec331c832c2420fedcdea1758c903fafad212d4e639767e743
    Image:          gcr.io/engineeringpit/lodestar-images/lodestarbox:6c23848450de3f8e82f0a619a86abcd91fc890c6
    Image ID:       gcr.io/engineeringpit/lodestar-images/lodestarbox@sha256:f419b98ce988c016f788d178b318b601ed56b4ebb6e1a8df68b3ff2a986af79d
    Port:           8080/TCP
    Host Port:      0/TCP
    Command:        python3
    Args:           /lodestar/scripts/lodemon_run.py -W default
    State:          Running
      Started:      Sat, 12 Aug 2023 11:01:52 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  2Gi
    Requests:
      cpu:     1
      memory:  1Gi
    Liveness:   exec [cat /tmp/lodemon_alive] delay=20s timeout=1s period=10s #success=1 #failure=3
    Readiness:  exec [cat /tmp/lodemon_alive] delay=20s timeout=1s period=10s #success=1 #failure=3
    Environment:
      SKAFFOLD_PROFILE:  medium
    Mounts:
      /lodestar/config/config.yaml from config (rw,path="config.yaml")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fsdng (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      lodemon-config
    Optional:  false
  kube-api-access-fsdng:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:
    DownwardAPI:             true
QoS Class:        Burstable
Node-Selectors:
Tolerations:      node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                  node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
====================================================================================================
=========================================== Pod logs ===========================================
====================================================================================================
12:01:53 INFO 12:01:53 INFO --------------------- Get expected number of pods --------------------- 12:01:53 INFO 12:01:53 INFO [loop_until]: kubectl --namespace=xlou get deployments --selector app=am --output jsonpath={.items[*].spec.replicas} 12:01:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:01:53 INFO [loop_until]: OK (rc = 0) 12:01:53 DEBUG --- stdout --- 12:01:53 DEBUG 3 12:01:53 DEBUG --- stderr --- 12:01:53 DEBUG 12:01:53 INFO 12:01:53 INFO ---------------------------- Get pod list ---------------------------- 12:01:53 INFO 12:01:53 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=am --output jsonpath={.items[*].metadata.name} 12:01:53 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0] 12:01:53 INFO [loop_until]: OK (rc = 0) 12:01:53 DEBUG --- stdout --- 12:01:53 DEBUG am-55f77847b7-5g27b am-55f77847b7-6hcmp am-55f77847b7-8wqjg 12:01:53 DEBUG --- stderr --- 12:01:53 DEBUG 12:01:53 INFO 12:01:53 INFO -------------- Check pod 
am-55f77847b7-5g27b is running -------------- 12:01:53 INFO 12:01:53 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-5g27b -o=jsonpath={.status.phase} | grep "Running" 12:01:53 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 12:01:53 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 12:01:53 INFO [loop_until]: OK (rc = 0) 12:01:53 DEBUG --- stdout --- 12:01:53 DEBUG Running 12:01:53 DEBUG --- stderr --- 12:01:53 DEBUG 12:01:53 INFO 12:01:53 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-5g27b -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 12:01:53 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 12:01:53 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 12:01:53 INFO [loop_until]: OK (rc = 0) 12:01:53 DEBUG --- stdout --- 12:01:53 DEBUG true 12:01:53 DEBUG --- stderr --- 12:01:53 DEBUG 12:01:53 INFO 12:01:53 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-5g27b --output jsonpath={.status.startTime} 12:01:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:01:53 INFO [loop_until]: OK (rc = 0) 12:01:53 DEBUG --- stdout --- 12:01:53 DEBUG 2023-08-12T10:52:52Z 12:01:53 DEBUG --- stderr --- 12:01:53 DEBUG 12:01:53 INFO 12:01:53 INFO ------- Check pod am-55f77847b7-5g27b filesystem is accessible ------- 12:01:53 INFO 12:01:53 INFO [loop_until]: kubectl --namespace=xlou exec am-55f77847b7-5g27b --container openam -- ls / | grep "bin" 12:01:53 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 12:01:53 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 12:01:53 INFO [loop_until]: OK (rc = 0) 12:01:53 DEBUG --- stdout --- 12:01:53 DEBUG bin boot dev entrypoint.sh etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var 12:01:53 DEBUG --- stderr --- 12:01:53 DEBUG 12:01:53 INFO 12:01:53 INFO ------------- Check pod am-55f77847b7-5g27b restart count ------------- 12:01:53 INFO 12:01:53 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-5g27b --output jsonpath={.status.containerStatuses[*].restartCount} 12:01:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:01:53 INFO [loop_until]: OK (rc = 0) 12:01:53 DEBUG --- stdout --- 12:01:53 DEBUG 0 12:01:53 DEBUG --- stderr --- 12:01:53 DEBUG 12:01:53 INFO Pod am-55f77847b7-5g27b has been restarted 0 times. 
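The [loop_until] entries above wrap every kubectl call in a retry loop parameterised by max_time, interval and expected_rc. A minimal sketch of that behaviour in Python, assuming a plain shell-command wrapper (the actual lodemon implementation is not shown in this log):

import subprocess
import time

def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,)):
    # Re-run `cmd` every `interval` seconds until its return code is one of
    # `expected_rc` or `max_time` seconds have elapsed. Parameter names follow
    # the log output; the implementation itself is an assumption.
    deadline = time.monotonic() + max_time
    while True:
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        if result.returncode in expected_rc:
            return result  # caller inspects stdout/stderr
        if time.monotonic() >= deadline:
            raise TimeoutError(f"command did not succeed within {max_time}s: {cmd}")
        time.sleep(interval)

# Example: the per-pod "is running" check from the log above.
phase = loop_until(
    'kubectl --namespace=xlou get pods am-55f77847b7-5g27b '
    '-o=jsonpath={.status.phase} | grep "Running"',
    max_time=360, interval=5,
).stdout.strip()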
12:01:53 INFO 12:01:53 INFO -------------- Check pod am-55f77847b7-6hcmp is running -------------- 12:01:53 INFO 12:01:53 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-6hcmp -o=jsonpath={.status.phase} | grep "Running" 12:01:53 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 12:01:53 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 12:01:53 INFO [loop_until]: OK (rc = 0) 12:01:53 DEBUG --- stdout --- 12:01:53 DEBUG Running 12:01:53 DEBUG --- stderr --- 12:01:53 DEBUG 12:01:53 INFO 12:01:53 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-6hcmp -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 12:01:53 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 12:01:54 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 12:01:54 INFO [loop_until]: OK (rc = 0) 12:01:54 DEBUG --- stdout --- 12:01:54 DEBUG true 12:01:54 DEBUG --- stderr --- 12:01:54 DEBUG 12:01:54 INFO 12:01:54 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-6hcmp --output jsonpath={.status.startTime} 12:01:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:01:54 INFO [loop_until]: OK (rc = 0) 12:01:54 DEBUG --- stdout --- 12:01:54 DEBUG 2023-08-12T10:52:52Z 12:01:54 DEBUG --- stderr --- 12:01:54 DEBUG 12:01:54 INFO 12:01:54 INFO ------- Check pod am-55f77847b7-6hcmp filesystem is accessible ------- 12:01:54 INFO 12:01:54 INFO [loop_until]: kubectl --namespace=xlou exec am-55f77847b7-6hcmp --container openam -- ls / | grep "bin" 12:01:54 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 12:01:54 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 12:01:54 INFO [loop_until]: OK (rc = 0) 12:01:54 DEBUG --- stdout --- 12:01:54 DEBUG bin boot dev entrypoint.sh etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var 12:01:54 DEBUG --- stderr --- 12:01:54 DEBUG 12:01:54 INFO 12:01:54 INFO ------------- Check pod am-55f77847b7-6hcmp restart count ------------- 12:01:54 INFO 12:01:54 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-6hcmp --output jsonpath={.status.containerStatuses[*].restartCount} 12:01:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:01:54 INFO [loop_until]: OK (rc = 0) 12:01:54 DEBUG --- stdout --- 12:01:54 DEBUG 0 12:01:54 DEBUG --- stderr --- 12:01:54 DEBUG 12:01:54 INFO Pod am-55f77847b7-6hcmp has been restarted 0 times. 
12:01:54 INFO 12:01:54 INFO -------------- Check pod am-55f77847b7-8wqjg is running -------------- 12:01:54 INFO 12:01:54 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-8wqjg -o=jsonpath={.status.phase} | grep "Running" 12:01:54 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 12:01:54 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 12:01:54 INFO [loop_until]: OK (rc = 0) 12:01:54 DEBUG --- stdout --- 12:01:54 DEBUG Running 12:01:54 DEBUG --- stderr --- 12:01:54 DEBUG 12:01:54 INFO 12:01:54 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-8wqjg -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 12:01:54 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 12:01:54 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 12:01:54 INFO [loop_until]: OK (rc = 0) 12:01:54 DEBUG --- stdout --- 12:01:54 DEBUG true 12:01:54 DEBUG --- stderr --- 12:01:54 DEBUG 12:01:54 INFO 12:01:54 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-8wqjg --output jsonpath={.status.startTime} 12:01:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:01:54 INFO [loop_until]: OK (rc = 0) 12:01:54 DEBUG --- stdout --- 12:01:54 DEBUG 2023-08-12T10:52:52Z 12:01:54 DEBUG --- stderr --- 12:01:54 DEBUG 12:01:54 INFO 12:01:54 INFO ------- Check pod am-55f77847b7-8wqjg filesystem is accessible ------- 12:01:54 INFO 12:01:54 INFO [loop_until]: kubectl --namespace=xlou exec am-55f77847b7-8wqjg --container openam -- ls / | grep "bin" 12:01:54 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 12:01:54 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 12:01:54 INFO [loop_until]: OK (rc = 0) 12:01:54 DEBUG --- stdout --- 12:01:54 DEBUG bin boot dev entrypoint.sh etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var 12:01:54 DEBUG --- stderr --- 12:01:54 DEBUG 12:01:54 INFO 12:01:54 INFO ------------- Check pod am-55f77847b7-8wqjg restart count ------------- 12:01:54 INFO 12:01:54 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-8wqjg --output jsonpath={.status.containerStatuses[*].restartCount} 12:01:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:01:54 INFO [loop_until]: OK (rc = 0) 12:01:54 DEBUG --- stdout --- 12:01:54 DEBUG 0 12:01:54 DEBUG --- stderr --- 12:01:54 DEBUG 12:01:54 INFO Pod am-55f77847b7-8wqjg has been restarted 0 times. 
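Each per-pod check above reads one jsonpath field at a time (.status.phase, .status.containerStatuses[*].ready, .status.startTime and .status.containerStatuses[*].restartCount). For illustration only, the same fields can be gathered with a single `kubectl get pod -o json` call; the pod_health helper below is an assumption, not part of lodemon:

import json
import subprocess

def pod_health(namespace, pod):
    # Fetch the full pod status once and pull out the fields the log checks
    # individually: phase, container readiness, start time and restart count.
    raw = subprocess.run(
        ["kubectl", "--namespace", namespace, "get", "pod", pod, "-o", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    status = json.loads(raw)["status"]
    containers = status.get("containerStatuses", [])
    return {
        "phase": status.get("phase"),
        "ready": bool(containers) and all(c.get("ready") for c in containers),
        "startTime": status.get("startTime"),
        "restarts": sum(c.get("restartCount", 0) for c in containers),
    }

print(pod_health("xlou", "am-55f77847b7-5g27b"))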
12:01:54 INFO 12:01:54 INFO --------------------- Get expected number of pods --------------------- 12:01:54 INFO 12:01:54 INFO [loop_until]: kubectl --namespace=xlou get deployment --selector app=idm --output jsonpath={.items[*].spec.replicas} 12:01:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:01:54 INFO [loop_until]: OK (rc = 0) 12:01:54 DEBUG --- stdout --- 12:01:54 DEBUG 2 12:01:54 DEBUG --- stderr --- 12:01:54 DEBUG 12:01:54 INFO 12:01:54 INFO ---------------------------- Get pod list ---------------------------- 12:01:54 INFO 12:01:54 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=idm --output jsonpath={.items[*].metadata.name} 12:01:54 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0] 12:01:54 INFO [loop_until]: OK (rc = 0) 12:01:54 DEBUG --- stdout --- 12:01:54 DEBUG idm-65858d8c4c-gwvpj idm-65858d8c4c-x6slf 12:01:54 DEBUG --- stderr --- 12:01:54 DEBUG 12:01:54 INFO 12:01:54 INFO -------------- Check pod idm-65858d8c4c-gwvpj is running -------------- 12:01:54 INFO 12:01:54 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-gwvpj -o=jsonpath={.status.phase} | grep "Running" 12:01:54 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 12:01:54 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 12:01:54 INFO [loop_until]: OK (rc = 0) 12:01:54 DEBUG --- stdout --- 12:01:54 DEBUG Running 12:01:54 DEBUG --- stderr --- 12:01:54 DEBUG 12:01:54 INFO 12:01:54 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-gwvpj -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 12:01:54 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 12:01:54 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 12:01:54 INFO [loop_until]: OK (rc = 0) 12:01:54 DEBUG --- stdout --- 12:01:54 DEBUG true 12:01:54 DEBUG --- stderr --- 12:01:54 DEBUG 12:01:54 INFO 12:01:54 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-gwvpj --output jsonpath={.status.startTime} 12:01:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:01:54 INFO [loop_until]: OK (rc = 0) 12:01:54 DEBUG --- stdout --- 12:01:54 DEBUG 2023-08-12T10:52:52Z 12:01:54 DEBUG --- stderr --- 12:01:54 DEBUG 12:01:54 INFO 12:01:54 INFO ------- Check pod idm-65858d8c4c-gwvpj filesystem is accessible ------- 12:01:54 INFO 12:01:54 INFO [loop_until]: kubectl --namespace=xlou exec idm-65858d8c4c-gwvpj --container openidm -- ls / | grep "bin" 12:01:54 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 12:01:54 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 12:01:54 INFO [loop_until]: OK (rc = 0) 12:01:54 DEBUG --- stdout --- 12:01:54 DEBUG Dockerfile.java-11 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 12:01:54 DEBUG --- stderr --- 12:01:54 DEBUG 12:01:54 INFO 12:01:54 INFO ------------ Check pod idm-65858d8c4c-gwvpj restart count ------------ 12:01:54 INFO 12:01:54 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-gwvpj --output jsonpath={.status.containerStatuses[*].restartCount} 12:01:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:01:54 INFO [loop_until]: OK (rc = 0) 12:01:54 DEBUG --- stdout --- 12:01:54 DEBUG 0 12:01:54 DEBUG --- stderr --- 12:01:54 DEBUG 12:01:54 INFO Pod idm-65858d8c4c-gwvpj has been restarted 0 times. 
12:01:54 INFO 12:01:54 INFO -------------- Check pod idm-65858d8c4c-x6slf is running -------------- 12:01:54 INFO 12:01:54 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-x6slf -o=jsonpath={.status.phase} | grep "Running" 12:01:54 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 12:01:55 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 12:01:55 INFO [loop_until]: OK (rc = 0) 12:01:55 DEBUG --- stdout --- 12:01:55 DEBUG Running 12:01:55 DEBUG --- stderr --- 12:01:55 DEBUG 12:01:55 INFO 12:01:55 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-x6slf -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 12:01:55 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 12:01:55 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 12:01:55 INFO [loop_until]: OK (rc = 0) 12:01:55 DEBUG --- stdout --- 12:01:55 DEBUG true 12:01:55 DEBUG --- stderr --- 12:01:55 DEBUG 12:01:55 INFO 12:01:55 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-x6slf --output jsonpath={.status.startTime} 12:01:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:01:55 INFO [loop_until]: OK (rc = 0) 12:01:55 DEBUG --- stdout --- 12:01:55 DEBUG 2023-08-12T10:52:52Z 12:01:55 DEBUG --- stderr --- 12:01:55 DEBUG 12:01:55 INFO 12:01:55 INFO ------- Check pod idm-65858d8c4c-x6slf filesystem is accessible ------- 12:01:55 INFO 12:01:55 INFO [loop_until]: kubectl --namespace=xlou exec idm-65858d8c4c-x6slf --container openidm -- ls / | grep "bin" 12:01:55 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 12:01:55 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 12:01:55 INFO [loop_until]: OK (rc = 0) 12:01:55 DEBUG --- stdout --- 12:01:55 DEBUG Dockerfile.java-11 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 12:01:55 DEBUG --- stderr --- 12:01:55 DEBUG 12:01:55 INFO 12:01:55 INFO ------------ Check pod idm-65858d8c4c-x6slf restart count ------------ 12:01:55 INFO 12:01:55 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-x6slf --output jsonpath={.status.containerStatuses[*].restartCount} 12:01:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:01:55 INFO [loop_until]: OK (rc = 0) 12:01:55 DEBUG --- stdout --- 12:01:55 DEBUG 0 12:01:55 DEBUG --- stderr --- 12:01:55 DEBUG 12:01:55 INFO Pod idm-65858d8c4c-x6slf has been restarted 0 times. 
12:01:55 INFO 12:01:55 INFO --------------------- Get expected number of pods --------------------- 12:01:55 INFO 12:01:55 INFO [loop_until]: kubectl --namespace=xlou get statefulsets --selector app=ds-idrepo --output jsonpath={.items[*].spec.replicas} 12:01:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:01:55 INFO [loop_until]: OK (rc = 0) 12:01:55 DEBUG --- stdout --- 12:01:55 DEBUG 3 12:01:55 DEBUG --- stderr --- 12:01:55 DEBUG 12:01:55 INFO 12:01:55 INFO ---------------------------- Get pod list ---------------------------- 12:01:55 INFO 12:01:55 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=ds-idrepo --output jsonpath={.items[*].metadata.name} 12:01:55 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0] 12:01:55 INFO [loop_until]: OK (rc = 0) 12:01:55 DEBUG --- stdout --- 12:01:55 DEBUG ds-idrepo-0 ds-idrepo-1 ds-idrepo-2 12:01:55 DEBUG --- stderr --- 12:01:55 DEBUG 12:01:55 INFO 12:01:55 INFO ------------------ Check pod ds-idrepo-0 is running ------------------ 12:01:55 INFO 12:01:55 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.phase} | grep "Running" 12:01:55 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 12:01:55 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 12:01:55 INFO [loop_until]: OK (rc = 0) 12:01:55 DEBUG --- stdout --- 12:01:55 DEBUG Running 12:01:55 DEBUG --- stderr --- 12:01:55 DEBUG 12:01:55 INFO 12:01:55 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 12:01:55 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 12:01:55 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 12:01:55 INFO [loop_until]: OK (rc = 0) 12:01:55 DEBUG --- stdout --- 12:01:55 DEBUG true 12:01:55 DEBUG --- stderr --- 12:01:55 DEBUG 12:01:55 INFO 12:01:55 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 --output jsonpath={.status.startTime} 12:01:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:01:55 INFO [loop_until]: OK (rc = 0) 12:01:55 DEBUG --- stdout --- 12:01:55 DEBUG 2023-08-12T10:19:05Z 12:01:55 DEBUG --- stderr --- 12:01:55 DEBUG 12:01:55 INFO 12:01:55 INFO ----------- Check pod ds-idrepo-0 filesystem is accessible ----------- 12:01:55 INFO 12:01:55 INFO [loop_until]: kubectl --namespace=xlou exec ds-idrepo-0 --container ds -- ls / | grep "bin" 12:01:55 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 12:01:55 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 12:01:55 INFO [loop_until]: OK (rc = 0) 12:01:55 DEBUG --- stdout --- 12:01:55 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 12:01:55 DEBUG --- stderr --- 12:01:55 DEBUG 12:01:55 INFO 12:01:55 INFO ----------------- Check pod ds-idrepo-0 restart count ----------------- 12:01:55 INFO 12:01:55 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 --output jsonpath={.status.containerStatuses[*].restartCount} 12:01:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:01:55 INFO [loop_until]: OK (rc = 0) 12:01:55 DEBUG --- stdout --- 12:01:55 DEBUG 0 12:01:55 DEBUG --- stderr --- 12:01:55 DEBUG 12:01:55 INFO Pod ds-idrepo-0 has been restarted 0 times. 
12:01:55 INFO 12:01:55 INFO ------------------ Check pod ds-idrepo-1 is running ------------------ 12:01:55 INFO 12:01:55 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.phase} | grep "Running" 12:01:55 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 12:01:55 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 12:01:55 INFO [loop_until]: OK (rc = 0) 12:01:55 DEBUG --- stdout --- 12:01:55 DEBUG Running 12:01:55 DEBUG --- stderr --- 12:01:55 DEBUG 12:01:55 INFO 12:01:55 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 12:01:55 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 12:01:55 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 12:01:55 INFO [loop_until]: OK (rc = 0) 12:01:55 DEBUG --- stdout --- 12:01:55 DEBUG true 12:01:55 DEBUG --- stderr --- 12:01:55 DEBUG 12:01:55 INFO 12:01:55 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 --output jsonpath={.status.startTime} 12:01:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:01:55 INFO [loop_until]: OK (rc = 0) 12:01:55 DEBUG --- stdout --- 12:01:55 DEBUG 2023-08-12T10:30:59Z 12:01:55 DEBUG --- stderr --- 12:01:55 DEBUG 12:01:55 INFO 12:01:55 INFO ----------- Check pod ds-idrepo-1 filesystem is accessible ----------- 12:01:55 INFO 12:01:55 INFO [loop_until]: kubectl --namespace=xlou exec ds-idrepo-1 --container ds -- ls / | grep "bin" 12:01:55 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 12:01:55 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 12:01:55 INFO [loop_until]: OK (rc = 0) 12:01:55 DEBUG --- stdout --- 12:01:55 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 12:01:55 DEBUG --- stderr --- 12:01:55 DEBUG 12:01:55 INFO 12:01:55 INFO ----------------- Check pod ds-idrepo-1 restart count ----------------- 12:01:55 INFO 12:01:55 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 --output jsonpath={.status.containerStatuses[*].restartCount} 12:01:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:01:55 INFO [loop_until]: OK (rc = 0) 12:01:55 DEBUG --- stdout --- 12:01:55 DEBUG 0 12:01:55 DEBUG --- stderr --- 12:01:55 DEBUG 12:01:55 INFO Pod ds-idrepo-1 has been restarted 0 times. 
12:01:55 INFO 12:01:55 INFO ------------------ Check pod ds-idrepo-2 is running ------------------ 12:01:55 INFO 12:01:55 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.phase} | grep "Running" 12:01:55 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 12:01:55 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 12:01:55 INFO [loop_until]: OK (rc = 0) 12:01:55 DEBUG --- stdout --- 12:01:55 DEBUG Running 12:01:55 DEBUG --- stderr --- 12:01:55 DEBUG 12:01:55 INFO 12:01:55 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 12:01:55 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 12:01:56 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 12:01:56 INFO [loop_until]: OK (rc = 0) 12:01:56 DEBUG --- stdout --- 12:01:56 DEBUG true 12:01:56 DEBUG --- stderr --- 12:01:56 DEBUG 12:01:56 INFO 12:01:56 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 --output jsonpath={.status.startTime} 12:01:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:01:56 INFO [loop_until]: OK (rc = 0) 12:01:56 DEBUG --- stdout --- 12:01:56 DEBUG 2023-08-12T10:41:53Z 12:01:56 DEBUG --- stderr --- 12:01:56 DEBUG 12:01:56 INFO 12:01:56 INFO ----------- Check pod ds-idrepo-2 filesystem is accessible ----------- 12:01:56 INFO 12:01:56 INFO [loop_until]: kubectl --namespace=xlou exec ds-idrepo-2 --container ds -- ls / | grep "bin" 12:01:56 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 12:01:56 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 12:01:56 INFO [loop_until]: OK (rc = 0) 12:01:56 DEBUG --- stdout --- 12:01:56 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 12:01:56 DEBUG --- stderr --- 12:01:56 DEBUG 12:01:56 INFO 12:01:56 INFO ----------------- Check pod ds-idrepo-2 restart count ----------------- 12:01:56 INFO 12:01:56 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 --output jsonpath={.status.containerStatuses[*].restartCount} 12:01:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:01:56 INFO [loop_until]: OK (rc = 0) 12:01:56 DEBUG --- stdout --- 12:01:56 DEBUG 0 12:01:56 DEBUG --- stderr --- 12:01:56 DEBUG 12:01:56 INFO Pod ds-idrepo-2 has been restarted 0 times. 
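The "Get expected number of pods" / "Get pod list" steps compare .spec.replicas on the controller with the pods matched by the same label selector. A sketch of that comparison, with the helper name and structure assumed rather than taken from lodemon:

import subprocess

def expected_vs_running(namespace, kind, selector):
    # `kind` is the controller resource queried in the log, e.g. "deployments"
    # or "statefulsets"; both jsonpath queries mirror the kubectl calls above.
    def jsonpath(resource, path):
        return subprocess.run(
            ["kubectl", "--namespace", namespace, "get", resource,
             "--selector", selector, "--output", f"jsonpath={path}"],
            capture_output=True, text=True, check=True,
        ).stdout.split()

    expected = sum(int(n) for n in jsonpath(kind, "{.items[*].spec.replicas}"))
    pods = jsonpath("pods", "{.items[*].metadata.name}")
    return expected, pods

expected, pods = expected_vs_running("xlou", "statefulsets", "app=ds-idrepo")
assert expected == len(pods), f"expected {expected} pods, found {len(pods)}"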
12:01:56 INFO 12:01:56 INFO --------------------- Get expected number of pods --------------------- 12:01:56 INFO 12:01:56 INFO [loop_until]: kubectl --namespace=xlou get statefulsets --selector app=ds-cts --output jsonpath={.items[*].spec.replicas} 12:01:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:01:56 INFO [loop_until]: OK (rc = 0) 12:01:56 DEBUG --- stdout --- 12:01:56 DEBUG 3 12:01:56 DEBUG --- stderr --- 12:01:56 DEBUG 12:01:56 INFO 12:01:56 INFO ---------------------------- Get pod list ---------------------------- 12:01:56 INFO 12:01:56 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=ds-cts --output jsonpath={.items[*].metadata.name} 12:01:56 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0] 12:01:56 INFO [loop_until]: OK (rc = 0) 12:01:56 DEBUG --- stdout --- 12:01:56 DEBUG ds-cts-0 ds-cts-1 ds-cts-2 12:01:56 DEBUG --- stderr --- 12:01:56 DEBUG 12:01:56 INFO 12:01:56 INFO -------------------- Check pod ds-cts-0 is running -------------------- 12:01:56 INFO 12:01:56 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.phase} | grep "Running" 12:01:56 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 12:01:56 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 12:01:56 INFO [loop_until]: OK (rc = 0) 12:01:56 DEBUG --- stdout --- 12:01:56 DEBUG Running 12:01:56 DEBUG --- stderr --- 12:01:56 DEBUG 12:01:56 INFO 12:01:56 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 12:01:56 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 12:01:56 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 12:01:56 INFO [loop_until]: OK (rc = 0) 12:01:56 DEBUG --- stdout --- 12:01:56 DEBUG true 12:01:56 DEBUG --- stderr --- 12:01:56 DEBUG 12:01:56 INFO 12:01:56 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-0 --output jsonpath={.status.startTime} 12:01:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:01:56 INFO [loop_until]: OK (rc = 0) 12:01:56 DEBUG --- stdout --- 12:01:56 DEBUG 2023-08-12T10:19:05Z 12:01:56 DEBUG --- stderr --- 12:01:56 DEBUG 12:01:56 INFO 12:01:56 INFO ------------- Check pod ds-cts-0 filesystem is accessible ------------- 12:01:56 INFO 12:01:56 INFO [loop_until]: kubectl --namespace=xlou exec ds-cts-0 --container ds -- ls / | grep "bin" 12:01:56 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 12:01:56 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 12:01:56 INFO [loop_until]: OK (rc = 0) 12:01:56 DEBUG --- stdout --- 12:01:56 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 12:01:56 DEBUG --- stderr --- 12:01:56 DEBUG 12:01:56 INFO 12:01:56 INFO ------------------ Check pod ds-cts-0 restart count ------------------ 12:01:56 INFO 12:01:56 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-0 --output jsonpath={.status.containerStatuses[*].restartCount} 12:01:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:01:56 INFO [loop_until]: OK (rc = 0) 12:01:56 DEBUG --- stdout --- 12:01:56 DEBUG 0 12:01:56 DEBUG --- stderr --- 12:01:56 DEBUG 12:01:56 INFO Pod ds-cts-0 has been restarted 0 times. 
12:01:56 INFO 12:01:56 INFO -------------------- Check pod ds-cts-1 is running -------------------- 12:01:56 INFO 12:01:56 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.phase} | grep "Running" 12:01:56 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 12:01:56 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 12:01:56 INFO [loop_until]: OK (rc = 0) 12:01:56 DEBUG --- stdout --- 12:01:56 DEBUG Running 12:01:56 DEBUG --- stderr --- 12:01:56 DEBUG 12:01:56 INFO 12:01:56 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 12:01:56 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 12:01:56 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 12:01:56 INFO [loop_until]: OK (rc = 0) 12:01:56 DEBUG --- stdout --- 12:01:56 DEBUG true 12:01:56 DEBUG --- stderr --- 12:01:56 DEBUG 12:01:56 INFO 12:01:56 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-1 --output jsonpath={.status.startTime} 12:01:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:01:56 INFO [loop_until]: OK (rc = 0) 12:01:56 DEBUG --- stdout --- 12:01:56 DEBUG 2023-08-12T10:19:31Z 12:01:56 DEBUG --- stderr --- 12:01:56 DEBUG 12:01:56 INFO 12:01:56 INFO ------------- Check pod ds-cts-1 filesystem is accessible ------------- 12:01:56 INFO 12:01:56 INFO [loop_until]: kubectl --namespace=xlou exec ds-cts-1 --container ds -- ls / | grep "bin" 12:01:56 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 12:01:56 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 12:01:56 INFO [loop_until]: OK (rc = 0) 12:01:56 DEBUG --- stdout --- 12:01:56 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 12:01:56 DEBUG --- stderr --- 12:01:56 DEBUG 12:01:56 INFO 12:01:56 INFO ------------------ Check pod ds-cts-1 restart count ------------------ 12:01:56 INFO 12:01:56 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-1 --output jsonpath={.status.containerStatuses[*].restartCount} 12:01:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:01:56 INFO [loop_until]: OK (rc = 0) 12:01:56 DEBUG --- stdout --- 12:01:56 DEBUG 0 12:01:56 DEBUG --- stderr --- 12:01:56 DEBUG 12:01:56 INFO Pod ds-cts-1 has been restarted 0 times. 
12:01:56 INFO 12:01:56 INFO -------------------- Check pod ds-cts-2 is running -------------------- 12:01:56 INFO 12:01:56 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.phase} | grep "Running" 12:01:56 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 12:01:57 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 12:01:57 INFO [loop_until]: OK (rc = 0) 12:01:57 DEBUG --- stdout --- 12:01:57 DEBUG Running 12:01:57 DEBUG --- stderr --- 12:01:57 DEBUG 12:01:57 INFO 12:01:57 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 12:01:57 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 12:01:57 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 12:01:57 INFO [loop_until]: OK (rc = 0) 12:01:57 DEBUG --- stdout --- 12:01:57 DEBUG true 12:01:57 DEBUG --- stderr --- 12:01:57 DEBUG 12:01:57 INFO 12:01:57 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-2 --output jsonpath={.status.startTime} 12:01:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:01:57 INFO [loop_until]: OK (rc = 0) 12:01:57 DEBUG --- stdout --- 12:01:57 DEBUG 2023-08-12T10:19:56Z 12:01:57 DEBUG --- stderr --- 12:01:57 DEBUG 12:01:57 INFO 12:01:57 INFO ------------- Check pod ds-cts-2 filesystem is accessible ------------- 12:01:57 INFO 12:01:57 INFO [loop_until]: kubectl --namespace=xlou exec ds-cts-2 --container ds -- ls / | grep "bin" 12:01:57 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 12:01:57 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 12:01:57 INFO [loop_until]: OK (rc = 0) 12:01:57 DEBUG --- stdout --- 12:01:57 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 12:01:57 DEBUG --- stderr --- 12:01:57 DEBUG 12:01:57 INFO 12:01:57 INFO ------------------ Check pod ds-cts-2 restart count ------------------ 12:01:57 INFO 12:01:57 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-2 --output jsonpath={.status.containerStatuses[*].restartCount} 12:01:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:01:57 INFO [loop_until]: OK (rc = 0) 12:01:57 DEBUG --- stdout --- 12:01:57 DEBUG 0 12:01:57 DEBUG --- stderr --- 12:01:57 DEBUG 12:01:57 INFO Pod ds-cts-2 has been restarted 0 times. * Serving Flask app 'lodemon_run' * Debug mode: off WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead. 
* Running on all addresses (0.0.0.0) * Running on http://127.0.0.1:8080 * Running on http://10.106.45.62:8080 Press CTRL+C to quit 12:02:28 INFO 12:02:28 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 12:02:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:02:28 INFO [loop_until]: OK (rc = 0) 12:02:28 DEBUG --- stdout --- 12:02:28 DEBUG {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 12:02:28 DEBUG --- stderr --- 12:02:28 DEBUG 12:02:28 INFO 12:02:28 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 12:02:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:02:28 INFO [loop_until]: OK (rc = 0) 12:02:28 DEBUG --- stdout --- 12:02:28 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 12:02:28 DEBUG --- stderr --- 12:02:28 DEBUG 12:02:28 INFO 12:02:28 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 12:02:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:02:28 INFO [loop_until]: OK (rc = 0) 12:02:28 DEBUG --- stdout --- 12:02:28 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 12:02:28 DEBUG --- stderr --- 12:02:28 DEBUG 12:02:28 INFO 12:02:28 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 12:02:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:02:28 INFO [loop_until]: OK (rc = 0) 12:02:28 DEBUG --- stdout --- 12:02:28 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 12:02:28 DEBUG --- stderr --- 12:02:28 DEBUG 12:02:28 INFO 12:02:28 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 12:02:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:02:28 INFO [loop_until]: OK (rc = 0) 12:02:28 DEBUG --- stdout --- 12:02:28 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 12:02:28 DEBUG --- stderr --- 12:02:28 DEBUG 12:02:28 INFO 12:02:28 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 12:02:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:02:28 INFO [loop_until]: OK (rc = 0) 12:02:28 DEBUG --- stdout --- 12:02:28 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 12:02:28 DEBUG --- stderr --- 12:02:28 DEBUG 12:02:29 INFO 12:02:29 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 12:02:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:02:29 INFO [loop_until]: OK (rc = 0) 12:02:29 DEBUG --- stdout --- 12:02:29 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 12:02:29 DEBUG --- stderr --- 12:02:29 DEBUG 12:02:29 INFO 12:02:29 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 12:02:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:02:29 INFO [loop_until]: OK (rc = 0) 12:02:29 DEBUG --- stdout --- 12:02:29 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 12:02:29 DEBUG --- stderr --- 12:02:29 DEBUG 12:02:29 INFO 12:02:29 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 12:02:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:02:29 INFO [loop_until]: OK (rc = 0) 12:02:29 DEBUG --- stdout --- 12:02:29 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 12:02:29 DEBUG --- stderr --- 12:02:29 DEBUG 12:02:29 INFO 12:02:29 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 12:02:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:02:29 INFO [loop_until]: OK (rc = 0) 12:02:29 DEBUG --- stdout --- 12:02:29 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 12:02:29 DEBUG --- stderr --- 12:02:29 DEBUG 12:02:29 INFO 12:02:29 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 12:02:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:02:29 INFO [loop_until]: OK (rc = 0) 12:02:29 DEBUG --- stdout --- 12:02:29 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 12:02:29 DEBUG --- stderr --- 12:02:29 DEBUG 12:02:29 INFO 12:02:29 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 12:02:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:02:29 INFO [loop_until]: OK (rc = 0) 12:02:29 DEBUG --- stdout --- 12:02:29 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 12:02:29 DEBUG --- stderr --- 12:02:29 DEBUG 12:02:29 INFO 12:02:29 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 12:02:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:02:29 INFO [loop_until]: OK (rc = 0) 12:02:29 DEBUG --- stdout --- 12:02:29 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 12:02:29 DEBUG --- stderr --- 12:02:29 DEBUG 12:02:29 INFO 12:02:29 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 12:02:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:02:30 INFO [loop_until]: OK (rc = 0) 12:02:30 DEBUG --- stdout --- 12:02:30 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 12:02:30 DEBUG --- stderr --- 12:02:30 DEBUG 12:02:30 INFO 12:02:30 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 12:02:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:02:30 INFO [loop_until]: OK (rc = 0) 12:02:30 DEBUG --- stdout --- 12:02:30 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 12:02:30 DEBUG --- stderr --- 12:02:30 DEBUG 12:02:30 INFO 12:02:30 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 12:02:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:02:30 INFO [loop_until]: OK (rc = 0) 12:02:30 DEBUG --- stdout --- 12:02:30 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 12:02:30 DEBUG --- stderr --- 12:02:30 DEBUG 12:02:30 INFO 12:02:30 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 12:02:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:02:30 INFO [loop_until]: OK (rc = 0) 12:02:30 DEBUG --- stdout --- 12:02:30 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 12:02:30 DEBUG --- stderr --- 12:02:30 DEBUG 12:02:30 INFO 12:02:30 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 12:02:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:02:30 INFO [loop_until]: OK (rc = 0) 12:02:30 DEBUG --- stdout --- 12:02:30 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 12:02:30 DEBUG --- stderr --- 12:02:30 DEBUG 12:02:30 INFO 12:02:30 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 12:02:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:02:30 INFO [loop_until]: OK (rc = 0) 12:02:30 DEBUG --- stdout --- 12:02:30 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 12:02:30 DEBUG --- stderr --- 12:02:30 DEBUG 12:02:30 INFO 12:02:30 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 12:02:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:02:30 INFO [loop_until]: OK (rc = 0) 12:02:30 DEBUG --- stdout --- 12:02:30 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 12:02:30 DEBUG --- stderr --- 12:02:30 DEBUG 12:02:30 INFO 12:02:30 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 12:02:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:02:30 INFO [loop_until]: OK (rc = 0) 12:02:30 DEBUG --- stdout --- 12:02:30 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 12:02:30 DEBUG --- stderr --- 12:02:30 DEBUG 12:02:31 INFO 12:02:31 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 12:02:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:02:31 INFO [loop_until]: OK (rc = 0) 12:02:31 DEBUG --- stdout --- 12:02:31 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 12:02:31 DEBUG --- stderr --- 12:02:31 DEBUG 12:02:31 INFO 12:02:31 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 12:02:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:02:31 INFO [loop_until]: OK (rc = 0) 12:02:31 DEBUG --- stdout --- 12:02:31 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 12:02:31 DEBUG --- stderr --- 12:02:31 DEBUG 12:02:31 INFO 12:02:31 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 12:02:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:02:31 INFO [loop_until]: OK (rc = 0) 12:02:31 DEBUG --- stdout --- 12:02:31 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 12:02:31 DEBUG --- stderr --- 12:02:31 DEBUG 12:02:31 INFO 12:02:31 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 12:02:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:02:31 INFO [loop_until]: OK (rc = 0) 12:02:31 DEBUG --- stdout --- 12:02:31 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 12:02:31 DEBUG --- stderr --- 12:02:31 DEBUG 12:02:31 INFO 12:02:31 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 12:02:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:02:31 INFO [loop_until]: OK (rc = 0) 12:02:31 DEBUG --- stdout --- 12:02:31 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 12:02:31 DEBUG --- stderr --- 12:02:31 DEBUG 12:02:31 INFO 12:02:31 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 12:02:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:02:31 INFO [loop_until]: OK (rc = 0) 12:02:31 DEBUG --- stdout --- 12:02:31 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 12:02:31 DEBUG --- stderr --- 12:02:31 DEBUG 12:02:31 INFO Initializing monitoring instance threads 12:02:31 DEBUG Monitoring instance thread list: [, , , , , , , , , , , , , , , , , , , , , , , , , , , , ] 12:02:31 INFO Starting instance threads 12:02:31 INFO 12:02:31 INFO Thread started 12:02:31 INFO [loop_until]: kubectl --namespace=xlou top node 12:02:31 INFO 12:02:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:02:31 INFO Thread started 12:02:31 INFO [loop_until]: kubectl --namespace=xlou top pods 12:02:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:02:31 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151" 12:02:31 INFO Thread started 12:02:31 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151" 12:02:31 INFO Thread started 12:02:31 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151" 12:02:31 INFO Thread started 12:02:31 INFO [http_cmd]: curl --insecure -L --request GET 
"http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151" 12:02:31 INFO Thread started 12:02:31 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691838151" 12:02:31 INFO Thread started 12:02:31 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691838151" 12:02:31 INFO Thread started 12:02:31 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691838151" 12:02:31 INFO Thread started 12:02:31 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691838151" 12:02:31 INFO Thread started 12:02:31 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691838151" 12:02:31 INFO Thread started 12:02:31 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151" 12:02:31 INFO Thread started 12:02:31 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151" 12:02:31 INFO Thread started 12:02:31 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151" 12:02:31 INFO Thread started 12:02:31 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151" 12:02:31 INFO Thread started 12:02:31 INFO [http_cmd]: curl --insecure -L --request GET 
"http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151" 12:02:31 INFO Thread started 12:02:31 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151" 12:02:31 INFO Thread started 12:02:31 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151" 12:02:31 INFO Thread started 12:02:31 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151" 12:02:31 INFO Thread started 12:02:31 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151" 12:02:31 INFO Thread started 12:02:31 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151" 12:02:31 INFO Thread started 12:02:31 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151" 12:02:31 INFO Thread started Exception in thread Thread-23: 12:02:31 INFO Thread started Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner Exception in thread Thread-24: 12:02:31 INFO Thread started Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner 12:02:31 INFO Thread started Exception in thread Thread-25: self.run() 12:02:31 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691838151" Traceback (most recent call last): 12:02:31 INFO Thread started File "/usr/local/lib/python3.9/threading.py", line 910, in run 12:02:31 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691838151" File "/usr/local/lib/python3.9/threading.py", line 973, in 
_bootstrap_inner 12:02:31 INFO Thread started 12:02:31 INFO Thread started self.run() Exception in thread Thread-28: 12:02:31 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691838151" File "/usr/local/lib/python3.9/threading.py", line 910, in run 12:02:31 INFO Thread started 12:02:31 INFO All threads has been started Traceback (most recent call last): self.run() 12:02:31 INFO [loop_until]: OK (rc = 0) 12:02:31 DEBUG --- stdout --- self._target(*self._args, **self._kwargs) 12:02:31 INFO [loop_until]: OK (rc = 0) self._target(*self._args, **self._kwargs) 12:02:31 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 15m 2495Mi am-55f77847b7-6hcmp 16m 3315Mi am-55f77847b7-8wqjg 16m 2345Mi File "/usr/local/lib/python3.9/threading.py", line 910, in run ds-cts-0 8m 380Mi ds-cts-1 10m 356Mi ds-cts-2 9m 340Mi ds-idrepo-0 27m 10342Mi ds-idrepo-1 21m 10287Mi ds-idrepo-2 36m 10236Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 8m 3249Mi idm-65858d8c4c-x6slf 8m 1294Mi lodemon-9c5f9bf5b-bl4rx 387m 60Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 1m 15Mi File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop 12:02:31 DEBUG --- stdout --- File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop 12:02:31 DEBUG --- stderr --- 127.0.0.1 - - [12/Aug/2023 12:02:31] "GET /monitoring/start HTTP/1.1" 200 - 12:02:31 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 338m 2% 1350Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 75m 0% 4339Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 74m 0% 3480Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 79m 0% 3610Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 4561Mi 7% gke-xlou-cdm-default-pool-f05840a3-h81k 135m 0% 2116Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 76m 0% 2555Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 86m 0% 11007Mi 18% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1063Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 91m 0% 10875Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 73m 0% 10922Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 67m 0% 1626Mi 2% instance.run() 12:02:31 DEBUG instance.run() 12:02:31 DEBUG --- stderr --- 12:02:31 DEBUG self._target(*self._args, **self._kwargs) File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop if self.prom_data['functions']: self.run() KeyError: 'functions' instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run File "/usr/local/lib/python3.9/threading.py", line 910, in run if self.prom_data['functions']: self._target(*self._args, **self._kwargs) KeyError: 'functions' File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop if 
self.prom_data['functions']: KeyError: 'functions' instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run if self.prom_data['functions']: KeyError: 'functions' 12:02:32 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:02:32 WARNING Response is NONE 12:02:32 DEBUG Exception is preset. Setting retry_loop to true 12:02:32 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:02:34 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:02:34 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:02:34 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:02:34 WARNING Response is NONE 12:02:34 WARNING Response is NONE 12:02:34 WARNING Response is NONE 12:02:34 DEBUG Exception is preset. Setting retry_loop to true 12:02:34 DEBUG Exception is preset. Setting retry_loop to true 12:02:34 DEBUG Exception is preset. Setting retry_loop to true 12:02:34 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:02:34 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:02:34 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:02:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 12:02:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:02:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:02:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:02:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:02:38 WARNING Response is NONE 12:02:38 WARNING Response is NONE 12:02:38 WARNING Response is NONE 12:02:38 WARNING Response is NONE 12:02:38 WARNING Response is NONE 12:02:38 DEBUG Exception is preset. Setting retry_loop to true 12:02:38 DEBUG Exception is preset. Setting retry_loop to true 12:02:38 DEBUG Exception is preset. Setting retry_loop to true 12:02:38 DEBUG Exception is preset. Setting retry_loop to true 12:02:38 DEBUG Exception is preset. Setting retry_loop to true 12:02:38 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:02:38 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:02:38 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:02:38 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:02:38 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:02:43 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:02:43 WARNING Response is NONE 12:02:43 DEBUG Exception is preset. 
Setting retry_loop to true 12:02:43 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:02:45 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:02:45 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:02:45 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:02:45 WARNING Response is NONE 12:02:45 WARNING Response is NONE 12:02:45 WARNING Response is NONE 12:02:45 DEBUG Exception is preset. Setting retry_loop to true 12:02:45 DEBUG Exception is preset. Setting retry_loop to true 12:02:45 DEBUG Exception is preset. Setting retry_loop to true 12:02:45 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:02:45 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:02:45 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:02:47 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:02:47 WARNING Response is NONE 12:02:47 DEBUG Exception is preset. Setting retry_loop to true 12:02:47 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:02:49 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:02:49 WARNING Response is NONE 12:02:49 DEBUG Exception is preset. Setting retry_loop to true 12:02:49 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
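For readability: the [http_cmd] queries launched at 12:02:31 above are plain Prometheus instant queries whose PromQL is percent-encoded into the URL. Below is a minimal Python sketch (illustrative only, not the lodemon implementation; the instant_query helper name is made up) that issues the first of those queries with the PromQL decoded:

    # Minimal sketch: issue one of the instant queries seen above against the
    # Prometheus HTTP API; requests handles the percent-encoding that makes the
    # logged URLs hard to read.
    import requests

    # In-cluster Service DNS name taken from the log; only resolvable inside the cluster.
    PROM_URL = "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090"

    def instant_query(promql: str, ts: int) -> dict:
        """Run a single /api/v1/query call and return the decoded JSON body."""
        resp = requests.get(
            f"{PROM_URL}/api/v1/query",
            params={"query": promql, "time": ts},  # percent-encoded automatically
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()

    # Decoded form of the first logged query (per-pod CPU usage in namespace xlou):
    result = instant_query(
        "sum(rate(container_cpu_usage_seconds_total{namespace='xlou'}[60s]))by(pod)",
        1691838151,
    )
    print(result["status"], len(result["data"]["result"]))

The value 1691838151 is the Unix-epoch time parameter carried by every query in this batch.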
12:02:51 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:02:51 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:02:51 WARNING Response is NONE 12:02:51 WARNING Response is NONE 12:02:51 DEBUG Exception is preset. Setting retry_loop to true 12:02:51 DEBUG Exception is preset. Setting retry_loop to true 12:02:51 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:02:51 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:02:54 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:02:54 WARNING Response is NONE 12:02:54 DEBUG Exception is preset. Setting retry_loop to true 12:02:54 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:02:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:02:56 WARNING Response is NONE 12:02:56 DEBUG Exception is preset. Setting retry_loop to true 12:02:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:02:58 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:02:58 WARNING Response is NONE 12:02:58 DEBUG Exception is preset. Setting retry_loop to true 12:02:58 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
12:02:59 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:02:59 WARNING Response is NONE 12:02:59 DEBUG Exception is preset. Setting retry_loop to true 12:02:59 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:03:01 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:03:01 WARNING Response is NONE 12:03:01 DEBUG Exception is preset. Setting retry_loop to true 12:03:01 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:03:03 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:03:03 WARNING Response is NONE 12:03:03 DEBUG Exception is preset. Setting retry_loop to true 12:03:03 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:03:04 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:03:04 WARNING Response is NONE 12:03:04 DEBUG Exception is preset. Setting retry_loop to true 12:03:04 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:03:05 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:03:05 WARNING Response is NONE 12:03:05 DEBUG Exception is preset. Setting retry_loop to true 12:03:05 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
12:03:08 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:03:08 WARNING Response is NONE 12:03:08 DEBUG Exception is preset. Setting retry_loop to true 12:03:08 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:03:10 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:03:10 WARNING Response is NONE 12:03:10 DEBUG Exception is preset. Setting retry_loop to true 12:03:10 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:03:12 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:03:12 WARNING Response is NONE 12:03:12 DEBUG Exception is preset. Setting retry_loop to true 12:03:12 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:03:14 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:03:14 WARNING Response is NONE 12:03:14 DEBUG Exception is preset. Setting retry_loop to true 12:03:14 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:03:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:03:15 WARNING Response is NONE 12:03:15 DEBUG Exception is preset. Setting retry_loop to true 12:03:15 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
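The query strings in these URLs are percent-encoded PromQL, which makes the log hard to scan; decoded, the metrics being polled are container CPU, memory, and filesystem read/write rates for the xlou namespace, node iowait, AM session and CTS counters, and DS backend cache misses. A standard-library snippet for decoding one of them (illustrative only; the URL value is copied from the first warning in this block):

from urllib.parse import urlparse, parse_qs

url = ("/api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27"
       "%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151")
params = parse_qs(urlparse(url).query)
print(params["query"][0])  # sum(rate(am_cts_task_count{token_type='session',namespace='xlou'}[60s]))by(pod)
print(params["time"][0])   # 1691838151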
12:03:17 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:03:17 WARNING Response is NONE 12:03:17 DEBUG Exception is preset. Setting retry_loop to true 12:03:17 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:03:19 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:03:19 WARNING Response is NONE 12:03:19 DEBUG Exception is preset. Setting retry_loop to true 12:03:19 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:03:21 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:03:21 WARNING Response is NONE 12:03:21 DEBUG Exception is preset. Setting retry_loop to true 12:03:21 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:03:23 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:03:23 WARNING Response is NONE 12:03:23 DEBUG Exception is preset. Setting retry_loop to true 12:03:23 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:03:25 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:03:25 WARNING Response is NONE 12:03:25 DEBUG Exception is preset. Setting retry_loop to true 12:03:25 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
12:03:26 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:03:26 WARNING Response is NONE 12:03:26 DEBUG Exception is preset. Setting retry_loop to true 12:03:26 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:03:27 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:03:27 WARNING Response is NONE 12:03:27 DEBUG Exception is preset. Setting retry_loop to true 12:03:27 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:03:30 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:03:30 WARNING Response is NONE 12:03:30 DEBUG Exception is preset. Setting retry_loop to true 12:03:30 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-7: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 12:03:31 INFO 12:03:31 INFO [loop_until]: kubectl --namespace=xlou top pods 12:03:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:03:31 INFO 12:03:31 INFO [loop_until]: kubectl --namespace=xlou top node 12:03:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:03:31 INFO [loop_until]: OK (rc = 0) 12:03:31 DEBUG --- stdout --- 12:03:31 DEBUG
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1357Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 76m 0% 4335Mi 7%
gke-xlou-cdm-default-pool-f05840a3-976h 79m 0% 3470Mi 5%
gke-xlou-cdm-default-pool-f05840a3-9p4b 70m 0% 3612Mi 6%
gke-xlou-cdm-default-pool-f05840a3-bf2g 78m 0% 4563Mi 7%
gke-xlou-cdm-default-pool-f05840a3-h81k 134m 0% 2112Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 2565Mi 4%
gke-xlou-cdm-ds-32e4dcb1-1l6p 184m 1% 11014Mi 18%
gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1065Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1114Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 83m 0% 10882Mi 18%
gke-xlou-cdm-ds-32e4dcb1-n920 50m 0% 1088Mi 1%
gke-xlou-cdm-ds-32e4dcb1-x4wx 84m 0% 10935Mi 18%
gke-xlou-cdm-frontend-a8771548-k40m 268m 1% 1629Mi 2%
12:03:31 DEBUG --- stderr --- 12:03:31 DEBUG 12:03:31 INFO [loop_until]: OK (rc = 0) 12:03:31 DEBUG --- stdout --- 12:03:31 DEBUG
NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-vlcrt 1m 4Mi
am-55f77847b7-5g27b 10m 2496Mi
am-55f77847b7-6hcmp 11m 3316Mi
am-55f77847b7-8wqjg 14m 2345Mi
ds-cts-0 67m 383Mi
ds-cts-1 30m 357Mi
ds-cts-2 7m 344Mi
ds-idrepo-0 568m 10346Mi
ds-idrepo-1 34m 10300Mi
ds-idrepo-2 202m 10242Mi
end-user-ui-6845bc78c7-hprgv 1m 4Mi
idm-65858d8c4c-gwvpj 11m 3249Mi
idm-65858d8c4c-x6slf 9m 1304Mi
lodemon-9c5f9bf5b-bl4rx 3m 66Mi
login-ui-74d6fb46c-ms8nm 1m 3Mi
overseer-0-94cd995dc-gcl5s 346m 48Mi
12:03:31 DEBUG --- stderr --- 12:03:31 DEBUG 12:03:32 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:03:32 WARNING Response is NONE 12:03:32 DEBUG Exception is preset. Setting retry_loop to true 12:03:32 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-4: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 12:03:33 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:03:33 WARNING Response is NONE 12:03:33 DEBUG Exception is preset. Setting retry_loop to true 12:03:33 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:03:36 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:03:36 WARNING Response is NONE 12:03:36 DEBUG Exception is preset. Setting retry_loop to true 12:03:36 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:03:37 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:03:37 WARNING Response is NONE 12:03:37 DEBUG Exception is preset. Setting retry_loop to true 12:03:37 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
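Every traceback in this log ends the same way: the FailException raised by HttpCmd.request_cmd is caught in monitoring.py, and the handler at line 315 then calls self.logger(f'Query: {query} failed with: {e}'), which raises TypeError because a LodestarLogger instance is not callable, so the error handler itself kills the monitoring thread. The log does not show LodestarLogger's API; assuming it exposes the usual level methods, the one-line fix would be to call one of them (for example self.logger.warning(...)) at monitoring.py line 315. The alternative sketched below, making the instance callable, is equally hypothetical:

import logging

class LodestarLogger:
    # Hypothetical shim: if LodestarLogger wraps a standard logging.Logger,
    # adding __call__ lets the existing self.logger(...) call site work unchanged.
    def __init__(self, name="lodemon"):
        self._log = logging.getLogger(name)

    def warning(self, msg):
        self._log.warning(msg)

    def __call__(self, msg):
        # Bare calls such as self.logger("...") are delegated to a default level.
        self.warning(msg)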
12:03:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:03:38 WARNING Response is NONE 12:03:38 DEBUG Exception is preset. Setting retry_loop to true 12:03:38 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:03:40 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:03:40 WARNING Response is NONE 12:03:40 DEBUG Exception is preset. Setting retry_loop to true 12:03:40 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:03:42 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:03:42 WARNING Response is NONE 12:03:42 DEBUG Exception is preset. Setting retry_loop to true 12:03:42 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-14: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 12:03:44 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:03:44 WARNING Response is NONE 12:03:44 DEBUG Exception is preset. Setting retry_loop to true 12:03:44 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-12: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 12:03:45 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:03:45 WARNING Response is NONE 12:03:45 DEBUG Exception is preset. Setting retry_loop to true 12:03:45 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
12:03:47 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:03:47 WARNING Response is NONE 12:03:47 DEBUG Exception is preset. Setting retry_loop to true 12:03:47 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:03:48 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:03:48 WARNING Response is NONE 12:03:48 DEBUG Exception is preset. Setting retry_loop to true 12:03:48 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-3: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 12:03:48 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:03:48 WARNING Response is NONE 12:03:48 DEBUG Exception is preset. Setting retry_loop to true 12:03:48 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
Exception in thread Thread-9: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 12:03:49 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:03:49 WARNING Response is NONE 12:03:49 DEBUG Exception is preset. Setting retry_loop to true 12:03:49 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:03:51 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:03:51 WARNING Response is NONE 12:03:51 DEBUG Exception is preset. Setting retry_loop to true 12:03:51 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:03:52 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:03:52 WARNING Response is NONE 12:03:52 DEBUG Exception is preset. Setting retry_loop to true 12:03:52 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
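Each PromQL query runs on its own thread (Thread-3, Thread-4, Thread-9, and so on) started from lodemon_service.execute_monitoring_instance_in_loop, so every time the TypeError above escapes, that one thread dies and its metric stops being collected for the rest of the run while the remaining threads keep retrying. The actual lodemon_service.py code is not shown in this log; the sketch below only illustrates how such a per-thread loop could be hardened so an unexpected exception is logged instead of terminating the thread:

import logging
import time

log = logging.getLogger("lodemon")

def execute_monitoring_instance_in_loop(instance, interval_s=60):
    # Hypothetical hardened loop: catch unexpected errors at the top of the thread
    # so one bad query cannot silently kill metric collection.
    while True:
        try:
            instance.run()
        except Exception:
            log.exception("monitoring instance failed; continuing")
        time.sleep(interval_s)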
12:03:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:03:56 WARNING Response is NONE 12:03:56 DEBUG Exception is preset. Setting retry_loop to true 12:03:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:03:58 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:03:58 WARNING Response is NONE 12:03:58 DEBUG Exception is preset. Setting retry_loop to true 12:03:58 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:04:01 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:04:01 WARNING Response is NONE 12:04:01 DEBUG Exception is preset. Setting retry_loop to true 12:04:01 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-13: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 12:04:02 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:04:02 WARNING Response is NONE 12:04:02 DEBUG Exception is preset. Setting retry_loop to true 12:04:02 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:04:03 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:04:03 WARNING Response is NONE 12:04:03 DEBUG Exception is preset. Setting retry_loop to true 12:04:03 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:04:07 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:04:07 WARNING Response is NONE 12:04:07 DEBUG Exception is preset. Setting retry_loop to true 12:04:07 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-6: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 12:04:09 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:04:09 WARNING Response is NONE 12:04:09 DEBUG Exception is preset. Setting retry_loop to true 12:04:09 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:04:13 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:04:13 WARNING Response is NONE 12:04:13 DEBUG Exception is preset. Setting retry_loop to true 12:04:13 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-15: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 12:04:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:04:15 WARNING Response is NONE 12:04:15 DEBUG Exception is preset. Setting retry_loop to true 12:04:15 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:04:20 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:04:20 WARNING Response is NONE 12:04:20 DEBUG Exception is preset. Setting retry_loop to true 12:04:20 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:04:26 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:04:26 WARNING Response is NONE 12:04:26 DEBUG Exception is preset. Setting retry_loop to true 12:04:26 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:04:31 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:04:31 WARNING Response is NONE 12:04:31 DEBUG Exception is preset. Setting retry_loop to true 12:04:31 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
Exception in thread Thread-20: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 12:04:32 INFO 12:04:32 INFO [loop_until]: kubectl --namespace=xlou top node 12:04:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:04:32 INFO 12:04:32 INFO [loop_until]: kubectl --namespace=xlou top pods 12:04:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:04:32 INFO [loop_until]: OK (rc = 0) 12:04:32 DEBUG --- stdout --- 12:04:32 DEBUG
NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-vlcrt 1m 4Mi
am-55f77847b7-5g27b 14m 2498Mi
am-55f77847b7-6hcmp 15m 3316Mi
am-55f77847b7-8wqjg 12m 2341Mi
ds-cts-0 14m 383Mi
ds-cts-1 8m 357Mi
ds-cts-2 9m 344Mi
ds-idrepo-0 19m 10345Mi
ds-idrepo-1 20m 10301Mi
ds-idrepo-2 22m 10243Mi
end-user-ui-6845bc78c7-hprgv 1m 4Mi
idm-65858d8c4c-gwvpj 9m 3251Mi
idm-65858d8c4c-x6slf 7m 1317Mi
lodemon-9c5f9bf5b-bl4rx 2m 65Mi
login-ui-74d6fb46c-ms8nm 1m 3Mi
overseer-0-94cd995dc-gcl5s 1m 48Mi
12:04:32 DEBUG --- stderr --- 12:04:32 DEBUG 12:04:32 INFO [loop_until]: OK (rc = 0) 12:04:32 DEBUG --- stdout --- 12:04:32 DEBUG
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1356Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 75m 0% 4340Mi 7%
gke-xlou-cdm-default-pool-f05840a3-976h 73m 0% 3477Mi 5%
gke-xlou-cdm-default-pool-f05840a3-9p4b 75m 0% 3613Mi 6%
gke-xlou-cdm-default-pool-f05840a3-bf2g 76m 0% 4564Mi 7%
gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2111Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 2579Mi 4%
gke-xlou-cdm-ds-32e4dcb1-1l6p 71m 0% 11015Mi 18%
gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1064Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 62m 0% 1112Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 71m 0% 10883Mi 18%
gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1087Mi 1%
gke-xlou-cdm-ds-32e4dcb1-x4wx 72m 0% 10938Mi 18%
gke-xlou-cdm-frontend-a8771548-k40m 67m 0% 1626Mi 2%
12:04:32 DEBUG --- stderr --- 12:04:32 DEBUG 12:04:37 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')).
Checking if error is transient one 12:04:37 WARNING Response is NONE 12:04:37 DEBUG Exception is preset. Setting retry_loop to true 12:04:37 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-5: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 12:04:43 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 12:04:43 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 12:04:43 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 12:04:43 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 12:04:43 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 12:04:43 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 12:04:43 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 12:04:43 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 12:04:43 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 12:04:43 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 12:04:43 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). 
Checking if error is transient one 12:04:43 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 12:04:43 WARNING Response is NONE 12:04:43 WARNING Response is NONE 12:04:43 WARNING Response is NONE 12:04:43 WARNING Response is NONE 12:04:43 WARNING Response is NONE 12:04:43 WARNING Response is NONE 12:04:43 WARNING Response is NONE 12:04:43 WARNING Response is NONE 12:04:43 WARNING Response is NONE 12:04:43 WARNING Response is NONE 12:04:43 WARNING Response is NONE 12:04:43 WARNING Response is NONE 12:04:43 DEBUG Exception is preset. Setting retry_loop to true 12:04:43 DEBUG Exception is preset. Setting retry_loop to true 12:04:43 DEBUG Exception is preset. Setting retry_loop to true 12:04:43 DEBUG Exception is preset. Setting retry_loop to true 12:04:43 DEBUG Exception is preset. Setting retry_loop to true 12:04:43 DEBUG Exception is preset. Setting retry_loop to true 12:04:43 DEBUG Exception is preset. Setting retry_loop to true 12:04:43 DEBUG Exception is preset. Setting retry_loop to true 12:04:43 DEBUG Exception is preset. Setting retry_loop to true 12:04:43 DEBUG Exception is preset. Setting retry_loop to true 12:04:43 DEBUG Exception is preset. Setting retry_loop to true 12:04:43 DEBUG Exception is preset. Setting retry_loop to true 12:04:43 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:04:43 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:04:43 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:04:43 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:04:43 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:04:43 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:04:43 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:04:43 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:04:43 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:04:43 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:04:43 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:04:43 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:04:54 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:04:54 WARNING Response is NONE 12:04:54 DEBUG Exception is preset. Setting retry_loop to true 12:04:54 WARNING We received known exception. 
Trying to recover, sleeping for 10 secs before retry... 12:04:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:04:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:04:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:04:56 WARNING Response is NONE 12:04:56 WARNING Response is NONE 12:04:56 WARNING Response is NONE 12:04:56 DEBUG Exception is preset. Setting retry_loop to true 12:04:56 DEBUG Exception is preset. Setting retry_loop to true 12:04:56 DEBUG Exception is preset. Setting retry_loop to true 12:04:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:04:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:04:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:05:00 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:05:00 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 12:05:00 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:05:00 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:05:00 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:05:00 WARNING Response is NONE 12:05:00 WARNING Response is NONE 12:05:00 WARNING Response is NONE 12:05:00 WARNING Response is NONE 12:05:00 WARNING Response is NONE 12:05:00 DEBUG Exception is preset. Setting retry_loop to true 12:05:00 DEBUG Exception is preset. Setting retry_loop to true 12:05:00 DEBUG Exception is preset. Setting retry_loop to true 12:05:00 DEBUG Exception is preset. Setting retry_loop to true 12:05:00 DEBUG Exception is preset. Setting retry_loop to true 12:05:00 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:05:00 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:05:00 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:05:00 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:05:00 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:05:05 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:05:05 WARNING Response is NONE 12:05:05 DEBUG Exception is preset. Setting retry_loop to true 12:05:05 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
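The failing requests above are all Prometheus instant queries against the in-cluster service on port 9090; the PromQL only looks opaque because it is URL-encoded into the query string. A minimal decoding sketch, standard library only, with the URL reassembled from the host, port and path logged above (the http:// scheme is an assumption):

from urllib.parse import urlsplit, parse_qs

# URL copied from one of the warnings above, prefixed with an assumed scheme.
url = ("http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090"
       "/api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou"
       "%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151")

params = parse_qs(urlsplit(url).query)
print(params["query"][0])  # sum(rate(am_cts_reaper_search_count{namespace='xlou'}[60s]))by(pod)
print(params["time"][0])   # 1691838151, the evaluation timestamp in epoch seconds

Decoded this way, the queries are per-pod and per-node rates such as sum(rate(am_cts_reaper_search_count{namespace='xlou'}[60s]))by(pod) and sum(rate(node_network_receive_bytes_total{job='node-exporter',device!='lo'}[60s]))by(instance), i.e. the AM, DS and node-exporter metrics lodemon is trying to collect while Prometheus is unreachable.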
12:05:07 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:05:07 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:05:07 WARNING Response is NONE 12:05:07 WARNING Response is NONE 12:05:07 DEBUG Exception is preset. Setting retry_loop to true 12:05:07 DEBUG Exception is preset. Setting retry_loop to true 12:05:07 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:05:07 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:05:09 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:05:09 WARNING Response is NONE 12:05:09 DEBUG Exception is preset. Setting retry_loop to true 12:05:09 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:05:09 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:05:09 WARNING Response is NONE 12:05:09 DEBUG Exception is preset. Setting retry_loop to true 12:05:09 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:05:11 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 12:05:11 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:05:11 WARNING Response is NONE 12:05:11 WARNING Response is NONE 12:05:11 DEBUG Exception is preset. Setting retry_loop to true 12:05:11 DEBUG Exception is preset. Setting retry_loop to true 12:05:11 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:05:11 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:05:13 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:05:13 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:05:13 WARNING Response is NONE 12:05:13 WARNING Response is NONE 12:05:13 DEBUG Exception is preset. Setting retry_loop to true 12:05:13 DEBUG Exception is preset. Setting retry_loop to true 12:05:13 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:05:13 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:05:16 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:05:16 WARNING Response is NONE 12:05:16 DEBUG Exception is preset. Setting retry_loop to true 12:05:16 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:05:18 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 12:05:18 WARNING Response is NONE 12:05:18 DEBUG Exception is preset. Setting retry_loop to true 12:05:18 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:05:20 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:05:20 WARNING Response is NONE 12:05:20 DEBUG Exception is preset. Setting retry_loop to true 12:05:20 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:05:22 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:05:22 WARNING Response is NONE 12:05:22 DEBUG Exception is preset. Setting retry_loop to true 12:05:22 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:05:24 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:05:24 WARNING Response is NONE 12:05:24 DEBUG Exception is preset. Setting retry_loop to true 12:05:24 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:05:26 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:05:26 WARNING Response is NONE 12:05:26 DEBUG Exception is preset. Setting retry_loop to true 12:05:26 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:05:27 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 12:05:27 WARNING Response is NONE 12:05:27 DEBUG Exception is preset. Setting retry_loop to true 12:05:27 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:05:29 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:05:29 WARNING Response is NONE 12:05:29 DEBUG Exception is preset. Setting retry_loop to true 12:05:29 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:05:31 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:05:31 WARNING Response is NONE 12:05:31 DEBUG Exception is preset. Setting retry_loop to true 12:05:31 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:05:32 INFO 12:05:32 INFO [loop_until]: kubectl --namespace=xlou top pods 12:05:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:05:32 INFO 12:05:32 INFO [loop_until]: kubectl --namespace=xlou top node 12:05:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:05:32 INFO [loop_until]: OK (rc = 0) 12:05:32 DEBUG --- stdout --- 12:05:32 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 11m 2498Mi am-55f77847b7-6hcmp 14m 3316Mi am-55f77847b7-8wqjg 14m 2342Mi ds-cts-0 7m 383Mi ds-cts-1 10m 360Mi ds-cts-2 9m 346Mi ds-idrepo-0 24m 10345Mi ds-idrepo-1 37m 10301Mi ds-idrepo-2 36m 10239Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 10m 3251Mi idm-65858d8c4c-x6slf 7m 1328Mi lodemon-9c5f9bf5b-bl4rx 3m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 246m 98Mi 12:05:32 DEBUG --- stderr --- 12:05:32 DEBUG 12:05:32 INFO [loop_until]: OK (rc = 0) 12:05:32 DEBUG --- stdout --- 12:05:32 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1356Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 71m 0% 4340Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 74m 0% 3477Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 3613Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 4567Mi 7% gke-xlou-cdm-default-pool-f05840a3-h81k 128m 0% 2113Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 72m 0% 2588Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 79m 0% 11012Mi 18% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1066Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 89m 0% 10879Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 86m 0% 10937Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 375m 2% 1627Mi 2% 12:05:32 DEBUG --- stderr --- 12:05:32 DEBUG 12:05:33 WARNING Got connection reset error: 
HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:05:33 WARNING Response is NONE 12:05:33 DEBUG Exception is preset. Setting retry_loop to true 12:05:33 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:05:34 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:05:34 WARNING Response is NONE 12:05:34 DEBUG Exception is preset. Setting retry_loop to true 12:05:34 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:05:36 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:05:36 WARNING Response is NONE 12:05:36 DEBUG Exception is preset. Setting retry_loop to true 12:05:36 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:05:37 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:05:37 WARNING Response is NONE 12:05:37 DEBUG Exception is preset. Setting retry_loop to true 12:05:37 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:05:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:05:38 WARNING Response is NONE 12:05:38 DEBUG Exception is preset. Setting retry_loop to true 12:05:38 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
Exception in thread Thread-10: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 12:05:41 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:05:41 WARNING Response is NONE 12:05:41 DEBUG Exception is preset. Setting retry_loop to true 12:05:41 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-22: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 12:05:43 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:05:43 WARNING Response is NONE 12:05:43 DEBUG Exception is preset. 
Setting retry_loop to true 12:05:43 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-8: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 12:05:45 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:05:45 WARNING Response is NONE 12:05:45 DEBUG Exception is preset. Setting retry_loop to true 12:05:45 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:05:47 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:05:47 WARNING Response is NONE 12:05:47 DEBUG Exception is preset. Setting retry_loop to true 12:05:47 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-27: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 12:05:48 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:05:48 WARNING Response is NONE 12:05:48 DEBUG Exception is preset. Setting retry_loop to true 12:05:48 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:05:51 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:05:51 WARNING Response is NONE 12:05:51 DEBUG Exception is preset. Setting retry_loop to true 12:05:51 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-16: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 12:05:53 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:05:53 WARNING Response is NONE 12:05:53 DEBUG Exception is preset. Setting retry_loop to true 12:05:53 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:05:55 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:05:55 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:05:55 WARNING Response is NONE 12:05:55 WARNING Response is NONE 12:05:55 DEBUG Exception is preset. Setting retry_loop to true 12:05:55 DEBUG Exception is preset. Setting retry_loop to true 12:05:55 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:05:55 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:05:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:05:56 WARNING Response is NONE 12:05:56 DEBUG Exception is preset. Setting retry_loop to true 12:05:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
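The repeating warning/debug pattern above is a transient-error retry loop: a connection failure ([Errno 110] connection timed out or [Errno 111] connection refused) is classified as a known exception, the worker sleeps 10 seconds and retries, and after the fifth hit it proceeds to check whatever response it has, which at that point is still None. A rough reconstruction of that behaviour, with illustrative names (fetch_with_retries and MAX_ATTEMPTS are not lodemon's real identifiers):

import time
import requests

MAX_ATTEMPTS = 5        # assumption, matching "Hit retry pattern for a 5 time"
RETRY_SLEEP_SECS = 10   # matches "sleeping for 10 secs before retry"

def fetch_with_retries(url):
    """Return the HTTP response, or None if every attempt hit a known error."""
    response = None
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            response = requests.get(url, timeout=30)
            break
        except requests.ConnectionError as exc:  # [Errno 110]/[Errno 111] end up here
            print(f'WARNING Got connection reset error: {exc}. '
                  'Checking if error is transient one')
            print('WARNING Response is NONE')
            if attempt < MAX_ATTEMPTS:
                print('WARNING We received known exception. Trying to recover, '
                      f'sleeping for {RETRY_SLEEP_SECS} secs before retry...')
                time.sleep(RETRY_SLEEP_SECS)
            else:
                print(f'WARNING Hit retry pattern for a {attempt} time. '
                      'Proceeding to check response anyway.')
    return response

When the response is still None after the last attempt, HttpCmd.request_cmd raises FailException('Failed to obtain response from server...'), which is exactly what the tracebacks in this log report.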
12:05:57 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:05:57 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:05:57 WARNING Response is NONE 12:05:57 WARNING Response is NONE 12:05:57 DEBUG Exception is preset. Setting retry_loop to true 12:05:57 DEBUG Exception is preset. Setting retry_loop to true 12:05:57 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:05:57 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:05:59 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:05:59 WARNING Response is NONE 12:05:59 DEBUG Exception is preset. Setting retry_loop to true 12:05:59 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-11: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 12:06:04 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:06:04 WARNING Response is NONE 12:06:04 DEBUG Exception is preset. Setting retry_loop to true 12:06:04 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:06:06 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:06:06 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:06:06 WARNING Response is NONE 12:06:06 WARNING Response is NONE 12:06:06 DEBUG Exception is preset. Setting retry_loop to true 12:06:06 DEBUG Exception is preset. Setting retry_loop to true 12:06:06 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:06:06 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:06:08 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:06:08 WARNING Response is NONE 12:06:08 DEBUG Exception is preset. Setting retry_loop to true 12:06:08 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
Exception in thread Thread-17: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 12:06:08 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:06:08 WARNING Response is NONE 12:06:08 DEBUG Exception is preset. Setting retry_loop to true 12:06:08 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:06:10 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:06:10 WARNING Response is NONE 12:06:10 DEBUG Exception is preset. Setting retry_loop to true 12:06:10 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:06:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:06:15 WARNING Response is NONE 12:06:15 DEBUG Exception is preset. Setting retry_loop to true 12:06:15 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
Exception in thread Thread-26: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 12:06:17 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:06:17 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:06:17 WARNING Response is NONE 12:06:17 WARNING Response is NONE 12:06:17 DEBUG Exception is preset. Setting retry_loop to true 12:06:17 DEBUG Exception is preset. Setting retry_loop to true 12:06:17 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-19: 12:06:17 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Traceback (most recent call last): Exception in thread Thread-29: File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 12:06:19 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:06:19 WARNING Response is NONE 12:06:19 DEBUG Exception is preset. Setting retry_loop to true 12:06:19 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:06:22 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:06:22 WARNING Response is NONE 12:06:22 DEBUG Exception is preset. Setting retry_loop to true 12:06:22 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:06:31 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:06:31 WARNING Response is NONE 12:06:31 DEBUG Exception is preset. Setting retry_loop to true 12:06:31 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
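Every traceback above has the same shape: the real failure is the FailException raised by HttpCmd once the retries are exhausted, but the except-handler at monitoring.py line 315 then calls the logger instance itself, self.logger(f'Query: ...'), and a LodestarLogger object is not callable, so the handler dies with a secondary TypeError and the monitoring thread (Thread-8, Thread-10, Thread-22, ...) exits instead of just logging the failed query. A minimal sketch of the broken pattern and the likely one-line fix, assuming LodestarLogger exposes the usual warning()/error() methods (its real interface is not shown in this log):

class LodestarLogger:
    """Stand-in for the real logger; only a method-style API is assumed."""
    def warning(self, msg):
        print(f'WARNING {msg}')

class MonitoringInstance:
    """Reduced stand-in for the instance run by execute_monitoring_instance_in_loop."""
    def __init__(self):
        self.logger = LodestarLogger()

    def run(self, query):
        try:
            # Stand-in for http_cmd.get() failing after its retries.
            raise RuntimeError('Failed to obtain response from server...')
        except RuntimeError as e:
            # Broken line from the traceback; the instance is not callable:
            #   self.logger(f'Query: {query} failed with: {e}')
            # Likely fix: call a logging method on the instance instead.
            self.logger.warning(f'Query: {query} failed with: {e}')

MonitoringInstance().run("sum(rate(am_authentication_count{namespace='xlou'}[60s]))by(pod)")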
12:06:32 INFO 12:06:32 INFO [loop_until]: kubectl --namespace=xlou top pods 12:06:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:06:32 INFO [loop_until]: OK (rc = 0) 12:06:32 DEBUG --- stdout --- 12:06:32 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 12m 2499Mi am-55f77847b7-6hcmp 9m 3318Mi am-55f77847b7-8wqjg 12m 2349Mi ds-cts-0 12m 383Mi ds-cts-1 10m 360Mi ds-cts-2 7m 346Mi ds-idrepo-0 41m 10346Mi ds-idrepo-1 19m 10303Mi ds-idrepo-2 20m 10243Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 8m 3255Mi idm-65858d8c4c-x6slf 8m 1337Mi lodemon-9c5f9bf5b-bl4rx 3m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 1m 98Mi 12:06:32 DEBUG --- stderr --- 12:06:32 DEBUG 12:06:32 INFO 12:06:32 INFO [loop_until]: kubectl --namespace=xlou top node 12:06:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:06:32 INFO [loop_until]: OK (rc = 0) 12:06:32 DEBUG --- stdout --- 12:06:32 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1357Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 4342Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 71m 0% 3480Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 72m 0% 3618Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 4573Mi 7% gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2114Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 72m 0% 2601Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 86m 0% 11015Mi 18% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1068Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 66m 0% 10884Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 75m 0% 10939Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 65m 0% 1626Mi 2% 12:06:32 DEBUG --- stderr --- 12:06:32 DEBUG 12:06:33 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:06:33 WARNING Response is NONE 12:06:33 DEBUG Exception is preset. Setting retry_loop to true 12:06:33 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 12:06:42 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:06:42 WARNING Response is NONE 12:06:42 DEBUG Exception is preset. Setting retry_loop to true 12:06:42 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
Exception in thread Thread-18:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
12:06:44 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691838151 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 12:06:44 WARNING Response is NONE 12:06:44 DEBUG Exception is preset. Setting retry_loop to true 12:06:44 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-21:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
12:07:32 INFO 12:07:32 INFO [loop_until]: kubectl --namespace=xlou top pods 12:07:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:07:32 INFO [loop_until]: OK (rc = 0) 12:07:32 DEBUG --- stdout --- 12:07:32 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 98m 2561Mi am-55f77847b7-6hcmp 129m 3360Mi am-55f77847b7-8wqjg 70m 2357Mi ds-cts-0 6m 383Mi ds-cts-1 80m 361Mi ds-cts-2 13m 347Mi ds-idrepo-0 368m 10351Mi ds-idrepo-1 103m 10306Mi ds-idrepo-2 15m 10244Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 39m 3274Mi idm-65858d8c4c-x6slf 44m 1387Mi lodemon-9c5f9bf5b-bl4rx 2m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 2m 98Mi 12:07:32 DEBUG --- stderr --- 12:07:32 DEBUG
12:07:32 INFO 12:07:32 INFO [loop_until]: kubectl --namespace=xlou top node 12:07:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:07:32 INFO [loop_until]: OK (rc = 0) 12:07:32 DEBUG --- stdout --- 12:07:32 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1355Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 176m 1% 4385Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 153m 0% 3498Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 157m 0% 3673Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 108m 0% 4591Mi 7% gke-xlou-cdm-default-pool-f05840a3-h81k 132m 0% 2114Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 100m 0% 2638Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 372m 2% 11018Mi 18% gke-xlou-cdm-ds-32e4dcb1-4z9d 138m 0% 1072Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 133m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 150m 0% 10890Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 127m 0% 1090Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 165m 1% 10949Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 537m 3% 1869Mi 3% 12:07:32 DEBUG --- stderr --- 12:07:32 DEBUG
12:08:32 INFO 12:08:32 INFO [loop_until]: kubectl --namespace=xlou top pods 12:08:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:08:32 INFO [loop_until]: OK (rc = 0) 12:08:32 DEBUG --- stdout --- 12:08:32 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 22m 2568Mi am-55f77847b7-6hcmp 12m 3360Mi am-55f77847b7-8wqjg 16m 2358Mi ds-cts-0 211m 385Mi ds-cts-1 66m 362Mi ds-cts-2 73m 348Mi ds-idrepo-0 3453m 12887Mi ds-idrepo-1 200m 10307Mi ds-idrepo-2 206m 10254Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 13m 3268Mi idm-65858d8c4c-x6slf 16m 1389Mi lodemon-9c5f9bf5b-bl4rx 2m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 1050m 376Mi 12:08:32 DEBUG --- stderr --- 12:08:32 DEBUG
12:08:32 INFO 12:08:32 INFO [loop_until]: kubectl --namespace=xlou top node 12:08:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:08:32 INFO [loop_until]: OK (rc = 0) 12:08:32 DEBUG --- stdout --- 12:08:32 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1355Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 72m 0% 4381Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 73m 0% 3492Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 86m 0% 3682Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 80m 0% 4582Mi 7% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2112Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 78m 0% 2651Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 3559m 22% 13487Mi 22% gke-xlou-cdm-ds-32e4dcb1-4z9d 133m 0% 1067Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 345m 2% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 280m 1% 10892Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 191m 1% 1094Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 280m 1% 10947Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1175m 7% 1902Mi 3% 12:08:32 DEBUG --- stderr --- 12:08:32 DEBUG 12:09:32 INFO 12:09:32 INFO [loop_until]: kubectl --namespace=xlou top pods 12:09:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:09:32 INFO [loop_until]: OK (rc = 0) 12:09:32 DEBUG --- stdout --- 12:09:32 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 18m 2578Mi am-55f77847b7-6hcmp 12m 3360Mi am-55f77847b7-8wqjg 15m 2358Mi ds-cts-0 6m 385Mi ds-cts-1 6m 362Mi ds-cts-2 6m 348Mi ds-idrepo-0 2865m 13331Mi ds-idrepo-1 33m 10306Mi ds-idrepo-2 18m 10256Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 9m 3269Mi idm-65858d8c4c-x6slf 9m 1405Mi lodemon-9c5f9bf5b-bl4rx 2m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 1124m 376Mi 12:09:32 DEBUG --- stderr --- 12:09:32 DEBUG 12:09:32 INFO 12:09:32 INFO [loop_until]: kubectl --namespace=xlou top node 12:09:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:09:32 INFO [loop_until]: OK (rc = 0) 12:09:32 DEBUG --- stdout --- 12:09:32 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1355Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 73m 0% 4383Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 75m 0% 3496Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 76m 0% 3690Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 76m 0% 4579Mi 7% gke-xlou-cdm-default-pool-f05840a3-h81k 128m 0% 2116Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 75m 0% 2665Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 2974m 18% 13912Mi 23% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1071Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 67m 0% 10897Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 51m 0% 1093Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 80m 0% 10944Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1201m 7% 1900Mi 3% 12:09:32 DEBUG --- stderr --- 12:09:32 DEBUG 12:10:32 INFO 12:10:32 INFO [loop_until]: kubectl --namespace=xlou top pods 12:10:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:10:32 INFO [loop_until]: OK (rc = 0) 12:10:32 DEBUG --- stdout --- 12:10:32 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 14m 2587Mi am-55f77847b7-6hcmp 12m 3361Mi am-55f77847b7-8wqjg 26m 2355Mi ds-cts-0 7m 386Mi ds-cts-1 7m 362Mi ds-cts-2 7m 349Mi ds-idrepo-0 3065m 13422Mi ds-idrepo-1 17m 10308Mi ds-idrepo-2 22m 10257Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 9m 3269Mi idm-65858d8c4c-x6slf 9m 1418Mi lodemon-9c5f9bf5b-bl4rx 2m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 1169m 377Mi 12:10:32 DEBUG --- stderr --- 12:10:32 DEBUG 12:10:32 INFO 12:10:32 INFO [loop_until]: kubectl --namespace=xlou top node 12:10:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:10:32 INFO [loop_until]: OK (rc = 0) 12:10:32 DEBUG --- stdout --- 12:10:32 
DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1354Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 73m 0% 4383Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 88m 0% 3489Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 74m 0% 3701Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 77m 0% 4581Mi 7% gke-xlou-cdm-default-pool-f05840a3-h81k 129m 0% 2122Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 2675Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 3196m 20% 14003Mi 23% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1068Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 68m 0% 10899Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1091Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 74m 0% 10945Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1262m 7% 1898Mi 3% 12:10:32 DEBUG --- stderr --- 12:10:32 DEBUG 12:11:32 INFO 12:11:32 INFO [loop_until]: kubectl --namespace=xlou top pods 12:11:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:11:32 INFO [loop_until]: OK (rc = 0) 12:11:32 DEBUG --- stdout --- 12:11:32 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 12m 2597Mi am-55f77847b7-6hcmp 9m 3361Mi am-55f77847b7-8wqjg 12m 2363Mi ds-cts-0 7m 387Mi ds-cts-1 6m 362Mi ds-cts-2 6m 350Mi ds-idrepo-0 3110m 13456Mi ds-idrepo-1 22m 10311Mi ds-idrepo-2 22m 10258Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 8m 3272Mi idm-65858d8c4c-x6slf 9m 1430Mi lodemon-9c5f9bf5b-bl4rx 2m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 1255m 377Mi 12:11:32 DEBUG --- stderr --- 12:11:32 DEBUG 12:11:32 INFO 12:11:32 INFO [loop_until]: kubectl --namespace=xlou top node 12:11:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:11:32 INFO [loop_until]: OK (rc = 0) 12:11:32 DEBUG --- stdout --- 12:11:32 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1366Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 70m 0% 4380Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 76m 0% 3494Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 68m 0% 3713Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 4586Mi 7% gke-xlou-cdm-default-pool-f05840a3-h81k 125m 0% 2121Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 73m 0% 2687Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 3200m 20% 14032Mi 23% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1070Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 75m 0% 10893Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1093Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 74m 0% 10948Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1344m 8% 1901Mi 3% 12:11:32 DEBUG --- stderr --- 12:11:32 DEBUG 12:12:32 INFO 12:12:32 INFO [loop_until]: kubectl --namespace=xlou top pods 12:12:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:12:32 INFO [loop_until]: OK (rc = 0) 12:12:32 DEBUG --- stdout --- 12:12:32 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 11m 2608Mi am-55f77847b7-6hcmp 12m 3361Mi am-55f77847b7-8wqjg 8m 2373Mi ds-cts-0 9m 386Mi ds-cts-1 6m 363Mi ds-cts-2 9m 349Mi ds-idrepo-0 3319m 13615Mi ds-idrepo-1 19m 10312Mi ds-idrepo-2 13m 10258Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 8m 3272Mi idm-65858d8c4c-x6slf 11m 1440Mi lodemon-9c5f9bf5b-bl4rx 2m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 1231m 99Mi 12:12:32 DEBUG --- stderr --- 12:12:32 DEBUG 12:12:33 INFO 12:12:33 INFO [loop_until]: kubectl --namespace=xlou top node 12:12:33 INFO [loop_until]: (max_time=180, 
interval=5, expected_rc=[0] 12:12:33 INFO [loop_until]: OK (rc = 0) 12:12:33 DEBUG --- stdout --- 12:12:33 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1353Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 70m 0% 4382Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 70m 0% 3507Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 3721Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 4584Mi 7% gke-xlou-cdm-default-pool-f05840a3-h81k 124m 0% 2119Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 74m 0% 2699Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 3260m 20% 14184Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 65m 0% 1072Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 64m 0% 10898Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 74m 0% 10949Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1262m 7% 1625Mi 2% 12:12:33 DEBUG --- stderr --- 12:12:33 DEBUG 12:13:32 INFO 12:13:32 INFO [loop_until]: kubectl --namespace=xlou top pods 12:13:32 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:13:32 INFO [loop_until]: OK (rc = 0) 12:13:32 DEBUG --- stdout --- 12:13:32 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 11m 2617Mi am-55f77847b7-6hcmp 10m 3366Mi am-55f77847b7-8wqjg 10m 2381Mi ds-cts-0 7m 387Mi ds-cts-1 7m 363Mi ds-cts-2 7m 350Mi ds-idrepo-0 16m 13615Mi ds-idrepo-1 13m 10314Mi ds-idrepo-2 14m 10255Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 13m 3273Mi idm-65858d8c4c-x6slf 10m 1453Mi lodemon-9c5f9bf5b-bl4rx 2m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 1m 98Mi 12:13:32 DEBUG --- stderr --- 12:13:32 DEBUG 12:13:33 INFO 12:13:33 INFO [loop_until]: kubectl --namespace=xlou top node 12:13:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:13:33 INFO [loop_until]: OK (rc = 0) 12:13:33 DEBUG --- stdout --- 12:13:33 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1355Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 4387Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 71m 0% 3515Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 68m 0% 3732Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 79m 0% 4585Mi 7% gke-xlou-cdm-default-pool-f05840a3-h81k 127m 0% 2123Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 73m 0% 2712Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 70m 0% 14184Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1068Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 64m 0% 10893Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1090Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 66m 0% 10952Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 70m 0% 1625Mi 2% 12:13:33 DEBUG --- stderr --- 12:13:33 DEBUG 12:14:33 INFO 12:14:33 INFO [loop_until]: kubectl --namespace=xlou top pods 12:14:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:14:33 INFO [loop_until]: OK (rc = 0) 12:14:33 DEBUG --- stdout --- 12:14:33 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 10m 2627Mi am-55f77847b7-6hcmp 12m 3366Mi am-55f77847b7-8wqjg 8m 2394Mi ds-cts-0 8m 386Mi ds-cts-1 6m 363Mi ds-cts-2 8m 349Mi ds-idrepo-0 18m 13615Mi ds-idrepo-1 2693m 13177Mi ds-idrepo-2 19m 10262Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 10m 3270Mi idm-65858d8c4c-x6slf 9m 1462Mi lodemon-9c5f9bf5b-bl4rx 2m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 968m 391Mi 12:14:33 DEBUG --- stderr --- 12:14:33 DEBUG 12:14:33 INFO 12:14:33 INFO 
[loop_until]: kubectl --namespace=xlou top node 12:14:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:14:33 INFO [loop_until]: OK (rc = 0) 12:14:33 DEBUG --- stdout --- 12:14:33 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 71m 0% 1354Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 72m 0% 4389Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 3529Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 3743Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 77m 0% 4584Mi 7% gke-xlou-cdm-default-pool-f05840a3-h81k 128m 0% 2126Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 73m 0% 2724Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 71m 0% 14185Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1070Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 65m 0% 10901Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 51m 0% 1092Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2800m 17% 13736Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1085m 6% 1916Mi 3% 12:14:33 DEBUG --- stderr --- 12:14:33 DEBUG 12:15:33 INFO 12:15:33 INFO [loop_until]: kubectl --namespace=xlou top pods 12:15:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:15:33 INFO [loop_until]: OK (rc = 0) 12:15:33 DEBUG --- stdout --- 12:15:33 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 11m 2638Mi am-55f77847b7-6hcmp 21m 3366Mi am-55f77847b7-8wqjg 7m 2403Mi ds-cts-0 10m 386Mi ds-cts-1 6m 363Mi ds-cts-2 8m 351Mi ds-idrepo-0 18m 13615Mi ds-idrepo-1 2778m 13340Mi ds-idrepo-2 16m 10260Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 10m 3270Mi idm-65858d8c4c-x6slf 9m 1475Mi lodemon-9c5f9bf5b-bl4rx 2m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 1015m 391Mi 12:15:33 DEBUG --- stderr --- 12:15:33 DEBUG 12:15:33 INFO 12:15:33 INFO [loop_until]: kubectl --namespace=xlou top node 12:15:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:15:33 INFO [loop_until]: OK (rc = 0) 12:15:33 DEBUG --- stdout --- 12:15:33 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1356Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 78m 0% 4389Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 71m 0% 3538Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 3752Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 4584Mi 7% gke-xlou-cdm-default-pool-f05840a3-h81k 150m 0% 2128Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 2734Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 71m 0% 14186Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1070Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 67m 0% 10901Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1094Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2862m 18% 13933Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1110m 6% 1916Mi 3% 12:15:33 DEBUG --- stderr --- 12:15:33 DEBUG 12:16:33 INFO 12:16:33 INFO [loop_until]: kubectl --namespace=xlou top pods 12:16:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:16:33 INFO [loop_until]: OK (rc = 0) 12:16:33 DEBUG --- stdout --- 12:16:33 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 14m 2651Mi am-55f77847b7-6hcmp 10m 3366Mi am-55f77847b7-8wqjg 27m 2418Mi ds-cts-0 6m 387Mi ds-cts-1 8m 363Mi ds-cts-2 7m 350Mi ds-idrepo-0 17m 13616Mi ds-idrepo-1 2904m 13452Mi ds-idrepo-2 13m 10259Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 11m 3270Mi idm-65858d8c4c-x6slf 9m 1487Mi lodemon-9c5f9bf5b-bl4rx 1m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi 
overseer-0-94cd995dc-gcl5s 1106m 392Mi 12:16:33 DEBUG --- stderr --- 12:16:33 DEBUG 12:16:33 INFO 12:16:33 INFO [loop_until]: kubectl --namespace=xlou top node 12:16:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:16:33 INFO [loop_until]: OK (rc = 0) 12:16:33 DEBUG --- stdout --- 12:16:33 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1355Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 4391Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 76m 0% 3551Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 70m 0% 3764Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 78m 0% 4583Mi 7% gke-xlou-cdm-default-pool-f05840a3-h81k 127m 0% 2122Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 2744Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 14188Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1070Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 62m 0% 10901Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 51m 0% 1092Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2992m 18% 14005Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1172m 7% 1918Mi 3% 12:16:33 DEBUG --- stderr --- 12:16:33 DEBUG 12:17:33 INFO 12:17:33 INFO [loop_until]: kubectl --namespace=xlou top pods 12:17:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:17:33 INFO [loop_until]: OK (rc = 0) 12:17:33 DEBUG --- stdout --- 12:17:33 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 11m 2661Mi am-55f77847b7-6hcmp 10m 3366Mi am-55f77847b7-8wqjg 8m 2431Mi ds-cts-0 8m 387Mi ds-cts-1 7m 364Mi ds-cts-2 6m 350Mi ds-idrepo-0 19m 13616Mi ds-idrepo-1 2979m 13452Mi ds-idrepo-2 17m 10260Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 8m 3270Mi idm-65858d8c4c-x6slf 10m 1497Mi lodemon-9c5f9bf5b-bl4rx 2m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 1169m 392Mi 12:17:33 DEBUG --- stderr --- 12:17:33 DEBUG 12:17:33 INFO 12:17:33 INFO [loop_until]: kubectl --namespace=xlou top node 12:17:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:17:33 INFO [loop_until]: OK (rc = 0) 12:17:33 DEBUG --- stdout --- 12:17:33 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1354Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 70m 0% 4392Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 3563Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 68m 0% 3775Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 4584Mi 7% gke-xlou-cdm-default-pool-f05840a3-h81k 128m 0% 2125Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 73m 0% 2760Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 72m 0% 14190Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1068Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 68m 0% 10900Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1090Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3111m 19% 14004Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1258m 7% 1918Mi 3% 12:17:33 DEBUG --- stderr --- 12:17:33 DEBUG 12:18:33 INFO 12:18:33 INFO [loop_until]: kubectl --namespace=xlou top pods 12:18:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:18:33 INFO [loop_until]: OK (rc = 0) 12:18:33 DEBUG --- stdout --- 12:18:33 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 9m 2672Mi am-55f77847b7-6hcmp 11m 3366Mi am-55f77847b7-8wqjg 7m 2440Mi ds-cts-0 7m 387Mi ds-cts-1 5m 363Mi ds-cts-2 9m 350Mi ds-idrepo-0 18m 13615Mi ds-idrepo-1 3221m 13668Mi ds-idrepo-2 13m 10266Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 9m 
3271Mi idm-65858d8c4c-x6slf 7m 1509Mi lodemon-9c5f9bf5b-bl4rx 1m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 1239m 392Mi 12:18:33 DEBUG --- stderr --- 12:18:33 DEBUG 12:18:33 INFO 12:18:33 INFO [loop_until]: kubectl --namespace=xlou top node 12:18:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:18:33 INFO [loop_until]: OK (rc = 0) 12:18:33 DEBUG --- stdout --- 12:18:33 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1355Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 4391Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 3577Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 3787Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 4583Mi 7% gke-xlou-cdm-default-pool-f05840a3-h81k 120m 0% 2124Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 2771Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 73m 0% 14190Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1071Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 60m 0% 10909Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1092Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3204m 20% 14208Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1301m 8% 1917Mi 3% 12:18:33 DEBUG --- stderr --- 12:18:33 DEBUG 12:19:33 INFO 12:19:33 INFO [loop_until]: kubectl --namespace=xlou top pods 12:19:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:19:33 INFO [loop_until]: OK (rc = 0) 12:19:33 DEBUG --- stdout --- 12:19:33 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 10m 2681Mi am-55f77847b7-6hcmp 10m 3367Mi am-55f77847b7-8wqjg 8m 2452Mi ds-cts-0 7m 387Mi ds-cts-1 5m 363Mi ds-cts-2 7m 350Mi ds-idrepo-0 17m 13615Mi ds-idrepo-1 19m 13667Mi ds-idrepo-2 13m 10268Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 7m 3271Mi idm-65858d8c4c-x6slf 9m 1521Mi lodemon-9c5f9bf5b-bl4rx 2m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 1m 98Mi 12:19:33 DEBUG --- stderr --- 12:19:33 DEBUG 12:19:33 INFO 12:19:33 INFO [loop_until]: kubectl --namespace=xlou top node 12:19:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:19:33 INFO [loop_until]: OK (rc = 0) 12:19:33 DEBUG --- stdout --- 12:19:33 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1354Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 72m 0% 4390Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 3590Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 3797Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 4585Mi 7% gke-xlou-cdm-default-pool-f05840a3-h81k 131m 0% 2124Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 2778Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 14191Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1071Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 65m 0% 10907Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 51m 0% 1090Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 69m 0% 14211Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 66m 0% 1628Mi 2% 12:19:33 DEBUG --- stderr --- 12:19:33 DEBUG 12:20:33 INFO 12:20:33 INFO [loop_until]: kubectl --namespace=xlou top pods 12:20:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:20:33 INFO [loop_until]: OK (rc = 0) 12:20:33 DEBUG --- stdout --- 12:20:33 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 9m 2694Mi am-55f77847b7-6hcmp 12m 3368Mi am-55f77847b7-8wqjg 9m 2463Mi ds-cts-0 6m 387Mi ds-cts-1 8m 363Mi ds-cts-2 7m 352Mi ds-idrepo-0 24m 13616Mi ds-idrepo-1 20m 
13667Mi ds-idrepo-2 2644m 12180Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 8m 3271Mi idm-65858d8c4c-x6slf 7m 1532Mi lodemon-9c5f9bf5b-bl4rx 2m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 1054m 403Mi 12:20:33 DEBUG --- stderr --- 12:20:33 DEBUG 12:20:34 INFO 12:20:34 INFO [loop_until]: kubectl --namespace=xlou top node 12:20:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:20:34 INFO [loop_until]: OK (rc = 0) 12:20:34 DEBUG --- stdout --- 12:20:34 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1356Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 76m 0% 4400Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 70m 0% 3597Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 69m 0% 3808Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 4585Mi 7% gke-xlou-cdm-default-pool-f05840a3-h81k 133m 0% 2122Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 2793Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 77m 0% 14193Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1071Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2633m 16% 13026Mi 22% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1092Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 70m 0% 14210Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1216m 7% 1929Mi 3% 12:20:34 DEBUG --- stderr --- 12:20:34 DEBUG 12:21:33 INFO 12:21:33 INFO [loop_until]: kubectl --namespace=xlou top pods 12:21:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:21:33 INFO [loop_until]: OK (rc = 0) 12:21:33 DEBUG --- stdout --- 12:21:33 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 8m 2706Mi am-55f77847b7-6hcmp 11m 3374Mi am-55f77847b7-8wqjg 7m 2474Mi ds-cts-0 7m 387Mi ds-cts-1 8m 363Mi ds-cts-2 6m 352Mi ds-idrepo-0 20m 13617Mi ds-idrepo-1 17m 13667Mi ds-idrepo-2 2688m 13377Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 12m 3271Mi idm-65858d8c4c-x6slf 13m 1544Mi lodemon-9c5f9bf5b-bl4rx 2m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 1017m 403Mi 12:21:33 DEBUG --- stderr --- 12:21:33 DEBUG 12:21:34 INFO 12:21:34 INFO [loop_until]: kubectl --namespace=xlou top node 12:21:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:21:34 INFO [loop_until]: OK (rc = 0) 12:21:34 DEBUG --- stdout --- 12:21:34 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1358Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 71m 0% 4397Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 3610Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 3817Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 70m 0% 4587Mi 7% gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2125Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 76m 0% 2805Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 73m 0% 14193Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1074Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2747m 17% 13933Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 66m 0% 14211Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1076m 6% 1930Mi 3% 12:21:34 DEBUG --- stderr --- 12:21:34 DEBUG 12:22:33 INFO 12:22:33 INFO [loop_until]: kubectl --namespace=xlou top pods 12:22:33 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:22:33 INFO [loop_until]: OK (rc = 0) 12:22:33 DEBUG --- stdout --- 12:22:33 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 9m 2718Mi am-55f77847b7-6hcmp 11m 3375Mi am-55f77847b7-8wqjg 9m 
2484Mi ds-cts-0 9m 388Mi ds-cts-1 7m 364Mi ds-cts-2 6m 352Mi ds-idrepo-0 27m 13617Mi ds-idrepo-1 13m 13667Mi ds-idrepo-2 2763m 13402Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 16m 3271Mi idm-65858d8c4c-x6slf 9m 1556Mi lodemon-9c5f9bf5b-bl4rx 2m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 1117m 406Mi 12:22:33 DEBUG --- stderr --- 12:22:33 DEBUG 12:22:34 INFO 12:22:34 INFO [loop_until]: kubectl --namespace=xlou top node 12:22:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:22:34 INFO [loop_until]: OK (rc = 0) 12:22:34 DEBUG --- stdout --- 12:22:34 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1357Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 71m 0% 4398Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 70m 0% 3622Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 3829Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 77m 0% 4587Mi 7% gke-xlou-cdm-default-pool-f05840a3-h81k 124m 0% 2121Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 68m 0% 2820Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 81m 0% 14193Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1072Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2737m 17% 13961Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1090Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 66m 0% 14212Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1201m 7% 1930Mi 3% 12:22:34 DEBUG --- stderr --- 12:22:34 DEBUG 12:23:34 INFO 12:23:34 INFO [loop_until]: kubectl --namespace=xlou top pods 12:23:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:23:34 INFO [loop_until]: OK (rc = 0) 12:23:34 DEBUG --- stdout --- 12:23:34 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 8m 2726Mi am-55f77847b7-6hcmp 11m 3375Mi am-55f77847b7-8wqjg 8m 2496Mi ds-cts-0 9m 388Mi ds-cts-1 7m 364Mi ds-cts-2 6m 352Mi ds-idrepo-0 18m 13616Mi ds-idrepo-1 14m 13668Mi ds-idrepo-2 2792m 13524Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 14m 3272Mi idm-65858d8c4c-x6slf 8m 1567Mi lodemon-9c5f9bf5b-bl4rx 2m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 1127m 406Mi 12:23:34 DEBUG --- stderr --- 12:23:34 DEBUG 12:23:34 INFO 12:23:34 INFO [loop_until]: kubectl --namespace=xlou top node 12:23:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:23:34 INFO [loop_until]: OK (rc = 0) 12:23:34 DEBUG --- stdout --- 12:23:34 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 71m 0% 1355Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 71m 0% 4397Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 72m 0% 3632Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 3841Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 77m 0% 4590Mi 7% gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2119Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 2829Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 71m 0% 14192Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1074Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2915m 18% 14074Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1094Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 67m 0% 14213Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1217m 7% 1929Mi 3% 12:23:34 DEBUG --- stderr --- 12:23:34 DEBUG 12:24:34 INFO 12:24:34 INFO [loop_until]: kubectl --namespace=xlou top pods 12:24:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:24:34 INFO [loop_until]: OK (rc = 0) 12:24:34 DEBUG --- stdout --- 12:24:34 DEBUG NAME CPU(cores) MEMORY(bytes) 
admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 11m 2762Mi am-55f77847b7-6hcmp 9m 3380Mi am-55f77847b7-8wqjg 11m 2507Mi ds-cts-0 7m 388Mi ds-cts-1 7m 364Mi ds-cts-2 6m 352Mi ds-idrepo-0 16m 13616Mi ds-idrepo-1 13m 13668Mi ds-idrepo-2 3000m 13449Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 8m 3272Mi idm-65858d8c4c-x6slf 18m 1578Mi lodemon-9c5f9bf5b-bl4rx 2m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 1255m 406Mi 12:24:34 DEBUG --- stderr --- 12:24:34 DEBUG 12:24:34 INFO 12:24:34 INFO [loop_until]: kubectl --namespace=xlou top node 12:24:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:24:34 INFO [loop_until]: OK (rc = 0) 12:24:34 DEBUG --- stdout --- 12:24:34 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1357Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 72m 0% 4404Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 72m 0% 3642Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 3875Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 4586Mi 7% gke-xlou-cdm-default-pool-f05840a3-h81k 122m 0% 2123Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 80m 0% 2840Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 14193Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1074Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3088m 19% 13997Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1097Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 64m 0% 14214Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1332m 8% 1932Mi 3% 12:24:34 DEBUG --- stderr --- 12:24:34 DEBUG 12:25:34 INFO 12:25:34 INFO [loop_until]: kubectl --namespace=xlou top pods 12:25:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:25:34 INFO [loop_until]: OK (rc = 0) 12:25:34 DEBUG --- stdout --- 12:25:34 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 9m 2775Mi am-55f77847b7-6hcmp 10m 3380Mi am-55f77847b7-8wqjg 9m 2516Mi ds-cts-0 9m 388Mi ds-cts-1 7m 364Mi ds-cts-2 8m 353Mi ds-idrepo-0 17m 13616Mi ds-idrepo-1 14m 13668Mi ds-idrepo-2 12m 13653Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 8m 3272Mi idm-65858d8c4c-x6slf 8m 1589Mi lodemon-9c5f9bf5b-bl4rx 1m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 398m 98Mi 12:25:34 DEBUG --- stderr --- 12:25:34 DEBUG 12:25:34 INFO 12:25:34 INFO [loop_until]: kubectl --namespace=xlou top node 12:25:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:25:34 INFO [loop_until]: OK (rc = 0) 12:25:34 DEBUG --- stdout --- 12:25:34 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1359Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 70m 0% 4401Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 3655Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 3887Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 4586Mi 7% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2137Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 2852Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 14190Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 65m 0% 1072Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 59m 0% 14200Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 50m 0% 1095Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 61m 0% 14215Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 66m 0% 1628Mi 2% 12:25:34 DEBUG --- stderr --- 12:25:34 DEBUG 12:26:34 INFO 12:26:34 INFO [loop_until]: kubectl --namespace=xlou top pods 12:26:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:26:34 INFO 
[loop_until]: OK (rc = 0) 12:26:34 DEBUG --- stdout --- 12:26:34 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 9m 2784Mi am-55f77847b7-6hcmp 12m 3381Mi am-55f77847b7-8wqjg 33m 2575Mi ds-cts-0 14m 388Mi ds-cts-1 7m 367Mi ds-cts-2 6m 352Mi ds-idrepo-0 60m 13616Mi ds-idrepo-1 173m 13676Mi ds-idrepo-2 10m 13657Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 106m 3367Mi idm-65858d8c4c-x6slf 7m 1601Mi lodemon-9c5f9bf5b-bl4rx 2m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 2069m 465Mi 12:26:34 DEBUG --- stderr --- 12:26:34 DEBUG 12:26:34 INFO 12:26:34 INFO [loop_until]: kubectl --namespace=xlou top node 12:26:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:26:34 INFO [loop_until]: OK (rc = 0) 12:26:34 DEBUG --- stdout --- 12:26:34 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1358Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 4401Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 84m 0% 3711Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 133m 0% 3965Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 4587Mi 7% gke-xlou-cdm-default-pool-f05840a3-h81k 130m 0% 2122Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1184m 7% 4185Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 70m 0% 14192Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 64m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 217m 1% 14236Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1093Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 61m 0% 14216Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1926m 12% 1924Mi 3% 12:26:34 DEBUG --- stderr --- 12:26:34 DEBUG 12:27:34 INFO 12:27:34 INFO [loop_until]: kubectl --namespace=xlou top pods 12:27:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:27:34 INFO [loop_until]: OK (rc = 0) 12:27:34 DEBUG --- stdout --- 12:27:34 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 70m 3541Mi am-55f77847b7-6hcmp 69m 3726Mi am-55f77847b7-8wqjg 58m 3269Mi ds-cts-0 6m 391Mi ds-cts-1 7m 364Mi ds-cts-2 5m 354Mi ds-idrepo-0 3502m 13680Mi ds-idrepo-1 709m 13693Mi ds-idrepo-2 650m 13681Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 4430m 3611Mi idm-65858d8c4c-x6slf 3512m 3660Mi lodemon-9c5f9bf5b-bl4rx 1m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 617m 531Mi 12:27:34 DEBUG --- stderr --- 12:27:34 DEBUG 12:27:34 INFO 12:27:34 INFO [loop_until]: kubectl --namespace=xlou top node 12:27:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:27:34 INFO [loop_until]: OK (rc = 0) 12:27:34 DEBUG --- stdout --- 12:27:34 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1353Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 120m 0% 4803Mi 8% gke-xlou-cdm-default-pool-f05840a3-976h 121m 0% 4356Mi 7% gke-xlou-cdm-default-pool-f05840a3-9p4b 116m 0% 4635Mi 7% gke-xlou-cdm-default-pool-f05840a3-bf2g 4214m 26% 4931Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 997m 6% 2155Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3547m 22% 4892Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 3529m 22% 14250Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1076Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 726m 4% 14224Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1092Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 750m 4% 14232Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 655m 4% 2053Mi 3% 12:27:34 DEBUG --- stderr --- 12:27:34 DEBUG 12:28:34 INFO 12:28:34 INFO 
[loop_until]: kubectl --namespace=xlou top pods 12:28:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:28:34 INFO [loop_until]: OK (rc = 0) 12:28:34 DEBUG --- stdout --- 12:28:34 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 61m 4102Mi am-55f77847b7-6hcmp 65m 4089Mi am-55f77847b7-8wqjg 59m 3963Mi ds-cts-0 8m 390Mi ds-cts-1 7m 363Mi ds-cts-2 5m 353Mi ds-idrepo-0 3797m 13685Mi ds-idrepo-1 973m 13608Mi ds-idrepo-2 898m 13741Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3507m 3631Mi idm-65858d8c4c-x6slf 2864m 3646Mi lodemon-9c5f9bf5b-bl4rx 2m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 498m 532Mi 12:28:34 DEBUG --- stderr --- 12:28:34 DEBUG 12:28:35 INFO 12:28:35 INFO [loop_until]: kubectl --namespace=xlou top node 12:28:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:28:35 INFO [loop_until]: OK (rc = 0) 12:28:35 DEBUG --- stdout --- 12:28:35 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1354Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 113m 0% 5114Mi 8% gke-xlou-cdm-default-pool-f05840a3-976h 120m 0% 5079Mi 8% gke-xlou-cdm-default-pool-f05840a3-9p4b 119m 0% 5354Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 3644m 22% 4942Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 995m 6% 2158Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3049m 19% 4898Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 3925m 24% 14248Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 978m 6% 14290Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1093Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 970m 6% 14146Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 563m 3% 2051Mi 3% 12:28:35 DEBUG --- stderr --- 12:28:35 DEBUG 12:29:34 INFO 12:29:34 INFO [loop_until]: kubectl --namespace=xlou top pods 12:29:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:29:34 INFO [loop_until]: OK (rc = 0) 12:29:34 DEBUG --- stdout --- 12:29:34 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 60m 4884Mi am-55f77847b7-6hcmp 49m 4089Mi am-55f77847b7-8wqjg 61m 4696Mi ds-cts-0 7m 390Mi ds-cts-1 6m 363Mi ds-cts-2 7m 353Mi ds-idrepo-0 4372m 13770Mi ds-idrepo-1 724m 13782Mi ds-idrepo-2 707m 13755Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3480m 3641Mi idm-65858d8c4c-x6slf 2922m 3652Mi lodemon-9c5f9bf5b-bl4rx 2m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 501m 532Mi 12:29:34 DEBUG --- stderr --- 12:29:34 DEBUG 12:29:35 INFO 12:29:35 INFO [loop_until]: kubectl --namespace=xlou top node 12:29:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:29:35 INFO [loop_until]: OK (rc = 0) 12:29:35 DEBUG --- stdout --- 12:29:35 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1356Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 118m 0% 5112Mi 8% gke-xlou-cdm-default-pool-f05840a3-976h 121m 0% 5811Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 118m 0% 6075Mi 10% gke-xlou-cdm-default-pool-f05840a3-bf2g 3702m 23% 4951Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 989m 6% 2157Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3091m 19% 4904Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 4424m 27% 14334Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1077Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 767m 4% 14289Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1250m 7% 
14311Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 572m 3% 2054Mi 3% 12:29:35 DEBUG --- stderr --- 12:29:35 DEBUG 12:30:34 INFO 12:30:34 INFO [loop_until]: kubectl --namespace=xlou top pods 12:30:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:30:34 INFO [loop_until]: OK (rc = 0) 12:30:34 DEBUG --- stdout --- 12:30:34 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 68m 5523Mi am-55f77847b7-6hcmp 55m 4091Mi am-55f77847b7-8wqjg 65m 5442Mi ds-cts-0 6m 390Mi ds-cts-1 11m 365Mi ds-cts-2 6m 353Mi ds-idrepo-0 3753m 13777Mi ds-idrepo-1 672m 13782Mi ds-idrepo-2 657m 13755Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3476m 3650Mi idm-65858d8c4c-x6slf 2980m 3692Mi lodemon-9c5f9bf5b-bl4rx 2m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 506m 532Mi 12:30:34 DEBUG --- stderr --- 12:30:34 DEBUG 12:30:35 INFO 12:30:35 INFO [loop_until]: kubectl --namespace=xlou top node 12:30:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:30:35 INFO [loop_until]: OK (rc = 0) 12:30:35 DEBUG --- stdout --- 12:30:35 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 71m 0% 1355Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 117m 0% 5115Mi 8% gke-xlou-cdm-default-pool-f05840a3-976h 117m 0% 6534Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 123m 0% 6819Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3751m 23% 4958Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 999m 6% 2156Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3180m 20% 4947Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 3798m 23% 14341Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1074Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 689m 4% 14286Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1092Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 803m 5% 14313Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 555m 3% 2053Mi 3% 12:30:35 DEBUG --- stderr --- 12:30:35 DEBUG 12:31:34 INFO 12:31:34 INFO [loop_until]: kubectl --namespace=xlou top pods 12:31:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:31:34 INFO [loop_until]: OK (rc = 0) 12:31:34 DEBUG --- stdout --- 12:31:34 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 47m 5713Mi am-55f77847b7-6hcmp 53m 4096Mi am-55f77847b7-8wqjg 47m 5699Mi ds-cts-0 9m 390Mi ds-cts-1 7m 365Mi ds-cts-2 7m 353Mi ds-idrepo-0 3951m 13779Mi ds-idrepo-1 794m 13784Mi ds-idrepo-2 635m 13755Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3403m 3658Mi idm-65858d8c4c-x6slf 2899m 3664Mi lodemon-9c5f9bf5b-bl4rx 2m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 497m 533Mi 12:31:34 DEBUG --- stderr --- 12:31:34 DEBUG 12:31:35 INFO 12:31:35 INFO [loop_until]: kubectl --namespace=xlou top node 12:31:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:31:35 INFO [loop_until]: OK (rc = 0) 12:31:35 DEBUG --- stdout --- 12:31:35 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1355Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 112m 0% 5121Mi 8% gke-xlou-cdm-default-pool-f05840a3-976h 118m 0% 6828Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 114m 0% 6823Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3595m 22% 4976Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 985m 6% 2158Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3032m 19% 4917Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 3865m 24% 14343Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1076Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 
1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 682m 4% 14288Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1095Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 858m 5% 14312Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 575m 3% 2052Mi 3% 12:31:35 DEBUG --- stderr --- 12:31:35 DEBUG 12:32:34 INFO 12:32:34 INFO [loop_until]: kubectl --namespace=xlou top pods 12:32:34 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:32:34 INFO [loop_until]: OK (rc = 0) 12:32:34 DEBUG --- stdout --- 12:32:34 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 51m 5716Mi am-55f77847b7-6hcmp 60m 4549Mi am-55f77847b7-8wqjg 49m 5700Mi ds-cts-0 6m 390Mi ds-cts-1 6m 365Mi ds-cts-2 7m 353Mi ds-idrepo-0 4020m 13781Mi ds-idrepo-1 771m 13785Mi ds-idrepo-2 801m 13787Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3383m 3670Mi idm-65858d8c4c-x6slf 2811m 3669Mi lodemon-9c5f9bf5b-bl4rx 1m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 498m 533Mi 12:32:34 DEBUG --- stderr --- 12:32:34 DEBUG 12:32:35 INFO 12:32:35 INFO [loop_until]: kubectl --namespace=xlou top node 12:32:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:32:35 INFO [loop_until]: OK (rc = 0) 12:32:35 DEBUG --- stdout --- 12:32:35 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1360Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 126m 0% 5647Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 109m 0% 6832Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 106m 0% 6825Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3691m 23% 4979Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 986m 6% 2145Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3030m 19% 4923Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 4060m 25% 14343Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1073Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 863m 5% 14348Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1096Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 808m 5% 14321Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 563m 3% 2052Mi 3% 12:32:35 DEBUG --- stderr --- 12:32:35 DEBUG 12:33:35 INFO 12:33:35 INFO [loop_until]: kubectl --namespace=xlou top pods 12:33:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:33:35 INFO [loop_until]: OK (rc = 0) 12:33:35 DEBUG --- stdout --- 12:33:35 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 57m 5738Mi am-55f77847b7-6hcmp 67m 5325Mi am-55f77847b7-8wqjg 54m 5711Mi ds-cts-0 6m 390Mi ds-cts-1 11m 365Mi ds-cts-2 6m 353Mi ds-idrepo-0 4286m 13807Mi ds-idrepo-1 951m 13831Mi ds-idrepo-2 1328m 13820Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3443m 3681Mi idm-65858d8c4c-x6slf 2886m 3677Mi lodemon-9c5f9bf5b-bl4rx 2m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 479m 533Mi 12:33:35 DEBUG --- stderr --- 12:33:35 DEBUG 12:33:35 INFO 12:33:35 INFO [loop_until]: kubectl --namespace=xlou top node 12:33:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:33:35 INFO [loop_until]: OK (rc = 0) 12:33:35 DEBUG --- stdout --- 12:33:35 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1358Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 128m 0% 6373Mi 10% gke-xlou-cdm-default-pool-f05840a3-976h 105m 0% 6838Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 115m 0% 6848Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3669m 23% 4988Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 982m 6% 2156Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2926m 18% 
4930Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 4056m 25% 14370Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1072Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1292m 8% 14350Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 988m 6% 14356Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 537m 3% 2052Mi 3% 12:33:35 DEBUG --- stderr --- 12:33:35 DEBUG 12:34:35 INFO 12:34:35 INFO [loop_until]: kubectl --namespace=xlou top pods 12:34:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:34:35 INFO [loop_until]: OK (rc = 0) 12:34:35 DEBUG --- stdout --- 12:34:35 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 50m 5740Mi am-55f77847b7-6hcmp 58m 5769Mi am-55f77847b7-8wqjg 46m 5713Mi ds-cts-0 6m 390Mi ds-cts-1 7m 366Mi ds-cts-2 6m 355Mi ds-idrepo-0 3808m 13813Mi ds-idrepo-1 735m 13823Mi ds-idrepo-2 862m 13819Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3492m 3694Mi idm-65858d8c4c-x6slf 3051m 3685Mi lodemon-9c5f9bf5b-bl4rx 1m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 490m 534Mi 12:34:35 DEBUG --- stderr --- 12:34:35 DEBUG 12:34:35 INFO 12:34:35 INFO [loop_until]: kubectl --namespace=xlou top node 12:34:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:34:35 INFO [loop_until]: OK (rc = 0) 12:34:35 DEBUG --- stdout --- 12:34:35 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1357Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 118m 0% 6789Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 109m 0% 6842Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 115m 0% 6859Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3678m 23% 5003Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 997m 6% 2156Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3103m 19% 4938Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 3977m 25% 14383Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1072Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 989m 6% 14358Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1097Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 990m 6% 14337Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 577m 3% 2053Mi 3% 12:34:35 DEBUG --- stderr --- 12:34:35 DEBUG 12:35:35 INFO 12:35:35 INFO [loop_until]: kubectl --namespace=xlou top pods 12:35:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:35:35 INFO [loop_until]: OK (rc = 0) 12:35:35 DEBUG --- stdout --- 12:35:35 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 49m 5742Mi am-55f77847b7-6hcmp 49m 5769Mi am-55f77847b7-8wqjg 49m 5714Mi ds-cts-0 6m 391Mi ds-cts-1 7m 366Mi ds-cts-2 7m 354Mi ds-idrepo-0 3645m 13813Mi ds-idrepo-1 941m 13823Mi ds-idrepo-2 880m 13811Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3376m 3701Mi idm-65858d8c4c-x6slf 2970m 3729Mi lodemon-9c5f9bf5b-bl4rx 2m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 501m 534Mi 12:35:35 DEBUG --- stderr --- 12:35:35 DEBUG 12:35:35 INFO 12:35:35 INFO [loop_until]: kubectl --namespace=xlou top node 12:35:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:35:35 INFO [loop_until]: OK (rc = 0) 12:35:35 DEBUG --- stdout --- 12:35:35 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1355Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 109m 0% 6786Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 111m 0% 6843Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 109m 0% 6849Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-bf2g 3589m 22% 5009Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 991m 6% 2154Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3192m 20% 4983Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 3995m 25% 14392Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1073Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 985m 6% 14354Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1095Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1050m 6% 14348Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 568m 3% 2052Mi 3% 12:35:35 DEBUG --- stderr --- 12:35:35 DEBUG 12:36:35 INFO 12:36:35 INFO [loop_until]: kubectl --namespace=xlou top pods 12:36:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:36:35 INFO [loop_until]: OK (rc = 0) 12:36:35 DEBUG --- stdout --- 12:36:35 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 57m 5743Mi am-55f77847b7-6hcmp 56m 5771Mi am-55f77847b7-8wqjg 58m 5715Mi ds-cts-0 7m 391Mi ds-cts-1 8m 366Mi ds-cts-2 5m 353Mi ds-idrepo-0 4840m 13795Mi ds-idrepo-1 1439m 13819Mi ds-idrepo-2 1311m 13835Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3589m 3715Mi idm-65858d8c4c-x6slf 2786m 3706Mi lodemon-9c5f9bf5b-bl4rx 2m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 473m 534Mi 12:36:35 DEBUG --- stderr --- 12:36:35 DEBUG 12:36:35 INFO 12:36:35 INFO [loop_until]: kubectl --namespace=xlou top node 12:36:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:36:35 INFO [loop_until]: OK (rc = 0) 12:36:35 DEBUG --- stdout --- 12:36:35 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1353Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 120m 0% 6786Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 115m 0% 6845Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 116m 0% 6851Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3768m 23% 5024Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 992m 6% 2154Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2861m 18% 4957Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 4881m 30% 14399Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1072Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1415m 8% 14353Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1097Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1490m 9% 14355Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 528m 3% 2052Mi 3% 12:36:35 DEBUG --- stderr --- 12:36:35 DEBUG 12:37:35 INFO 12:37:35 INFO [loop_until]: kubectl --namespace=xlou top pods 12:37:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:37:35 INFO [loop_until]: OK (rc = 0) 12:37:35 DEBUG --- stdout --- 12:37:35 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 46m 5779Mi am-55f77847b7-6hcmp 50m 5772Mi am-55f77847b7-8wqjg 55m 5754Mi ds-cts-0 21m 397Mi ds-cts-1 8m 366Mi ds-cts-2 14m 354Mi ds-idrepo-0 3800m 13807Mi ds-idrepo-1 1025m 13803Mi ds-idrepo-2 897m 13804Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3396m 3725Mi idm-65858d8c4c-x6slf 2819m 3720Mi lodemon-9c5f9bf5b-bl4rx 2m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 504m 535Mi 12:37:35 DEBUG --- stderr --- 12:37:35 DEBUG 12:37:35 INFO 12:37:35 INFO [loop_until]: kubectl --namespace=xlou top node 12:37:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:37:36 INFO [loop_until]: OK (rc = 0) 12:37:36 DEBUG --- stdout --- 12:37:36 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1357Mi 2% 
gke-xlou-cdm-default-pool-f05840a3-5pbc 117m 0% 6792Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 112m 0% 6883Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 104m 0% 6888Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3649m 22% 5038Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 980m 6% 2155Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2994m 18% 4981Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 3950m 24% 14385Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 65m 0% 1073Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 75m 0% 1137Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 999m 6% 14329Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1083m 6% 14318Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 563m 3% 2054Mi 3% 12:37:36 DEBUG --- stderr --- 12:37:36 DEBUG 12:38:35 INFO 12:38:35 INFO [loop_until]: kubectl --namespace=xlou top pods 12:38:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:38:35 INFO [loop_until]: OK (rc = 0) 12:38:35 DEBUG --- stdout --- 12:38:35 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 49m 5780Mi am-55f77847b7-6hcmp 55m 5778Mi am-55f77847b7-8wqjg 56m 5754Mi ds-cts-0 6m 389Mi ds-cts-1 9m 368Mi ds-cts-2 7m 354Mi ds-idrepo-0 4615m 13823Mi ds-idrepo-1 1452m 13820Mi ds-idrepo-2 1375m 13842Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3584m 3735Mi idm-65858d8c4c-x6slf 2945m 3727Mi lodemon-9c5f9bf5b-bl4rx 2m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 488m 535Mi 12:38:35 DEBUG --- stderr --- 12:38:35 DEBUG 12:38:36 INFO 12:38:36 INFO [loop_until]: kubectl --namespace=xlou top node 12:38:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:38:36 INFO [loop_until]: OK (rc = 0) 12:38:36 DEBUG --- stdout --- 12:38:36 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1357Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 112m 0% 6799Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 117m 0% 6886Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 107m 0% 6888Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3690m 23% 5044Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 985m 6% 2155Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3111m 19% 4981Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 4697m 29% 14381Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1073Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1431m 9% 14351Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1097Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1645m 10% 14347Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 562m 3% 2055Mi 3% 12:38:36 DEBUG --- stderr --- 12:38:36 DEBUG 12:39:35 INFO 12:39:35 INFO [loop_until]: kubectl --namespace=xlou top pods 12:39:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:39:35 INFO [loop_until]: OK (rc = 0) 12:39:35 DEBUG --- stdout --- 12:39:35 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 49m 5780Mi am-55f77847b7-6hcmp 52m 5780Mi am-55f77847b7-8wqjg 45m 5754Mi ds-cts-0 6m 390Mi ds-cts-1 6m 369Mi ds-cts-2 6m 354Mi ds-idrepo-0 3971m 13838Mi ds-idrepo-1 956m 13842Mi ds-idrepo-2 979m 13839Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3441m 3744Mi idm-65858d8c4c-x6slf 2957m 3740Mi lodemon-9c5f9bf5b-bl4rx 2m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 499m 534Mi 12:39:35 DEBUG --- stderr --- 12:39:35 DEBUG 12:39:36 INFO 12:39:36 INFO [loop_until]: kubectl --namespace=xlou top node 12:39:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:39:36 INFO [loop_until]: OK (rc = 
0) 12:39:36 DEBUG --- stdout --- 12:39:36 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1358Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 115m 0% 6798Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 104m 0% 6884Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 108m 0% 6888Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3634m 22% 5054Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 983m 6% 2160Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3050m 19% 4992Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 4000m 25% 14395Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1077Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 862m 5% 14368Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1016m 6% 14369Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 571m 3% 2052Mi 3% 12:39:36 DEBUG --- stderr --- 12:39:36 DEBUG 12:40:35 INFO 12:40:35 INFO [loop_until]: kubectl --namespace=xlou top pods 12:40:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:40:35 INFO [loop_until]: OK (rc = 0) 12:40:35 DEBUG --- stdout --- 12:40:35 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 51m 5781Mi am-55f77847b7-6hcmp 56m 5785Mi am-55f77847b7-8wqjg 50m 5755Mi ds-cts-0 6m 389Mi ds-cts-1 6m 369Mi ds-cts-2 7m 354Mi ds-idrepo-0 4088m 13819Mi ds-idrepo-1 1085m 13820Mi ds-idrepo-2 967m 13834Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3505m 3752Mi idm-65858d8c4c-x6slf 2849m 3747Mi lodemon-9c5f9bf5b-bl4rx 2m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 479m 535Mi 12:40:35 DEBUG --- stderr --- 12:40:35 DEBUG 12:40:36 INFO 12:40:36 INFO [loop_until]: kubectl --namespace=xlou top node 12:40:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:40:36 INFO [loop_until]: OK (rc = 0) 12:40:36 DEBUG --- stdout --- 12:40:36 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1359Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 116m 0% 6803Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 109m 0% 6887Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 109m 0% 6888Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3693m 23% 5062Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 978m 6% 2159Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2987m 18% 5003Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 4381m 27% 14396Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1073Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 977m 6% 14360Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1112m 6% 14355Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 553m 3% 2056Mi 3% 12:40:36 DEBUG --- stderr --- 12:40:36 DEBUG 12:41:35 INFO 12:41:35 INFO [loop_until]: kubectl --namespace=xlou top pods 12:41:35 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:41:35 INFO [loop_until]: OK (rc = 0) 12:41:35 DEBUG --- stdout --- 12:41:35 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 50m 5781Mi am-55f77847b7-6hcmp 48m 5784Mi am-55f77847b7-8wqjg 51m 5756Mi ds-cts-0 6m 390Mi ds-cts-1 7m 369Mi ds-cts-2 6m 354Mi ds-idrepo-0 4121m 13823Mi ds-idrepo-1 1252m 13826Mi ds-idrepo-2 1394m 13832Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3535m 3761Mi idm-65858d8c4c-x6slf 3011m 3757Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 478m 536Mi 12:41:35 DEBUG --- stderr --- 12:41:35 DEBUG 12:41:36 INFO 12:41:36 INFO 
[loop_until]: kubectl --namespace=xlou top node 12:41:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:41:36 INFO [loop_until]: OK (rc = 0) 12:41:36 DEBUG --- stdout --- 12:41:36 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1359Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 110m 0% 6804Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 112m 0% 6886Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 104m 0% 6889Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3668m 23% 5077Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 988m 6% 2156Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3131m 19% 5014Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 4298m 27% 14378Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1228m 7% 14359Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1089m 6% 14354Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 555m 3% 2056Mi 3% 12:41:36 DEBUG --- stderr --- 12:41:36 DEBUG 12:42:36 INFO 12:42:36 INFO [loop_until]: kubectl --namespace=xlou top pods 12:42:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:42:36 INFO [loop_until]: OK (rc = 0) 12:42:36 DEBUG --- stdout --- 12:42:36 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 51m 5782Mi am-55f77847b7-6hcmp 57m 5784Mi am-55f77847b7-8wqjg 48m 5756Mi ds-cts-0 6m 389Mi ds-cts-1 6m 369Mi ds-cts-2 5m 354Mi ds-idrepo-0 4462m 13824Mi ds-idrepo-1 1425m 13831Mi ds-idrepo-2 1513m 13823Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3511m 3772Mi idm-65858d8c4c-x6slf 2921m 3772Mi lodemon-9c5f9bf5b-bl4rx 2m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 500m 536Mi 12:42:36 DEBUG --- stderr --- 12:42:36 DEBUG 12:42:36 INFO 12:42:36 INFO [loop_until]: kubectl --namespace=xlou top node 12:42:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:42:36 INFO [loop_until]: OK (rc = 0) 12:42:36 DEBUG --- stdout --- 12:42:36 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1358Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 114m 0% 6802Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 106m 0% 6887Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 110m 0% 6889Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3723m 23% 5085Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 979m 6% 2159Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3062m 19% 5027Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 4739m 29% 14404Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1071Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1167m 7% 14350Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1365m 8% 14348Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 555m 3% 2056Mi 3% 12:42:36 DEBUG --- stderr --- 12:42:36 DEBUG 12:43:36 INFO 12:43:36 INFO [loop_until]: kubectl --namespace=xlou top pods 12:43:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:43:36 INFO [loop_until]: OK (rc = 0) 12:43:36 DEBUG --- stdout --- 12:43:36 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 48m 5785Mi am-55f77847b7-6hcmp 49m 5784Mi am-55f77847b7-8wqjg 51m 5756Mi ds-cts-0 6m 389Mi ds-cts-1 8m 369Mi ds-cts-2 6m 355Mi ds-idrepo-0 3701m 13838Mi ds-idrepo-1 1003m 13844Mi ds-idrepo-2 1107m 13845Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3404m 3780Mi idm-65858d8c4c-x6slf 2856m 3773Mi lodemon-9c5f9bf5b-bl4rx 
2m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 491m 536Mi 12:43:36 DEBUG --- stderr --- 12:43:36 DEBUG 12:43:36 INFO 12:43:36 INFO [loop_until]: kubectl --namespace=xlou top node 12:43:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:43:36 INFO [loop_until]: OK (rc = 0) 12:43:36 DEBUG --- stdout --- 12:43:36 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1358Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 110m 0% 6805Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 112m 0% 6885Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 107m 0% 6891Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3629m 22% 5090Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 983m 6% 2159Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2948m 18% 5029Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 4027m 25% 14407Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1074Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1069m 6% 14358Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1095Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1127m 7% 14367Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 551m 3% 2057Mi 3% 12:43:36 DEBUG --- stderr --- 12:43:36 DEBUG 12:44:36 INFO 12:44:36 INFO [loop_until]: kubectl --namespace=xlou top pods 12:44:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:44:36 INFO [loop_until]: OK (rc = 0) 12:44:36 DEBUG --- stdout --- 12:44:36 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 46m 5785Mi am-55f77847b7-6hcmp 53m 5784Mi am-55f77847b7-8wqjg 45m 5760Mi ds-cts-0 8m 389Mi ds-cts-1 6m 369Mi ds-cts-2 6m 354Mi ds-idrepo-0 4223m 13824Mi ds-idrepo-1 1118m 13830Mi ds-idrepo-2 998m 13834Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3605m 3789Mi idm-65858d8c4c-x6slf 2925m 3779Mi lodemon-9c5f9bf5b-bl4rx 1m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 496m 537Mi 12:44:36 DEBUG --- stderr --- 12:44:36 DEBUG 12:44:36 INFO 12:44:36 INFO [loop_until]: kubectl --namespace=xlou top node 12:44:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:44:36 INFO [loop_until]: OK (rc = 0) 12:44:36 DEBUG --- stdout --- 12:44:36 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1358Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 116m 0% 6805Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 104m 0% 6890Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 107m 0% 6891Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3646m 22% 5096Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 996m 6% 2160Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3045m 19% 5037Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 4405m 27% 14388Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1033m 6% 14358Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1217m 7% 14366Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 569m 3% 2058Mi 3% 12:44:36 DEBUG --- stderr --- 12:44:36 DEBUG 12:45:36 INFO 12:45:36 INFO [loop_until]: kubectl --namespace=xlou top pods 12:45:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:45:36 INFO [loop_until]: OK (rc = 0) 12:45:36 DEBUG --- stdout --- 12:45:36 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 45m 5785Mi am-55f77847b7-6hcmp 52m 5784Mi am-55f77847b7-8wqjg 48m 5760Mi ds-cts-0 6m 389Mi ds-cts-1 7m 367Mi ds-cts-2 11m 354Mi ds-idrepo-0 4837m 13808Mi ds-idrepo-1 1413m 13809Mi 
ds-idrepo-2 1263m 13845Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3489m 3802Mi idm-65858d8c4c-x6slf 2938m 3787Mi lodemon-9c5f9bf5b-bl4rx 2m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 477m 537Mi 12:45:36 DEBUG --- stderr --- 12:45:36 DEBUG 12:45:36 INFO 12:45:36 INFO [loop_until]: kubectl --namespace=xlou top node 12:45:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:45:36 INFO [loop_until]: OK (rc = 0) 12:45:36 DEBUG --- stdout --- 12:45:36 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1365Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 110m 0% 6803Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 109m 0% 6891Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 104m 0% 6889Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3721m 23% 5106Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 997m 6% 2160Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3100m 19% 5042Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 4724m 29% 14411Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1074Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1343m 8% 14359Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1097Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1422m 8% 14338Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 529m 3% 2059Mi 3% 12:45:36 DEBUG --- stderr --- 12:45:36 DEBUG 12:46:36 INFO 12:46:36 INFO [loop_until]: kubectl --namespace=xlou top pods 12:46:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:46:36 INFO [loop_until]: OK (rc = 0) 12:46:36 DEBUG --- stdout --- 12:46:36 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 48m 5785Mi am-55f77847b7-6hcmp 54m 5784Mi am-55f77847b7-8wqjg 49m 5760Mi ds-cts-0 6m 389Mi ds-cts-1 7m 369Mi ds-cts-2 7m 354Mi ds-idrepo-0 4276m 13825Mi ds-idrepo-1 1431m 13852Mi ds-idrepo-2 1304m 13812Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3507m 3811Mi idm-65858d8c4c-x6slf 2913m 3794Mi lodemon-9c5f9bf5b-bl4rx 2m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 488m 537Mi 12:46:36 DEBUG --- stderr --- 12:46:36 DEBUG 12:46:36 INFO 12:46:36 INFO [loop_until]: kubectl --namespace=xlou top node 12:46:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:46:36 INFO [loop_until]: OK (rc = 0) 12:46:36 DEBUG --- stdout --- 12:46:36 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1361Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 113m 0% 6800Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 110m 0% 6892Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 107m 0% 6892Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3758m 23% 5118Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 998m 6% 2150Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3048m 19% 5048Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 4294m 27% 14386Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1298m 8% 14349Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1413m 8% 14355Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 551m 3% 2060Mi 3% 12:46:36 DEBUG --- stderr --- 12:46:36 DEBUG 12:47:36 INFO 12:47:36 INFO [loop_until]: kubectl --namespace=xlou top pods 12:47:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:47:36 INFO [loop_until]: OK (rc = 0) 12:47:36 DEBUG --- stdout --- 12:47:36 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 48m 5785Mi 
am-55f77847b7-6hcmp 46m 5787Mi am-55f77847b7-8wqjg 47m 5760Mi ds-cts-0 8m 389Mi ds-cts-1 12m 373Mi ds-cts-2 12m 355Mi ds-idrepo-0 4682m 13837Mi ds-idrepo-1 848m 13832Mi ds-idrepo-2 1036m 13811Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3627m 3821Mi idm-65858d8c4c-x6slf 2965m 3802Mi lodemon-9c5f9bf5b-bl4rx 2m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 480m 537Mi 12:47:36 DEBUG --- stderr --- 12:47:36 DEBUG 12:47:37 INFO 12:47:37 INFO [loop_until]: kubectl --namespace=xlou top node 12:47:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:47:37 INFO [loop_until]: OK (rc = 0) 12:47:37 DEBUG --- stdout --- 12:47:37 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1362Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 108m 0% 6804Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 110m 0% 6893Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 106m 0% 6894Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3624m 22% 5128Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 994m 6% 2158Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3083m 19% 5061Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 5077m 31% 14377Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1078Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1149m 7% 14350Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 909m 5% 14370Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 562m 3% 2058Mi 3% 12:47:37 DEBUG --- stderr --- 12:47:37 DEBUG 12:48:36 INFO 12:48:36 INFO [loop_until]: kubectl --namespace=xlou top pods 12:48:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:48:36 INFO [loop_until]: OK (rc = 0) 12:48:36 DEBUG --- stdout --- 12:48:36 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 46m 5785Mi am-55f77847b7-6hcmp 49m 5787Mi am-55f77847b7-8wqjg 47m 5760Mi ds-cts-0 9m 394Mi ds-cts-1 6m 373Mi ds-cts-2 7m 355Mi ds-idrepo-0 3995m 13823Mi ds-idrepo-1 928m 13829Mi ds-idrepo-2 942m 13827Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3383m 3828Mi idm-65858d8c4c-x6slf 2818m 3810Mi lodemon-9c5f9bf5b-bl4rx 2m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 469m 537Mi 12:48:36 DEBUG --- stderr --- 12:48:36 DEBUG 12:48:37 INFO 12:48:37 INFO [loop_until]: kubectl --namespace=xlou top node 12:48:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:48:37 INFO [loop_until]: OK (rc = 0) 12:48:37 DEBUG --- stdout --- 12:48:37 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1360Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 110m 0% 6804Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 103m 0% 6891Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 106m 0% 6892Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3596m 22% 5139Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 979m 6% 2159Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3028m 19% 5063Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 4182m 26% 14422Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1074Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1016m 6% 14380Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 995m 6% 14364Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 537m 3% 2059Mi 3% 12:48:37 DEBUG --- stderr --- 12:48:37 DEBUG 12:49:36 INFO 12:49:36 INFO [loop_until]: kubectl --namespace=xlou top pods 12:49:36 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:49:37 INFO [loop_until]: OK 
(rc = 0) 12:49:37 DEBUG --- stdout --- 12:49:37 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 49m 5785Mi am-55f77847b7-6hcmp 49m 5787Mi am-55f77847b7-8wqjg 50m 5760Mi ds-cts-0 10m 392Mi ds-cts-1 6m 373Mi ds-cts-2 5m 355Mi ds-idrepo-0 4284m 13800Mi ds-idrepo-1 1098m 13815Mi ds-idrepo-2 1010m 13801Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3461m 3837Mi idm-65858d8c4c-x6slf 2875m 3819Mi lodemon-9c5f9bf5b-bl4rx 2m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 497m 537Mi 12:49:37 DEBUG --- stderr --- 12:49:37 DEBUG 12:49:37 INFO 12:49:37 INFO [loop_until]: kubectl --namespace=xlou top node 12:49:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:49:37 INFO [loop_until]: OK (rc = 0) 12:49:37 DEBUG --- stdout --- 12:49:37 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1359Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 110m 0% 6808Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 111m 0% 6890Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 109m 0% 6893Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3675m 23% 5146Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 991m 6% 2160Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3095m 19% 5076Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 4297m 27% 14364Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1205m 7% 14326Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1315m 8% 14342Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 551m 3% 2058Mi 3% 12:49:37 DEBUG --- stderr --- 12:49:37 DEBUG 12:50:37 INFO 12:50:37 INFO [loop_until]: kubectl --namespace=xlou top pods 12:50:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:50:37 INFO [loop_until]: OK (rc = 0) 12:50:37 DEBUG --- stdout --- 12:50:37 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 47m 5786Mi am-55f77847b7-6hcmp 51m 5787Mi am-55f77847b7-8wqjg 44m 5761Mi ds-cts-0 6m 392Mi ds-cts-1 7m 373Mi ds-cts-2 10m 355Mi ds-idrepo-0 4677m 13833Mi ds-idrepo-1 1210m 13835Mi ds-idrepo-2 1461m 13843Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3401m 3847Mi idm-65858d8c4c-x6slf 2863m 3828Mi lodemon-9c5f9bf5b-bl4rx 2m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 487m 538Mi 12:50:37 DEBUG --- stderr --- 12:50:37 DEBUG 12:50:37 INFO 12:50:37 INFO [loop_until]: kubectl --namespace=xlou top node 12:50:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:50:37 INFO [loop_until]: OK (rc = 0) 12:50:37 DEBUG --- stdout --- 12:50:37 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 71m 0% 1359Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 110m 0% 6808Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 104m 0% 6888Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 105m 0% 6896Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3559m 22% 5156Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 989m 6% 2163Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3110m 19% 5084Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 4657m 29% 14417Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1076Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1356m 8% 14349Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1229m 7% 14372Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 562m 3% 2056Mi 3% 12:50:37 DEBUG --- stderr --- 12:50:37 DEBUG 12:51:37 INFO 12:51:37 
INFO [loop_until]: kubectl --namespace=xlou top pods 12:51:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:51:37 INFO [loop_until]: OK (rc = 0) 12:51:37 DEBUG --- stdout --- 12:51:37 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 47m 5786Mi am-55f77847b7-6hcmp 52m 5788Mi am-55f77847b7-8wqjg 45m 5761Mi ds-cts-0 6m 393Mi ds-cts-1 6m 373Mi ds-cts-2 9m 355Mi ds-idrepo-0 4039m 13825Mi ds-idrepo-1 996m 13854Mi ds-idrepo-2 1006m 13837Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3533m 3857Mi idm-65858d8c4c-x6slf 2943m 3834Mi lodemon-9c5f9bf5b-bl4rx 1m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 503m 538Mi 12:51:37 DEBUG --- stderr --- 12:51:37 DEBUG 12:51:37 INFO 12:51:37 INFO [loop_until]: kubectl --namespace=xlou top node 12:51:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:51:37 INFO [loop_until]: OK (rc = 0) 12:51:37 DEBUG --- stdout --- 12:51:37 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1360Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 111m 0% 6808Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 105m 0% 6892Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 107m 0% 6895Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3736m 23% 5169Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 993m 6% 2163Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3025m 19% 5085Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 4357m 27% 14404Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1078Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1071m 6% 14369Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 864m 5% 14367Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 546m 3% 2057Mi 3% 12:51:37 DEBUG --- stderr --- 12:51:37 DEBUG 12:52:37 INFO 12:52:37 INFO [loop_until]: kubectl --namespace=xlou top pods 12:52:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:52:37 INFO [loop_until]: OK (rc = 0) 12:52:37 DEBUG --- stdout --- 12:52:37 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 46m 5786Mi am-55f77847b7-6hcmp 53m 5788Mi am-55f77847b7-8wqjg 44m 5761Mi ds-cts-0 6m 394Mi ds-cts-1 6m 373Mi ds-cts-2 6m 355Mi ds-idrepo-0 4864m 13815Mi ds-idrepo-1 1331m 13811Mi ds-idrepo-2 1289m 13822Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3573m 3868Mi idm-65858d8c4c-x6slf 2763m 3845Mi lodemon-9c5f9bf5b-bl4rx 6m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 483m 539Mi 12:52:37 DEBUG --- stderr --- 12:52:37 DEBUG 12:52:37 INFO 12:52:37 INFO [loop_until]: kubectl --namespace=xlou top node 12:52:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:52:37 INFO [loop_until]: OK (rc = 0) 12:52:37 DEBUG --- stdout --- 12:52:37 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 70m 0% 1361Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 112m 0% 6807Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 104m 0% 6894Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 102m 0% 6896Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3698m 23% 5178Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 981m 6% 2162Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2974m 18% 5096Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 5020m 31% 14387Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1371m 8% 14358Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1104Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-x4wx 1420m 8% 14349Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 549m 3% 2057Mi 3% 12:52:37 DEBUG --- stderr --- 12:52:37 DEBUG 12:53:37 INFO 12:53:37 INFO [loop_until]: kubectl --namespace=xlou top pods 12:53:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:53:37 INFO [loop_until]: OK (rc = 0) 12:53:37 DEBUG --- stdout --- 12:53:37 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 48m 5787Mi am-55f77847b7-6hcmp 49m 5789Mi am-55f77847b7-8wqjg 48m 5761Mi ds-cts-0 7m 393Mi ds-cts-1 6m 374Mi ds-cts-2 6m 355Mi ds-idrepo-0 4109m 13829Mi ds-idrepo-1 987m 13853Mi ds-idrepo-2 862m 13851Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3457m 3877Mi idm-65858d8c4c-x6slf 2918m 3852Mi lodemon-9c5f9bf5b-bl4rx 2m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 472m 538Mi 12:53:37 DEBUG --- stderr --- 12:53:37 DEBUG 12:53:37 INFO 12:53:37 INFO [loop_until]: kubectl --namespace=xlou top node 12:53:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:53:37 INFO [loop_until]: OK (rc = 0) 12:53:37 DEBUG --- stdout --- 12:53:37 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1356Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 115m 0% 6817Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 110m 0% 6894Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 110m 0% 6896Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3537m 22% 5186Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 979m 6% 2166Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2918m 18% 5105Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 4079m 25% 14406Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 51m 0% 1078Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1020m 6% 14385Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1068m 6% 14382Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 548m 3% 2058Mi 3% 12:53:37 DEBUG --- stderr --- 12:53:37 DEBUG 12:54:37 INFO 12:54:37 INFO [loop_until]: kubectl --namespace=xlou top pods 12:54:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:54:37 INFO [loop_until]: OK (rc = 0) 12:54:37 DEBUG --- stdout --- 12:54:37 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 49m 5787Mi am-55f77847b7-6hcmp 48m 5796Mi am-55f77847b7-8wqjg 48m 5761Mi ds-cts-0 6m 392Mi ds-cts-1 9m 373Mi ds-cts-2 6m 355Mi ds-idrepo-0 4013m 13823Mi ds-idrepo-1 1051m 13853Mi ds-idrepo-2 996m 13851Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3468m 3885Mi idm-65858d8c4c-x6slf 2891m 3859Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 499m 539Mi 12:54:37 DEBUG --- stderr --- 12:54:37 DEBUG 12:54:37 INFO 12:54:37 INFO [loop_until]: kubectl --namespace=xlou top node 12:54:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:54:37 INFO [loop_until]: OK (rc = 0) 12:54:37 DEBUG --- stdout --- 12:54:37 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1354Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 108m 0% 6816Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 111m 0% 6891Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 106m 0% 6895Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3688m 23% 5199Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 991m 6% 2161Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3185m 20% 5117Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 4258m 26% 14397Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1074Mi 
1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 867m 5% 14385Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1130m 7% 14390Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 556m 3% 2058Mi 3% 12:54:37 DEBUG --- stderr --- 12:54:37 DEBUG 12:55:37 INFO 12:55:37 INFO [loop_until]: kubectl --namespace=xlou top pods 12:55:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:55:37 INFO [loop_until]: OK (rc = 0) 12:55:37 DEBUG --- stdout --- 12:55:37 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 50m 5787Mi am-55f77847b7-6hcmp 49m 5796Mi am-55f77847b7-8wqjg 48m 5761Mi ds-cts-0 11m 392Mi ds-cts-1 7m 373Mi ds-cts-2 6m 355Mi ds-idrepo-0 4054m 13823Mi ds-idrepo-1 1048m 13823Mi ds-idrepo-2 784m 13843Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3553m 3892Mi idm-65858d8c4c-x6slf 2775m 3868Mi lodemon-9c5f9bf5b-bl4rx 2m 65Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 473m 539Mi 12:55:37 DEBUG --- stderr --- 12:55:37 DEBUG 12:55:38 INFO 12:55:38 INFO [loop_until]: kubectl --namespace=xlou top node 12:55:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:55:38 INFO [loop_until]: OK (rc = 0) 12:55:38 DEBUG --- stdout --- 12:55:38 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1356Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 110m 0% 6814Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 105m 0% 6894Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 108m 0% 6895Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3651m 22% 5203Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 982m 6% 2163Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2984m 18% 5122Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 4051m 25% 14408Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 851m 5% 14378Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1152m 7% 14361Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 556m 3% 2057Mi 3% 12:55:38 DEBUG --- stderr --- 12:55:38 DEBUG 12:56:37 INFO 12:56:37 INFO [loop_until]: kubectl --namespace=xlou top pods 12:56:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:56:37 INFO [loop_until]: OK (rc = 0) 12:56:37 DEBUG --- stdout --- 12:56:37 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 32m 5788Mi am-55f77847b7-6hcmp 34m 5796Mi am-55f77847b7-8wqjg 36m 5762Mi ds-cts-0 8m 393Mi ds-cts-1 6m 373Mi ds-cts-2 9m 356Mi ds-idrepo-0 4631m 13813Mi ds-idrepo-1 666m 13829Mi ds-idrepo-2 1458m 13839Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 2502m 3899Mi idm-65858d8c4c-x6slf 1753m 3883Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 332m 538Mi 12:56:37 DEBUG --- stderr --- 12:56:37 DEBUG 12:56:38 INFO 12:56:38 INFO [loop_until]: kubectl --namespace=xlou top node 12:56:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:56:38 INFO [loop_until]: OK (rc = 0) 12:56:38 DEBUG --- stdout --- 12:56:38 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1358Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 85m 0% 6812Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 97m 0% 6894Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 86m 0% 6898Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2907m 18% 5210Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 540m 3% 2157Mi 3% 
gke-xlou-cdm-default-pool-f05840a3-tnc9 1701m 10% 5141Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 4218m 26% 14384Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1076Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1336m 8% 14369Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 676m 4% 14360Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 442m 2% 2056Mi 3% 12:56:38 DEBUG --- stderr --- 12:56:38 DEBUG 12:57:37 INFO 12:57:37 INFO [loop_until]: kubectl --namespace=xlou top pods 12:57:37 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:57:37 INFO [loop_until]: OK (rc = 0) 12:57:37 DEBUG --- stdout --- 12:57:37 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 6m 5788Mi am-55f77847b7-6hcmp 8m 5797Mi am-55f77847b7-8wqjg 4m 5762Mi ds-cts-0 6m 393Mi ds-cts-1 5m 373Mi ds-cts-2 7m 355Mi ds-idrepo-0 16m 13766Mi ds-idrepo-1 10m 13787Mi ds-idrepo-2 10m 13798Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 5m 3899Mi idm-65858d8c4c-x6slf 8m 3883Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 1m 104Mi 12:57:37 DEBUG --- stderr --- 12:57:37 DEBUG 12:57:38 INFO 12:57:38 INFO [loop_until]: kubectl --namespace=xlou top node 12:57:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:57:38 INFO [loop_until]: OK (rc = 0) 12:57:38 DEBUG --- stdout --- 12:57:38 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1359Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 71m 0% 6813Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 6895Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 6894Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 5209Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 134m 0% 2159Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 5137Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 14339Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1076Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 56m 0% 14329Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 50m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 63m 0% 14317Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 64m 0% 1629Mi 2% 12:57:38 DEBUG --- stderr --- 12:57:38 DEBUG 127.0.0.1 - - [12/Aug/2023 12:58:23] "GET /monitoring/average?start_time=23-08-12_11:27:52&stop_time=23-08-12_11:56:23 HTTP/1.1" 200 - 12:58:38 INFO 12:58:38 INFO [loop_until]: kubectl --namespace=xlou top pods 12:58:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:58:38 INFO [loop_until]: OK (rc = 0) 12:58:38 DEBUG --- stdout --- 12:58:38 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 6m 5788Mi am-55f77847b7-6hcmp 8m 5796Mi am-55f77847b7-8wqjg 5m 5762Mi ds-cts-0 7m 393Mi ds-cts-1 6m 374Mi ds-cts-2 9m 355Mi ds-idrepo-0 15m 13766Mi ds-idrepo-1 10m 13787Mi ds-idrepo-2 8m 13797Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 5m 3899Mi idm-65858d8c4c-x6slf 7m 3882Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 1m 104Mi 12:58:38 DEBUG --- stderr --- 12:58:38 DEBUG 12:58:38 INFO 12:58:38 INFO [loop_until]: kubectl --namespace=xlou top node 12:58:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:58:38 INFO [loop_until]: OK (rc = 0) 12:58:38 DEBUG --- stdout --- 12:58:38 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1359Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 6816Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 6893Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 6894Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 5209Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 127m 0% 2159Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 67m 0% 5142Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 67m 0% 14340Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1074Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 60m 0% 14329Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 64m 0% 14318Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 69m 0% 1630Mi 2% 12:58:38 DEBUG --- stderr --- 12:58:38 DEBUG 12:59:38 INFO 12:59:38 INFO [loop_until]: kubectl --namespace=xlou top pods 12:59:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:59:38 INFO [loop_until]: OK (rc = 0) 12:59:38 DEBUG --- stdout --- 12:59:38 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 34m 5788Mi am-55f77847b7-6hcmp 39m 5796Mi am-55f77847b7-8wqjg 42m 5774Mi ds-cts-0 8m 393Mi ds-cts-1 7m 374Mi ds-cts-2 7m 356Mi ds-idrepo-0 2927m 13810Mi ds-idrepo-1 550m 13813Mi ds-idrepo-2 602m 13794Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 2782m 3921Mi idm-65858d8c4c-x6slf 2226m 3928Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 779m 513Mi 12:59:38 DEBUG --- stderr --- 12:59:38 DEBUG 12:59:38 INFO 12:59:38 INFO [loop_until]: kubectl --namespace=xlou top node 12:59:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 12:59:38 INFO [loop_until]: OK (rc = 0) 12:59:38 DEBUG --- stdout --- 12:59:38 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1360Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 100m 0% 6816Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 123m 0% 6906Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 103m 0% 6895Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2605m 16% 5232Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 706m 4% 2158Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2800m 17% 5186Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 3060m 19% 14373Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 968m 6% 14327Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 787m 4% 14354Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 986m 6% 2019Mi 3% 12:59:38 DEBUG --- stderr --- 12:59:38 DEBUG 13:00:38 INFO 13:00:38 INFO [loop_until]: kubectl --namespace=xlou top pods 13:00:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:00:38 INFO [loop_until]: OK (rc = 0) 13:00:38 DEBUG --- stdout --- 13:00:38 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 52m 5789Mi am-55f77847b7-6hcmp 56m 5791Mi am-55f77847b7-8wqjg 51m 5774Mi ds-cts-0 7m 393Mi ds-cts-1 6m 374Mi ds-cts-2 5m 356Mi ds-idrepo-0 4324m 13835Mi ds-idrepo-1 1017m 13853Mi ds-idrepo-2 1086m 13840Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3631m 3934Mi idm-65858d8c4c-x6slf 3022m 3942Mi lodemon-9c5f9bf5b-bl4rx 1m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 552m 520Mi 13:00:38 DEBUG --- stderr --- 13:00:38 DEBUG 13:00:38 INFO 13:00:38 INFO [loop_until]: kubectl --namespace=xlou top node 13:00:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:00:38 INFO [loop_until]: OK (rc = 0) 13:00:38 DEBUG --- stdout --- 13:00:38 DEBUG NAME CPU(cores) CPU% 
MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1356Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 124m 0% 6810Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 110m 0% 6905Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 108m 0% 6899Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3700m 23% 5243Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 1111m 6% 2158Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3246m 20% 5197Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 4297m 27% 14417Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1223m 7% 14380Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 932m 5% 14388Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 616m 3% 2040Mi 3% 13:00:38 DEBUG --- stderr --- 13:00:38 DEBUG 13:01:38 INFO 13:01:38 INFO [loop_until]: kubectl --namespace=xlou top pods 13:01:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:01:38 INFO [loop_until]: OK (rc = 0) 13:01:38 DEBUG --- stdout --- 13:01:38 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 52m 5790Mi am-55f77847b7-6hcmp 56m 5791Mi am-55f77847b7-8wqjg 49m 5774Mi ds-cts-0 6m 393Mi ds-cts-1 7m 374Mi ds-cts-2 6m 356Mi ds-idrepo-0 5140m 13815Mi ds-idrepo-1 1975m 13832Mi ds-idrepo-2 2484m 13815Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3712m 3945Mi idm-65858d8c4c-x6slf 3082m 3991Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 558m 522Mi 13:01:38 DEBUG --- stderr --- 13:01:38 DEBUG 13:01:38 INFO 13:01:38 INFO [loop_until]: kubectl --namespace=xlou top node 13:01:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:01:38 INFO [loop_until]: OK (rc = 0) 13:01:38 DEBUG --- stdout --- 13:01:38 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1351Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 117m 0% 6812Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 109m 0% 6907Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 109m 0% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3912m 24% 5250Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 1113m 7% 2162Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3250m 20% 5246Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 4770m 30% 14405Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1077Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2239m 14% 14373Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2197m 13% 14368Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 622m 3% 2043Mi 3% 13:01:38 DEBUG --- stderr --- 13:01:38 DEBUG 13:02:38 INFO 13:02:38 INFO [loop_until]: kubectl --namespace=xlou top pods 13:02:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:02:38 INFO [loop_until]: OK (rc = 0) 13:02:38 DEBUG --- stdout --- 13:02:38 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 53m 5790Mi am-55f77847b7-6hcmp 53m 5792Mi am-55f77847b7-8wqjg 52m 5774Mi ds-cts-0 8m 393Mi ds-cts-1 7m 374Mi ds-cts-2 8m 356Mi ds-idrepo-0 4943m 13835Mi ds-idrepo-1 1199m 13854Mi ds-idrepo-2 1371m 13804Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3602m 3957Mi idm-65858d8c4c-x6slf 3099m 3965Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 549m 525Mi 13:02:38 DEBUG --- stderr --- 13:02:38 DEBUG 13:02:38 INFO 13:02:38 INFO [loop_until]: kubectl --namespace=xlou top node 13:02:38 INFO 
[loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:02:38 INFO [loop_until]: OK (rc = 0) 13:02:38 DEBUG --- stdout --- 13:02:38 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1351Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 114m 0% 6813Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 108m 0% 6918Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 112m 0% 6898Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3855m 24% 5276Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 1107m 6% 2155Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3237m 20% 5219Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 4740m 29% 14425Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1076Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1371m 8% 14346Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1278m 8% 14341Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 619m 3% 2042Mi 3% 13:02:38 DEBUG --- stderr --- 13:02:38 DEBUG 13:03:38 INFO 13:03:38 INFO [loop_until]: kubectl --namespace=xlou top pods 13:03:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:03:38 INFO [loop_until]: OK (rc = 0) 13:03:38 DEBUG --- stdout --- 13:03:38 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 52m 5789Mi am-55f77847b7-6hcmp 56m 5792Mi am-55f77847b7-8wqjg 52m 5774Mi ds-cts-0 6m 395Mi ds-cts-1 6m 374Mi ds-cts-2 7m 356Mi ds-idrepo-0 4971m 13820Mi ds-idrepo-1 1288m 13810Mi ds-idrepo-2 1794m 13821Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3749m 3966Mi idm-65858d8c4c-x6slf 3077m 3983Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 553m 527Mi 13:03:38 DEBUG --- stderr --- 13:03:38 DEBUG 13:03:38 INFO 13:03:38 INFO [loop_until]: kubectl --namespace=xlou top node 13:03:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:03:39 INFO [loop_until]: OK (rc = 0) 13:03:39 DEBUG --- stdout --- 13:03:39 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1351Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 115m 0% 6810Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 113m 0% 6904Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 104m 0% 6898Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3781m 23% 5278Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1122m 7% 2162Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3195m 20% 5239Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 5229m 32% 14408Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1377m 8% 14136Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1399m 8% 14357Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 594m 3% 2047Mi 3% 13:03:39 DEBUG --- stderr --- 13:03:39 DEBUG 13:04:38 INFO 13:04:38 INFO [loop_until]: kubectl --namespace=xlou top pods 13:04:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:04:38 INFO [loop_until]: OK (rc = 0) 13:04:38 DEBUG --- stdout --- 13:04:38 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 54m 5790Mi am-55f77847b7-6hcmp 58m 5792Mi am-55f77847b7-8wqjg 52m 5774Mi ds-cts-0 6m 393Mi ds-cts-1 6m 374Mi ds-cts-2 6m 356Mi ds-idrepo-0 4361m 13785Mi ds-idrepo-1 1168m 13756Mi ds-idrepo-2 1019m 13698Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3747m 3975Mi idm-65858d8c4c-x6slf 2956m 3991Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi 
overseer-0-94cd995dc-gcl5s 544m 529Mi 13:04:38 DEBUG --- stderr --- 13:04:38 DEBUG 13:04:39 INFO 13:04:39 INFO [loop_until]: kubectl --namespace=xlou top node 13:04:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:04:39 INFO [loop_until]: OK (rc = 0) 13:04:39 DEBUG --- stdout --- 13:04:39 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1352Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 121m 0% 6811Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 111m 0% 6905Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 112m 0% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3913m 24% 5287Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1104m 6% 2160Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3182m 20% 5249Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 4518m 28% 14369Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1038m 6% 14244Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1226m 7% 14303Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 635m 3% 2063Mi 3% 13:04:39 DEBUG --- stderr --- 13:04:39 DEBUG 13:05:38 INFO 13:05:38 INFO [loop_until]: kubectl --namespace=xlou top pods 13:05:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:05:38 INFO [loop_until]: OK (rc = 0) 13:05:38 DEBUG --- stdout --- 13:05:38 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 51m 5790Mi am-55f77847b7-6hcmp 54m 5792Mi am-55f77847b7-8wqjg 50m 5775Mi ds-cts-0 6m 393Mi ds-cts-1 6m 374Mi ds-cts-2 5m 356Mi ds-idrepo-0 5195m 13826Mi ds-idrepo-1 1565m 13807Mi ds-idrepo-2 771m 13737Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3691m 3988Mi idm-65858d8c4c-x6slf 3228m 3998Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 574m 533Mi 13:05:38 DEBUG --- stderr --- 13:05:38 DEBUG 13:05:39 INFO 13:05:39 INFO [loop_until]: kubectl --namespace=xlou top node 13:05:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:05:39 INFO [loop_until]: OK (rc = 0) 13:05:39 DEBUG --- stdout --- 13:05:39 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1356Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 114m 0% 6809Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 109m 0% 6910Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 107m 0% 6899Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3941m 24% 5302Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1118m 7% 2155Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3276m 20% 5257Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 5629m 35% 14412Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 52m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 827m 5% 14288Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1884m 11% 14343Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 652m 4% 2052Mi 3% 13:05:39 DEBUG --- stderr --- 13:05:39 DEBUG 13:06:38 INFO 13:06:38 INFO [loop_until]: kubectl --namespace=xlou top pods 13:06:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:06:38 INFO [loop_until]: OK (rc = 0) 13:06:38 DEBUG --- stdout --- 13:06:38 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 51m 5790Mi am-55f77847b7-6hcmp 54m 5792Mi am-55f77847b7-8wqjg 52m 5775Mi ds-cts-0 8m 393Mi ds-cts-1 6m 374Mi ds-cts-2 5m 356Mi ds-idrepo-0 4219m 13645Mi ds-idrepo-1 908m 13663Mi ds-idrepo-2 1859m 13815Mi 
end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3655m 3997Mi idm-65858d8c4c-x6slf 2967m 4008Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 527m 535Mi 13:06:38 DEBUG --- stderr --- 13:06:38 DEBUG 13:06:39 INFO 13:06:39 INFO [loop_until]: kubectl --namespace=xlou top node 13:06:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:06:39 INFO [loop_until]: OK (rc = 0) 13:06:39 DEBUG --- stdout --- 13:06:39 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1358Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 116m 0% 6811Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 108m 0% 6904Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 108m 0% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3800m 23% 5309Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1090m 6% 2160Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3156m 19% 5260Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 4227m 26% 14238Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1076Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1605m 10% 14392Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 864m 5% 14214Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 592m 3% 2052Mi 3% 13:06:39 DEBUG --- stderr --- 13:06:39 DEBUG 13:07:38 INFO 13:07:38 INFO [loop_until]: kubectl --namespace=xlou top pods 13:07:38 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:07:39 INFO [loop_until]: OK (rc = 0) 13:07:39 DEBUG --- stdout --- 13:07:39 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 50m 5790Mi am-55f77847b7-6hcmp 57m 5792Mi am-55f77847b7-8wqjg 49m 5775Mi ds-cts-0 8m 393Mi ds-cts-1 7m 374Mi ds-cts-2 8m 356Mi ds-idrepo-0 4337m 13709Mi ds-idrepo-1 879m 13709Mi ds-idrepo-2 805m 13775Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3724m 4006Mi idm-65858d8c4c-x6slf 3007m 4018Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 568m 537Mi 13:07:39 DEBUG --- stderr --- 13:07:39 DEBUG 13:07:39 INFO 13:07:39 INFO [loop_until]: kubectl --namespace=xlou top node 13:07:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:07:39 INFO [loop_until]: OK (rc = 0) 13:07:39 DEBUG --- stdout --- 13:07:39 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1358Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 117m 0% 6812Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 110m 0% 6908Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 106m 0% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3881m 24% 5320Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1124m 7% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3207m 20% 5271Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 4298m 27% 14302Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1076Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 63m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 850m 5% 14331Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 939m 5% 14264Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 623m 3% 2054Mi 3% 13:07:39 DEBUG --- stderr --- 13:07:39 DEBUG 13:08:39 INFO 13:08:39 INFO [loop_until]: kubectl --namespace=xlou top pods 13:08:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:08:39 INFO [loop_until]: OK (rc = 0) 13:08:39 DEBUG --- stdout --- 13:08:39 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 55m 5790Mi am-55f77847b7-6hcmp 56m 5793Mi 
am-55f77847b7-8wqjg 54m 5775Mi ds-cts-0 10m 391Mi ds-cts-1 6m 374Mi ds-cts-2 7m 356Mi ds-idrepo-0 4186m 13772Mi ds-idrepo-1 1102m 13757Mi ds-idrepo-2 1045m 13826Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3640m 4014Mi idm-65858d8c4c-x6slf 2999m 4025Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 555m 540Mi 13:08:39 DEBUG --- stderr --- 13:08:39 DEBUG 13:08:39 INFO 13:08:39 INFO [loop_until]: kubectl --namespace=xlou top node 13:08:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:08:39 INFO [loop_until]: OK (rc = 0) 13:08:39 DEBUG --- stdout --- 13:08:39 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 71m 0% 1358Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 115m 0% 6809Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 115m 0% 6899Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 114m 0% 6903Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3841m 24% 5328Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1097m 6% 2161Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3153m 19% 5282Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 4286m 26% 14378Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1065m 6% 14394Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 932m 5% 14308Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 619m 3% 2056Mi 3% 13:08:39 DEBUG --- stderr --- 13:08:39 DEBUG 13:09:39 INFO 13:09:39 INFO [loop_until]: kubectl --namespace=xlou top pods 13:09:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:09:39 INFO [loop_until]: OK (rc = 0) 13:09:39 DEBUG --- stdout --- 13:09:39 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 53m 5790Mi am-55f77847b7-6hcmp 53m 5793Mi am-55f77847b7-8wqjg 57m 5774Mi ds-cts-0 9m 391Mi ds-cts-1 7m 374Mi ds-cts-2 7m 356Mi ds-idrepo-0 4434m 13854Mi ds-idrepo-1 1158m 13817Mi ds-idrepo-2 1015m 13719Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3725m 4025Mi idm-65858d8c4c-x6slf 3109m 4034Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 542m 542Mi 13:09:39 DEBUG --- stderr --- 13:09:39 DEBUG 13:09:39 INFO 13:09:39 INFO [loop_until]: kubectl --namespace=xlou top node 13:09:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:09:39 INFO [loop_until]: OK (rc = 0) 13:09:39 DEBUG --- stdout --- 13:09:39 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1357Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 115m 0% 6811Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 115m 0% 6899Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 113m 0% 6898Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3896m 24% 5336Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1122m 7% 2159Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3245m 20% 5284Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 4275m 26% 14442Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1073Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1047m 6% 14279Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1119m 7% 14381Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 605m 3% 2060Mi 3% 13:09:39 DEBUG --- stderr --- 13:09:39 DEBUG 13:10:39 INFO 13:10:39 INFO [loop_until]: kubectl --namespace=xlou top pods 13:10:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:10:39 INFO [loop_until]: OK (rc = 0) 13:10:39 DEBUG --- 
stdout --- 13:10:39 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 54m 5791Mi am-55f77847b7-6hcmp 53m 5793Mi am-55f77847b7-8wqjg 55m 5777Mi ds-cts-0 6m 391Mi ds-cts-1 6m 374Mi ds-cts-2 12m 363Mi ds-idrepo-0 4276m 13716Mi ds-idrepo-1 1060m 13594Mi ds-idrepo-2 913m 13763Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3670m 4035Mi idm-65858d8c4c-x6slf 2974m 4040Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 558m 544Mi 13:10:39 DEBUG --- stderr --- 13:10:39 DEBUG 13:10:39 INFO 13:10:39 INFO [loop_until]: kubectl --namespace=xlou top node 13:10:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:10:39 INFO [loop_until]: OK (rc = 0) 13:10:39 DEBUG --- stdout --- 13:10:39 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1358Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 117m 0% 6812Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 111m 0% 6903Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 115m 0% 6899Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3829m 24% 5346Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1115m 7% 2161Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3209m 20% 5306Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 4309m 27% 14313Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1077Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 875m 5% 14323Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1078m 6% 14144Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 623m 3% 2061Mi 3% 13:10:39 DEBUG --- stderr --- 13:10:39 DEBUG 13:11:39 INFO 13:11:39 INFO [loop_until]: kubectl --namespace=xlou top pods 13:11:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:11:39 INFO [loop_until]: OK (rc = 0) 13:11:39 DEBUG --- stdout --- 13:11:39 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 49m 5791Mi am-55f77847b7-6hcmp 57m 5793Mi am-55f77847b7-8wqjg 49m 5776Mi ds-cts-0 6m 391Mi ds-cts-1 7m 374Mi ds-cts-2 5m 353Mi ds-idrepo-0 4357m 13769Mi ds-idrepo-1 790m 13625Mi ds-idrepo-2 1554m 13821Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3607m 4043Mi idm-65858d8c4c-x6slf 3014m 4051Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 552m 546Mi 13:11:39 DEBUG --- stderr --- 13:11:39 DEBUG 13:11:39 INFO 13:11:39 INFO [loop_until]: kubectl --namespace=xlou top node 13:11:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:11:40 INFO [loop_until]: OK (rc = 0) 13:11:40 DEBUG --- stdout --- 13:11:40 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1357Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 112m 0% 6809Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 109m 0% 6905Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 113m 0% 6912Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3833m 24% 5356Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1102m 6% 2162Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3189m 20% 5303Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 4516m 28% 14373Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1074Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1338m 8% 14392Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 944m 5% 14187Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 629m 3% 2065Mi 3% 13:11:40 DEBUG --- stderr --- 13:11:40 DEBUG 13:12:39 INFO 13:12:39 INFO [loop_until]: kubectl 
--namespace=xlou top pods 13:12:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:12:39 INFO [loop_until]: OK (rc = 0) 13:12:39 DEBUG --- stdout --- 13:12:39 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 53m 5791Mi am-55f77847b7-6hcmp 57m 5793Mi am-55f77847b7-8wqjg 51m 5777Mi ds-cts-0 6m 391Mi ds-cts-1 11m 374Mi ds-cts-2 5m 353Mi ds-idrepo-0 4318m 13831Mi ds-idrepo-1 854m 13683Mi ds-idrepo-2 774m 13658Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3601m 4055Mi idm-65858d8c4c-x6slf 2988m 4058Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 552m 548Mi 13:12:39 DEBUG --- stderr --- 13:12:39 DEBUG 13:12:40 INFO 13:12:40 INFO [loop_until]: kubectl --namespace=xlou top node 13:12:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:12:40 INFO [loop_until]: OK (rc = 0) 13:12:40 DEBUG --- stdout --- 13:12:40 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1355Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 119m 0% 6811Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 111m 0% 6910Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 110m 0% 6901Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3841m 24% 5370Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1090m 6% 2161Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3153m 19% 5312Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 4337m 27% 14438Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1073Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1210m 7% 14213Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1102m 6% 14274Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 596m 3% 2069Mi 3% 13:12:40 DEBUG --- stderr --- 13:12:40 DEBUG 13:13:39 INFO 13:13:39 INFO [loop_until]: kubectl --namespace=xlou top pods 13:13:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:13:39 INFO [loop_until]: OK (rc = 0) 13:13:39 DEBUG --- stdout --- 13:13:39 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 53m 5791Mi am-55f77847b7-6hcmp 56m 5793Mi am-55f77847b7-8wqjg 53m 5777Mi ds-cts-0 6m 391Mi ds-cts-1 11m 374Mi ds-cts-2 6m 353Mi ds-idrepo-0 4268m 13791Mi ds-idrepo-1 889m 13632Mi ds-idrepo-2 1243m 13689Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3704m 4064Mi idm-65858d8c4c-x6slf 3168m 4066Mi lodemon-9c5f9bf5b-bl4rx 1m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 556m 551Mi 13:13:39 DEBUG --- stderr --- 13:13:39 DEBUG 13:13:40 INFO 13:13:40 INFO [loop_until]: kubectl --namespace=xlou top node 13:13:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:13:40 INFO [loop_until]: OK (rc = 0) 13:13:40 DEBUG --- stdout --- 13:13:40 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1356Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 115m 0% 6814Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 106m 0% 6908Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 110m 0% 6901Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3807m 23% 5381Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1091m 6% 2163Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3316m 20% 5320Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 4425m 27% 14389Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1073Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1374m 8% 14269Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1151m 7% 14197Mi 
24% gke-xlou-cdm-frontend-a8771548-k40m 638m 4% 2072Mi 3% 13:13:40 DEBUG --- stderr --- 13:13:40 DEBUG 13:14:39 INFO 13:14:39 INFO [loop_until]: kubectl --namespace=xlou top pods 13:14:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:14:39 INFO [loop_until]: OK (rc = 0) 13:14:39 DEBUG --- stdout --- 13:14:39 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 56m 5786Mi am-55f77847b7-6hcmp 52m 5799Mi am-55f77847b7-8wqjg 52m 5776Mi ds-cts-0 6m 391Mi ds-cts-1 7m 374Mi ds-cts-2 6m 353Mi ds-idrepo-0 4839m 13842Mi ds-idrepo-1 1092m 13667Mi ds-idrepo-2 1201m 13620Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3669m 4074Mi idm-65858d8c4c-x6slf 2988m 4075Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 547m 554Mi 13:14:39 DEBUG --- stderr --- 13:14:39 DEBUG 13:14:40 INFO 13:14:40 INFO [loop_until]: kubectl --namespace=xlou top node 13:14:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:14:40 INFO [loop_until]: OK (rc = 0) 13:14:40 DEBUG --- stdout --- 13:14:40 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1366Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 115m 0% 6821Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 113m 0% 6906Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 114m 0% 6891Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3886m 24% 5384Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1098m 6% 2162Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3160m 19% 5325Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 4591m 28% 14450Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1072Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1341m 8% 14185Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1233m 7% 14235Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 620m 3% 2074Mi 3% 13:14:40 DEBUG --- stderr --- 13:14:40 DEBUG 13:15:39 INFO 13:15:39 INFO [loop_until]: kubectl --namespace=xlou top pods 13:15:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:15:39 INFO [loop_until]: OK (rc = 0) 13:15:39 DEBUG --- stdout --- 13:15:39 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 53m 5786Mi am-55f77847b7-6hcmp 55m 5799Mi am-55f77847b7-8wqjg 53m 5776Mi ds-cts-0 6m 391Mi ds-cts-1 6m 374Mi ds-cts-2 7m 353Mi ds-idrepo-0 4277m 13793Mi ds-idrepo-1 1070m 13634Mi ds-idrepo-2 802m 13658Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3608m 4084Mi idm-65858d8c4c-x6slf 3020m 4082Mi lodemon-9c5f9bf5b-bl4rx 1m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 550m 557Mi 13:15:39 DEBUG --- stderr --- 13:15:39 DEBUG 13:15:40 INFO 13:15:40 INFO [loop_until]: kubectl --namespace=xlou top node 13:15:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:15:40 INFO [loop_until]: OK (rc = 0) 13:15:40 DEBUG --- stdout --- 13:15:40 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1357Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 111m 0% 6819Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 113m 0% 6904Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 110m 0% 6894Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3889m 24% 5394Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1074m 6% 2166Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3257m 20% 5336Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 4324m 27% 14394Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 
1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1033m 6% 14227Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1151m 7% 14191Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 605m 3% 2077Mi 3% 13:15:40 DEBUG --- stderr --- 13:15:40 DEBUG 13:16:39 INFO 13:16:39 INFO [loop_until]: kubectl --namespace=xlou top pods 13:16:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:16:39 INFO [loop_until]: OK (rc = 0) 13:16:39 DEBUG --- stdout --- 13:16:39 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 50m 5786Mi am-55f77847b7-6hcmp 53m 5799Mi am-55f77847b7-8wqjg 49m 5776Mi ds-cts-0 7m 391Mi ds-cts-1 6m 374Mi ds-cts-2 5m 353Mi ds-idrepo-0 4639m 13822Mi ds-idrepo-1 1713m 13672Mi ds-idrepo-2 1109m 13705Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3488m 4093Mi idm-65858d8c4c-x6slf 3138m 4092Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 569m 558Mi 13:16:39 DEBUG --- stderr --- 13:16:39 DEBUG 13:16:40 INFO 13:16:40 INFO [loop_until]: kubectl --namespace=xlou top node 13:16:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:16:40 INFO [loop_until]: OK (rc = 0) 13:16:40 DEBUG --- stdout --- 13:16:40 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1360Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 113m 0% 6819Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 106m 0% 6903Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 107m 0% 6897Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3817m 24% 5405Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1110m 6% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3149m 19% 5349Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 5076m 31% 14440Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1072m 6% 14279Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 51m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1557m 9% 14236Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 632m 3% 2080Mi 3% 13:16:40 DEBUG --- stderr --- 13:16:40 DEBUG 13:17:39 INFO 13:17:39 INFO [loop_until]: kubectl --namespace=xlou top pods 13:17:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:17:39 INFO [loop_until]: OK (rc = 0) 13:17:39 DEBUG --- stdout --- 13:17:39 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 51m 5786Mi am-55f77847b7-6hcmp 56m 5799Mi am-55f77847b7-8wqjg 51m 5776Mi ds-cts-0 6m 392Mi ds-cts-1 8m 374Mi ds-cts-2 8m 353Mi ds-idrepo-0 4598m 13730Mi ds-idrepo-1 1254m 13646Mi ds-idrepo-2 842m 13644Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3893m 4107Mi idm-65858d8c4c-x6slf 3064m 4101Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 567m 561Mi 13:17:39 DEBUG --- stderr --- 13:17:39 DEBUG 13:17:40 INFO 13:17:40 INFO [loop_until]: kubectl --namespace=xlou top node 13:17:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:17:40 INFO [loop_until]: OK (rc = 0) 13:17:40 DEBUG --- stdout --- 13:17:40 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1360Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 117m 0% 6820Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 110m 0% 6906Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 110m 0% 6894Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3977m 25% 5419Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1076m 6% 2158Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3241m 
20% 5353Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 4516m 28% 14339Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 939m 5% 14216Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1212m 7% 14223Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 620m 3% 2082Mi 3% 13:17:40 DEBUG --- stderr --- 13:17:40 DEBUG 13:18:39 INFO 13:18:39 INFO [loop_until]: kubectl --namespace=xlou top pods 13:18:39 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:18:39 INFO [loop_until]: OK (rc = 0) 13:18:39 DEBUG --- stdout --- 13:18:39 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 51m 5787Mi am-55f77847b7-6hcmp 57m 5799Mi am-55f77847b7-8wqjg 54m 5776Mi ds-cts-0 6m 392Mi ds-cts-1 6m 374Mi ds-cts-2 5m 353Mi ds-idrepo-0 4105m 13781Mi ds-idrepo-1 810m 13691Mi ds-idrepo-2 945m 13673Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3849m 4115Mi idm-65858d8c4c-x6slf 2987m 4108Mi lodemon-9c5f9bf5b-bl4rx 5m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 567m 563Mi 13:18:39 DEBUG --- stderr --- 13:18:39 DEBUG 13:18:40 INFO 13:18:40 INFO [loop_until]: kubectl --namespace=xlou top node 13:18:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:18:40 INFO [loop_until]: OK (rc = 0) 13:18:40 DEBUG --- stdout --- 13:18:40 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 71m 0% 1358Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 115m 0% 6821Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 112m 0% 6909Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 106m 0% 6895Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3917m 24% 5426Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1115m 7% 2158Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3242m 20% 5363Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 4190m 26% 14395Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1071Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 867m 5% 14252Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 863m 5% 14256Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 609m 3% 2082Mi 3% 13:18:40 DEBUG --- stderr --- 13:18:40 DEBUG 13:19:40 INFO 13:19:40 INFO [loop_until]: kubectl --namespace=xlou top pods 13:19:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:19:40 INFO [loop_until]: OK (rc = 0) 13:19:40 DEBUG --- stdout --- 13:19:40 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 52m 5787Mi am-55f77847b7-6hcmp 53m 5800Mi am-55f77847b7-8wqjg 52m 5776Mi ds-cts-0 6m 391Mi ds-cts-1 6m 374Mi ds-cts-2 6m 354Mi ds-idrepo-0 4649m 13823Mi ds-idrepo-1 1132m 13721Mi ds-idrepo-2 1293m 13677Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3685m 4125Mi idm-65858d8c4c-x6slf 2979m 4116Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 544m 565Mi 13:19:40 DEBUG --- stderr --- 13:19:40 DEBUG 13:19:40 INFO 13:19:40 INFO [loop_until]: kubectl --namespace=xlou top node 13:19:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:19:40 INFO [loop_until]: OK (rc = 0) 13:19:40 DEBUG --- stdout --- 13:19:40 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1359Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 114m 0% 6818Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 112m 0% 6904Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 109m 0% 6895Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-bf2g 3710m 23% 5435Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1053m 6% 2159Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3161m 19% 5371Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 4626m 29% 14431Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1045m 6% 14261Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1112m 6% 14297Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 605m 3% 2083Mi 3% 13:19:40 DEBUG --- stderr --- 13:19:40 DEBUG 13:20:40 INFO 13:20:40 INFO [loop_until]: kubectl --namespace=xlou top pods 13:20:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:20:40 INFO [loop_until]: OK (rc = 0) 13:20:40 DEBUG --- stdout --- 13:20:40 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 52m 5787Mi am-55f77847b7-6hcmp 55m 5800Mi am-55f77847b7-8wqjg 50m 5776Mi ds-cts-0 6m 393Mi ds-cts-1 6m 374Mi ds-cts-2 6m 354Mi ds-idrepo-0 4316m 13848Mi ds-idrepo-1 1305m 13700Mi ds-idrepo-2 779m 13720Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3623m 4137Mi idm-65858d8c4c-x6slf 3107m 4122Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 536m 568Mi 13:20:40 DEBUG --- stderr --- 13:20:40 DEBUG 13:20:41 INFO 13:20:41 INFO [loop_until]: kubectl --namespace=xlou top node 13:20:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:20:41 INFO [loop_until]: OK (rc = 0) 13:20:41 DEBUG --- stdout --- 13:20:41 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1358Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 113m 0% 6817Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 111m 0% 6906Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 108m 0% 6897Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3841m 24% 5446Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1117m 7% 2162Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3144m 19% 5377Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 4235m 26% 14465Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1072Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 846m 5% 14295Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1197m 7% 14272Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 611m 3% 2086Mi 3% 13:20:41 DEBUG --- stderr --- 13:20:41 DEBUG 13:21:40 INFO 13:21:40 INFO [loop_until]: kubectl --namespace=xlou top pods 13:21:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:21:40 INFO [loop_until]: OK (rc = 0) 13:21:40 DEBUG --- stdout --- 13:21:40 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 52m 5787Mi am-55f77847b7-6hcmp 54m 5800Mi am-55f77847b7-8wqjg 53m 5776Mi ds-cts-0 6m 393Mi ds-cts-1 6m 374Mi ds-cts-2 6m 353Mi ds-idrepo-0 4009m 13854Mi ds-idrepo-1 1367m 13734Mi ds-idrepo-2 1092m 13744Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3714m 4144Mi idm-65858d8c4c-x6slf 3116m 4130Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 541m 570Mi 13:21:40 DEBUG --- stderr --- 13:21:40 DEBUG 13:21:41 INFO 13:21:41 INFO [loop_until]: kubectl --namespace=xlou top node 13:21:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:21:41 INFO [loop_until]: OK (rc = 0) 13:21:41 DEBUG --- stdout --- 13:21:41 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1359Mi 2% 
gke-xlou-cdm-default-pool-f05840a3-5pbc 114m 0% 6817Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 117m 0% 6907Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 114m 0% 6896Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3836m 24% 5457Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1089m 6% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3287m 20% 5382Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 4287m 26% 14466Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1073Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1244m 7% 14331Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1375m 8% 14312Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 611m 3% 2086Mi 3% 13:21:41 DEBUG --- stderr --- 13:21:41 DEBUG 13:22:40 INFO 13:22:40 INFO [loop_until]: kubectl --namespace=xlou top pods 13:22:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:22:40 INFO [loop_until]: OK (rc = 0) 13:22:40 DEBUG --- stdout --- 13:22:40 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 49m 5787Mi am-55f77847b7-6hcmp 56m 5800Mi am-55f77847b7-8wqjg 49m 5776Mi ds-cts-0 6m 392Mi ds-cts-1 9m 375Mi ds-cts-2 5m 353Mi ds-idrepo-0 4080m 13853Mi ds-idrepo-1 760m 13765Mi ds-idrepo-2 994m 13771Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3627m 4153Mi idm-65858d8c4c-x6slf 2996m 4138Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 543m 573Mi 13:22:40 DEBUG --- stderr --- 13:22:40 DEBUG 13:22:41 INFO 13:22:41 INFO [loop_until]: kubectl --namespace=xlou top node 13:22:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:22:41 INFO [loop_until]: OK (rc = 0) 13:22:41 DEBUG --- stdout --- 13:22:41 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1357Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 111m 0% 6819Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 111m 0% 6904Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 108m 0% 6897Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3900m 24% 5468Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1109m 6% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3171m 19% 5391Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 4033m 25% 14472Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1074Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 853m 5% 14357Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 838m 5% 14339Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 632m 3% 2090Mi 3% 13:22:41 DEBUG --- stderr --- 13:22:41 DEBUG 13:23:40 INFO 13:23:40 INFO [loop_until]: kubectl --namespace=xlou top pods 13:23:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:23:40 INFO [loop_until]: OK (rc = 0) 13:23:40 DEBUG --- stdout --- 13:23:40 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 52m 5787Mi am-55f77847b7-6hcmp 55m 5800Mi am-55f77847b7-8wqjg 52m 5776Mi ds-cts-0 10m 392Mi ds-cts-1 6m 374Mi ds-cts-2 5m 353Mi ds-idrepo-0 3949m 13854Mi ds-idrepo-1 851m 13788Mi ds-idrepo-2 916m 13804Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3763m 4162Mi idm-65858d8c4c-x6slf 2993m 4147Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 548m 574Mi 13:23:40 DEBUG --- stderr --- 13:23:40 DEBUG 13:23:41 INFO 13:23:41 INFO [loop_until]: kubectl --namespace=xlou top node 13:23:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:23:41 INFO [loop_until]: OK (rc = 
0) 13:23:41 DEBUG --- stdout --- 13:23:41 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1354Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 115m 0% 6819Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 114m 0% 6906Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 112m 0% 6898Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3986m 25% 5478Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1081m 6% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3106m 19% 5403Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 4206m 26% 14469Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1073Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 845m 5% 14383Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 880m 5% 14360Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 614m 3% 2091Mi 3% 13:23:41 DEBUG --- stderr --- 13:23:41 DEBUG 13:24:40 INFO 13:24:40 INFO [loop_until]: kubectl --namespace=xlou top pods 13:24:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:24:40 INFO [loop_until]: OK (rc = 0) 13:24:40 DEBUG --- stdout --- 13:24:40 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 53m 5787Mi am-55f77847b7-6hcmp 57m 5800Mi am-55f77847b7-8wqjg 55m 5776Mi ds-cts-0 6m 394Mi ds-cts-1 6m 374Mi ds-cts-2 6m 353Mi ds-idrepo-0 4085m 13850Mi ds-idrepo-1 1037m 13809Mi ds-idrepo-2 1034m 13820Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3685m 4171Mi idm-65858d8c4c-x6slf 3058m 4153Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 546m 577Mi 13:24:40 DEBUG --- stderr --- 13:24:40 DEBUG 13:24:41 INFO 13:24:41 INFO [loop_until]: kubectl --namespace=xlou top node 13:24:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:24:41 INFO [loop_until]: OK (rc = 0) 13:24:41 DEBUG --- stdout --- 13:24:41 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1356Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 118m 0% 6820Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 114m 0% 6909Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 112m 0% 6897Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3924m 24% 5486Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1099m 6% 2159Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3217m 20% 5409Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 4182m 26% 14467Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1073Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 943m 5% 14405Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1210m 7% 14388Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 599m 3% 2094Mi 3% 13:24:41 DEBUG --- stderr --- 13:24:41 DEBUG 13:25:40 INFO 13:25:40 INFO [loop_until]: kubectl --namespace=xlou top pods 13:25:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:25:40 INFO [loop_until]: OK (rc = 0) 13:25:40 DEBUG --- stdout --- 13:25:40 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 51m 5787Mi am-55f77847b7-6hcmp 55m 5800Mi am-55f77847b7-8wqjg 56m 5776Mi ds-cts-0 6m 394Mi ds-cts-1 6m 374Mi ds-cts-2 11m 357Mi ds-idrepo-0 3888m 13851Mi ds-idrepo-1 853m 13839Mi ds-idrepo-2 850m 13853Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3612m 4183Mi idm-65858d8c4c-x6slf 2993m 4162Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 533m 579Mi 13:25:40 DEBUG --- stderr --- 13:25:40 DEBUG 13:25:41 INFO 13:25:41 INFO 
[loop_until]: kubectl --namespace=xlou top node 13:25:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:25:41 INFO [loop_until]: OK (rc = 0) 13:25:41 DEBUG --- stdout --- 13:25:41 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1356Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 114m 0% 6816Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 120m 0% 6908Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 110m 0% 6894Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3856m 24% 5494Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1096m 6% 2163Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3189m 20% 5414Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 4062m 25% 14478Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 64m 0% 1078Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 886m 5% 14447Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1102m 6% 14433Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 612m 3% 2097Mi 3% 13:25:41 DEBUG --- stderr --- 13:25:41 DEBUG 13:26:40 INFO 13:26:40 INFO [loop_until]: kubectl --namespace=xlou top pods 13:26:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:26:40 INFO [loop_until]: OK (rc = 0) 13:26:40 DEBUG --- stdout --- 13:26:40 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 55m 5787Mi am-55f77847b7-6hcmp 54m 5800Mi am-55f77847b7-8wqjg 52m 5776Mi ds-cts-0 6m 395Mi ds-cts-1 6m 377Mi ds-cts-2 6m 357Mi ds-idrepo-0 4042m 13823Mi ds-idrepo-1 1138m 13832Mi ds-idrepo-2 836m 13823Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3663m 4193Mi idm-65858d8c4c-x6slf 3157m 4173Mi lodemon-9c5f9bf5b-bl4rx 1m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 549m 578Mi 13:26:40 DEBUG --- stderr --- 13:26:40 DEBUG 13:26:41 INFO 13:26:41 INFO [loop_until]: kubectl --namespace=xlou top node 13:26:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:26:41 INFO [loop_until]: OK (rc = 0) 13:26:41 DEBUG --- stdout --- 13:26:41 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1357Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 116m 0% 6820Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 113m 0% 6907Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 112m 0% 6899Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3731m 23% 5504Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1104m 6% 2163Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3182m 20% 5426Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 4137m 26% 14450Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1076Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 907m 5% 14420Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 935m 5% 14414Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 630m 3% 2094Mi 3% 13:26:41 DEBUG --- stderr --- 13:26:41 DEBUG 13:27:40 INFO 13:27:40 INFO [loop_until]: kubectl --namespace=xlou top pods 13:27:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:27:40 INFO [loop_until]: OK (rc = 0) 13:27:40 DEBUG --- stdout --- 13:27:40 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 54m 5788Mi am-55f77847b7-6hcmp 55m 5800Mi am-55f77847b7-8wqjg 50m 5777Mi ds-cts-0 7m 395Mi ds-cts-1 14m 378Mi ds-cts-2 6m 357Mi ds-idrepo-0 4072m 13823Mi ds-idrepo-1 1090m 13828Mi ds-idrepo-2 875m 13826Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3727m 4199Mi idm-65858d8c4c-x6slf 3156m 4180Mi lodemon-9c5f9bf5b-bl4rx 2m 
66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 555m 579Mi 13:27:40 DEBUG --- stderr --- 13:27:40 DEBUG 13:27:41 INFO 13:27:41 INFO [loop_until]: kubectl --namespace=xlou top node 13:27:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:27:42 INFO [loop_until]: OK (rc = 0) 13:27:42 DEBUG --- stdout --- 13:27:42 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1356Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 115m 0% 6819Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 116m 0% 6908Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 112m 0% 6898Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3805m 23% 5512Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1116m 7% 2176Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3310m 20% 5436Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 4145m 26% 14449Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1010m 6% 14424Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1226m 7% 14420Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 609m 3% 2096Mi 3% 13:27:42 DEBUG --- stderr --- 13:27:42 DEBUG 13:28:40 INFO 13:28:40 INFO [loop_until]: kubectl --namespace=xlou top pods 13:28:40 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:28:41 INFO [loop_until]: OK (rc = 0) 13:28:41 DEBUG --- stdout --- 13:28:41 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 51m 5788Mi am-55f77847b7-6hcmp 54m 5800Mi am-55f77847b7-8wqjg 50m 5776Mi ds-cts-0 6m 395Mi ds-cts-1 6m 378Mi ds-cts-2 5m 357Mi ds-idrepo-0 4671m 13800Mi ds-idrepo-1 952m 13809Mi ds-idrepo-2 1737m 13810Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 3646m 4211Mi idm-65858d8c4c-x6slf 3112m 4186Mi lodemon-9c5f9bf5b-bl4rx 4m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 538m 579Mi 13:28:41 DEBUG --- stderr --- 13:28:41 DEBUG 13:28:42 INFO 13:28:42 INFO [loop_until]: kubectl --namespace=xlou top node 13:28:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:28:42 INFO [loop_until]: OK (rc = 0) 13:28:42 DEBUG --- stdout --- 13:28:42 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1357Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 116m 0% 6820Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 113m 0% 6908Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 112m 0% 6898Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3900m 24% 5521Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1116m 7% 2163Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3275m 20% 5443Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 4709m 29% 14437Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1691m 10% 14455Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1011m 6% 14406Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 615m 3% 2095Mi 3% 13:28:42 DEBUG --- stderr --- 13:28:42 DEBUG 13:29:41 INFO 13:29:41 INFO [loop_until]: kubectl --namespace=xlou top pods 13:29:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:29:41 INFO [loop_until]: OK (rc = 0) 13:29:41 DEBUG --- stdout --- 13:29:41 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 6m 5788Mi am-55f77847b7-6hcmp 10m 5800Mi am-55f77847b7-8wqjg 5m 5776Mi ds-cts-0 6m 395Mi ds-cts-1 6m 378Mi ds-cts-2 5m 357Mi ds-idrepo-0 83m 13814Mi ds-idrepo-1 350m 13816Mi 
ds-idrepo-2 141m 13795Mi
end-user-ui-6845bc78c7-hprgv 1m 4Mi
idm-65858d8c4c-gwvpj 8m 4213Mi
idm-65858d8c4c-x6slf 7m 4189Mi
lodemon-9c5f9bf5b-bl4rx 2m 66Mi
login-ui-74d6fb46c-ms8nm 1m 3Mi
overseer-0-94cd995dc-gcl5s 105m 162Mi
13:29:41 DEBUG --- stderr ---
13:29:41 DEBUG
13:29:42 INFO
13:29:42 INFO [loop_until]: kubectl --namespace=xlou top node
13:29:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
13:29:42 INFO [loop_until]: OK (rc = 0)
13:29:42 DEBUG --- stdout ---
13:29:42 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1356Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 6819Mi 11%
gke-xlou-cdm-default-pool-f05840a3-976h 73m 0% 6907Mi 11%
gke-xlou-cdm-default-pool-f05840a3-9p4b 61m 0% 6898Mi 11%
gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 5522Mi 9%
gke-xlou-cdm-default-pool-f05840a3-h81k 131m 0% 2162Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 5445Mi 9%
gke-xlou-cdm-ds-32e4dcb1-1l6p 171m 1% 14448Mi 24%
gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1078Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1128Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 184m 1% 14399Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 49m 0% 1106Mi 1%
gke-xlou-cdm-ds-32e4dcb1-x4wx 390m 2% 14412Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 174m 1% 1688Mi 2%
13:29:42 DEBUG --- stderr ---
13:29:42 DEBUG
13:30:41 INFO
13:30:41 INFO [loop_until]: kubectl --namespace=xlou top pods
13:30:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
13:30:41 INFO [loop_until]: OK (rc = 0)
13:30:41 DEBUG --- stdout ---
13:30:41 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-vlcrt 1m 4Mi
am-55f77847b7-5g27b 8m 5788Mi
am-55f77847b7-6hcmp 9m 5800Mi
am-55f77847b7-8wqjg 6m 5776Mi
ds-cts-0 6m 395Mi
ds-cts-1 5m 378Mi
ds-cts-2 5m 357Mi
ds-idrepo-0 15m 13814Mi
ds-idrepo-1 10m 13815Mi
ds-idrepo-2 9m 13795Mi
end-user-ui-6845bc78c7-hprgv 1m 4Mi
idm-65858d8c4c-gwvpj 8m 4212Mi
idm-65858d8c4c-x6slf 7m 4189Mi
lodemon-9c5f9bf5b-bl4rx 2m 66Mi
login-ui-74d6fb46c-ms8nm 1m 3Mi
overseer-0-94cd995dc-gcl5s 1m 162Mi
13:30:41 DEBUG --- stderr ---
13:30:41 DEBUG
13:30:42 INFO
13:30:42 INFO [loop_until]: kubectl --namespace=xlou top node
13:30:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
13:30:42 INFO [loop_until]: OK (rc = 0)
13:30:42 DEBUG --- stdout ---
13:30:42 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 71m 0% 1359Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 6820Mi 11%
gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 6904Mi 11%
gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 6898Mi 11%
gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 5522Mi 9%
gke-xlou-cdm-default-pool-f05840a3-h81k 127m 0% 2160Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 5446Mi 9%
gke-xlou-cdm-ds-32e4dcb1-1l6p 68m 0% 14452Mi 24%
gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1078Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1126Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 59m 0% 14398Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 50m 0% 1109Mi 1%
gke-xlou-cdm-ds-32e4dcb1-x4wx 63m 0% 14412Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 69m 0% 1683Mi 2%
13:30:42 DEBUG --- stderr ---
13:30:42 DEBUG
127.0.0.1 - - [12/Aug/2023 13:30:54] "GET /monitoring/average?start_time=23-08-12_12:00:23&stop_time=23-08-12_12:28:54 HTTP/1.1" 200 -
13:31:41 INFO
13:31:41 INFO [loop_until]: kubectl --namespace=xlou top pods
13:31:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
13:31:41 INFO [loop_until]: OK (rc = 0)
13:31:41 DEBUG --- stdout ---
13:31:41 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 6m 5788Mi am-55f77847b7-6hcmp 9m 5800Mi am-55f77847b7-8wqjg 6m 5777Mi ds-cts-0 6m 395Mi ds-cts-1 5m 379Mi ds-cts-2 6m 357Mi ds-idrepo-0 14m 13814Mi ds-idrepo-1 10m 13816Mi ds-idrepo-2 14m 13795Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 16m 4213Mi idm-65858d8c4c-x6slf 7m 4189Mi lodemon-9c5f9bf5b-bl4rx 6m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 2298m 446Mi 13:31:41 DEBUG --- stderr --- 13:31:41 DEBUG 13:31:42 INFO 13:31:42 INFO [loop_until]: kubectl --namespace=xlou top node 13:31:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:31:42 INFO [loop_until]: OK (rc = 0) 13:31:42 DEBUG --- stdout --- 13:31:42 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1359Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 89m 0% 6817Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 6907Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 6898Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 5522Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 133m 0% 2153Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 165m 1% 5455Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 14453Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 58m 0% 14400Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 49m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 99m 0% 14416Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1935m 12% 1964Mi 3% 13:31:42 DEBUG --- stderr --- 13:31:42 DEBUG 13:32:41 INFO 13:32:41 INFO [loop_until]: kubectl --namespace=xlou top pods 13:32:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:32:41 INFO [loop_until]: OK (rc = 0) 13:32:41 DEBUG --- stdout --- 13:32:41 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 80m 5790Mi am-55f77847b7-6hcmp 82m 5801Mi am-55f77847b7-8wqjg 76m 5777Mi ds-cts-0 6m 395Mi ds-cts-1 7m 379Mi ds-cts-2 6m 357Mi ds-idrepo-0 5105m 13784Mi ds-idrepo-1 3781m 13834Mi ds-idrepo-2 3398m 13828Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 2334m 4305Mi idm-65858d8c4c-x6slf 2023m 4259Mi lodemon-9c5f9bf5b-bl4rx 1m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 577m 570Mi 13:32:41 DEBUG --- stderr --- 13:32:41 DEBUG 13:32:42 INFO 13:32:42 INFO [loop_until]: kubectl --namespace=xlou top node 13:32:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:32:42 INFO [loop_until]: OK (rc = 0) 13:32:42 DEBUG --- stdout --- 13:32:42 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1360Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 137m 0% 6820Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 141m 0% 6911Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 135m 0% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2572m 16% 5611Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 871m 5% 2170Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2148m 13% 5533Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 4975m 31% 14449Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3422m 21% 14431Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4060m 25% 14402Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 615m 3% 2087Mi 3% 13:32:42 DEBUG --- stderr --- 13:32:42 DEBUG 13:33:41 INFO 13:33:41 INFO [loop_until]: kubectl --namespace=xlou top pods 13:33:41 INFO [loop_until]: (max_time=180, interval=5, 
expected_rc=[0] 13:33:41 INFO [loop_until]: OK (rc = 0) 13:33:41 DEBUG --- stdout --- 13:33:41 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 77m 5790Mi am-55f77847b7-6hcmp 79m 5801Mi am-55f77847b7-8wqjg 71m 5777Mi ds-cts-0 6m 395Mi ds-cts-1 14m 383Mi ds-cts-2 7m 357Mi ds-idrepo-0 5926m 13810Mi ds-idrepo-1 3399m 13815Mi ds-idrepo-2 3382m 13819Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 2445m 4328Mi idm-65858d8c4c-x6slf 1957m 4324Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 484m 578Mi 13:33:41 DEBUG --- stderr --- 13:33:41 DEBUG 13:33:42 INFO 13:33:42 INFO [loop_until]: kubectl --namespace=xlou top node 13:33:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:33:42 INFO [loop_until]: OK (rc = 0) 13:33:42 DEBUG --- stdout --- 13:33:42 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 71m 0% 1359Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 137m 0% 6830Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 140m 0% 6911Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 134m 0% 6901Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2608m 16% 5636Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 896m 5% 2169Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2130m 13% 5573Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 6012m 37% 14461Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1076Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3350m 21% 14441Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3242m 20% 14437Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 564m 3% 2093Mi 3% 13:33:42 DEBUG --- stderr --- 13:33:42 DEBUG 13:34:41 INFO 13:34:41 INFO [loop_until]: kubectl --namespace=xlou top pods 13:34:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:34:41 INFO [loop_until]: OK (rc = 0) 13:34:41 DEBUG --- stdout --- 13:34:41 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 75m 5790Mi am-55f77847b7-6hcmp 75m 5801Mi am-55f77847b7-8wqjg 69m 5777Mi ds-cts-0 6m 395Mi ds-cts-1 7m 376Mi ds-cts-2 5m 357Mi ds-idrepo-0 6059m 13828Mi ds-idrepo-1 3637m 13823Mi ds-idrepo-2 3853m 13806Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 2118m 4337Mi idm-65858d8c4c-x6slf 1783m 4328Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 416m 579Mi 13:34:41 DEBUG --- stderr --- 13:34:41 DEBUG 13:34:42 INFO 13:34:42 INFO [loop_until]: kubectl --namespace=xlou top node 13:34:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:34:42 INFO [loop_until]: OK (rc = 0) 13:34:42 DEBUG --- stdout --- 13:34:42 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1359Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 136m 0% 6819Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 130m 0% 6906Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 132m 0% 6901Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2349m 14% 5644Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 880m 5% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1939m 12% 5584Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 5702m 35% 14440Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 51m 0% 1078Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3759m 23% 14424Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3559m 22% 14417Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 507m 3% 2094Mi 3% 13:34:42 DEBUG --- 
stderr --- 13:34:42 DEBUG 13:35:41 INFO 13:35:41 INFO [loop_until]: kubectl --namespace=xlou top pods 13:35:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:35:41 INFO [loop_until]: OK (rc = 0) 13:35:41 DEBUG --- stdout --- 13:35:41 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 69m 5790Mi am-55f77847b7-6hcmp 77m 5801Mi am-55f77847b7-8wqjg 69m 5777Mi ds-cts-0 6m 395Mi ds-cts-1 7m 377Mi ds-cts-2 5m 357Mi ds-idrepo-0 4806m 13822Mi ds-idrepo-1 3141m 13804Mi ds-idrepo-2 3578m 13810Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 2121m 4347Mi idm-65858d8c4c-x6slf 1791m 4340Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 443m 579Mi 13:35:41 DEBUG --- stderr --- 13:35:41 DEBUG 13:35:42 INFO 13:35:42 INFO [loop_until]: kubectl --namespace=xlou top node 13:35:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:35:42 INFO [loop_until]: OK (rc = 0) 13:35:42 DEBUG --- stdout --- 13:35:42 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1359Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 140m 0% 6817Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 132m 0% 6907Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 130m 0% 6898Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2357m 14% 5668Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 886m 5% 2168Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2006m 12% 5592Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 4979m 31% 14448Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3660m 23% 14437Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3763m 23% 14435Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 514m 3% 2096Mi 3% 13:35:42 DEBUG --- stderr --- 13:35:42 DEBUG 13:36:41 INFO 13:36:41 INFO [loop_until]: kubectl --namespace=xlou top pods 13:36:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:36:41 INFO [loop_until]: OK (rc = 0) 13:36:41 DEBUG --- stdout --- 13:36:41 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 71m 5790Mi am-55f77847b7-6hcmp 76m 5801Mi am-55f77847b7-8wqjg 69m 5777Mi ds-cts-0 7m 395Mi ds-cts-1 6m 377Mi ds-cts-2 6m 357Mi ds-idrepo-0 6142m 13818Mi ds-idrepo-1 3405m 13703Mi ds-idrepo-2 3068m 13688Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 2205m 4357Mi idm-65858d8c4c-x6slf 1888m 4349Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 456m 579Mi 13:36:41 DEBUG --- stderr --- 13:36:41 DEBUG 13:36:42 INFO 13:36:42 INFO [loop_until]: kubectl --namespace=xlou top node 13:36:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:36:43 INFO [loop_until]: OK (rc = 0) 13:36:43 DEBUG --- stdout --- 13:36:43 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1356Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 136m 0% 6818Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 131m 0% 6908Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 124m 0% 6896Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2392m 15% 5666Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 851m 5% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1937m 12% 5600Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 7058m 44% 14429Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1077Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3195m 20% 14307Mi 24% 
gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3096m 19% 14357Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 500m 3% 2098Mi 3% 13:36:43 DEBUG --- stderr --- 13:36:43 DEBUG 13:37:42 INFO 13:37:42 INFO [loop_until]: kubectl --namespace=xlou top pods 13:37:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:37:42 INFO [loop_until]: OK (rc = 0) 13:37:42 DEBUG --- stdout --- 13:37:42 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 70m 5790Mi am-55f77847b7-6hcmp 74m 5801Mi am-55f77847b7-8wqjg 71m 5777Mi ds-cts-0 7m 395Mi ds-cts-1 6m 377Mi ds-cts-2 6m 357Mi ds-idrepo-0 6017m 13860Mi ds-idrepo-1 3765m 13792Mi ds-idrepo-2 3304m 13617Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 2171m 4364Mi idm-65858d8c4c-x6slf 1830m 4355Mi lodemon-9c5f9bf5b-bl4rx 1m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 427m 580Mi 13:37:42 DEBUG --- stderr --- 13:37:42 DEBUG 13:37:43 INFO 13:37:43 INFO [loop_until]: kubectl --namespace=xlou top node 13:37:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:37:43 INFO [loop_until]: OK (rc = 0) 13:37:43 DEBUG --- stdout --- 13:37:43 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1359Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 135m 0% 6818Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 132m 0% 6909Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 129m 0% 6898Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2354m 14% 5670Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 870m 5% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2002m 12% 5612Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 5852m 36% 14490Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1077Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 62m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3375m 21% 14274Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3994m 25% 14291Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 501m 3% 2099Mi 3% 13:37:43 DEBUG --- stderr --- 13:37:43 DEBUG 13:38:42 INFO 13:38:42 INFO [loop_until]: kubectl --namespace=xlou top pods 13:38:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:38:42 INFO [loop_until]: OK (rc = 0) 13:38:42 DEBUG --- stdout --- 13:38:42 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 70m 5790Mi am-55f77847b7-6hcmp 76m 5801Mi am-55f77847b7-8wqjg 70m 5779Mi ds-cts-0 6m 395Mi ds-cts-1 6m 376Mi ds-cts-2 6m 357Mi ds-idrepo-0 4534m 13727Mi ds-idrepo-1 2303m 13858Mi ds-idrepo-2 2437m 13823Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 2086m 4371Mi idm-65858d8c4c-x6slf 1828m 4362Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 438m 580Mi 13:38:42 DEBUG --- stderr --- 13:38:42 DEBUG 13:38:43 INFO 13:38:43 INFO [loop_until]: kubectl --namespace=xlou top node 13:38:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:38:43 INFO [loop_until]: OK (rc = 0) 13:38:43 DEBUG --- stdout --- 13:38:43 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1359Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 136m 0% 6821Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 134m 0% 6920Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 126m 0% 6898Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2292m 14% 5678Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 833m 5% 2169Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1972m 12% 5616Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 4461m 28% 
14395Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1078Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2756m 17% 14477Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3275m 20% 14415Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 505m 3% 2099Mi 3% 13:38:43 DEBUG --- stderr --- 13:38:43 DEBUG 13:39:42 INFO 13:39:42 INFO [loop_until]: kubectl --namespace=xlou top pods 13:39:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:39:42 INFO [loop_until]: OK (rc = 0) 13:39:42 DEBUG --- stdout --- 13:39:42 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 69m 5790Mi am-55f77847b7-6hcmp 73m 5801Mi am-55f77847b7-8wqjg 69m 5779Mi ds-cts-0 10m 396Mi ds-cts-1 6m 377Mi ds-cts-2 6m 357Mi ds-idrepo-0 4044m 13824Mi ds-idrepo-1 2464m 13821Mi ds-idrepo-2 2504m 13810Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 2104m 4379Mi idm-65858d8c4c-x6slf 1742m 4368Mi lodemon-9c5f9bf5b-bl4rx 6m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 429m 581Mi 13:39:42 DEBUG --- stderr --- 13:39:42 DEBUG 13:39:43 INFO 13:39:43 INFO [loop_until]: kubectl --namespace=xlou top node 13:39:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:39:43 INFO [loop_until]: OK (rc = 0) 13:39:43 DEBUG --- stdout --- 13:39:43 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1360Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 139m 0% 6820Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 135m 0% 6909Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 127m 0% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2309m 14% 5685Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 891m 5% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1987m 12% 5619Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 4082m 25% 14458Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 62m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2790m 17% 14443Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2168m 13% 14432Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 488m 3% 2099Mi 3% 13:39:43 DEBUG --- stderr --- 13:39:43 DEBUG 13:40:42 INFO 13:40:42 INFO [loop_until]: kubectl --namespace=xlou top pods 13:40:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:40:42 INFO [loop_until]: OK (rc = 0) 13:40:42 DEBUG --- stdout --- 13:40:42 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 68m 5790Mi am-55f77847b7-6hcmp 73m 5802Mi am-55f77847b7-8wqjg 67m 5779Mi ds-cts-0 10m 396Mi ds-cts-1 9m 377Mi ds-cts-2 6m 357Mi ds-idrepo-0 6216m 13810Mi ds-idrepo-1 3533m 13825Mi ds-idrepo-2 2666m 13823Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 2124m 4387Mi idm-65858d8c4c-x6slf 1786m 4370Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 439m 589Mi 13:40:42 DEBUG --- stderr --- 13:40:42 DEBUG 13:40:43 INFO 13:40:43 INFO [loop_until]: kubectl --namespace=xlou top node 13:40:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:40:43 INFO [loop_until]: OK (rc = 0) 13:40:43 DEBUG --- stdout --- 13:40:43 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1358Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 130m 0% 6820Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 134m 0% 6912Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 127m 0% 6899Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2325m 14% 5696Mi 9% 
gke-xlou-cdm-default-pool-f05840a3-h81k 828m 5% 2169Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1925m 12% 5621Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 6148m 38% 14449Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1076Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3406m 21% 14417Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3699m 23% 14475Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 507m 3% 2107Mi 3% 13:40:43 DEBUG --- stderr --- 13:40:43 DEBUG 13:41:42 INFO 13:41:42 INFO [loop_until]: kubectl --namespace=xlou top pods 13:41:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:41:42 INFO [loop_until]: OK (rc = 0) 13:41:42 DEBUG --- stdout --- 13:41:42 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 71m 5790Mi am-55f77847b7-6hcmp 76m 5802Mi am-55f77847b7-8wqjg 68m 5779Mi ds-cts-0 10m 396Mi ds-cts-1 7m 377Mi ds-cts-2 8m 357Mi ds-idrepo-0 4534m 13860Mi ds-idrepo-1 3367m 13841Mi ds-idrepo-2 2902m 13849Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 2142m 4396Mi idm-65858d8c4c-x6slf 1751m 4380Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 427m 588Mi 13:41:42 DEBUG --- stderr --- 13:41:42 DEBUG 13:41:43 INFO 13:41:43 INFO [loop_until]: kubectl --namespace=xlou top node 13:41:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:41:43 INFO [loop_until]: OK (rc = 0) 13:41:43 DEBUG --- stdout --- 13:41:43 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1357Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 136m 0% 6816Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 130m 0% 6913Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 129m 0% 6899Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2331m 14% 5708Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 864m 5% 2166Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1936m 12% 5629Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 4750m 29% 14475Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 62m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2538m 15% 14477Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3516m 22% 14401Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 510m 3% 2106Mi 3% 13:41:43 DEBUG --- stderr --- 13:41:43 DEBUG 13:42:42 INFO 13:42:42 INFO [loop_until]: kubectl --namespace=xlou top pods 13:42:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:42:42 INFO [loop_until]: OK (rc = 0) 13:42:42 DEBUG --- stdout --- 13:42:42 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 70m 5790Mi am-55f77847b7-6hcmp 75m 5801Mi am-55f77847b7-8wqjg 68m 5779Mi ds-cts-0 6m 396Mi ds-cts-1 6m 378Mi ds-cts-2 6m 357Mi ds-idrepo-0 4505m 13767Mi ds-idrepo-1 2239m 13823Mi ds-idrepo-2 3378m 13865Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 2166m 4409Mi idm-65858d8c4c-x6slf 1780m 4391Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 425m 603Mi 13:42:42 DEBUG --- stderr --- 13:42:42 DEBUG 13:42:43 INFO 13:42:43 INFO [loop_until]: kubectl --namespace=xlou top node 13:42:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:42:43 INFO [loop_until]: OK (rc = 0) 13:42:43 DEBUG --- stdout --- 13:42:43 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1356Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 137m 0% 6818Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-976h 125m 0% 6912Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 129m 0% 6899Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2319m 14% 5722Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 828m 5% 2168Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1941m 12% 5643Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 4254m 26% 14417Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1077Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3588m 22% 14473Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2708m 17% 14465Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 483m 3% 2120Mi 3% 13:42:43 DEBUG --- stderr --- 13:42:43 DEBUG 13:43:42 INFO 13:43:42 INFO [loop_until]: kubectl --namespace=xlou top pods 13:43:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:43:42 INFO [loop_until]: OK (rc = 0) 13:43:42 DEBUG --- stdout --- 13:43:42 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 68m 5790Mi am-55f77847b7-6hcmp 77m 5802Mi am-55f77847b7-8wqjg 73m 5779Mi ds-cts-0 7m 396Mi ds-cts-1 6m 377Mi ds-cts-2 11m 357Mi ds-idrepo-0 4280m 13823Mi ds-idrepo-1 3725m 13835Mi ds-idrepo-2 2617m 13767Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 2201m 4415Mi idm-65858d8c4c-x6slf 1758m 4396Mi lodemon-9c5f9bf5b-bl4rx 7m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 437m 603Mi 13:43:42 DEBUG --- stderr --- 13:43:42 DEBUG 13:43:43 INFO 13:43:43 INFO [loop_until]: kubectl --namespace=xlou top node 13:43:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:43:43 INFO [loop_until]: OK (rc = 0) 13:43:43 DEBUG --- stdout --- 13:43:43 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 71m 0% 1357Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 139m 0% 6822Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 134m 0% 6912Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 126m 0% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2322m 14% 5723Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 877m 5% 2163Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1999m 12% 5660Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 4334m 27% 14471Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1078Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3212m 20% 14430Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3652m 22% 14431Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 502m 3% 2120Mi 3% 13:43:43 DEBUG --- stderr --- 13:43:43 DEBUG 13:44:42 INFO 13:44:42 INFO [loop_until]: kubectl --namespace=xlou top pods 13:44:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:44:42 INFO [loop_until]: OK (rc = 0) 13:44:42 DEBUG --- stdout --- 13:44:42 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 72m 5790Mi am-55f77847b7-6hcmp 73m 5802Mi am-55f77847b7-8wqjg 70m 5779Mi ds-cts-0 6m 396Mi ds-cts-1 6m 377Mi ds-cts-2 7m 357Mi ds-idrepo-0 4357m 13842Mi ds-idrepo-1 2307m 13834Mi ds-idrepo-2 2879m 13823Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 2118m 4421Mi idm-65858d8c4c-x6slf 1823m 4402Mi lodemon-9c5f9bf5b-bl4rx 5m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 429m 603Mi 13:44:42 DEBUG --- stderr --- 13:44:42 DEBUG 13:44:43 INFO 13:44:43 INFO [loop_until]: kubectl --namespace=xlou top node 13:44:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:44:43 INFO [loop_until]: OK (rc = 0) 13:44:43 DEBUG --- stdout --- 13:44:43 DEBUG NAME 
CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1357Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 138m 0% 6821Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 134m 0% 6910Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 129m 0% 6902Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2310m 14% 5730Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 840m 5% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1974m 12% 5652Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 4458m 28% 14459Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1076Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3475m 21% 14451Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2417m 15% 14469Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 501m 3% 2120Mi 3% 13:44:43 DEBUG --- stderr --- 13:44:43 DEBUG 13:45:42 INFO 13:45:42 INFO [loop_until]: kubectl --namespace=xlou top pods 13:45:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:45:42 INFO [loop_until]: OK (rc = 0) 13:45:42 DEBUG --- stdout --- 13:45:42 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 73m 5791Mi am-55f77847b7-6hcmp 77m 5802Mi am-55f77847b7-8wqjg 71m 5779Mi ds-cts-0 6m 396Mi ds-cts-1 6m 377Mi ds-cts-2 6m 357Mi ds-idrepo-0 4049m 13829Mi ds-idrepo-1 2216m 13838Mi ds-idrepo-2 2766m 13820Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 2134m 4431Mi idm-65858d8c4c-x6slf 1850m 4410Mi lodemon-9c5f9bf5b-bl4rx 5m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 427m 604Mi 13:45:42 DEBUG --- stderr --- 13:45:42 DEBUG 13:45:44 INFO 13:45:44 INFO [loop_until]: kubectl --namespace=xlou top node 13:45:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:45:44 INFO [loop_until]: OK (rc = 0) 13:45:44 DEBUG --- stdout --- 13:45:44 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1357Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 137m 0% 6819Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 132m 0% 6912Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 129m 0% 6901Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2320m 14% 5738Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 879m 5% 2166Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2006m 12% 5661Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 4157m 26% 14463Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2960m 18% 14439Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2278m 14% 14445Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 482m 3% 2120Mi 3% 13:45:44 DEBUG --- stderr --- 13:45:44 DEBUG 13:46:42 INFO 13:46:42 INFO [loop_until]: kubectl --namespace=xlou top pods 13:46:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:46:42 INFO [loop_until]: OK (rc = 0) 13:46:42 DEBUG --- stdout --- 13:46:42 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 73m 5791Mi am-55f77847b7-6hcmp 76m 5802Mi am-55f77847b7-8wqjg 67m 5780Mi ds-cts-0 6m 396Mi ds-cts-1 7m 377Mi ds-cts-2 5m 357Mi ds-idrepo-0 4734m 13837Mi ds-idrepo-1 3630m 13875Mi ds-idrepo-2 1734m 13862Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 2158m 4436Mi idm-65858d8c4c-x6slf 1788m 4415Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 425m 604Mi 13:46:42 DEBUG --- stderr --- 13:46:42 DEBUG 13:46:44 INFO 13:46:44 INFO [loop_until]: kubectl --namespace=xlou top node 
13:46:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:46:44 INFO [loop_until]: OK (rc = 0) 13:46:44 DEBUG --- stdout --- 13:46:44 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1370Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 139m 0% 6823Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 130m 0% 6911Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 132m 0% 6901Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2326m 14% 5747Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 837m 5% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1933m 12% 5667Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 4365m 27% 14504Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 50m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2150m 13% 14470Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3890m 24% 14470Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 495m 3% 2121Mi 3% 13:46:44 DEBUG --- stderr --- 13:46:44 DEBUG 13:47:43 INFO 13:47:43 INFO [loop_until]: kubectl --namespace=xlou top pods 13:47:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:47:43 INFO [loop_until]: OK (rc = 0) 13:47:43 DEBUG --- stdout --- 13:47:43 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 69m 5791Mi am-55f77847b7-6hcmp 76m 5802Mi am-55f77847b7-8wqjg 72m 5780Mi ds-cts-0 6m 396Mi ds-cts-1 6m 377Mi ds-cts-2 10m 358Mi ds-idrepo-0 5191m 13818Mi ds-idrepo-1 2165m 13812Mi ds-idrepo-2 1855m 13869Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 2107m 4441Mi idm-65858d8c4c-x6slf 1867m 4420Mi lodemon-9c5f9bf5b-bl4rx 1m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 417m 605Mi 13:47:43 DEBUG --- stderr --- 13:47:43 DEBUG 13:47:44 INFO 13:47:44 INFO [loop_until]: kubectl --namespace=xlou top node 13:47:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:47:44 INFO [loop_until]: OK (rc = 0) 13:47:44 DEBUG --- stdout --- 13:47:44 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1360Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 137m 0% 6823Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 135m 0% 6908Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 129m 0% 6898Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2303m 14% 5756Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 862m 5% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1995m 12% 5672Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 5021m 31% 14455Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1915m 12% 14485Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2142m 13% 14430Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 502m 3% 2120Mi 3% 13:47:44 DEBUG --- stderr --- 13:47:44 DEBUG 13:48:43 INFO 13:48:43 INFO [loop_until]: kubectl --namespace=xlou top pods 13:48:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:48:43 INFO [loop_until]: OK (rc = 0) 13:48:43 DEBUG --- stdout --- 13:48:43 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 75m 5791Mi am-55f77847b7-6hcmp 81m 5802Mi am-55f77847b7-8wqjg 75m 5780Mi ds-cts-0 6m 396Mi ds-cts-1 6m 377Mi ds-cts-2 6m 357Mi ds-idrepo-0 3548m 13824Mi ds-idrepo-1 2382m 13869Mi ds-idrepo-2 2273m 13829Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 2158m 4468Mi idm-65858d8c4c-x6slf 1907m 4447Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi 
overseer-0-94cd995dc-gcl5s 438m 605Mi 13:48:43 DEBUG --- stderr --- 13:48:43 DEBUG 13:48:44 INFO 13:48:44 INFO [loop_until]: kubectl --namespace=xlou top node 13:48:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:48:44 INFO [loop_until]: OK (rc = 0) 13:48:44 DEBUG --- stdout --- 13:48:44 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1359Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 143m 0% 6824Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 134m 0% 6916Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 135m 0% 6902Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2354m 14% 5776Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 858m 5% 2179Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2000m 12% 5697Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 3699m 23% 14461Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2317m 14% 14443Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2211m 13% 14430Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 498m 3% 2120Mi 3% 13:48:44 DEBUG --- stderr --- 13:48:44 DEBUG 13:49:43 INFO 13:49:43 INFO [loop_until]: kubectl --namespace=xlou top pods 13:49:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:49:43 INFO [loop_until]: OK (rc = 0) 13:49:43 DEBUG --- stdout --- 13:49:43 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 72m 5791Mi am-55f77847b7-6hcmp 77m 5802Mi am-55f77847b7-8wqjg 71m 5779Mi ds-cts-0 6m 396Mi ds-cts-1 7m 377Mi ds-cts-2 6m 357Mi ds-idrepo-0 4655m 13828Mi ds-idrepo-1 3963m 13792Mi ds-idrepo-2 2283m 13876Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 2143m 4473Mi idm-65858d8c4c-x6slf 1838m 4452Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 430m 605Mi 13:49:43 DEBUG --- stderr --- 13:49:43 DEBUG 13:49:44 INFO 13:49:44 INFO [loop_until]: kubectl --namespace=xlou top node 13:49:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:49:44 INFO [loop_until]: OK (rc = 0) 13:49:44 DEBUG --- stdout --- 13:49:44 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1359Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 137m 0% 6821Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 128m 0% 6912Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 133m 0% 6902Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2376m 14% 5785Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 882m 5% 2170Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1916m 12% 5700Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 5049m 31% 14479Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2363m 14% 14464Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3904m 24% 14458Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 503m 3% 2120Mi 3% 13:49:44 DEBUG --- stderr --- 13:49:44 DEBUG 13:50:43 INFO 13:50:43 INFO [loop_until]: kubectl --namespace=xlou top pods 13:50:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:50:43 INFO [loop_until]: OK (rc = 0) 13:50:43 DEBUG --- stdout --- 13:50:43 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 72m 5791Mi am-55f77847b7-6hcmp 75m 5803Mi am-55f77847b7-8wqjg 71m 5780Mi ds-cts-0 6m 396Mi ds-cts-1 7m 377Mi ds-cts-2 6m 357Mi ds-idrepo-0 5603m 13618Mi ds-idrepo-1 3295m 13638Mi ds-idrepo-2 2932m 13812Mi 
end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 2173m 4482Mi idm-65858d8c4c-x6slf 1705m 4457Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 431m 605Mi 13:50:43 DEBUG --- stderr --- 13:50:43 DEBUG 13:50:44 INFO 13:50:44 INFO [loop_until]: kubectl --namespace=xlou top node 13:50:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:50:44 INFO [loop_until]: OK (rc = 0) 13:50:44 DEBUG --- stdout --- 13:50:44 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1360Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 135m 0% 6820Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 134m 0% 6914Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 132m 0% 6901Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2368m 14% 5791Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 841m 5% 2166Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1960m 12% 5712Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 5691m 35% 14248Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3025m 19% 14411Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3160m 19% 14242Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 502m 3% 2122Mi 3% 13:50:44 DEBUG --- stderr --- 13:50:44 DEBUG 13:51:43 INFO 13:51:43 INFO [loop_until]: kubectl --namespace=xlou top pods 13:51:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:51:43 INFO [loop_until]: OK (rc = 0) 13:51:43 DEBUG --- stdout --- 13:51:43 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 73m 5791Mi am-55f77847b7-6hcmp 81m 5803Mi am-55f77847b7-8wqjg 68m 5780Mi ds-cts-0 6m 396Mi ds-cts-1 7m 377Mi ds-cts-2 5m 357Mi ds-idrepo-0 3765m 13802Mi ds-idrepo-1 2022m 13827Mi ds-idrepo-2 4823m 13806Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 2262m 4487Mi idm-65858d8c4c-x6slf 1700m 4464Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 420m 606Mi 13:51:43 DEBUG --- stderr --- 13:51:43 DEBUG 13:51:44 INFO 13:51:44 INFO [loop_until]: kubectl --namespace=xlou top node 13:51:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:51:44 INFO [loop_until]: OK (rc = 0) 13:51:44 DEBUG --- stdout --- 13:51:44 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1360Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 134m 0% 6824Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 131m 0% 6915Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 126m 0% 6903Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2358m 14% 5799Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 854m 5% 2168Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1857m 11% 5717Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 3795m 23% 14435Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 52m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3954m 24% 14488Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2077m 13% 14432Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 495m 3% 2122Mi 3% 13:51:44 DEBUG --- stderr --- 13:51:44 DEBUG 13:52:43 INFO 13:52:43 INFO [loop_until]: kubectl --namespace=xlou top pods 13:52:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:52:43 INFO [loop_until]: OK (rc = 0) 13:52:43 DEBUG --- stdout --- 13:52:43 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 69m 5792Mi am-55f77847b7-6hcmp 74m 5803Mi 
am-55f77847b7-8wqjg 69m 5780Mi ds-cts-0 6m 396Mi ds-cts-1 6m 377Mi ds-cts-2 5m 357Mi ds-idrepo-0 4307m 13834Mi ds-idrepo-1 2850m 13829Mi ds-idrepo-2 2172m 13842Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 2212m 4495Mi idm-65858d8c4c-x6slf 1754m 4472Mi lodemon-9c5f9bf5b-bl4rx 1m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 444m 605Mi 13:52:43 DEBUG --- stderr --- 13:52:43 DEBUG 13:52:44 INFO 13:52:44 INFO [loop_until]: kubectl --namespace=xlou top node 13:52:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:52:44 INFO [loop_until]: OK (rc = 0) 13:52:44 DEBUG --- stdout --- 13:52:44 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1360Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 137m 0% 6822Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 134m 0% 6911Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 129m 0% 6901Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2341m 14% 5806Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 839m 5% 2170Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1874m 11% 5726Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 3943m 24% 14476Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2282m 14% 14503Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2447m 15% 14394Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 511m 3% 2124Mi 3% 13:52:44 DEBUG --- stderr --- 13:52:44 DEBUG 13:53:43 INFO 13:53:43 INFO [loop_until]: kubectl --namespace=xlou top pods 13:53:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:53:43 INFO [loop_until]: OK (rc = 0) 13:53:43 DEBUG --- stdout --- 13:53:43 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 72m 5791Mi am-55f77847b7-6hcmp 72m 5803Mi am-55f77847b7-8wqjg 72m 5780Mi ds-cts-0 6m 396Mi ds-cts-1 6m 377Mi ds-cts-2 5m 357Mi ds-idrepo-0 4472m 13711Mi ds-idrepo-1 3766m 13612Mi ds-idrepo-2 2140m 13843Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 2139m 4502Mi idm-65858d8c4c-x6slf 1844m 4477Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 420m 606Mi 13:53:43 DEBUG --- stderr --- 13:53:43 DEBUG 13:53:45 INFO 13:53:45 INFO [loop_until]: kubectl --namespace=xlou top node 13:53:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:53:45 INFO [loop_until]: OK (rc = 0) 13:53:45 DEBUG --- stdout --- 13:53:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 71m 0% 1360Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 137m 0% 6824Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 130m 0% 6914Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 133m 0% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2305m 14% 5815Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 879m 5% 2161Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1982m 12% 5733Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 4741m 29% 14361Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2105m 13% 14466Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 4121m 25% 14225Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 503m 3% 2125Mi 3% 13:53:45 DEBUG --- stderr --- 13:53:45 DEBUG 13:54:43 INFO 13:54:43 INFO [loop_until]: kubectl --namespace=xlou top pods 13:54:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:54:43 INFO [loop_until]: OK (rc = 0) 13:54:43 DEBUG 
--- stdout --- 13:54:43 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 70m 5792Mi am-55f77847b7-6hcmp 78m 5803Mi am-55f77847b7-8wqjg 70m 5780Mi ds-cts-0 6m 396Mi ds-cts-1 6m 377Mi ds-cts-2 6m 357Mi ds-idrepo-0 3783m 13836Mi ds-idrepo-1 2386m 13760Mi ds-idrepo-2 1949m 13813Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 2126m 4509Mi idm-65858d8c4c-x6slf 1762m 4485Mi lodemon-9c5f9bf5b-bl4rx 7m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 431m 606Mi 13:54:43 DEBUG --- stderr --- 13:54:43 DEBUG 13:54:45 INFO 13:54:45 INFO [loop_until]: kubectl --namespace=xlou top node 13:54:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:54:45 INFO [loop_until]: OK (rc = 0) 13:54:45 DEBUG --- stdout --- 13:54:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1356Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 138m 0% 6824Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 136m 0% 6914Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 130m 0% 6903Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2312m 14% 5822Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 854m 5% 2175Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1944m 12% 5738Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 3886m 24% 14466Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2766m 17% 14468Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2319m 14% 14392Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 504m 3% 2125Mi 3% 13:54:45 DEBUG --- stderr --- 13:54:45 DEBUG 13:55:43 INFO 13:55:43 INFO [loop_until]: kubectl --namespace=xlou top pods 13:55:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:55:43 INFO [loop_until]: OK (rc = 0) 13:55:43 DEBUG --- stdout --- 13:55:43 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 69m 5792Mi am-55f77847b7-6hcmp 76m 5803Mi am-55f77847b7-8wqjg 70m 5780Mi ds-cts-0 6m 396Mi ds-cts-1 7m 377Mi ds-cts-2 6m 357Mi ds-idrepo-0 4967m 13829Mi ds-idrepo-1 2440m 13828Mi ds-idrepo-2 1841m 13823Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 2082m 4518Mi idm-65858d8c4c-x6slf 1800m 4491Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 454m 607Mi 13:55:43 DEBUG --- stderr --- 13:55:43 DEBUG 13:55:45 INFO 13:55:45 INFO [loop_until]: kubectl --namespace=xlou top node 13:55:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:55:45 INFO [loop_until]: OK (rc = 0) 13:55:45 DEBUG --- stdout --- 13:55:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 70m 0% 1359Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 136m 0% 6825Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 132m 0% 6915Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 129m 0% 6903Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2355m 14% 5829Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 859m 5% 2171Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1957m 12% 5741Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 5252m 33% 14463Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2940m 18% 14503Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2203m 13% 14419Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 506m 3% 2126Mi 3% 13:55:45 DEBUG --- stderr --- 13:55:45 DEBUG 13:56:43 INFO 13:56:43 INFO [loop_until]: 
kubectl --namespace=xlou top pods 13:56:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:56:44 INFO [loop_until]: OK (rc = 0) 13:56:44 DEBUG --- stdout --- 13:56:44 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 67m 5792Mi am-55f77847b7-6hcmp 76m 5804Mi am-55f77847b7-8wqjg 72m 5780Mi ds-cts-0 6m 396Mi ds-cts-1 6m 377Mi ds-cts-2 6m 358Mi ds-idrepo-0 4798m 13826Mi ds-idrepo-1 3100m 13811Mi ds-idrepo-2 1921m 13827Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 2019m 4524Mi idm-65858d8c4c-x6slf 1751m 4497Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 438m 606Mi 13:56:44 DEBUG --- stderr --- 13:56:44 DEBUG 13:56:45 INFO 13:56:45 INFO [loop_until]: kubectl --namespace=xlou top node 13:56:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:56:45 INFO [loop_until]: OK (rc = 0) 13:56:45 DEBUG --- stdout --- 13:56:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1360Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 138m 0% 6823Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 132m 0% 6915Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 127m 0% 6898Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2230m 14% 5838Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 865m 5% 2169Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1922m 12% 5747Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 4816m 30% 14456Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2115m 13% 14447Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3238m 20% 14412Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 508m 3% 2121Mi 3% 13:56:45 DEBUG --- stderr --- 13:56:45 DEBUG 13:57:44 INFO 13:57:44 INFO [loop_until]: kubectl --namespace=xlou top pods 13:57:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:57:44 INFO [loop_until]: OK (rc = 0) 13:57:44 DEBUG --- stdout --- 13:57:44 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 71m 5792Mi am-55f77847b7-6hcmp 72m 5804Mi am-55f77847b7-8wqjg 68m 5780Mi ds-cts-0 6m 396Mi ds-cts-1 6m 377Mi ds-cts-2 5m 357Mi ds-idrepo-0 5377m 13861Mi ds-idrepo-1 2562m 13811Mi ds-idrepo-2 3235m 13818Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 2186m 4533Mi idm-65858d8c4c-x6slf 1811m 4506Mi lodemon-9c5f9bf5b-bl4rx 1m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 429m 606Mi 13:57:44 DEBUG --- stderr --- 13:57:44 DEBUG 13:57:45 INFO 13:57:45 INFO [loop_until]: kubectl --namespace=xlou top node 13:57:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:57:45 INFO [loop_until]: OK (rc = 0) 13:57:45 DEBUG --- stdout --- 13:57:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1359Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 134m 0% 6824Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 129m 0% 6917Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 131m 0% 6903Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2352m 14% 5845Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 882m 5% 2159Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2002m 12% 5757Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 7338m 46% 14419Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3270m 20% 14495Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2644m 16% 
14413Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 505m 3% 2123Mi 3% 13:57:45 DEBUG --- stderr --- 13:57:45 DEBUG 13:58:44 INFO 13:58:44 INFO [loop_until]: kubectl --namespace=xlou top pods 13:58:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:58:44 INFO [loop_until]: OK (rc = 0) 13:58:44 DEBUG --- stdout --- 13:58:44 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 71m 5792Mi am-55f77847b7-6hcmp 73m 5804Mi am-55f77847b7-8wqjg 70m 5781Mi ds-cts-0 6m 396Mi ds-cts-1 6m 377Mi ds-cts-2 5m 357Mi ds-idrepo-0 3809m 13841Mi ds-idrepo-1 2599m 13880Mi ds-idrepo-2 3965m 13794Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 2102m 4541Mi idm-65858d8c4c-x6slf 1780m 4514Mi lodemon-9c5f9bf5b-bl4rx 1m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 423m 607Mi 13:58:44 DEBUG --- stderr --- 13:58:44 DEBUG 13:58:45 INFO 13:58:45 INFO [loop_until]: kubectl --namespace=xlou top node 13:58:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:58:45 INFO [loop_until]: OK (rc = 0) 13:58:45 DEBUG --- stdout --- 13:58:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1361Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 132m 0% 6822Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 130m 0% 6911Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 128m 0% 6903Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2302m 14% 5851Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 859m 5% 2158Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1953m 12% 5765Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 3712m 23% 14468Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 52m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4176m 26% 14476Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2912m 18% 14436Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 504m 3% 2121Mi 3% 13:58:45 DEBUG --- stderr --- 13:58:45 DEBUG 13:59:44 INFO 13:59:44 INFO [loop_until]: kubectl --namespace=xlou top pods 13:59:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:59:44 INFO [loop_until]: OK (rc = 0) 13:59:44 DEBUG --- stdout --- 13:59:44 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 71m 5792Mi am-55f77847b7-6hcmp 76m 5804Mi am-55f77847b7-8wqjg 65m 5781Mi ds-cts-0 6m 396Mi ds-cts-1 7m 377Mi ds-cts-2 6m 357Mi ds-idrepo-0 4436m 13834Mi ds-idrepo-1 2903m 13789Mi ds-idrepo-2 2395m 13772Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 2061m 4549Mi idm-65858d8c4c-x6slf 1824m 4519Mi lodemon-9c5f9bf5b-bl4rx 5m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 428m 607Mi 13:59:44 DEBUG --- stderr --- 13:59:44 DEBUG 13:59:45 INFO 13:59:45 INFO [loop_until]: kubectl --namespace=xlou top node 13:59:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 13:59:45 INFO [loop_until]: OK (rc = 0) 13:59:45 DEBUG --- stdout --- 13:59:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1360Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 138m 0% 6822Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 128m 0% 6913Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 132m 0% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2271m 14% 5858Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 872m 5% 2169Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1984m 12% 5767Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 5718m 35% 14348Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 
61m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2424m 15% 14386Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3006m 18% 14415Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 505m 3% 2123Mi 3% 13:59:45 DEBUG --- stderr --- 13:59:45 DEBUG 14:00:44 INFO 14:00:44 INFO [loop_until]: kubectl --namespace=xlou top pods 14:00:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:00:44 INFO [loop_until]: OK (rc = 0) 14:00:44 DEBUG --- stdout --- 14:00:44 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 74m 5792Mi am-55f77847b7-6hcmp 74m 5804Mi am-55f77847b7-8wqjg 68m 5780Mi ds-cts-0 6m 396Mi ds-cts-1 7m 377Mi ds-cts-2 6m 357Mi ds-idrepo-0 4268m 13830Mi ds-idrepo-1 2039m 13828Mi ds-idrepo-2 2200m 13847Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 2049m 4556Mi idm-65858d8c4c-x6slf 1866m 4527Mi lodemon-9c5f9bf5b-bl4rx 4m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 432m 607Mi 14:00:44 DEBUG --- stderr --- 14:00:44 DEBUG 14:00:45 INFO 14:00:45 INFO [loop_until]: kubectl --namespace=xlou top node 14:00:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:00:46 INFO [loop_until]: OK (rc = 0) 14:00:46 DEBUG --- stdout --- 14:00:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1360Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 136m 0% 6823Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 127m 0% 6915Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 129m 0% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2299m 14% 5868Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 861m 5% 2168Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1962m 12% 5776Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 4255m 26% 14464Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2095m 13% 14467Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2460m 15% 14405Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 508m 3% 2122Mi 3% 14:00:46 DEBUG --- stderr --- 14:00:46 DEBUG 14:01:44 INFO 14:01:44 INFO [loop_until]: kubectl --namespace=xlou top pods 14:01:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:01:44 INFO [loop_until]: OK (rc = 0) 14:01:44 DEBUG --- stdout --- 14:01:44 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 55m 5792Mi am-55f77847b7-6hcmp 74m 5805Mi am-55f77847b7-8wqjg 59m 5780Mi ds-cts-0 7m 396Mi ds-cts-1 6m 377Mi ds-cts-2 6m 357Mi ds-idrepo-0 4085m 13854Mi ds-idrepo-1 2079m 13778Mi ds-idrepo-2 4511m 13872Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 1323m 4564Mi idm-65858d8c4c-x6slf 1687m 4532Mi lodemon-9c5f9bf5b-bl4rx 6m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 447m 607Mi 14:01:44 DEBUG --- stderr --- 14:01:44 DEBUG 14:01:46 INFO 14:01:46 INFO [loop_until]: kubectl --namespace=xlou top node 14:01:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:01:46 INFO [loop_until]: OK (rc = 0) 14:01:46 DEBUG --- stdout --- 14:01:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1361Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 110m 0% 6818Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 123m 0% 6911Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 115m 0% 6902Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1897m 11% 5875Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 773m 4% 2170Mi 3% 
gke-xlou-cdm-default-pool-f05840a3-tnc9 1309m 8% 5783Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 2141m 13% 14478Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3101m 19% 14506Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2162m 13% 14378Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 412m 2% 2122Mi 3% 14:01:46 DEBUG --- stderr --- 14:01:46 DEBUG 14:02:44 INFO 14:02:44 INFO [loop_until]: kubectl --namespace=xlou top pods 14:02:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:02:44 INFO [loop_until]: OK (rc = 0) 14:02:44 DEBUG --- stdout --- 14:02:44 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 9m 5792Mi am-55f77847b7-6hcmp 8m 5804Mi am-55f77847b7-8wqjg 9m 5781Mi ds-cts-0 6m 397Mi ds-cts-1 5m 377Mi ds-cts-2 6m 357Mi ds-idrepo-0 289m 13813Mi ds-idrepo-1 564m 13667Mi ds-idrepo-2 58m 13691Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 6m 4564Mi idm-65858d8c4c-x6slf 9m 4532Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 1m 167Mi 14:02:44 DEBUG --- stderr --- 14:02:44 DEBUG 14:02:46 INFO 14:02:46 INFO [loop_until]: kubectl --namespace=xlou top node 14:02:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:02:46 INFO [loop_until]: OK (rc = 0) 14:02:46 DEBUG --- stdout --- 14:02:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1359Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 6821Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 74m 0% 6912Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 6899Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 5878Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 5783Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 301m 1% 14464Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1078Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 100m 0% 14318Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 51m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 600m 3% 14284Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 70m 0% 1687Mi 2% 14:02:46 DEBUG --- stderr --- 14:02:46 DEBUG 127.0.0.1 - - [12/Aug/2023 14:03:26] "GET /monitoring/average?start_time=23-08-12_12:32:54&stop_time=23-08-12_13:01:25 HTTP/1.1" 200 - 14:03:44 INFO 14:03:44 INFO [loop_until]: kubectl --namespace=xlou top pods 14:03:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:03:44 INFO [loop_until]: OK (rc = 0) 14:03:44 DEBUG --- stdout --- 14:03:44 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 9m 5792Mi am-55f77847b7-6hcmp 8m 5804Mi am-55f77847b7-8wqjg 8m 5781Mi ds-cts-0 12m 398Mi ds-cts-1 5m 378Mi ds-cts-2 7m 358Mi ds-idrepo-0 13m 13716Mi ds-idrepo-1 9m 13667Mi ds-idrepo-2 9m 13692Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 5m 4563Mi idm-65858d8c4c-x6slf 8m 4532Mi lodemon-9c5f9bf5b-bl4rx 1m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 1m 167Mi 14:03:44 DEBUG --- stderr --- 14:03:44 DEBUG 14:03:46 INFO 14:03:46 INFO [loop_until]: kubectl --namespace=xlou top node 14:03:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:03:46 INFO [loop_until]: OK (rc = 0) 14:03:46 DEBUG --- stdout --- 14:03:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1357Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 6824Mi 
11% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 6909Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 6903Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 5874Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 72m 0% 5785Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 14362Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1077Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 63m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 59m 0% 14321Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 50m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 60m 0% 14281Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 67m 0% 1688Mi 2% 14:03:46 DEBUG --- stderr --- 14:03:46 DEBUG 14:04:44 INFO 14:04:44 INFO [loop_until]: kubectl --namespace=xlou top pods 14:04:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:04:44 INFO [loop_until]: OK (rc = 0) 14:04:44 DEBUG --- stdout --- 14:04:44 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 231m 5791Mi am-55f77847b7-6hcmp 214m 5813Mi am-55f77847b7-8wqjg 158m 5785Mi ds-cts-0 7m 394Mi ds-cts-1 7m 377Mi ds-cts-2 6m 357Mi ds-idrepo-0 3249m 13808Mi ds-idrepo-1 864m 13760Mi ds-idrepo-2 1576m 13811Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 1148m 4576Mi idm-65858d8c4c-x6slf 873m 4545Mi lodemon-9c5f9bf5b-bl4rx 6m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 769m 673Mi 14:04:44 DEBUG --- stderr --- 14:04:44 DEBUG 14:04:46 INFO 14:04:46 INFO [loop_until]: kubectl --namespace=xlou top node 14:04:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:04:46 INFO [loop_until]: OK (rc = 0) 14:04:46 DEBUG --- stdout --- 14:04:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1360Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 285m 1% 6832Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 273m 1% 6917Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 259m 1% 6899Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1117m 7% 5889Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 508m 3% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1180m 7% 5801Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 3583m 22% 14489Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1422m 8% 14439Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1518m 9% 14407Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 848m 5% 2191Mi 3% 14:04:46 DEBUG --- stderr --- 14:04:46 DEBUG 14:05:44 INFO 14:05:44 INFO [loop_until]: kubectl --namespace=xlou top pods 14:05:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:05:44 INFO [loop_until]: OK (rc = 0) 14:05:44 DEBUG --- stdout --- 14:05:44 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 173m 5792Mi am-55f77847b7-6hcmp 191m 5816Mi am-55f77847b7-8wqjg 171m 5790Mi ds-cts-0 6m 394Mi ds-cts-1 6m 377Mi ds-cts-2 7m 358Mi ds-idrepo-0 4942m 13827Mi ds-idrepo-1 2475m 13866Mi ds-idrepo-2 2272m 13826Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 1457m 4603Mi idm-65858d8c4c-x6slf 1208m 4578Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 441m 716Mi 14:05:44 DEBUG --- stderr --- 14:05:44 DEBUG 14:05:46 INFO 14:05:46 INFO [loop_until]: kubectl --namespace=xlou top node 14:05:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:05:46 INFO [loop_until]: OK (rc = 0) 14:05:46 DEBUG --- stdout --- 14:05:46 DEBUG NAME CPU(cores) 
CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1361Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 240m 1% 6834Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 235m 1% 6922Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 236m 1% 6903Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1651m 10% 5927Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 808m 5% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1342m 8% 5830Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 4756m 29% 14477Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2131m 13% 14463Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2389m 15% 14451Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 508m 3% 2243Mi 3% 14:05:46 DEBUG --- stderr --- 14:05:46 DEBUG 14:06:44 INFO 14:06:44 INFO [loop_until]: kubectl --namespace=xlou top pods 14:06:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:06:44 INFO [loop_until]: OK (rc = 0) 14:06:44 DEBUG --- stdout --- 14:06:44 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 169m 5792Mi am-55f77847b7-6hcmp 169m 5815Mi am-55f77847b7-8wqjg 170m 5790Mi ds-cts-0 6m 394Mi ds-cts-1 6m 377Mi ds-cts-2 7m 358Mi ds-idrepo-0 5947m 13877Mi ds-idrepo-1 2803m 13824Mi ds-idrepo-2 2226m 13825Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 1393m 4608Mi idm-65858d8c4c-x6slf 1164m 4582Mi lodemon-9c5f9bf5b-bl4rx 6m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 430m 719Mi 14:06:44 DEBUG --- stderr --- 14:06:44 DEBUG 14:06:46 INFO 14:06:46 INFO [loop_until]: kubectl --namespace=xlou top node 14:06:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:06:46 INFO [loop_until]: OK (rc = 0) 14:06:46 DEBUG --- stdout --- 14:06:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1361Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 234m 1% 6833Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 229m 1% 6923Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 224m 1% 6902Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1551m 9% 5921Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 807m 5% 2167Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1309m 8% 5837Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 6323m 39% 14391Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2411m 15% 14506Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2569m 16% 14464Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 493m 3% 2235Mi 3% 14:06:46 DEBUG --- stderr --- 14:06:46 DEBUG 14:07:45 INFO 14:07:45 INFO [loop_until]: kubectl --namespace=xlou top pods 14:07:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:07:45 INFO [loop_until]: OK (rc = 0) 14:07:45 DEBUG --- stdout --- 14:07:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 171m 5792Mi am-55f77847b7-6hcmp 200m 5828Mi am-55f77847b7-8wqjg 162m 5790Mi ds-cts-0 6m 394Mi ds-cts-1 6m 377Mi ds-cts-2 6m 358Mi ds-idrepo-0 5043m 13779Mi ds-idrepo-1 3165m 13633Mi ds-idrepo-2 2008m 13833Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 1491m 4612Mi idm-65858d8c4c-x6slf 1104m 4595Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 418m 721Mi 14:07:45 DEBUG --- stderr --- 14:07:45 DEBUG 14:07:46 INFO 14:07:46 INFO [loop_until]: kubectl --namespace=xlou top node 14:07:46 INFO 
[loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:07:46 INFO [loop_until]: OK (rc = 0) 14:07:46 DEBUG --- stdout --- 14:07:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1361Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 229m 1% 6847Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 220m 1% 6923Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 223m 1% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1606m 10% 5923Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 785m 4% 2167Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1269m 7% 5848Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 5281m 33% 14436Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2226m 14% 14511Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3367m 21% 14296Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 485m 3% 2237Mi 3% 14:07:46 DEBUG --- stderr --- 14:07:46 DEBUG 14:08:45 INFO 14:08:45 INFO [loop_until]: kubectl --namespace=xlou top pods 14:08:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:08:45 INFO [loop_until]: OK (rc = 0) 14:08:45 DEBUG --- stdout --- 14:08:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 167m 5793Mi am-55f77847b7-6hcmp 168m 5829Mi am-55f77847b7-8wqjg 167m 5790Mi ds-cts-0 6m 394Mi ds-cts-1 7m 377Mi ds-cts-2 5m 358Mi ds-idrepo-0 4926m 13858Mi ds-idrepo-1 3872m 13851Mi ds-idrepo-2 3833m 13791Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 1413m 4618Mi idm-65858d8c4c-x6slf 1191m 4598Mi lodemon-9c5f9bf5b-bl4rx 6m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 429m 721Mi 14:08:45 DEBUG --- stderr --- 14:08:45 DEBUG 14:08:46 INFO 14:08:46 INFO [loop_until]: kubectl --namespace=xlou top node 14:08:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:08:46 INFO [loop_until]: OK (rc = 0) 14:08:46 DEBUG --- stdout --- 14:08:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1361Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 227m 1% 6845Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 224m 1% 6924Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 225m 1% 6904Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1531m 9% 5930Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 795m 5% 2167Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1325m 8% 5852Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 4884m 30% 14516Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3919m 24% 14438Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3486m 21% 14512Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 485m 3% 2237Mi 3% 14:08:46 DEBUG --- stderr --- 14:08:46 DEBUG 14:09:45 INFO 14:09:45 INFO [loop_until]: kubectl --namespace=xlou top pods 14:09:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:09:45 INFO [loop_until]: OK (rc = 0) 14:09:45 DEBUG --- stdout --- 14:09:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 205m 5797Mi am-55f77847b7-6hcmp 163m 5829Mi am-55f77847b7-8wqjg 264m 5795Mi ds-cts-0 6m 394Mi ds-cts-1 7m 377Mi ds-cts-2 5m 358Mi ds-idrepo-0 4716m 13823Mi ds-idrepo-1 2838m 13826Mi ds-idrepo-2 4043m 13560Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 1378m 4622Mi idm-65858d8c4c-x6slf 1187m 4601Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi 
overseer-0-94cd995dc-gcl5s 437m 722Mi 14:09:45 DEBUG --- stderr --- 14:09:45 DEBUG 14:09:47 INFO 14:09:47 INFO [loop_until]: kubectl --namespace=xlou top node 14:09:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:09:47 INFO [loop_until]: OK (rc = 0) 14:09:47 DEBUG --- stdout --- 14:09:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1362Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 234m 1% 6858Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 277m 1% 6925Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 259m 1% 6907Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1502m 9% 5932Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 797m 5% 2184Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1343m 8% 5853Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 3958m 24% 14487Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4010m 25% 14195Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3288m 20% 14507Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 513m 3% 2237Mi 3% 14:09:47 DEBUG --- stderr --- 14:09:47 DEBUG 14:10:45 INFO 14:10:45 INFO [loop_until]: kubectl --namespace=xlou top pods 14:10:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:10:45 INFO [loop_until]: OK (rc = 0) 14:10:45 DEBUG --- stdout --- 14:10:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 163m 5797Mi am-55f77847b7-6hcmp 170m 5829Mi am-55f77847b7-8wqjg 162m 5795Mi ds-cts-0 6m 394Mi ds-cts-1 7m 377Mi ds-cts-2 5m 358Mi ds-idrepo-0 3747m 13824Mi ds-idrepo-1 2248m 13873Mi ds-idrepo-2 3215m 13526Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 1396m 4625Mi idm-65858d8c4c-x6slf 1146m 4604Mi lodemon-9c5f9bf5b-bl4rx 4m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 415m 722Mi 14:10:45 DEBUG --- stderr --- 14:10:45 DEBUG 14:10:47 INFO 14:10:47 INFO [loop_until]: kubectl --namespace=xlou top node 14:10:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:10:47 INFO [loop_until]: OK (rc = 0) 14:10:47 DEBUG --- stdout --- 14:10:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1363Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 233m 1% 6846Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 226m 1% 6925Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 224m 1% 6907Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1573m 9% 5937Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 813m 5% 2189Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1312m 8% 5857Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 4015m 25% 14492Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3189m 20% 14182Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2049m 12% 14465Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 481m 3% 2236Mi 3% 14:10:47 DEBUG --- stderr --- 14:10:47 DEBUG 14:11:45 INFO 14:11:45 INFO [loop_until]: kubectl --namespace=xlou top pods 14:11:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:11:45 INFO [loop_until]: OK (rc = 0) 14:11:45 DEBUG --- stdout --- 14:11:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 156m 5797Mi am-55f77847b7-6hcmp 212m 5837Mi am-55f77847b7-8wqjg 162m 5796Mi ds-cts-0 6m 394Mi ds-cts-1 8m 377Mi ds-cts-2 6m 358Mi ds-idrepo-0 6398m 13747Mi ds-idrepo-1 3168m 13819Mi ds-idrepo-2 1751m 13818Mi 
end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 1377m 4629Mi idm-65858d8c4c-x6slf 1184m 4607Mi lodemon-9c5f9bf5b-bl4rx 5m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 410m 723Mi 14:11:45 DEBUG --- stderr --- 14:11:45 DEBUG 14:11:47 INFO 14:11:47 INFO [loop_until]: kubectl --namespace=xlou top node 14:11:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:11:47 INFO [loop_until]: OK (rc = 0) 14:11:47 DEBUG --- stdout --- 14:11:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 70m 0% 1363Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 270m 1% 6853Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 223m 1% 6925Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 222m 1% 6905Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1573m 9% 5940Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 781m 4% 2188Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1291m 8% 5860Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 5954m 37% 14426Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1078Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1876m 11% 14470Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3236m 20% 14370Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 484m 3% 2239Mi 3% 14:11:47 DEBUG --- stderr --- 14:11:47 DEBUG 14:12:45 INFO 14:12:45 INFO [loop_until]: kubectl --namespace=xlou top pods 14:12:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:12:45 INFO [loop_until]: OK (rc = 0) 14:12:45 DEBUG --- stdout --- 14:12:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 167m 5797Mi am-55f77847b7-6hcmp 170m 5838Mi am-55f77847b7-8wqjg 168m 5796Mi ds-cts-0 6m 397Mi ds-cts-1 7m 377Mi ds-cts-2 6m 358Mi ds-idrepo-0 5313m 13761Mi ds-idrepo-1 1611m 13826Mi ds-idrepo-2 2065m 13823Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 1392m 4635Mi idm-65858d8c4c-x6slf 1145m 4612Mi lodemon-9c5f9bf5b-bl4rx 1m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 405m 723Mi 14:12:45 DEBUG --- stderr --- 14:12:45 DEBUG 14:12:47 INFO 14:12:47 INFO [loop_until]: kubectl --namespace=xlou top node 14:12:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:12:47 INFO [loop_until]: OK (rc = 0) 14:12:47 DEBUG --- stdout --- 14:12:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1360Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 228m 1% 6854Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 228m 1% 6926Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 223m 1% 6904Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1527m 9% 5947Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 816m 5% 2185Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1281m 8% 5866Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 5178m 32% 14388Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2117m 13% 14477Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2159m 13% 14423Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 491m 3% 2236Mi 3% 14:12:47 DEBUG --- stderr --- 14:12:47 DEBUG 14:13:45 INFO 14:13:45 INFO [loop_until]: kubectl --namespace=xlou top pods 14:13:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:13:45 INFO [loop_until]: OK (rc = 0) 14:13:45 DEBUG --- stdout --- 14:13:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 162m 5797Mi am-55f77847b7-6hcmp 161m 5837Mi 
am-55f77847b7-8wqjg 208m 5798Mi ds-cts-0 6m 397Mi ds-cts-1 6m 377Mi ds-cts-2 6m 358Mi ds-idrepo-0 4065m 13846Mi ds-idrepo-1 1586m 13824Mi ds-idrepo-2 1622m 13801Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 1378m 4639Mi idm-65858d8c4c-x6slf 1147m 4616Mi lodemon-9c5f9bf5b-bl4rx 6m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 408m 723Mi 14:13:45 DEBUG --- stderr --- 14:13:45 DEBUG 14:13:47 INFO 14:13:47 INFO [loop_until]: kubectl --namespace=xlou top node 14:13:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:13:47 INFO [loop_until]: OK (rc = 0) 14:13:47 DEBUG --- stdout --- 14:13:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1362Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 227m 1% 6856Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 278m 1% 6929Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 269m 1% 6908Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1514m 9% 5949Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 775m 4% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1320m 8% 5864Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 3948m 24% 14520Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1585m 9% 14444Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2293m 14% 14471Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 480m 3% 2237Mi 3% 14:13:47 DEBUG --- stderr --- 14:13:47 DEBUG 14:14:45 INFO 14:14:45 INFO [loop_until]: kubectl --namespace=xlou top pods 14:14:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:14:45 INFO [loop_until]: OK (rc = 0) 14:14:45 DEBUG --- stdout --- 14:14:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 161m 5800Mi am-55f77847b7-6hcmp 164m 5837Mi am-55f77847b7-8wqjg 166m 5798Mi ds-cts-0 6m 397Mi ds-cts-1 6m 377Mi ds-cts-2 7m 359Mi ds-idrepo-0 6110m 13812Mi ds-idrepo-1 2402m 13876Mi ds-idrepo-2 2082m 13831Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 1376m 4643Mi idm-65858d8c4c-x6slf 1168m 4621Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 413m 724Mi 14:14:45 DEBUG --- stderr --- 14:14:45 DEBUG 14:14:47 INFO 14:14:47 INFO [loop_until]: kubectl --namespace=xlou top node 14:14:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:14:47 INFO [loop_until]: OK (rc = 0) 14:14:47 DEBUG --- stdout --- 14:14:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1360Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 230m 1% 6857Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 227m 1% 6942Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 219m 1% 6908Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1530m 9% 5955Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 802m 5% 2167Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1339m 8% 5869Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 6413m 40% 14477Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2458m 15% 14451Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2631m 16% 14517Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 485m 3% 2236Mi 3% 14:14:47 DEBUG --- stderr --- 14:14:47 DEBUG 14:15:45 INFO 14:15:45 INFO [loop_until]: kubectl --namespace=xlou top pods 14:15:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:15:45 INFO [loop_until]: OK (rc = 0) 14:15:45 
DEBUG --- stdout --- 14:15:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 165m 5799Mi am-55f77847b7-6hcmp 223m 5840Mi am-55f77847b7-8wqjg 166m 5798Mi ds-cts-0 6m 398Mi ds-cts-1 6m 377Mi ds-cts-2 5m 359Mi ds-idrepo-0 4041m 13823Mi ds-idrepo-1 2517m 13714Mi ds-idrepo-2 1760m 13835Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 1407m 4647Mi idm-65858d8c4c-x6slf 1188m 4624Mi lodemon-9c5f9bf5b-bl4rx 6m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 409m 724Mi 14:15:45 DEBUG --- stderr --- 14:15:45 DEBUG 14:15:47 INFO 14:15:47 INFO [loop_until]: kubectl --namespace=xlou top node 14:15:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:15:47 INFO [loop_until]: OK (rc = 0) 14:15:47 DEBUG --- stdout --- 14:15:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1362Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 231m 1% 6856Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 229m 1% 6934Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 226m 1% 6909Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1494m 9% 5958Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 795m 5% 2167Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1296m 8% 5873Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 4132m 26% 14503Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1768m 11% 14494Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3041m 19% 14377Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 494m 3% 2239Mi 3% 14:15:47 DEBUG --- stderr --- 14:15:47 DEBUG 14:16:45 INFO 14:16:45 INFO [loop_until]: kubectl --namespace=xlou top pods 14:16:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:16:45 INFO [loop_until]: OK (rc = 0) 14:16:45 DEBUG --- stdout --- 14:16:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 168m 5799Mi am-55f77847b7-6hcmp 172m 5840Mi am-55f77847b7-8wqjg 163m 5798Mi ds-cts-0 7m 398Mi ds-cts-1 6m 377Mi ds-cts-2 5m 359Mi ds-idrepo-0 4742m 13737Mi ds-idrepo-1 2545m 13728Mi ds-idrepo-2 2438m 13746Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 1413m 4652Mi idm-65858d8c4c-x6slf 1161m 4627Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 422m 724Mi 14:16:45 DEBUG --- stderr --- 14:16:45 DEBUG 14:16:47 INFO 14:16:47 INFO [loop_until]: kubectl --namespace=xlou top node 14:16:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:16:47 INFO [loop_until]: OK (rc = 0) 14:16:47 DEBUG --- stdout --- 14:16:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1359Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 221m 1% 6858Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 227m 1% 6926Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 227m 1% 6910Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1600m 10% 5961Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 819m 5% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1290m 8% 5874Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 4920m 30% 14354Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2859m 17% 14453Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2888m 18% 14350Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 495m 3% 2240Mi 3% 14:16:47 DEBUG --- stderr --- 14:16:47 DEBUG 14:17:46 INFO 14:17:46 INFO 
[loop_until]: kubectl --namespace=xlou top pods 14:17:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:17:46 INFO [loop_until]: OK (rc = 0) 14:17:46 DEBUG --- stdout --- 14:17:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 171m 5799Mi am-55f77847b7-6hcmp 168m 5840Mi am-55f77847b7-8wqjg 166m 5798Mi ds-cts-0 6m 398Mi ds-cts-1 6m 377Mi ds-cts-2 6m 359Mi ds-idrepo-0 4145m 13821Mi ds-idrepo-1 2974m 13881Mi ds-idrepo-2 1713m 13874Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 1394m 4656Mi idm-65858d8c4c-x6slf 1158m 4631Mi lodemon-9c5f9bf5b-bl4rx 5m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 434m 724Mi 14:17:46 DEBUG --- stderr --- 14:17:46 DEBUG 14:17:48 INFO 14:17:48 INFO [loop_until]: kubectl --namespace=xlou top node 14:17:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:17:48 INFO [loop_until]: OK (rc = 0) 14:17:48 DEBUG --- stdout --- 14:17:48 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1362Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 229m 1% 6859Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 220m 1% 6930Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 231m 1% 6912Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1579m 9% 5966Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 794m 4% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1316m 8% 5885Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 4346m 27% 14492Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2315m 14% 14477Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2564m 16% 14524Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 499m 3% 2239Mi 3% 14:17:48 DEBUG --- stderr --- 14:17:48 DEBUG 14:18:46 INFO 14:18:46 INFO [loop_until]: kubectl --namespace=xlou top pods 14:18:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:18:46 INFO [loop_until]: OK (rc = 0) 14:18:46 DEBUG --- stdout --- 14:18:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 166m 5802Mi am-55f77847b7-6hcmp 165m 5840Mi am-55f77847b7-8wqjg 165m 5800Mi ds-cts-0 5m 397Mi ds-cts-1 6m 377Mi ds-cts-2 6m 359Mi ds-idrepo-0 4507m 13628Mi ds-idrepo-1 2967m 13658Mi ds-idrepo-2 1794m 13865Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 1364m 4663Mi idm-65858d8c4c-x6slf 1188m 4636Mi lodemon-9c5f9bf5b-bl4rx 5m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 405m 725Mi 14:18:46 DEBUG --- stderr --- 14:18:46 DEBUG 14:18:48 INFO 14:18:48 INFO [loop_until]: kubectl --namespace=xlou top node 14:18:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:18:48 INFO [loop_until]: OK (rc = 0) 14:18:48 DEBUG --- stdout --- 14:18:48 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1360Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 227m 1% 6859Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 222m 1% 6933Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 224m 1% 6912Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1514m 9% 5974Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 809m 5% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1276m 8% 5887Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 4928m 31% 14303Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1738m 10% 14503Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1111Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-x4wx 2858m 17% 14224Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 480m 3% 2239Mi 3% 14:18:48 DEBUG --- stderr --- 14:18:48 DEBUG 14:19:46 INFO 14:19:46 INFO [loop_until]: kubectl --namespace=xlou top pods 14:19:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:19:46 INFO [loop_until]: OK (rc = 0) 14:19:46 DEBUG --- stdout --- 14:19:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 170m 5802Mi am-55f77847b7-6hcmp 173m 5840Mi am-55f77847b7-8wqjg 167m 5800Mi ds-cts-0 6m 397Mi ds-cts-1 6m 377Mi ds-cts-2 10m 360Mi ds-idrepo-0 4083m 13821Mi ds-idrepo-1 2453m 13870Mi ds-idrepo-2 1552m 13824Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 1437m 4665Mi idm-65858d8c4c-x6slf 1184m 4638Mi lodemon-9c5f9bf5b-bl4rx 1m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 418m 725Mi 14:19:46 DEBUG --- stderr --- 14:19:46 DEBUG 14:19:48 INFO 14:19:48 INFO [loop_until]: kubectl --namespace=xlou top node 14:19:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:19:48 INFO [loop_until]: OK (rc = 0) 14:19:48 DEBUG --- stdout --- 14:19:48 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1359Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 235m 1% 6859Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 227m 1% 6933Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 227m 1% 6910Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1584m 9% 5976Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 799m 5% 2162Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1303m 8% 5892Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 4190m 26% 14492Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3006m 18% 14358Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2523m 15% 14480Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 473m 2% 2238Mi 3% 14:19:48 DEBUG --- stderr --- 14:19:48 DEBUG 14:20:46 INFO 14:20:46 INFO [loop_until]: kubectl --namespace=xlou top pods 14:20:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:20:46 INFO [loop_until]: OK (rc = 0) 14:20:46 DEBUG --- stdout --- 14:20:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 164m 5802Mi am-55f77847b7-6hcmp 168m 5842Mi am-55f77847b7-8wqjg 171m 5800Mi ds-cts-0 6m 397Mi ds-cts-1 6m 377Mi ds-cts-2 5m 359Mi ds-idrepo-0 3949m 13826Mi ds-idrepo-1 3569m 13723Mi ds-idrepo-2 1461m 13427Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 1436m 4670Mi idm-65858d8c4c-x6slf 1132m 4641Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 412m 725Mi 14:20:46 DEBUG --- stderr --- 14:20:46 DEBUG 14:20:48 INFO 14:20:48 INFO [loop_until]: kubectl --namespace=xlou top node 14:20:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:20:48 INFO [loop_until]: OK (rc = 0) 14:20:48 DEBUG --- stdout --- 14:20:48 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1357Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 220m 1% 6862Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 229m 1% 6931Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 220m 1% 6911Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1592m 10% 5979Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 826m 5% 2163Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1340m 8% 5896Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 4076m 25% 14508Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 
52m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1785m 11% 14173Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3047m 19% 14381Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 487m 3% 2240Mi 3% 14:20:48 DEBUG --- stderr --- 14:20:48 DEBUG 14:21:46 INFO 14:21:46 INFO [loop_until]: kubectl --namespace=xlou top pods 14:21:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:21:46 INFO [loop_until]: OK (rc = 0) 14:21:46 DEBUG --- stdout --- 14:21:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 172m 5802Mi am-55f77847b7-6hcmp 169m 5842Mi am-55f77847b7-8wqjg 168m 5800Mi ds-cts-0 6m 398Mi ds-cts-1 6m 377Mi ds-cts-2 5m 359Mi ds-idrepo-0 4974m 13861Mi ds-idrepo-1 1580m 13826Mi ds-idrepo-2 1732m 13507Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 1426m 4676Mi idm-65858d8c4c-x6slf 1180m 4648Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 438m 725Mi 14:21:46 DEBUG --- stderr --- 14:21:46 DEBUG 14:21:48 INFO 14:21:48 INFO [loop_until]: kubectl --namespace=xlou top node 14:21:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:21:48 INFO [loop_until]: OK (rc = 0) 14:21:48 DEBUG --- stdout --- 14:21:48 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1358Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 229m 1% 6862Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 228m 1% 6931Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 226m 1% 6912Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1618m 10% 5984Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 795m 5% 2168Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1317m 8% 5897Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 4438m 27% 14565Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 51m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1463m 9% 14186Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1634m 10% 14483Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 504m 3% 2240Mi 3% 14:21:48 DEBUG --- stderr --- 14:21:48 DEBUG 14:22:46 INFO 14:22:46 INFO [loop_until]: kubectl --namespace=xlou top pods 14:22:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:22:46 INFO [loop_until]: OK (rc = 0) 14:22:46 DEBUG --- stdout --- 14:22:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 166m 5804Mi am-55f77847b7-6hcmp 174m 5843Mi am-55f77847b7-8wqjg 210m 5803Mi ds-cts-0 5m 397Mi ds-cts-1 6m 378Mi ds-cts-2 5m 360Mi ds-idrepo-0 3832m 13823Mi ds-idrepo-1 2756m 13882Mi ds-idrepo-2 1672m 13704Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 1380m 4679Mi idm-65858d8c4c-x6slf 1151m 4650Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 418m 726Mi 14:22:46 DEBUG --- stderr --- 14:22:46 DEBUG 14:22:48 INFO 14:22:48 INFO [loop_until]: kubectl --namespace=xlou top node 14:22:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:22:48 INFO [loop_until]: OK (rc = 0) 14:22:48 DEBUG --- stdout --- 14:22:48 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 71m 0% 1358Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 233m 1% 6859Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 227m 1% 6934Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 222m 1% 6912Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1564m 9% 5989Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 
817m 5% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1310m 8% 5905Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 4022m 25% 14509Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1627m 10% 14371Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 51m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3000m 18% 14467Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 474m 2% 2240Mi 3% 14:22:48 DEBUG --- stderr --- 14:22:48 DEBUG 14:23:46 INFO 14:23:46 INFO [loop_until]: kubectl --namespace=xlou top pods 14:23:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:23:46 INFO [loop_until]: OK (rc = 0) 14:23:46 DEBUG --- stdout --- 14:23:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 162m 5804Mi am-55f77847b7-6hcmp 169m 5842Mi am-55f77847b7-8wqjg 177m 5803Mi ds-cts-0 6m 397Mi ds-cts-1 6m 378Mi ds-cts-2 6m 360Mi ds-idrepo-0 5771m 13859Mi ds-idrepo-1 2092m 13874Mi ds-idrepo-2 1981m 13824Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 1443m 4685Mi idm-65858d8c4c-x6slf 1211m 4656Mi lodemon-9c5f9bf5b-bl4rx 6m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 427m 726Mi 14:23:46 DEBUG --- stderr --- 14:23:46 DEBUG 14:23:48 INFO 14:23:48 INFO [loop_until]: kubectl --namespace=xlou top node 14:23:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:23:48 INFO [loop_until]: OK (rc = 0) 14:23:48 DEBUG --- stdout --- 14:23:48 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1360Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 235m 1% 6862Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 242m 1% 6936Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 225m 1% 6915Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1570m 9% 5994Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 817m 5% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1359m 8% 5908Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 6098m 38% 14510Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1094Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2808m 17% 14278Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2235m 14% 14472Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 504m 3% 2243Mi 3% 14:23:48 DEBUG --- stderr --- 14:23:48 DEBUG 14:24:46 INFO 14:24:46 INFO [loop_until]: kubectl --namespace=xlou top pods 14:24:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:24:46 INFO [loop_until]: OK (rc = 0) 14:24:46 DEBUG --- stdout --- 14:24:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 171m 5804Mi am-55f77847b7-6hcmp 168m 5844Mi am-55f77847b7-8wqjg 166m 5803Mi ds-cts-0 6m 397Mi ds-cts-1 6m 378Mi ds-cts-2 6m 356Mi ds-idrepo-0 4076m 13784Mi ds-idrepo-1 2905m 13826Mi ds-idrepo-2 2383m 13819Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 1389m 4688Mi idm-65858d8c4c-x6slf 1177m 4658Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 427m 726Mi 14:24:46 DEBUG --- stderr --- 14:24:46 DEBUG 14:24:48 INFO 14:24:48 INFO [loop_until]: kubectl --namespace=xlou top node 14:24:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:24:48 INFO [loop_until]: OK (rc = 0) 14:24:48 DEBUG --- stdout --- 14:24:48 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1361Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 268m 1% 6864Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 227m 1% 6937Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-9p4b 225m 1% 6928Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1488m 9% 5999Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 811m 5% 2160Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1316m 8% 5913Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 3979m 25% 14428Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2529m 15% 14524Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2655m 16% 14497Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 499m 3% 2241Mi 3% 14:24:48 DEBUG --- stderr --- 14:24:48 DEBUG 14:25:46 INFO 14:25:46 INFO [loop_until]: kubectl --namespace=xlou top pods 14:25:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:25:46 INFO [loop_until]: OK (rc = 0) 14:25:46 DEBUG --- stdout --- 14:25:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 175m 5804Mi am-55f77847b7-6hcmp 182m 5845Mi am-55f77847b7-8wqjg 166m 5776Mi ds-cts-0 6m 398Mi ds-cts-1 6m 377Mi ds-cts-2 6m 357Mi ds-idrepo-0 6318m 13395Mi ds-idrepo-1 2690m 13825Mi ds-idrepo-2 1575m 13823Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 1439m 4706Mi idm-65858d8c4c-x6slf 1197m 4675Mi lodemon-9c5f9bf5b-bl4rx 5m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 429m 731Mi 14:25:46 DEBUG --- stderr --- 14:25:46 DEBUG 14:25:48 INFO 14:25:48 INFO [loop_until]: kubectl --namespace=xlou top node 14:25:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:25:49 INFO [loop_until]: OK (rc = 0) 14:25:49 DEBUG --- stdout --- 14:25:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1363Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 235m 1% 6866Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 230m 1% 6910Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 250m 1% 6913Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1601m 10% 6013Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 821m 5% 2163Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1361m 8% 5929Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 4936m 31% 14108Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1648m 10% 14493Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2545m 16% 14481Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 481m 3% 2247Mi 3% 14:25:49 DEBUG --- stderr --- 14:25:49 DEBUG 14:26:47 INFO 14:26:47 INFO [loop_until]: kubectl --namespace=xlou top pods 14:26:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:26:47 INFO [loop_until]: OK (rc = 0) 14:26:47 DEBUG --- stdout --- 14:26:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 202m 5806Mi am-55f77847b7-6hcmp 167m 5845Mi am-55f77847b7-8wqjg 206m 5778Mi ds-cts-0 6m 398Mi ds-cts-1 6m 377Mi ds-cts-2 5m 357Mi ds-idrepo-0 3775m 13663Mi ds-idrepo-1 2508m 13759Mi ds-idrepo-2 1707m 13807Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 1404m 4708Mi idm-65858d8c4c-x6slf 1148m 4678Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 402m 731Mi 14:26:47 DEBUG --- stderr --- 14:26:47 DEBUG 14:26:49 INFO 14:26:49 INFO [loop_until]: kubectl --namespace=xlou top node 14:26:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:26:49 INFO [loop_until]: OK (rc = 0) 14:26:49 DEBUG --- stdout --- 14:26:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% 
gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1360Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 230m 1% 6864Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 280m 1% 6909Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 271m 1% 6915Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1552m 9% 6021Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 807m 5% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1317m 8% 5929Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 4685m 29% 14376Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1077Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1468m 9% 14500Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2843m 17% 14381Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 481m 3% 2244Mi 3% 14:26:49 DEBUG --- stderr --- 14:26:49 DEBUG 14:27:47 INFO 14:27:47 INFO [loop_until]: kubectl --namespace=xlou top pods 14:27:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:27:47 INFO [loop_until]: OK (rc = 0) 14:27:47 DEBUG --- stdout --- 14:27:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 167m 5806Mi am-55f77847b7-6hcmp 170m 5845Mi am-55f77847b7-8wqjg 185m 5778Mi ds-cts-0 6m 398Mi ds-cts-1 6m 377Mi ds-cts-2 5m 356Mi ds-idrepo-0 3849m 13823Mi ds-idrepo-1 1852m 13778Mi ds-idrepo-2 1565m 13769Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 1388m 4713Mi idm-65858d8c4c-x6slf 1187m 4681Mi lodemon-9c5f9bf5b-bl4rx 1m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 410m 732Mi 14:27:47 DEBUG --- stderr --- 14:27:47 DEBUG 14:27:49 INFO 14:27:49 INFO [loop_until]: kubectl --namespace=xlou top node 14:27:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:27:49 INFO [loop_until]: OK (rc = 0) 14:27:49 DEBUG --- stdout --- 14:27:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1356Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 227m 1% 6862Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 241m 1% 6912Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 225m 1% 6915Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1543m 9% 6022Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 821m 5% 2168Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1359m 8% 5933Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 4029m 25% 14522Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1853m 11% 14461Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1691m 10% 14406Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 482m 3% 2245Mi 3% 14:27:49 DEBUG --- stderr --- 14:27:49 DEBUG 14:28:47 INFO 14:28:47 INFO [loop_until]: kubectl --namespace=xlou top pods 14:28:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:28:47 INFO [loop_until]: OK (rc = 0) 14:28:47 DEBUG --- stdout --- 14:28:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 166m 5806Mi am-55f77847b7-6hcmp 208m 5846Mi am-55f77847b7-8wqjg 158m 5778Mi ds-cts-0 6m 398Mi ds-cts-1 7m 378Mi ds-cts-2 5m 357Mi ds-idrepo-0 5614m 13610Mi ds-idrepo-1 1734m 13810Mi ds-idrepo-2 4680m 13802Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 1363m 4718Mi idm-65858d8c4c-x6slf 1182m 4687Mi lodemon-9c5f9bf5b-bl4rx 6m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 421m 732Mi 14:28:47 DEBUG --- stderr --- 14:28:47 DEBUG 14:28:49 INFO 14:28:49 INFO [loop_until]: kubectl --namespace=xlou top node 14:28:49 INFO [loop_until]: 
(max_time=180, interval=5, expected_rc=[0] 14:28:49 INFO [loop_until]: OK (rc = 0) 14:28:49 DEBUG --- stdout --- 14:28:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1359Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 266m 1% 6863Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 222m 1% 6912Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 221m 1% 6916Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1507m 9% 6029Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 795m 5% 2166Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1316m 8% 5941Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 5414m 34% 14290Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1078Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4516m 28% 14448Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1504m 9% 14495Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 459m 2% 2245Mi 3% 14:28:49 DEBUG --- stderr --- 14:28:49 DEBUG 14:29:47 INFO 14:29:47 INFO [loop_until]: kubectl --namespace=xlou top pods 14:29:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:29:47 INFO [loop_until]: OK (rc = 0) 14:29:47 DEBUG --- stdout --- 14:29:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 165m 5806Mi am-55f77847b7-6hcmp 171m 5846Mi am-55f77847b7-8wqjg 167m 5778Mi ds-cts-0 6m 398Mi ds-cts-1 6m 377Mi ds-cts-2 6m 356Mi ds-idrepo-0 4232m 13771Mi ds-idrepo-1 1610m 13834Mi ds-idrepo-2 2597m 13771Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 1372m 4722Mi idm-65858d8c4c-x6slf 1154m 4689Mi lodemon-9c5f9bf5b-bl4rx 8m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 421m 732Mi 14:29:47 DEBUG --- stderr --- 14:29:47 DEBUG 14:29:49 INFO 14:29:49 INFO [loop_until]: kubectl --namespace=xlou top node 14:29:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:29:49 INFO [loop_until]: OK (rc = 0) 14:29:49 DEBUG --- stdout --- 14:29:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1362Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 232m 1% 6863Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 227m 1% 6912Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 225m 1% 6914Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1580m 9% 6028Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 813m 5% 2163Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1305m 8% 5939Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 3832m 24% 14513Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2048m 12% 14457Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1645m 10% 14499Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 482m 3% 2247Mi 3% 14:29:49 DEBUG --- stderr --- 14:29:49 DEBUG 14:30:47 INFO 14:30:47 INFO [loop_until]: kubectl --namespace=xlou top pods 14:30:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:30:47 INFO [loop_until]: OK (rc = 0) 14:30:47 DEBUG --- stdout --- 14:30:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 164m 5806Mi am-55f77847b7-6hcmp 168m 5846Mi am-55f77847b7-8wqjg 161m 5778Mi ds-cts-0 5m 400Mi ds-cts-1 7m 379Mi ds-cts-2 6m 356Mi ds-idrepo-0 4895m 13680Mi ds-idrepo-1 1574m 13813Mi ds-idrepo-2 2001m 13823Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 1413m 4728Mi idm-65858d8c4c-x6slf 1121m 4695Mi lodemon-9c5f9bf5b-bl4rx 6m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 
420m 732Mi 14:30:47 DEBUG --- stderr --- 14:30:47 DEBUG 14:30:49 INFO 14:30:49 INFO [loop_until]: kubectl --namespace=xlou top node 14:30:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:30:49 INFO [loop_until]: OK (rc = 0) 14:30:49 DEBUG --- stdout --- 14:30:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1365Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 219m 1% 6864Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 298m 1% 6913Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 269m 1% 6914Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1558m 9% 6035Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 807m 5% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1287m 8% 5944Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 4935m 31% 14366Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2801m 17% 14416Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1641m 10% 14481Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 490m 3% 2249Mi 3% 14:30:49 DEBUG --- stderr --- 14:30:49 DEBUG 14:31:47 INFO 14:31:47 INFO [loop_until]: kubectl --namespace=xlou top pods 14:31:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:31:47 INFO [loop_until]: OK (rc = 0) 14:31:47 DEBUG --- stdout --- 14:31:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 167m 5808Mi am-55f77847b7-6hcmp 172m 5846Mi am-55f77847b7-8wqjg 171m 5780Mi ds-cts-0 6m 399Mi ds-cts-1 6m 379Mi ds-cts-2 6m 357Mi ds-idrepo-0 4148m 13701Mi ds-idrepo-1 1791m 13819Mi ds-idrepo-2 2331m 13745Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 1384m 4731Mi idm-65858d8c4c-x6slf 1207m 4698Mi lodemon-9c5f9bf5b-bl4rx 2m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 428m 732Mi 14:31:47 DEBUG --- stderr --- 14:31:47 DEBUG 14:31:49 INFO 14:31:49 INFO [loop_until]: kubectl --namespace=xlou top node 14:31:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:31:49 INFO [loop_until]: OK (rc = 0) 14:31:49 DEBUG --- stdout --- 14:31:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1363Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 229m 1% 6864Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 216m 1% 6913Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 220m 1% 6915Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1579m 9% 6035Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 818m 5% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1276m 8% 5948Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 3994m 25% 14400Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1810m 11% 14515Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1770m 11% 14498Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 478m 3% 2250Mi 3% 14:31:49 DEBUG --- stderr --- 14:31:49 DEBUG 14:32:47 INFO 14:32:47 INFO [loop_until]: kubectl --namespace=xlou top pods 14:32:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:32:47 INFO [loop_until]: OK (rc = 0) 14:32:47 DEBUG --- stdout --- 14:32:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 167m 5808Mi am-55f77847b7-6hcmp 174m 5846Mi am-55f77847b7-8wqjg 168m 5780Mi ds-cts-0 5m 401Mi ds-cts-1 6m 379Mi ds-cts-2 5m 358Mi ds-idrepo-0 4267m 13576Mi ds-idrepo-1 4957m 13696Mi ds-idrepo-2 2971m 13641Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi 
idm-65858d8c4c-gwvpj 1425m 4737Mi idm-65858d8c4c-x6slf 1211m 4702Mi lodemon-9c5f9bf5b-bl4rx 3m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 424m 733Mi 14:32:47 DEBUG --- stderr --- 14:32:47 DEBUG 14:32:49 INFO 14:32:49 INFO [loop_until]: kubectl --namespace=xlou top node 14:32:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:32:49 INFO [loop_until]: OK (rc = 0) 14:32:49 DEBUG --- stdout --- 14:32:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1363Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 236m 1% 6862Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 229m 1% 6912Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 229m 1% 6916Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1616m 10% 6047Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 839m 5% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1345m 8% 5954Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 4457m 28% 14285Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1078Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2962m 18% 14358Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3900m 24% 14457Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 504m 3% 2246Mi 3% 14:32:49 DEBUG --- stderr --- 14:32:49 DEBUG 14:33:47 INFO 14:33:47 INFO [loop_until]: kubectl --namespace=xlou top pods 14:33:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:33:47 INFO [loop_until]: OK (rc = 0) 14:33:47 DEBUG --- stdout --- 14:33:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 166m 5808Mi am-55f77847b7-6hcmp 169m 5849Mi am-55f77847b7-8wqjg 168m 5780Mi ds-cts-0 5m 401Mi ds-cts-1 6m 379Mi ds-cts-2 5m 357Mi ds-idrepo-0 3901m 13795Mi ds-idrepo-1 2840m 13819Mi ds-idrepo-2 1620m 13834Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 1435m 4740Mi idm-65858d8c4c-x6slf 1205m 4705Mi lodemon-9c5f9bf5b-bl4rx 6m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 409m 733Mi 14:33:47 DEBUG --- stderr --- 14:33:47 DEBUG 14:33:49 INFO 14:33:49 INFO [loop_until]: kubectl --namespace=xlou top node 14:33:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:33:50 INFO [loop_until]: OK (rc = 0) 14:33:50 DEBUG --- stdout --- 14:33:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1365Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 227m 1% 6868Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 227m 1% 6912Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 224m 1% 6918Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1558m 9% 6050Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 814m 5% 2162Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1324m 8% 5958Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 3973m 25% 14494Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1078Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1134Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1490m 9% 14509Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2895m 18% 14421Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 495m 3% 2248Mi 3% 14:33:50 DEBUG --- stderr --- 14:33:50 DEBUG 14:34:47 INFO 14:34:47 INFO [loop_until]: kubectl --namespace=xlou top pods 14:34:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:34:47 INFO [loop_until]: OK (rc = 0) 14:34:47 DEBUG --- stdout --- 14:34:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 17m 5808Mi am-55f77847b7-6hcmp 20m 5849Mi am-55f77847b7-8wqjg 8m 5782Mi ds-cts-0 6m 
399Mi ds-cts-1 5m 379Mi ds-cts-2 6m 357Mi ds-idrepo-0 958m 13609Mi ds-idrepo-1 217m 13724Mi ds-idrepo-2 238m 13593Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 8m 4743Mi idm-65858d8c4c-x6slf 19m 4708Mi lodemon-9c5f9bf5b-bl4rx 6m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 77m 171Mi 14:34:47 DEBUG --- stderr --- 14:34:47 DEBUG 14:34:50 INFO 14:34:50 INFO [loop_until]: kubectl --namespace=xlou top node 14:34:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:34:50 INFO [loop_until]: OK (rc = 0) 14:34:50 DEBUG --- stdout --- 14:34:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1361Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 6867Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 70m 0% 6911Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 6915Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 6051Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 5960Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 363m 2% 14121Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 267m 1% 14275Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 49m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 276m 1% 14399Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 161m 1% 1695Mi 2% 14:34:50 DEBUG --- stderr --- 14:34:50 DEBUG 14:35:48 INFO 14:35:48 INFO [loop_until]: kubectl --namespace=xlou top pods 14:35:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:35:48 INFO [loop_until]: OK (rc = 0) 14:35:48 DEBUG --- stdout --- 14:35:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 8m 5808Mi am-55f77847b7-6hcmp 7m 5848Mi am-55f77847b7-8wqjg 8m 5782Mi ds-cts-0 6m 399Mi ds-cts-1 5m 379Mi ds-cts-2 7m 357Mi ds-idrepo-0 13m 13424Mi ds-idrepo-1 9m 13724Mi ds-idrepo-2 8m 13593Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 8m 4742Mi idm-65858d8c4c-x6slf 9m 4707Mi lodemon-9c5f9bf5b-bl4rx 6m 66Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 1m 171Mi 14:35:48 DEBUG --- stderr --- 14:35:48 DEBUG 14:35:50 INFO 14:35:50 INFO [loop_until]: kubectl --namespace=xlou top node 14:35:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:35:50 INFO [loop_until]: OK (rc = 0) 14:35:50 DEBUG --- stdout --- 14:35:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1359Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 6869Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 6915Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 6917Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 6051Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 127m 0% 2167Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 68m 0% 5958Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 14123Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 60m 0% 14274Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 51m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 59m 0% 14400Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 68m 0% 1699Mi 2% 14:35:50 DEBUG --- stderr --- 14:35:50 DEBUG 127.0.0.1 - - [12/Aug/2023 14:35:57] "GET /monitoring/average?start_time=23-08-12_13:05:26&stop_time=23-08-12_13:33:56 HTTP/1.1" 200 - 14:36:48 INFO 14:36:48 INFO [loop_until]: kubectl --namespace=xlou top pods 14:36:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:36:48 INFO 
[loop_until]: OK (rc = 0) 14:36:48 DEBUG --- stdout --- 14:36:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 4Mi am-55f77847b7-5g27b 9m 5808Mi am-55f77847b7-6hcmp 9m 5849Mi am-55f77847b7-8wqjg 8m 5782Mi ds-cts-0 5m 399Mi ds-cts-1 4m 379Mi ds-cts-2 6m 358Mi ds-idrepo-0 13m 13423Mi ds-idrepo-1 9m 13725Mi ds-idrepo-2 9m 13593Mi end-user-ui-6845bc78c7-hprgv 1m 4Mi idm-65858d8c4c-gwvpj 8m 4742Mi idm-65858d8c4c-x6slf 8m 4707Mi lodemon-9c5f9bf5b-bl4rx 4m 67Mi login-ui-74d6fb46c-ms8nm 1m 3Mi overseer-0-94cd995dc-gcl5s 2m 171Mi 14:36:48 DEBUG --- stderr --- 14:36:48 DEBUG 14:36:50 INFO 14:36:50 INFO [loop_until]: kubectl --namespace=xlou top node 14:36:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:36:50 INFO [loop_until]: OK (rc = 0) 14:36:50 DEBUG --- stdout --- 14:36:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 84m 0% 1358Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 74m 0% 6867Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 77m 0% 6914Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 71m 0% 6916Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 87m 0% 6064Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 143m 0% 2151Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 81m 0% 5959Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 14119Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1077Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 63m 0% 14278Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 49m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 66m 0% 14398Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 79m 0% 1694Mi 2% 14:36:50 DEBUG --- stderr --- 14:36:50 DEBUG 14:37:48 INFO 14:37:48 INFO [loop_until]: kubectl --namespace=xlou top pods 14:37:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:37:48 INFO [loop_until]: OK (rc = 0) 14:37:48 DEBUG --- stdout --- 14:37:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 5Mi am-55f77847b7-5g27b 9m 5808Mi am-55f77847b7-6hcmp 7m 5849Mi am-55f77847b7-8wqjg 9m 5782Mi ds-cts-0 6m 400Mi ds-cts-1 5m 379Mi ds-cts-2 6m 357Mi ds-idrepo-0 259m 13424Mi ds-idrepo-1 151m 13724Mi ds-idrepo-2 182m 13593Mi end-user-ui-6845bc78c7-hprgv 1m 5Mi idm-65858d8c4c-gwvpj 8m 4742Mi idm-65858d8c4c-x6slf 8m 4707Mi lodemon-9c5f9bf5b-bl4rx 2m 67Mi login-ui-74d6fb46c-ms8nm 1m 4Mi overseer-0-94cd995dc-gcl5s 939m 296Mi 14:37:48 DEBUG --- stderr --- 14:37:48 DEBUG 14:37:50 INFO 14:37:50 INFO [loop_until]: kubectl --namespace=xlou top node 14:37:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:37:50 INFO [loop_until]: OK (rc = 0) 14:37:50 DEBUG --- stdout --- 14:37:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1365Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 6867Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 6911Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 6918Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 6054Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 134m 0% 2162Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 72m 0% 5960Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 283m 1% 14123Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 247m 1% 14281Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 50m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 181m 1% 14408Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 802m 5% 1817Mi 3% 14:37:50 DEBUG --- stderr --- 14:37:50 DEBUG 14:38:48 INFO 14:38:48 INFO [loop_until]: kubectl --namespace=xlou 
top pods 14:38:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:38:48 INFO [loop_until]: OK (rc = 0) 14:38:48 DEBUG --- stdout --- 14:38:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 5Mi am-55f77847b7-5g27b 8m 5808Mi am-55f77847b7-6hcmp 7m 5848Mi am-55f77847b7-8wqjg 8m 5782Mi ds-cts-0 5m 400Mi ds-cts-1 5m 379Mi ds-cts-2 5m 358Mi ds-idrepo-0 14m 13424Mi ds-idrepo-1 9m 13724Mi ds-idrepo-2 8m 13594Mi end-user-ui-6845bc78c7-hprgv 1m 5Mi idm-65858d8c4c-gwvpj 8m 4742Mi idm-65858d8c4c-x6slf 8m 4707Mi lodemon-9c5f9bf5b-bl4rx 7m 67Mi login-ui-74d6fb46c-ms8nm 1m 4Mi overseer-0-94cd995dc-gcl5s 1014m 705Mi 14:38:48 DEBUG --- stderr --- 14:38:48 DEBUG 14:38:50 INFO 14:38:50 INFO [loop_until]: kubectl --namespace=xlou top node 14:38:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:38:50 INFO [loop_until]: OK (rc = 0) 14:38:50 DEBUG --- stdout --- 14:38:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1362Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 6870Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 6912Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 6918Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 6055Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 128m 0% 2157Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 5962Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 14122Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 61m 0% 14279Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 49m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 62m 0% 14402Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1018m 6% 1964Mi 3% 14:38:50 DEBUG --- stderr --- 14:38:50 DEBUG 14:39:48 INFO 14:39:48 INFO [loop_until]: kubectl --namespace=xlou top pods 14:39:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:39:48 INFO [loop_until]: OK (rc = 0) 14:39:48 DEBUG --- stdout --- 14:39:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-vlcrt 1m 5Mi am-55f77847b7-5g27b 9m 5808Mi am-55f77847b7-6hcmp 7m 5849Mi am-55f77847b7-8wqjg 8m 5782Mi ds-cts-0 6m 400Mi ds-cts-1 5m 379Mi ds-cts-2 5m 357Mi ds-idrepo-0 13m 13425Mi ds-idrepo-1 11m 13724Mi ds-idrepo-2 9m 13594Mi end-user-ui-6845bc78c7-hprgv 1m 5Mi idm-65858d8c4c-gwvpj 7m 4741Mi idm-65858d8c4c-x6slf 8m 4707Mi lodemon-9c5f9bf5b-bl4rx 7m 67Mi login-ui-74d6fb46c-ms8nm 1m 4Mi overseer-0-94cd995dc-gcl5s 651m 549Mi 14:39:48 DEBUG --- stderr --- 14:39:48 DEBUG 14:39:50 INFO 14:39:50 INFO [loop_until]: kubectl --namespace=xlou top node 14:39:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 14:39:50 INFO [loop_until]: OK (rc = 0) 14:39:50 DEBUG --- stdout --- 14:39:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 84m 0% 1362Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 6870Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 6915Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 6918Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 6055Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 132m 0% 2157Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 5962Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 14124Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 62m 0% 14280Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 52m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 59m 0% 14404Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 761m 4% 2081Mi 3% 14:39:50 DEBUG --- 
stderr ---
14:39:50 DEBUG
14:40:21 INFO Finished: True
14:40:21 INFO Waiting for threads to register finish flag
14:40:50 INFO Done. Have a nice day! :)
127.0.0.1 - - [12/Aug/2023 14:40:50] "GET /monitoring/stop HTTP/1.1" 200 -
14:40:53 INFO File /tmp/lodemon_data-23-08-12_12:01:53/Cpu_cores_used_per_pod.json does not exist. Skipping...
14:40:53 INFO File /tmp/lodemon_data-23-08-12_12:01:53/Memory_usage_per_pod.json does not exist. Skipping...
14:40:53 INFO File /tmp/lodemon_data-23-08-12_12:01:53/Disk_tps_read_per_pod.json does not exist. Skipping...
14:40:53 INFO File /tmp/lodemon_data-23-08-12_12:01:53/Disk_tps_writes_per_pod.json does not exist. Skipping...
14:40:53 INFO File /tmp/lodemon_data-23-08-12_12:01:53/Cpu_cores_used_per_node.json does not exist. Skipping...
14:40:53 INFO File /tmp/lodemon_data-23-08-12_12:01:53/Memory_usage_used_per_node.json does not exist. Skipping...
14:40:53 INFO File /tmp/lodemon_data-23-08-12_12:01:53/Cpu_iowait_per_node.json does not exist. Skipping...
14:40:53 INFO File /tmp/lodemon_data-23-08-12_12:01:53/Network_receive_per_node.json does not exist. Skipping...
14:40:53 INFO File /tmp/lodemon_data-23-08-12_12:01:53/Network_transmit_per_node.json does not exist. Skipping...
14:40:53 INFO File /tmp/lodemon_data-23-08-12_12:01:53/am_cts_task_count_token_session.json does not exist. Skipping...
14:40:53 INFO File /tmp/lodemon_data-23-08-12_12:01:53/am_authentication_rate.json does not exist. Skipping...
14:40:53 INFO File /tmp/lodemon_data-23-08-12_12:01:53/ds_db_cache_misses_internal_nodes(backend=amCts).json does not exist. Skipping...
14:40:53 INFO File /tmp/lodemon_data-23-08-12_12:01:53/ds_db_cache_misses_internal_nodes(backend=amIdentityStore).json does not exist. Skipping...
14:40:53 INFO File /tmp/lodemon_data-23-08-12_12:01:53/ds_db_cache_misses_internal_nodes(backend=cfgStore).json does not exist. Skipping...
14:40:53 INFO File /tmp/lodemon_data-23-08-12_12:01:53/ds_db_cache_misses_internal_nodes(backend=idmRepo).json does not exist. Skipping...
14:40:53 INFO File /tmp/lodemon_data-23-08-12_12:01:53/am_authentication_count_per_pod.json does not exist. Skipping...
14:40:53 INFO File /tmp/lodemon_data-23-08-12_12:01:53/Cts_reaper_Deletion_count.json does not exist. Skipping...
14:40:53 INFO File /tmp/lodemon_data-23-08-12_12:01:53/AM_oauth2_authorization_codes.json does not exist. Skipping...
14:40:53 INFO File /tmp/lodemon_data-23-08-12_12:01:53/ds_backend_entries_deleted_amCts.json does not exist. Skipping...
14:40:53 INFO File /tmp/lodemon_data-23-08-12_12:01:53/ds_pods_replication_delay.json does not exist. Skipping...
14:40:53 INFO File /tmp/lodemon_data-23-08-12_12:01:53/am_cts_reaper_cache_size.json does not exist. Skipping...
14:40:53 INFO File /tmp/lodemon_data-23-08-12_12:01:53/am_cts_reaper_search_seconds_total.json does not exist. Skipping...
14:40:53 INFO File /tmp/lodemon_data-23-08-12_12:01:53/ds_replication_replica_replayed_updates_conflicts_resolved.json does not exist. Skipping...
14:40:53 INFO File /tmp/lodemon_data-23-08-12_12:01:53/node_disk_read_bytes_total.json does not exist. Skipping...
14:40:53 INFO File /tmp/lodemon_data-23-08-12_12:01:53/node_disk_written_bytes_total.json does not exist. Skipping...
14:40:53 INFO File /tmp/lodemon_data-23-08-12_12:01:53/ds_backend_entry_count.json does not exist. Skipping...
14:40:53 INFO File /tmp/lodemon_data-23-08-12_12:01:53/node_disk_io_time_seconds_total.json does not exist. Skipping...
127.0.0.1 - - [12/Aug/2023 14:40:55] "GET /monitoring/process HTTP/1.1" 200 -
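
For reference, every "[loop_until]" entry in the log above follows the same polling pattern: run a kubectl command, compare its return code against expected_rc, and retry every `interval` seconds until it succeeds or `max_time` seconds have elapsed, logging stdout and stderr at DEBUG level on success. The Python sketch below is a minimal, hypothetical reconstruction of that pattern; only the parameter names max_time, interval, and expected_rc appear in the log itself, and the real implementation in /lodestar/scripts/lodemon_run.py is not shown here, so the function signature, logging format, and error handling are assumptions for illustration only.

    import logging
    import subprocess
    import time

    log = logging.getLogger("lodemon")

    def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,)):
        """Hypothetical sketch: re-run a shell command until its return code
        is in expected_rc or max_time seconds have passed (not the actual
        lodemon implementation)."""
        log.info("[loop_until]: %s", cmd)
        log.info("[loop_until]: (max_time=%s, interval=%s, expected_rc=%s)",
                 max_time, interval, list(expected_rc))
        deadline = time.monotonic() + max_time
        while True:
            result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
            if result.returncode in expected_rc:
                log.info("[loop_until]: OK (rc = %s)", result.returncode)
                log.debug("--- stdout ---\n%s", result.stdout)
                log.debug("--- stderr ---\n%s", result.stderr)
                return result
            if time.monotonic() >= deadline:
                raise TimeoutError(f"{cmd!r} did not return rc in {expected_rc} within {max_time}s")
            time.sleep(interval)

    # Example: the two probes issued roughly once a minute in the log above.
    # loop_until("kubectl --namespace=xlou top pods")
    # loop_until("kubectl --namespace=xlou top node")

Polling with a hard deadline like this lets transient metrics-server hiccups pass without failing the run, while still bounding how long the monitor waits for each sample.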