====================================================================================================
=========================================   Pod describe  =========================================
====================================================================================================
Name:             lodemon-5798c88b8f-k2sv4
Namespace:        xlou
Priority:         0
Node:             gke-xlou-cdm-default-pool-f05840a3-2nsn/10.142.0.46
Start Time:       Sat, 12 Aug 2023 17:58:02 +0000
Labels:           app=lodemon
                  app.kubernetes.io/name=lodemon
                  pod-template-hash=5798c88b8f
                  skaffold.dev/run-id=21abdef2-4e14-4124-a9d1-384e855675b6
Annotations:
Status:           Running
IP:               10.106.45.80
IPs:
  IP:             10.106.45.80
Controlled By:    ReplicaSet/lodemon-5798c88b8f
Containers:
  lodemon:
    Container ID:   containerd://c827e0223ff796dae7568a56ac3dcfaba8302025dd9585c4bc40cd99d9d3f8bd
    Image:          gcr.io/engineeringpit/lodestar-images/lodestarbox:6c23848450de3f8e82f0a619a86abcd91fc890c6
    Image ID:       gcr.io/engineeringpit/lodestar-images/lodestarbox@sha256:f419b98ce988c016f788d178b318b601ed56b4ebb6e1a8df68b3ff2a986af79d
    Port:           8080/TCP
    Host Port:      0/TCP
    Command:
      python3
    Args:
      /lodestar/scripts/lodemon_run.py
      -W
      default
    State:          Running
      Started:      Sat, 12 Aug 2023 17:58:05 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  2Gi
    Requests:
      cpu:     1
      memory:  1Gi
    Liveness:   exec [cat /tmp/lodemon_alive] delay=20s timeout=1s period=10s #success=1 #failure=3
    Readiness:  exec [cat /tmp/lodemon_alive] delay=20s timeout=1s period=10s #success=1 #failure=3
    Environment:
      SKAFFOLD_PROFILE:  medium
    Mounts:
      /lodestar/config/config.yaml from config (rw,path="config.yaml")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-65lmx (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      lodemon-config
    Optional:  false
  kube-api-access-65lmx:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
====================================================================================================
===========================================   Pod logs  ===========================================
====================================================================================================
18:58:06 INFO 18:58:06 INFO --------------------- Get expected number of pods --------------------- 18:58:06 INFO 18:58:06 INFO [loop_until]: kubectl --namespace=xlou get deployments --selector app=am --output jsonpath={.items[*].spec.replicas} 18:58:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:06 INFO [loop_until]: OK (rc = 0) 18:58:06 DEBUG --- stdout --- 18:58:06 DEBUG 3 18:58:06 DEBUG --- stderr --- 18:58:06 DEBUG 18:58:06 INFO 18:58:06 INFO ---------------------------- Get pod list ---------------------------- 18:58:06 INFO 18:58:06 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=am --output jsonpath={.items[*].metadata.name} 18:58:06 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0] 18:58:06 INFO [loop_until]: OK (rc = 0) 18:58:06 DEBUG --- stdout --- 18:58:06 DEBUG am-55f77847b7-8t2dm am-55f77847b7-dr27z am-55f77847b7-fp459 18:58:06 DEBUG --- stderr --- 18:58:06 DEBUG 18:58:06 INFO 18:58:06 INFO -------------- Check pod
am-55f77847b7-8t2dm is running -------------- 18:58:06 INFO 18:58:06 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-8t2dm -o=jsonpath={.status.phase} | grep "Running" 18:58:06 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 18:58:06 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 18:58:06 INFO [loop_until]: OK (rc = 0) 18:58:06 DEBUG --- stdout --- 18:58:06 DEBUG Running 18:58:06 DEBUG --- stderr --- 18:58:06 DEBUG 18:58:06 INFO 18:58:06 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-8t2dm -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 18:58:06 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 18:58:06 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 18:58:06 INFO [loop_until]: OK (rc = 0) 18:58:06 DEBUG --- stdout --- 18:58:06 DEBUG true 18:58:06 DEBUG --- stderr --- 18:58:06 DEBUG 18:58:06 INFO 18:58:06 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-8t2dm --output jsonpath={.status.startTime} 18:58:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:06 INFO [loop_until]: OK (rc = 0) 18:58:06 DEBUG --- stdout --- 18:58:06 DEBUG 2023-08-12T17:48:53Z 18:58:06 DEBUG --- stderr --- 18:58:06 DEBUG 18:58:06 INFO 18:58:06 INFO ------- Check pod am-55f77847b7-8t2dm filesystem is accessible ------- 18:58:06 INFO 18:58:06 INFO [loop_until]: kubectl --namespace=xlou exec am-55f77847b7-8t2dm --container openam -- ls / | grep "bin" 18:58:06 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 18:58:06 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 18:58:06 INFO [loop_until]: OK (rc = 0) 18:58:06 DEBUG --- stdout --- 18:58:06 DEBUG bin boot dev entrypoint.sh etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var 18:58:06 DEBUG --- stderr --- 18:58:06 DEBUG 18:58:06 INFO 18:58:06 INFO ------------- Check pod am-55f77847b7-8t2dm restart count ------------- 18:58:06 INFO 18:58:06 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-8t2dm --output jsonpath={.status.containerStatuses[*].restartCount} 18:58:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:06 INFO [loop_until]: OK (rc = 0) 18:58:06 DEBUG --- stdout --- 18:58:06 DEBUG 0 18:58:06 DEBUG --- stderr --- 18:58:06 DEBUG 18:58:06 INFO Pod am-55f77847b7-8t2dm has been restarted 0 times. 
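The [loop_until] entries above come from a polling helper: each check shells out to kubectl (optionally piped through grep) and is retried on a fixed interval until it returns an expected exit code or max_time elapses. Below is a minimal sketch of such a helper, assuming a subprocess-based implementation; the name, signature, and defaults are assumptions, not lodemon's actual code.

# Hypothetical sketch of the polling helper implied by the "[loop_until]"
# log entries above; lodemon's real implementation is not shown in this log.
import subprocess
import time

def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,)):
    """Run a shell command until its return code is in expected_rc, retrying
    every `interval` seconds for at most `max_time` seconds."""
    deadline = time.monotonic() + max_time
    while True:
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        if result.returncode in expected_rc:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"command did not succeed within {max_time}s: {cmd}")
        time.sleep(interval)

# Example mirroring the first check above (pod phase must be "Running"):
# loop_until('kubectl --namespace=xlou get pods am-55f77847b7-8t2dm '
#            '-o=jsonpath={.status.phase} | grep "Running"',
#            max_time=360, interval=5)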
18:58:06 INFO 18:58:06 INFO -------------- Check pod am-55f77847b7-dr27z is running -------------- 18:58:06 INFO 18:58:06 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-dr27z -o=jsonpath={.status.phase} | grep "Running" 18:58:06 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 18:58:06 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 18:58:06 INFO [loop_until]: OK (rc = 0) 18:58:06 DEBUG --- stdout --- 18:58:06 DEBUG Running 18:58:06 DEBUG --- stderr --- 18:58:06 DEBUG 18:58:06 INFO 18:58:06 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-dr27z -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 18:58:06 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 18:58:06 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 18:58:06 INFO [loop_until]: OK (rc = 0) 18:58:06 DEBUG --- stdout --- 18:58:06 DEBUG true 18:58:06 DEBUG --- stderr --- 18:58:06 DEBUG 18:58:06 INFO 18:58:06 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-dr27z --output jsonpath={.status.startTime} 18:58:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:07 INFO [loop_until]: OK (rc = 0) 18:58:07 DEBUG --- stdout --- 18:58:07 DEBUG 2023-08-12T17:48:53Z 18:58:07 DEBUG --- stderr --- 18:58:07 DEBUG 18:58:07 INFO 18:58:07 INFO ------- Check pod am-55f77847b7-dr27z filesystem is accessible ------- 18:58:07 INFO 18:58:07 INFO [loop_until]: kubectl --namespace=xlou exec am-55f77847b7-dr27z --container openam -- ls / | grep "bin" 18:58:07 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 18:58:07 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 18:58:07 INFO [loop_until]: OK (rc = 0) 18:58:07 DEBUG --- stdout --- 18:58:07 DEBUG bin boot dev entrypoint.sh etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var 18:58:07 DEBUG --- stderr --- 18:58:07 DEBUG 18:58:07 INFO 18:58:07 INFO ------------- Check pod am-55f77847b7-dr27z restart count ------------- 18:58:07 INFO 18:58:07 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-dr27z --output jsonpath={.status.containerStatuses[*].restartCount} 18:58:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:07 INFO [loop_until]: OK (rc = 0) 18:58:07 DEBUG --- stdout --- 18:58:07 DEBUG 0 18:58:07 DEBUG --- stderr --- 18:58:07 DEBUG 18:58:07 INFO Pod am-55f77847b7-dr27z has been restarted 0 times. 
18:58:07 INFO 18:58:07 INFO -------------- Check pod am-55f77847b7-fp459 is running -------------- 18:58:07 INFO 18:58:07 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-fp459 -o=jsonpath={.status.phase} | grep "Running" 18:58:07 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 18:58:07 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 18:58:07 INFO [loop_until]: OK (rc = 0) 18:58:07 DEBUG --- stdout --- 18:58:07 DEBUG Running 18:58:07 DEBUG --- stderr --- 18:58:07 DEBUG 18:58:07 INFO 18:58:07 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-fp459 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 18:58:07 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 18:58:07 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 18:58:07 INFO [loop_until]: OK (rc = 0) 18:58:07 DEBUG --- stdout --- 18:58:07 DEBUG true 18:58:07 DEBUG --- stderr --- 18:58:07 DEBUG 18:58:07 INFO 18:58:07 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-fp459 --output jsonpath={.status.startTime} 18:58:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:07 INFO [loop_until]: OK (rc = 0) 18:58:07 DEBUG --- stdout --- 18:58:07 DEBUG 2023-08-12T17:48:53Z 18:58:07 DEBUG --- stderr --- 18:58:07 DEBUG 18:58:07 INFO 18:58:07 INFO ------- Check pod am-55f77847b7-fp459 filesystem is accessible ------- 18:58:07 INFO 18:58:07 INFO [loop_until]: kubectl --namespace=xlou exec am-55f77847b7-fp459 --container openam -- ls / | grep "bin" 18:58:07 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 18:58:07 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 18:58:07 INFO [loop_until]: OK (rc = 0) 18:58:07 DEBUG --- stdout --- 18:58:07 DEBUG bin boot dev entrypoint.sh etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var 18:58:07 DEBUG --- stderr --- 18:58:07 DEBUG 18:58:07 INFO 18:58:07 INFO ------------- Check pod am-55f77847b7-fp459 restart count ------------- 18:58:07 INFO 18:58:07 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-fp459 --output jsonpath={.status.containerStatuses[*].restartCount} 18:58:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:07 INFO [loop_until]: OK (rc = 0) 18:58:07 DEBUG --- stdout --- 18:58:07 DEBUG 0 18:58:07 DEBUG --- stderr --- 18:58:07 DEBUG 18:58:07 INFO Pod am-55f77847b7-fp459 has been restarted 0 times. 
18:58:07 INFO 18:58:07 INFO --------------------- Get expected number of pods --------------------- 18:58:07 INFO 18:58:07 INFO [loop_until]: kubectl --namespace=xlou get deployment --selector app=idm --output jsonpath={.items[*].spec.replicas} 18:58:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:07 INFO [loop_until]: OK (rc = 0) 18:58:07 DEBUG --- stdout --- 18:58:07 DEBUG 2 18:58:07 DEBUG --- stderr --- 18:58:07 DEBUG 18:58:07 INFO 18:58:07 INFO ---------------------------- Get pod list ---------------------------- 18:58:07 INFO 18:58:07 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=idm --output jsonpath={.items[*].metadata.name} 18:58:07 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0] 18:58:07 INFO [loop_until]: OK (rc = 0) 18:58:07 DEBUG --- stdout --- 18:58:07 DEBUG idm-65858d8c4c-2grp9 idm-65858d8c4c-4qc5l 18:58:07 DEBUG --- stderr --- 18:58:07 DEBUG 18:58:07 INFO 18:58:07 INFO -------------- Check pod idm-65858d8c4c-2grp9 is running -------------- 18:58:07 INFO 18:58:07 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-2grp9 -o=jsonpath={.status.phase} | grep "Running" 18:58:07 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 18:58:07 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 18:58:07 INFO [loop_until]: OK (rc = 0) 18:58:07 DEBUG --- stdout --- 18:58:07 DEBUG Running 18:58:07 DEBUG --- stderr --- 18:58:07 DEBUG 18:58:07 INFO 18:58:07 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-2grp9 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 18:58:07 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 18:58:07 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 18:58:07 INFO [loop_until]: OK (rc = 0) 18:58:07 DEBUG --- stdout --- 18:58:07 DEBUG true 18:58:07 DEBUG --- stderr --- 18:58:07 DEBUG 18:58:07 INFO 18:58:07 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-2grp9 --output jsonpath={.status.startTime} 18:58:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:07 INFO [loop_until]: OK (rc = 0) 18:58:07 DEBUG --- stdout --- 18:58:07 DEBUG 2023-08-12T17:48:53Z 18:58:07 DEBUG --- stderr --- 18:58:07 DEBUG 18:58:07 INFO 18:58:07 INFO ------- Check pod idm-65858d8c4c-2grp9 filesystem is accessible ------- 18:58:07 INFO 18:58:07 INFO [loop_until]: kubectl --namespace=xlou exec idm-65858d8c4c-2grp9 --container openidm -- ls / | grep "bin" 18:58:07 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 18:58:07 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 18:58:07 INFO [loop_until]: OK (rc = 0) 18:58:07 DEBUG --- stdout --- 18:58:07 DEBUG Dockerfile.java-11 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 18:58:07 DEBUG --- stderr --- 18:58:07 DEBUG 18:58:07 INFO 18:58:07 INFO ------------ Check pod idm-65858d8c4c-2grp9 restart count ------------ 18:58:07 INFO 18:58:07 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-2grp9 --output jsonpath={.status.containerStatuses[*].restartCount} 18:58:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:08 INFO [loop_until]: OK (rc = 0) 18:58:08 DEBUG --- stdout --- 18:58:08 DEBUG 0 18:58:08 DEBUG --- stderr --- 18:58:08 DEBUG 18:58:08 INFO Pod idm-65858d8c4c-2grp9 has been restarted 0 times. 
18:58:08 INFO 18:58:08 INFO -------------- Check pod idm-65858d8c4c-4qc5l is running -------------- 18:58:08 INFO 18:58:08 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-4qc5l -o=jsonpath={.status.phase} | grep "Running" 18:58:08 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 18:58:08 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 18:58:08 INFO [loop_until]: OK (rc = 0) 18:58:08 DEBUG --- stdout --- 18:58:08 DEBUG Running 18:58:08 DEBUG --- stderr --- 18:58:08 DEBUG 18:58:08 INFO 18:58:08 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-4qc5l -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 18:58:08 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 18:58:08 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 18:58:08 INFO [loop_until]: OK (rc = 0) 18:58:08 DEBUG --- stdout --- 18:58:08 DEBUG true 18:58:08 DEBUG --- stderr --- 18:58:08 DEBUG 18:58:08 INFO 18:58:08 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-4qc5l --output jsonpath={.status.startTime} 18:58:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:08 INFO [loop_until]: OK (rc = 0) 18:58:08 DEBUG --- stdout --- 18:58:08 DEBUG 2023-08-12T17:48:53Z 18:58:08 DEBUG --- stderr --- 18:58:08 DEBUG 18:58:08 INFO 18:58:08 INFO ------- Check pod idm-65858d8c4c-4qc5l filesystem is accessible ------- 18:58:08 INFO 18:58:08 INFO [loop_until]: kubectl --namespace=xlou exec idm-65858d8c4c-4qc5l --container openidm -- ls / | grep "bin" 18:58:08 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 18:58:08 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 18:58:08 INFO [loop_until]: OK (rc = 0) 18:58:08 DEBUG --- stdout --- 18:58:08 DEBUG Dockerfile.java-11 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 18:58:08 DEBUG --- stderr --- 18:58:08 DEBUG 18:58:08 INFO 18:58:08 INFO ------------ Check pod idm-65858d8c4c-4qc5l restart count ------------ 18:58:08 INFO 18:58:08 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-4qc5l --output jsonpath={.status.containerStatuses[*].restartCount} 18:58:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:08 INFO [loop_until]: OK (rc = 0) 18:58:08 DEBUG --- stdout --- 18:58:08 DEBUG 0 18:58:08 DEBUG --- stderr --- 18:58:08 DEBUG 18:58:08 INFO Pod idm-65858d8c4c-4qc5l has been restarted 0 times. 
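Each application section starts by reading spec.replicas from the matching Deployment (or, below, StatefulSet) and then listing the pods selected by the same label so the two can be compared. A sketch of that step, reusing the assumed loop_until helper above; the function name and return shape are illustrative only.

# Hypothetical sketch of the "Get expected number of pods" / "Get pod list"
# steps: read spec.replicas for a selector, then list the matching pods.
# Assumes a single Deployment/StatefulSet matches the selector, as in this log.
def expected_and_actual_pods(namespace, kind, selector):
    expected = loop_until(
        f"kubectl --namespace={namespace} get {kind} --selector {selector} "
        "--output jsonpath={.items[*].spec.replicas}").stdout.strip()
    pods = loop_until(
        f"kubectl --namespace={namespace} get pods --selector {selector} "
        "--output jsonpath={.items[*].metadata.name}",
        interval=10).stdout.split()
    return int(expected), pods

# e.g. expected_and_actual_pods("xlou", "deployments", "app=idm")
#      -> (2, ["idm-65858d8c4c-2grp9", "idm-65858d8c4c-4qc5l"])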
18:58:08 INFO 18:58:08 INFO --------------------- Get expected number of pods --------------------- 18:58:08 INFO 18:58:08 INFO [loop_until]: kubectl --namespace=xlou get statefulsets --selector app=ds-idrepo --output jsonpath={.items[*].spec.replicas} 18:58:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:08 INFO [loop_until]: OK (rc = 0) 18:58:08 DEBUG --- stdout --- 18:58:08 DEBUG 3 18:58:08 DEBUG --- stderr --- 18:58:08 DEBUG 18:58:08 INFO 18:58:08 INFO ---------------------------- Get pod list ---------------------------- 18:58:08 INFO 18:58:08 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=ds-idrepo --output jsonpath={.items[*].metadata.name} 18:58:08 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0] 18:58:08 INFO [loop_until]: OK (rc = 0) 18:58:08 DEBUG --- stdout --- 18:58:08 DEBUG ds-idrepo-0 ds-idrepo-1 ds-idrepo-2 18:58:08 DEBUG --- stderr --- 18:58:08 DEBUG 18:58:08 INFO 18:58:08 INFO ------------------ Check pod ds-idrepo-0 is running ------------------ 18:58:08 INFO 18:58:08 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.phase} | grep "Running" 18:58:08 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 18:58:08 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 18:58:08 INFO [loop_until]: OK (rc = 0) 18:58:08 DEBUG --- stdout --- 18:58:08 DEBUG Running 18:58:08 DEBUG --- stderr --- 18:58:08 DEBUG 18:58:08 INFO 18:58:08 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 18:58:08 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 18:58:08 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 18:58:08 INFO [loop_until]: OK (rc = 0) 18:58:08 DEBUG --- stdout --- 18:58:08 DEBUG true 18:58:08 DEBUG --- stderr --- 18:58:08 DEBUG 18:58:08 INFO 18:58:08 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 --output jsonpath={.status.startTime} 18:58:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:08 INFO [loop_until]: OK (rc = 0) 18:58:08 DEBUG --- stdout --- 18:58:08 DEBUG 2023-08-12T17:14:49Z 18:58:08 DEBUG --- stderr --- 18:58:08 DEBUG 18:58:08 INFO 18:58:08 INFO ----------- Check pod ds-idrepo-0 filesystem is accessible ----------- 18:58:08 INFO 18:58:08 INFO [loop_until]: kubectl --namespace=xlou exec ds-idrepo-0 --container ds -- ls / | grep "bin" 18:58:08 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 18:58:08 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 18:58:08 INFO [loop_until]: OK (rc = 0) 18:58:08 DEBUG --- stdout --- 18:58:08 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 18:58:08 DEBUG --- stderr --- 18:58:08 DEBUG 18:58:08 INFO 18:58:08 INFO ----------------- Check pod ds-idrepo-0 restart count ----------------- 18:58:08 INFO 18:58:08 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 --output jsonpath={.status.containerStatuses[*].restartCount} 18:58:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:08 INFO [loop_until]: OK (rc = 0) 18:58:08 DEBUG --- stdout --- 18:58:08 DEBUG 0 18:58:08 DEBUG --- stderr --- 18:58:08 DEBUG 18:58:08 INFO Pod ds-idrepo-0 has been restarted 0 times. 
18:58:08 INFO 18:58:08 INFO ------------------ Check pod ds-idrepo-1 is running ------------------ 18:58:08 INFO 18:58:08 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.phase} | grep "Running" 18:58:08 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 18:58:08 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 18:58:08 INFO [loop_until]: OK (rc = 0) 18:58:08 DEBUG --- stdout --- 18:58:08 DEBUG Running 18:58:08 DEBUG --- stderr --- 18:58:08 DEBUG 18:58:08 INFO 18:58:08 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 18:58:08 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 18:58:08 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 18:58:08 INFO [loop_until]: OK (rc = 0) 18:58:08 DEBUG --- stdout --- 18:58:08 DEBUG true 18:58:08 DEBUG --- stderr --- 18:58:08 DEBUG 18:58:08 INFO 18:58:08 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 --output jsonpath={.status.startTime} 18:58:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:08 INFO [loop_until]: OK (rc = 0) 18:58:08 DEBUG --- stdout --- 18:58:08 DEBUG 2023-08-12T17:26:59Z 18:58:08 DEBUG --- stderr --- 18:58:08 DEBUG 18:58:08 INFO 18:58:08 INFO ----------- Check pod ds-idrepo-1 filesystem is accessible ----------- 18:58:08 INFO 18:58:08 INFO [loop_until]: kubectl --namespace=xlou exec ds-idrepo-1 --container ds -- ls / | grep "bin" 18:58:08 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 18:58:09 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 18:58:09 INFO [loop_until]: OK (rc = 0) 18:58:09 DEBUG --- stdout --- 18:58:09 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 18:58:09 DEBUG --- stderr --- 18:58:09 DEBUG 18:58:09 INFO 18:58:09 INFO ----------------- Check pod ds-idrepo-1 restart count ----------------- 18:58:09 INFO 18:58:09 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 --output jsonpath={.status.containerStatuses[*].restartCount} 18:58:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:09 INFO [loop_until]: OK (rc = 0) 18:58:09 DEBUG --- stdout --- 18:58:09 DEBUG 0 18:58:09 DEBUG --- stderr --- 18:58:09 DEBUG 18:58:09 INFO Pod ds-idrepo-1 has been restarted 0 times. 
18:58:09 INFO 18:58:09 INFO ------------------ Check pod ds-idrepo-2 is running ------------------ 18:58:09 INFO 18:58:09 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.phase} | grep "Running" 18:58:09 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 18:58:09 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 18:58:09 INFO [loop_until]: OK (rc = 0) 18:58:09 DEBUG --- stdout --- 18:58:09 DEBUG Running 18:58:09 DEBUG --- stderr --- 18:58:09 DEBUG 18:58:09 INFO 18:58:09 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 18:58:09 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 18:58:09 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 18:58:09 INFO [loop_until]: OK (rc = 0) 18:58:09 DEBUG --- stdout --- 18:58:09 DEBUG true 18:58:09 DEBUG --- stderr --- 18:58:09 DEBUG 18:58:09 INFO 18:58:09 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 --output jsonpath={.status.startTime} 18:58:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:09 INFO [loop_until]: OK (rc = 0) 18:58:09 DEBUG --- stdout --- 18:58:09 DEBUG 2023-08-12T17:37:55Z 18:58:09 DEBUG --- stderr --- 18:58:09 DEBUG 18:58:09 INFO 18:58:09 INFO ----------- Check pod ds-idrepo-2 filesystem is accessible ----------- 18:58:09 INFO 18:58:09 INFO [loop_until]: kubectl --namespace=xlou exec ds-idrepo-2 --container ds -- ls / | grep "bin" 18:58:09 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 18:58:09 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 18:58:09 INFO [loop_until]: OK (rc = 0) 18:58:09 DEBUG --- stdout --- 18:58:09 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 18:58:09 DEBUG --- stderr --- 18:58:09 DEBUG 18:58:09 INFO 18:58:09 INFO ----------------- Check pod ds-idrepo-2 restart count ----------------- 18:58:09 INFO 18:58:09 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 --output jsonpath={.status.containerStatuses[*].restartCount} 18:58:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:09 INFO [loop_until]: OK (rc = 0) 18:58:09 DEBUG --- stdout --- 18:58:09 DEBUG 0 18:58:09 DEBUG --- stderr --- 18:58:09 DEBUG 18:58:09 INFO Pod ds-idrepo-2 has been restarted 0 times. 
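The per-pod checks read the pod phase, container readiness, start time, and restart count through individual jsonpath queries, piping the first two through grep so the poll only succeeds once the expected value appears. A sketch of those probes, again built on the assumed loop_until helper:

# Hypothetical sketch of the per-pod status probes: phase, readiness, start
# time and restart count, each read via a jsonpath query as in the log above.
def check_pod_status(namespace, pod):
    base = f"kubectl --namespace={namespace} get pod {pod}"
    phase = loop_until(base + ' -o=jsonpath={.status.phase} | grep "Running"',
                       max_time=360).stdout.strip()
    ready = loop_until(base + ' -o=jsonpath={.status.containerStatuses[*].ready}'
                              ' | grep "true"', max_time=360).stdout.strip()
    started = loop_until(base + " --output jsonpath={.status.startTime}").stdout.strip()
    restarts = loop_until(base + " --output jsonpath="
                                 "{.status.containerStatuses[*].restartCount}").stdout.strip()
    print(f"Pod {pod} has been restarted {restarts} times.")
    return {"phase": phase, "ready": ready, "startTime": started, "restarts": restarts}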
18:58:09 INFO 18:58:09 INFO --------------------- Get expected number of pods --------------------- 18:58:09 INFO 18:58:09 INFO [loop_until]: kubectl --namespace=xlou get statefulsets --selector app=ds-cts --output jsonpath={.items[*].spec.replicas} 18:58:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:09 INFO [loop_until]: OK (rc = 0) 18:58:09 DEBUG --- stdout --- 18:58:09 DEBUG 3 18:58:09 DEBUG --- stderr --- 18:58:09 DEBUG 18:58:09 INFO 18:58:09 INFO ---------------------------- Get pod list ---------------------------- 18:58:09 INFO 18:58:09 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=ds-cts --output jsonpath={.items[*].metadata.name} 18:58:09 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0] 18:58:09 INFO [loop_until]: OK (rc = 0) 18:58:09 DEBUG --- stdout --- 18:58:09 DEBUG ds-cts-0 ds-cts-1 ds-cts-2 18:58:09 DEBUG --- stderr --- 18:58:09 DEBUG 18:58:09 INFO 18:58:09 INFO -------------------- Check pod ds-cts-0 is running -------------------- 18:58:09 INFO 18:58:09 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.phase} | grep "Running" 18:58:09 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 18:58:09 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 18:58:09 INFO [loop_until]: OK (rc = 0) 18:58:09 DEBUG --- stdout --- 18:58:09 DEBUG Running 18:58:09 DEBUG --- stderr --- 18:58:09 DEBUG 18:58:09 INFO 18:58:09 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 18:58:09 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 18:58:09 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 18:58:09 INFO [loop_until]: OK (rc = 0) 18:58:09 DEBUG --- stdout --- 18:58:09 DEBUG true 18:58:09 DEBUG --- stderr --- 18:58:09 DEBUG 18:58:09 INFO 18:58:09 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-0 --output jsonpath={.status.startTime} 18:58:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:09 INFO [loop_until]: OK (rc = 0) 18:58:09 DEBUG --- stdout --- 18:58:09 DEBUG 2023-08-12T17:14:49Z 18:58:09 DEBUG --- stderr --- 18:58:09 DEBUG 18:58:09 INFO 18:58:09 INFO ------------- Check pod ds-cts-0 filesystem is accessible ------------- 18:58:09 INFO 18:58:09 INFO [loop_until]: kubectl --namespace=xlou exec ds-cts-0 --container ds -- ls / | grep "bin" 18:58:09 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 18:58:09 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 18:58:09 INFO [loop_until]: OK (rc = 0) 18:58:09 DEBUG --- stdout --- 18:58:09 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 18:58:09 DEBUG --- stderr --- 18:58:09 DEBUG 18:58:09 INFO 18:58:09 INFO ------------------ Check pod ds-cts-0 restart count ------------------ 18:58:09 INFO 18:58:09 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-0 --output jsonpath={.status.containerStatuses[*].restartCount} 18:58:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:09 INFO [loop_until]: OK (rc = 0) 18:58:09 DEBUG --- stdout --- 18:58:09 DEBUG 0 18:58:09 DEBUG --- stderr --- 18:58:09 DEBUG 18:58:09 INFO Pod ds-cts-0 has been restarted 0 times. 
18:58:09 INFO 18:58:09 INFO -------------------- Check pod ds-cts-1 is running -------------------- 18:58:09 INFO 18:58:09 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.phase} | grep "Running" 18:58:09 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 18:58:09 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 18:58:09 INFO [loop_until]: OK (rc = 0) 18:58:09 DEBUG --- stdout --- 18:58:09 DEBUG Running 18:58:09 DEBUG --- stderr --- 18:58:09 DEBUG 18:58:09 INFO 18:58:09 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 18:58:09 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 18:58:09 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 18:58:09 INFO [loop_until]: OK (rc = 0) 18:58:09 DEBUG --- stdout --- 18:58:09 DEBUG true 18:58:09 DEBUG --- stderr --- 18:58:09 DEBUG 18:58:09 INFO 18:58:09 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-1 --output jsonpath={.status.startTime} 18:58:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:09 INFO [loop_until]: OK (rc = 0) 18:58:09 DEBUG --- stdout --- 18:58:09 DEBUG 2023-08-12T17:15:15Z 18:58:09 DEBUG --- stderr --- 18:58:09 DEBUG 18:58:09 INFO 18:58:09 INFO ------------- Check pod ds-cts-1 filesystem is accessible ------------- 18:58:09 INFO 18:58:09 INFO [loop_until]: kubectl --namespace=xlou exec ds-cts-1 --container ds -- ls / | grep "bin" 18:58:09 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 18:58:10 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 18:58:10 INFO [loop_until]: OK (rc = 0) 18:58:10 DEBUG --- stdout --- 18:58:10 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 18:58:10 DEBUG --- stderr --- 18:58:10 DEBUG 18:58:10 INFO 18:58:10 INFO ------------------ Check pod ds-cts-1 restart count ------------------ 18:58:10 INFO 18:58:10 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-1 --output jsonpath={.status.containerStatuses[*].restartCount} 18:58:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:10 INFO [loop_until]: OK (rc = 0) 18:58:10 DEBUG --- stdout --- 18:58:10 DEBUG 0 18:58:10 DEBUG --- stderr --- 18:58:10 DEBUG 18:58:10 INFO Pod ds-cts-1 has been restarted 0 times. 
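The "filesystem is accessible" probe simply runs ls / inside the named container and requires "bin" to show up in the listing. A one-function sketch under the same assumptions:

# Hypothetical sketch of the "filesystem is accessible" probe: run `ls /` in a
# named container and require "bin" to appear in the listing.
def check_filesystem(namespace, pod, container):
    result = loop_until(
        f"kubectl --namespace={namespace} exec {pod} --container {container} "
        '-- ls / | grep "bin"',
        max_time=360)
    return "bin" in result.stdout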
18:58:10 INFO 18:58:10 INFO -------------------- Check pod ds-cts-2 is running -------------------- 18:58:10 INFO 18:58:10 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.phase} | grep "Running" 18:58:10 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 18:58:10 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 18:58:10 INFO [loop_until]: OK (rc = 0) 18:58:10 DEBUG --- stdout --- 18:58:10 DEBUG Running 18:58:10 DEBUG --- stderr --- 18:58:10 DEBUG 18:58:10 INFO 18:58:10 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 18:58:10 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 18:58:10 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 18:58:10 INFO [loop_until]: OK (rc = 0) 18:58:10 DEBUG --- stdout --- 18:58:10 DEBUG true 18:58:10 DEBUG --- stderr --- 18:58:10 DEBUG 18:58:10 INFO 18:58:10 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-2 --output jsonpath={.status.startTime} 18:58:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:10 INFO [loop_until]: OK (rc = 0) 18:58:10 DEBUG --- stdout --- 18:58:10 DEBUG 2023-08-12T17:15:43Z 18:58:10 DEBUG --- stderr --- 18:58:10 DEBUG 18:58:10 INFO 18:58:10 INFO ------------- Check pod ds-cts-2 filesystem is accessible ------------- 18:58:10 INFO 18:58:10 INFO [loop_until]: kubectl --namespace=xlou exec ds-cts-2 --container ds -- ls / | grep "bin" 18:58:10 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 18:58:10 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 18:58:10 INFO [loop_until]: OK (rc = 0) 18:58:10 DEBUG --- stdout --- 18:58:10 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 18:58:10 DEBUG --- stderr --- 18:58:10 DEBUG 18:58:10 INFO 18:58:10 INFO ------------------ Check pod ds-cts-2 restart count ------------------ 18:58:10 INFO 18:58:10 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-2 --output jsonpath={.status.containerStatuses[*].restartCount} 18:58:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:10 INFO [loop_until]: OK (rc = 0) 18:58:10 DEBUG --- stdout --- 18:58:10 DEBUG 0 18:58:10 DEBUG --- stderr --- 18:58:10 DEBUG 18:58:10 INFO Pod ds-cts-2 has been restarted 0 times. * Serving Flask app 'lodemon_run' * Debug mode: off WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead. 
* Running on all addresses (0.0.0.0) * Running on http://127.0.0.1:8080 * Running on http://10.106.45.80:8080 Press CTRL+C to quit 18:58:41 INFO 18:58:41 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 18:58:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:41 INFO [loop_until]: OK (rc = 0) 18:58:41 DEBUG --- stdout --- 18:58:41 DEBUG {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 18:58:41 DEBUG --- stderr --- 18:58:41 DEBUG 18:58:41 INFO 18:58:41 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 18:58:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:41 INFO [loop_until]: OK (rc = 0) 18:58:41 DEBUG --- stdout --- 18:58:41 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 18:58:41 DEBUG --- stderr --- 18:58:41 DEBUG 18:58:41 INFO 18:58:41 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 18:58:41 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:42 INFO [loop_until]: OK (rc = 0) 18:58:42 DEBUG --- stdout --- 18:58:42 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 18:58:42 DEBUG --- stderr --- 18:58:42 DEBUG 18:58:42 INFO 18:58:42 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 18:58:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:42 INFO [loop_until]: OK (rc = 0) 18:58:42 DEBUG --- stdout --- 18:58:42 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 18:58:42 DEBUG --- stderr --- 18:58:42 DEBUG 18:58:42 INFO 18:58:42 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 18:58:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:42 INFO [loop_until]: OK (rc = 0) 18:58:42 DEBUG --- stdout --- 18:58:42 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 18:58:42 DEBUG --- stderr --- 18:58:42 DEBUG 18:58:42 INFO 18:58:42 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 18:58:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:42 INFO [loop_until]: OK (rc = 0) 18:58:42 DEBUG --- stdout --- 18:58:42 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 18:58:42 DEBUG --- stderr --- 18:58:42 DEBUG 18:58:42 INFO 18:58:42 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 18:58:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:42 INFO [loop_until]: OK (rc = 0) 18:58:42 DEBUG --- stdout --- 18:58:42 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 18:58:42 DEBUG --- stderr --- 18:58:42 DEBUG 18:58:42 INFO 18:58:42 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 18:58:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:42 INFO [loop_until]: OK (rc = 0) 18:58:42 DEBUG --- stdout --- 18:58:42 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 18:58:42 DEBUG --- stderr --- 18:58:42 DEBUG 18:58:42 INFO 18:58:42 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 18:58:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:42 INFO [loop_until]: OK (rc = 0) 18:58:42 DEBUG --- stdout --- 18:58:42 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 18:58:42 DEBUG --- stderr --- 18:58:42 DEBUG 18:58:42 INFO 18:58:42 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 18:58:42 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:42 INFO [loop_until]: OK (rc = 0) 18:58:42 DEBUG --- stdout --- 18:58:42 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 18:58:42 DEBUG --- stderr --- 18:58:42 DEBUG 18:58:43 INFO 18:58:43 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 18:58:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:43 INFO [loop_until]: OK (rc = 0) 18:58:43 DEBUG --- stdout --- 18:58:43 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 18:58:43 DEBUG --- stderr --- 18:58:43 DEBUG 18:58:43 INFO 18:58:43 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 18:58:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:43 INFO [loop_until]: OK (rc = 0) 18:58:43 DEBUG --- stdout --- 18:58:43 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 18:58:43 DEBUG --- stderr --- 18:58:43 DEBUG 18:58:43 INFO 18:58:43 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 18:58:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:43 INFO [loop_until]: OK (rc = 0) 18:58:43 DEBUG --- stdout --- 18:58:43 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 18:58:43 DEBUG --- stderr --- 18:58:43 DEBUG 18:58:43 INFO 18:58:43 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 18:58:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:43 INFO [loop_until]: OK (rc = 0) 18:58:43 DEBUG --- stdout --- 18:58:43 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 18:58:43 DEBUG --- stderr --- 18:58:43 DEBUG 18:58:43 INFO 18:58:43 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 18:58:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:43 INFO [loop_until]: OK (rc = 0) 18:58:43 DEBUG --- stdout --- 18:58:43 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 18:58:43 DEBUG --- stderr --- 18:58:43 DEBUG 18:58:43 INFO 18:58:43 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 18:58:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:43 INFO [loop_until]: OK (rc = 0) 18:58:43 DEBUG --- stdout --- 18:58:43 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 18:58:43 DEBUG --- stderr --- 18:58:43 DEBUG 18:58:43 INFO 18:58:43 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 18:58:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:43 INFO [loop_until]: OK (rc = 0) 18:58:43 DEBUG --- stdout --- 18:58:43 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 18:58:43 DEBUG --- stderr --- 18:58:43 DEBUG 18:58:43 INFO 18:58:43 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 18:58:43 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:44 INFO [loop_until]: OK (rc = 0) 18:58:44 DEBUG --- stdout --- 18:58:44 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 18:58:44 DEBUG --- stderr --- 18:58:44 DEBUG 18:58:44 INFO 18:58:44 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 18:58:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:44 INFO [loop_until]: OK (rc = 0) 18:58:44 DEBUG --- stdout --- 18:58:44 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 18:58:44 DEBUG --- stderr --- 18:58:44 DEBUG 18:58:44 INFO 18:58:44 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 18:58:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:44 INFO [loop_until]: OK (rc = 0) 18:58:44 DEBUG --- stdout --- 18:58:44 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 18:58:44 DEBUG --- stderr --- 18:58:44 DEBUG 18:58:44 INFO 18:58:44 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 18:58:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:44 INFO [loop_until]: OK (rc = 0) 18:58:44 DEBUG --- stdout --- 18:58:44 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 18:58:44 DEBUG --- stderr --- 18:58:44 DEBUG 18:58:44 INFO 18:58:44 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 18:58:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:44 INFO [loop_until]: OK (rc = 0) 18:58:44 DEBUG --- stdout --- 18:58:44 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 18:58:44 DEBUG --- stderr --- 18:58:44 DEBUG 18:58:44 INFO 18:58:44 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 18:58:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:44 INFO [loop_until]: OK (rc = 0) 18:58:44 DEBUG --- stdout --- 18:58:44 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 18:58:44 DEBUG --- stderr --- 18:58:44 DEBUG 18:58:44 INFO 18:58:44 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 18:58:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:44 INFO [loop_until]: OK (rc = 0) 18:58:44 DEBUG --- stdout --- 18:58:44 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 18:58:44 DEBUG --- stderr --- 18:58:44 DEBUG 18:58:44 INFO 18:58:44 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 18:58:44 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:44 INFO [loop_until]: OK (rc = 0) 18:58:44 DEBUG --- stdout --- 18:58:44 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 18:58:44 DEBUG --- stderr --- 18:58:44 DEBUG 18:58:45 INFO 18:58:45 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 18:58:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:45 INFO [loop_until]: OK (rc = 0) 18:58:45 DEBUG --- stdout --- 18:58:45 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 18:58:45 DEBUG --- stderr --- 18:58:45 DEBUG 18:58:45 INFO 18:58:45 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 18:58:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:45 INFO [loop_until]: OK (rc = 0) 18:58:45 DEBUG --- stdout --- 18:58:45 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 18:58:45 DEBUG --- stderr --- 18:58:45 DEBUG 18:58:45 INFO Initializing monitoring instance threads 18:58:45 DEBUG Monitoring instance thread list: [, , , , , , , , , , , , , , , , , , , , , , , , , , , , ] 18:58:45 INFO Starting instance threads 18:58:45 INFO 18:58:45 INFO Thread started 18:58:45 INFO [loop_until]: kubectl --namespace=xlou top node 18:58:45 INFO 18:58:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:45 INFO Thread started 18:58:45 INFO [loop_until]: kubectl --namespace=xlou top pods 18:58:45 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125" 18:58:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 18:58:45 INFO Thread started 18:58:45 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125" 18:58:45 INFO Thread started 18:58:45 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125" 18:58:45 INFO Thread started 18:58:45 INFO [http_cmd]: curl --insecure -L --request GET 
"http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125" 18:58:45 INFO Thread started 18:58:45 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691863125" 18:58:45 INFO Thread started 18:58:45 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691863125" 18:58:45 INFO Thread started 18:58:45 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691863125" 18:58:45 INFO Thread started 18:58:45 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691863125" 18:58:45 INFO Thread started 18:58:45 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691863125" 18:58:45 INFO Thread started 18:58:45 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125" 18:58:45 INFO Thread started 18:58:45 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125" 18:58:45 INFO Thread started 18:58:45 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125" 18:58:45 INFO Thread started 18:58:45 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125" 18:58:45 INFO Thread started 18:58:45 INFO [http_cmd]: curl --insecure -L --request GET 
"http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125" 18:58:45 INFO Thread started 18:58:45 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125" 18:58:45 INFO Thread started 18:58:45 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125" 18:58:45 INFO Thread started 18:58:45 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125" 18:58:45 INFO Thread started 18:58:45 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125" 18:58:45 INFO Thread started 18:58:45 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125" 18:58:45 INFO Thread started 18:58:45 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125" 18:58:45 INFO Thread started Exception in thread Thread-23: 18:58:45 INFO Thread started Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner 18:58:45 INFO Thread started Exception in thread Thread-24: Traceback (most recent call last): 18:58:45 INFO Thread started Exception in thread Thread-25: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner 18:58:45 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691863125" self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self.run() 18:58:45 INFO Thread started File "/usr/local/lib/python3.9/threading.py", line 910, in run 18:58:45 INFO [http_cmd]: curl --insecure -L --request GET 
"http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691863125" self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run 18:58:45 INFO Thread started self._target(*self._args, **self._kwargs) self._target(*self._args, **self._kwargs) 18:58:45 INFO Thread started self._target(*self._args, **self._kwargs) 18:58:45 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691863125" File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop 18:58:45 INFO Thread started 18:58:45 INFO All threads has been started Exception in thread Thread-28: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run 127.0.0.1 - - [12/Aug/2023 18:58:45] "GET /monitoring/start HTTP/1.1" 200 - instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner instance.run() if self.prom_data['functions']: File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run KeyError: 'functions' self.run() if self.prom_data['functions']: File "/usr/local/lib/python3.9/threading.py", line 910, in run KeyError: 'functions' if self.prom_data['functions']: 18:58:45 INFO [loop_until]: OK (rc = 0) self._target(*self._args, **self._kwargs) 18:58:45 DEBUG --- stdout --- KeyError: 'functions' File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop 18:58:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 18m 2296Mi am-55f77847b7-dr27z 8m 2195Mi am-55f77847b7-fp459 15m 4474Mi ds-cts-0 7m 370Mi ds-cts-1 9m 367Mi ds-cts-2 11m 361Mi ds-idrepo-0 22m 10269Mi ds-idrepo-1 20m 10266Mi ds-idrepo-2 26m 10303Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 10m 3354Mi idm-65858d8c4c-4qc5l 7m 1321Mi lodemon-5798c88b8f-k2sv4 681m 60Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 1m 15Mi 18:58:45 DEBUG --- stderr --- 18:58:45 DEBUG instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run if self.prom_data['functions']: KeyError: 'functions' 18:58:45 INFO [loop_until]: OK (rc = 0) 18:58:45 DEBUG --- stdout --- 18:58:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 343m 2% 1358Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 75m 0% 5498Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 3336Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 74m 0% 3469Mi 5% gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 2646Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 132m 0% 2115Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 73m 0% 
4618Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 81m 0% 10945Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 68m 0% 10932Mi 18% gke-xlou-cdm-ds-32e4dcb1-x4wx 70m 0% 10910Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 69m 0% 1631Mi 2% 18:58:45 DEBUG --- stderr --- 18:58:45 DEBUG 18:58:46 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 18:58:46 WARNING Response is NONE 18:58:46 DEBUG Exception is preset. Setting retry_loop to true 18:58:46 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:58:48 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 18:58:48 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 18:58:48 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 18:58:48 WARNING Response is NONE 18:58:48 WARNING Response is NONE 18:58:48 WARNING Response is NONE 18:58:48 DEBUG Exception is preset. Setting retry_loop to true 18:58:48 DEBUG Exception is preset. Setting retry_loop to true 18:58:48 DEBUG Exception is preset. Setting retry_loop to true 18:58:48 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:58:48 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:58:48 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:58:52 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 18:58:52 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 18:58:52 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 18:58:52 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 18:58:52 WARNING Response is NONE 18:58:52 WARNING Response is NONE 18:58:52 WARNING Response is NONE 18:58:52 WARNING Response is NONE 18:58:52 DEBUG Exception is preset. Setting retry_loop to true 18:58:52 DEBUG Exception is preset. Setting retry_loop to true 18:58:52 DEBUG Exception is preset. Setting retry_loop to true 18:58:52 DEBUG Exception is preset. Setting retry_loop to true 18:58:52 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:58:52 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:58:52 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:58:52 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:58:57 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 18:58:57 WARNING Response is NONE 18:58:57 DEBUG Exception is preset. Setting retry_loop to true 18:58:57 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:58:59 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 18:58:59 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 18:58:59 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 18:58:59 WARNING Response is NONE 18:58:59 WARNING Response is NONE 18:58:59 WARNING Response is NONE 18:58:59 DEBUG Exception is preset. Setting retry_loop to true 18:58:59 DEBUG Exception is preset. Setting retry_loop to true 18:58:59 DEBUG Exception is preset. Setting retry_loop to true 18:58:59 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:58:59 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:58:59 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:59:00 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 18:59:00 WARNING Response is NONE 18:59:00 DEBUG Exception is preset. Setting retry_loop to true 18:59:00 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:59:03 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 18:59:03 WARNING Response is NONE 18:59:03 DEBUG Exception is preset. Setting retry_loop to true 18:59:03 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:59:05 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 18:59:05 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 18:59:05 WARNING Response is NONE 18:59:05 WARNING Response is NONE 18:59:05 DEBUG Exception is preset. Setting retry_loop to true 18:59:05 DEBUG Exception is preset. Setting retry_loop to true 18:59:05 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:59:05 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:59:08 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 18:59:08 WARNING Response is NONE 18:59:08 DEBUG Exception is preset. Setting retry_loop to true 18:59:08 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:59:10 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 18:59:10 WARNING Response is NONE 18:59:10 DEBUG Exception is preset. Setting retry_loop to true 18:59:10 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:59:11 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 18:59:11 WARNING Response is NONE 18:59:11 DEBUG Exception is preset. Setting retry_loop to true 18:59:11 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:59:12 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 18:59:12 WARNING Response is NONE 18:59:12 DEBUG Exception is preset. Setting retry_loop to true 18:59:12 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:59:14 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 18:59:14 WARNING Response is NONE 18:59:14 DEBUG Exception is preset. Setting retry_loop to true 18:59:14 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:59:16 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 18:59:16 WARNING Response is NONE 18:59:16 DEBUG Exception is preset. Setting retry_loop to true 18:59:16 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:59:17 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 18:59:17 WARNING Response is NONE 18:59:17 DEBUG Exception is preset. Setting retry_loop to true 18:59:17 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:59:19 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 18:59:19 WARNING Response is NONE 18:59:19 DEBUG Exception is preset. Setting retry_loop to true 18:59:19 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:59:21 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 18:59:21 WARNING Response is NONE 18:59:21 DEBUG Exception is preset. Setting retry_loop to true 18:59:21 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:59:23 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 18:59:23 WARNING Response is NONE 18:59:23 DEBUG Exception is preset. Setting retry_loop to true 18:59:23 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:59:25 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 18:59:25 WARNING Response is NONE 18:59:25 DEBUG Exception is preset. Setting retry_loop to true 18:59:25 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:59:28 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 18:59:28 WARNING Response is NONE 18:59:28 DEBUG Exception is preset. Setting retry_loop to true 18:59:28 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:59:30 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 18:59:30 WARNING Response is NONE 18:59:30 DEBUG Exception is preset. Setting retry_loop to true 18:59:30 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:59:32 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 18:59:32 WARNING Response is NONE 18:59:32 DEBUG Exception is preset. 
Setting retry_loop to true 18:59:32 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:59:34 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 18:59:34 WARNING Response is NONE 18:59:34 DEBUG Exception is preset. Setting retry_loop to true 18:59:34 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:59:36 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 18:59:36 WARNING Response is NONE 18:59:36 DEBUG Exception is preset. Setting retry_loop to true 18:59:36 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:59:39 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 18:59:39 WARNING Response is NONE 18:59:39 DEBUG Exception is preset. Setting retry_loop to true 18:59:39 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:59:41 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 18:59:41 WARNING Response is NONE 18:59:41 DEBUG Exception is preset. Setting retry_loop to true 18:59:41 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:59:41 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 18:59:41 WARNING Response is NONE 18:59:41 DEBUG Exception is preset. Setting retry_loop to true 18:59:41 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
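Note on the recurring warnings above: every one of them is the same pattern applied to a different PromQL expression. Lodemon issues an instant query against the Prometheus HTTP API (GET /api/v1/query with the URL-encoded expression and a fixed evaluation time of 1691863125), the connection is refused, the error is classified as transient, and the query is retried after a 10-second sleep; once the retry pattern has been hit 5 times the code stops waiting and checks the (still empty) response, which is what triggers the exceptions that follow. A minimal sketch of that flow, assuming the requests library; the helper name and constants below are illustrative and are not taken from the lodestar code:

    import time
    import requests

    PROM = "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090"

    def query_prometheus(promql, ts, retries=5, sleep_secs=10):
        """Instant query against /api/v1/query, retrying on transient connection errors."""
        for attempt in range(1, retries + 1):
            try:
                resp = requests.get(f"{PROM}/api/v1/query",
                                    params={"query": promql, "time": ts},
                                    timeout=30)
                return resp.json()
            except requests.exceptions.ConnectionError as exc:
                # Connection refused / timed out is treated as transient and retried.
                print(f"WARNING Got connection reset error: {exc}. "
                      f"Retry {attempt}/{retries} in {sleep_secs}s")
                time.sleep(sleep_secs)
        # After the retries are exhausted there is still no response to check.
        raise RuntimeError("Failed to obtain response from server...")

    # Example: the container CPU query that appears repeatedly in this log.
    # query_prometheus("sum(rate(container_cpu_usage_seconds_total{namespace='xlou'}[60s]))by(pod)",
    #                  1691863125)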
Exception in thread Thread-16:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
18:59:43 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')).
Checking if error is transient one
18:59:43 WARNING Response is NONE
18:59:43 DEBUG Exception is preset. Setting retry_loop to true
18:59:43 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-3:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
18:59:45 INFO
18:59:45 INFO [loop_until]: kubectl --namespace=xlou top pods
18:59:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:59:45 INFO
18:59:45 INFO [loop_until]: kubectl --namespace=xlou top node
18:59:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
18:59:45 INFO [loop_until]: OK (rc = 0)
18:59:45 DEBUG --- stdout ---
18:59:45 DEBUG NAME                           CPU(cores)   MEMORY(bytes)
               admin-ui-587fc66dd5-n6t4z      1m           4Mi
               am-55f77847b7-8t2dm            12m          2297Mi
               am-55f77847b7-dr27z            6m           2195Mi
               am-55f77847b7-fp459            15m          4474Mi
               ds-cts-0                       158m         378Mi
               ds-cts-1                       275m         369Mi
               ds-cts-2                       170m         360Mi
               ds-idrepo-0                    1417m        10279Mi
               ds-idrepo-1                    221m         10270Mi
               ds-idrepo-2                    171m         10315Mi
               end-user-ui-6845bc78c7-tln2q   1m           4Mi
               idm-65858d8c4c-2grp9           12m          3355Mi
               idm-65858d8c4c-4qc5l           8m           1321Mi
               lodemon-5798c88b8f-k2sv4       83m          66Mi
               login-ui-74d6fb46c-k74fv       1m           3Mi
               overseer-0-58cf4b587d-f2ksh    1m           15Mi
18:59:45 DEBUG --- stderr ---
18:59:45 DEBUG
18:59:45 INFO [loop_until]: OK (rc = 0)
18:59:45 DEBUG --- stdout ---
18:59:45 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
               gke-xlou-cdm-default-pool-f05840a3-2nsn   78m          0%     1364Mi          2%
               gke-xlou-cdm-default-pool-f05840a3-5pbc   74m          0%     5496Mi          9%
               gke-xlou-cdm-default-pool-f05840a3-976h   66m          0%     3337Mi          5%
               gke-xlou-cdm-default-pool-f05840a3-9p4b   70m          0%     3470Mi          5%
               gke-xlou-cdm-default-pool-f05840a3-bf2g   76m          0%     2646Mi          4%
               gke-xlou-cdm-default-pool-f05840a3-h81k   131m         0%     2119Mi          3%
               gke-xlou-cdm-default-pool-f05840a3-tnc9   76m          0%     4617Mi          7%
               gke-xlou-cdm-ds-32e4dcb1-1l6p             299m         1%     1109Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-4z9d             259m         1%     1092Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-8bsn             221m         1%     1112Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-b374             285m         1%     10958Mi         18%
               gke-xlou-cdm-ds-32e4dcb1-n920             273m         1%     10940Mi         18%
               gke-xlou-cdm-ds-32e4dcb1-x4wx             1199m        7%     10922Mi         18%
               gke-xlou-cdm-frontend-a8771548-k40m       244m         1%     1735Mi          2%
18:59:45 DEBUG --- stderr ---
18:59:45 DEBUG
18:59:45 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')).
Checking if error is transient one
18:59:45 WARNING Response is NONE
18:59:45 DEBUG Exception is preset. Setting retry_loop to true
18:59:45 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
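Every thread crash in this section has the same shape as the tracebacks above: the HTTP retries against Prometheus are exhausted, HttpCmd.request_cmd raises FailException, and the except-handler at monitoring.py line 315 then calls self.logger(...) as if the logger object were a function. LodestarLogger does not define __call__, so the handler itself dies with a TypeError and the monitoring thread exits without ever logging the query failure. A minimal sketch of the failure and of the kind of call that would succeed, assuming LodestarLogger exposes standard logging-style methods (an assumption; its real API is not shown in this log):

    # Stand-in for the real LodestarLogger; only the logging-style method is assumed.
    class LodestarLogger:
        def warning(self, msg):
            print(f"WARNING {msg}")

    logger = LodestarLogger()

    try:
        # What monitoring.py line 315 effectively does:
        logger("Query: ... failed with: Failed to obtain response from server...")
    except TypeError as exc:
        print(exc)  # 'LodestarLogger' object is not callable

    # The kind of call the handler presumably intended:
    logger.warning("Query: ... failed with: Failed to obtain response from server...")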
Exception in thread Thread-4: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 18:59:47 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 18:59:47 WARNING Response is NONE 18:59:47 DEBUG Exception is preset. Setting retry_loop to true 18:59:47 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:59:49 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 18:59:49 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 18:59:49 WARNING Response is NONE 18:59:49 WARNING Response is NONE 18:59:49 DEBUG Exception is preset. Setting retry_loop to true 18:59:49 DEBUG Exception is preset. Setting retry_loop to true 18:59:49 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:59:49 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
18:59:50 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 18:59:50 WARNING Response is NONE 18:59:50 DEBUG Exception is preset. Setting retry_loop to true 18:59:50 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:59:52 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 18:59:52 WARNING Response is NONE 18:59:52 DEBUG Exception is preset. Setting retry_loop to true 18:59:52 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:59:53 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 18:59:53 WARNING Response is NONE 18:59:53 DEBUG Exception is preset. Setting retry_loop to true 18:59:53 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:59:53 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 18:59:53 WARNING Response is NONE 18:59:53 DEBUG Exception is preset. Setting retry_loop to true 18:59:53 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-13: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 18:59:58 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 18:59:58 WARNING Response is NONE 18:59:58 DEBUG Exception is preset. Setting retry_loop to true 18:59:58 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 18:59:58 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 18:59:58 WARNING Response is NONE 18:59:58 DEBUG Exception is preset. Setting retry_loop to true 18:59:58 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:00:00 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:00:00 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:00:00 WARNING Response is NONE 19:00:00 WARNING Response is NONE 19:00:00 DEBUG Exception is preset. Setting retry_loop to true 19:00:00 DEBUG Exception is preset. Setting retry_loop to true 19:00:00 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:00:00 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
19:00:01 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:00:01 WARNING Response is NONE 19:00:01 DEBUG Exception is preset. Setting retry_loop to true 19:00:01 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-12: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 19:00:03 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:00:03 WARNING Response is NONE 19:00:03 DEBUG Exception is preset. Setting retry_loop to true 19:00:03 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:00:04 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:00:04 WARNING Response is NONE 19:00:04 DEBUG Exception is preset. Setting retry_loop to true 19:00:04 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
19:00:09 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:00:09 WARNING Response is NONE 19:00:09 DEBUG Exception is preset. Setting retry_loop to true 19:00:09 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:00:09 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:00:09 WARNING Response is NONE 19:00:09 DEBUG Exception is preset. Setting retry_loop to true 19:00:09 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:00:11 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:00:11 WARNING Response is NONE 19:00:11 DEBUG Exception is preset. Setting retry_loop to true 19:00:11 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:00:13 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:00:13 WARNING Response is NONE 19:00:13 DEBUG Exception is preset. Setting retry_loop to true 19:00:13 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:00:14 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:00:14 WARNING Response is NONE 19:00:14 DEBUG Exception is preset. Setting retry_loop to true 19:00:14 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
Exception in thread Thread-7: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 19:00:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:00:15 WARNING Response is NONE 19:00:15 DEBUG Exception is preset. Setting retry_loop to true 19:00:15 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:00:20 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:00:20 WARNING Response is NONE 19:00:20 DEBUG Exception is preset. Setting retry_loop to true 19:00:20 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-22: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 19:00:20 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:00:20 WARNING Response is NONE 19:00:20 DEBUG Exception is preset. Setting retry_loop to true 19:00:20 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-10: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 19:00:22 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:00:22 WARNING Response is NONE 19:00:22 DEBUG Exception is preset. Setting retry_loop to true 19:00:22 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:00:24 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 19:00:24 WARNING Response is NONE 19:00:24 DEBUG Exception is preset. Setting retry_loop to true 19:00:24 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:00:26 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:00:26 WARNING Response is NONE 19:00:26 DEBUG Exception is preset. Setting retry_loop to true 19:00:26 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-27: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 19:00:33 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:00:33 WARNING Response is NONE 19:00:33 DEBUG Exception is preset. Setting retry_loop to true 19:00:33 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:00:35 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:00:35 WARNING Response is NONE 19:00:35 DEBUG Exception is preset. Setting retry_loop to true 19:00:35 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
19:00:44 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')).
Checking if error is transient one
19:00:44 WARNING Response is NONE
19:00:44 DEBUG Exception is preset. Setting retry_loop to true
19:00:44 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-8:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
19:00:45 INFO
19:00:45 INFO [loop_until]: kubectl --namespace=xlou top pods
19:00:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:00:45 INFO
19:00:45 INFO [loop_until]: kubectl --namespace=xlou top node
19:00:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
19:00:45 INFO [loop_until]: OK (rc = 0)
19:00:45 DEBUG --- stdout ---
19:00:45 DEBUG NAME                           CPU(cores)   MEMORY(bytes)
               admin-ui-587fc66dd5-n6t4z      1m           4Mi
               am-55f77847b7-8t2dm            13m          2305Mi
               am-55f77847b7-dr27z            6m           2195Mi
               am-55f77847b7-fp459            11m          4475Mi
               ds-cts-0                       15m          379Mi
               ds-cts-1                       16m          370Mi
               ds-cts-2                       9m           361Mi
               ds-idrepo-0                    13m          10279Mi
               ds-idrepo-1                    17m          10273Mi
               ds-idrepo-2                    21m          10316Mi
               end-user-ui-6845bc78c7-tln2q   1m           4Mi
               idm-65858d8c4c-2grp9           12m          3357Mi
               idm-65858d8c4c-4qc5l           7m           1322Mi
               lodemon-5798c88b8f-k2sv4       8m           66Mi
               login-ui-74d6fb46c-k74fv       1m           3Mi
               overseer-0-58cf4b587d-f2ksh    1m           48Mi
19:00:45 DEBUG --- stderr ---
19:00:45 DEBUG
19:00:45 INFO [loop_until]: OK (rc = 0)
19:00:45 DEBUG --- stdout ---
19:00:45 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
               gke-xlou-cdm-default-pool-f05840a3-2nsn   79m          0%     1365Mi          2%
               gke-xlou-cdm-default-pool-f05840a3-5pbc   70m          0%     5500Mi          9%
               gke-xlou-cdm-default-pool-f05840a3-976h   65m          0%     3339Mi          5%
               gke-xlou-cdm-default-pool-f05840a3-9p4b   70m          0%     3477Mi          5%
               gke-xlou-cdm-default-pool-f05840a3-bf2g   80m          0%     2658Mi          4%
               gke-xlou-cdm-default-pool-f05840a3-h81k   127m         0%     2121Mi          3%
               gke-xlou-cdm-default-pool-f05840a3-tnc9   74m          0%     4617Mi          7%
               gke-xlou-cdm-ds-32e4dcb1-1l6p             72m          0%     1111Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-4z9d             61m          0%     1090Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-8bsn             61m          0%     1115Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-b374             71m          0%     10961Mi         18%
               gke-xlou-cdm-ds-32e4dcb1-n920             65m          0%     10942Mi         18%
               gke-xlou-cdm-ds-32e4dcb1-x4wx             63m          0%     10920Mi         18%
               gke-xlou-cdm-frontend-a8771548-k40m       69m          0%     1632Mi          2%
19:00:45 DEBUG --- stderr ---
19:00:45 DEBUG
19:00:46 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')).
Checking if error is transient one
19:00:46 WARNING Response is NONE
19:00:46 DEBUG Exception is preset. Setting retry_loop to true
19:00:46 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-9:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
19:00:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')).
Checking if error is transient one
19:00:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')).
Checking if error is transient one
19:00:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')).
Checking if error is transient one 19:00:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:00:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 19:00:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 19:00:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 19:00:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 19:00:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 19:00:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). 
Checking if error is transient one 19:00:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 19:00:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:00:56 WARNING Response is NONE 19:00:56 WARNING Response is NONE 19:00:56 WARNING Response is NONE 19:00:56 WARNING Response is NONE 19:00:56 WARNING Response is NONE 19:00:56 WARNING Response is NONE 19:00:56 WARNING Response is NONE 19:00:56 WARNING Response is NONE 19:00:56 WARNING Response is NONE 19:00:56 WARNING Response is NONE 19:00:56 WARNING Response is NONE 19:00:56 WARNING Response is NONE 19:00:56 DEBUG Exception is preset. Setting retry_loop to true 19:00:56 DEBUG Exception is preset. Setting retry_loop to true 19:00:56 DEBUG Exception is preset. Setting retry_loop to true 19:00:56 DEBUG Exception is preset. Setting retry_loop to true 19:00:56 DEBUG Exception is preset. Setting retry_loop to true 19:00:56 DEBUG Exception is preset. Setting retry_loop to true 19:00:56 DEBUG Exception is preset. Setting retry_loop to true 19:00:56 DEBUG Exception is preset. Setting retry_loop to true 19:00:56 DEBUG Exception is preset. Setting retry_loop to true 19:00:56 DEBUG Exception is preset. Setting retry_loop to true 19:00:56 DEBUG Exception is preset. Setting retry_loop to true 19:00:56 DEBUG Exception is preset. Setting retry_loop to true 19:00:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:00:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:00:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:00:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:00:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:00:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:00:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:00:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:00:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:00:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:00:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:00:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
19:01:07 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:01:07 WARNING Response is NONE 19:01:07 DEBUG Exception is preset. Setting retry_loop to true 19:01:07 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:01:10 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:01:10 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:01:10 WARNING Response is NONE 19:01:10 WARNING Response is NONE 19:01:10 DEBUG Exception is preset. Setting retry_loop to true 19:01:10 DEBUG Exception is preset. Setting retry_loop to true 19:01:10 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:01:10 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:01:14 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:01:14 WARNING Response is NONE 19:01:14 DEBUG Exception is preset. Setting retry_loop to true 19:01:14 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:01:19 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:01:19 WARNING Response is NONE 19:01:19 DEBUG Exception is preset. Setting retry_loop to true 19:01:19 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
19:01:21 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:01:21 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:01:21 WARNING Response is NONE 19:01:21 WARNING Response is NONE 19:01:21 DEBUG Exception is preset. Setting retry_loop to true 19:01:21 DEBUG Exception is preset. Setting retry_loop to true 19:01:21 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:01:21 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:01:22 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:01:22 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:01:22 WARNING Response is NONE 19:01:22 WARNING Response is NONE 19:01:22 DEBUG Exception is preset. Setting retry_loop to true 19:01:22 DEBUG Exception is preset. Setting retry_loop to true 19:01:22 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:01:22 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:01:25 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:01:25 WARNING Response is NONE 19:01:25 DEBUG Exception is preset. Setting retry_loop to true 19:01:25 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
19:01:30 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:01:30 WARNING Response is NONE 19:01:30 DEBUG Exception is preset. Setting retry_loop to true 19:01:30 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:01:32 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:01:32 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:01:32 WARNING Response is NONE 19:01:32 WARNING Response is NONE 19:01:32 DEBUG Exception is preset. Setting retry_loop to true 19:01:32 DEBUG Exception is preset. Setting retry_loop to true 19:01:32 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:01:32 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:01:33 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:01:33 WARNING Response is NONE 19:01:33 DEBUG Exception is preset. Setting retry_loop to true 19:01:33 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:01:35 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:01:35 WARNING Response is NONE 19:01:35 DEBUG Exception is preset. Setting retry_loop to true 19:01:35 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
19:01:36 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:01:36 WARNING Response is NONE 19:01:36 DEBUG Exception is preset. Setting retry_loop to true 19:01:36 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:01:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:01:38 WARNING Response is NONE 19:01:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:01:38 DEBUG Exception is preset. Setting retry_loop to true 19:01:38 WARNING Response is NONE 19:01:38 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:01:38 DEBUG Exception is preset. Setting retry_loop to true 19:01:38 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:01:41 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:01:41 WARNING Response is NONE 19:01:41 DEBUG Exception is preset. Setting retry_loop to true 19:01:41 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:01:43 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:01:43 WARNING Response is NONE 19:01:43 DEBUG Exception is preset. Setting retry_loop to true 19:01:43 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
19:01:44 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:01:44 WARNING Response is NONE 19:01:44 DEBUG Exception is preset. Setting retry_loop to true 19:01:44 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:01:45 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:01:45 WARNING Response is NONE 19:01:45 DEBUG Exception is preset. Setting retry_loop to true 19:01:45 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:01:45 INFO 19:01:45 INFO [loop_until]: kubectl --namespace=xlou top pods 19:01:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:01:45 INFO [loop_until]: OK (rc = 0) 19:01:45 DEBUG --- stdout --- 19:01:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 12m 2317Mi am-55f77847b7-dr27z 6m 2196Mi am-55f77847b7-fp459 15m 4475Mi ds-cts-0 8m 380Mi ds-cts-1 10m 370Mi ds-cts-2 9m 361Mi ds-idrepo-0 16m 10282Mi ds-idrepo-1 33m 10273Mi ds-idrepo-2 34m 10313Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 11m 3357Mi idm-65858d8c4c-4qc5l 7m 1322Mi lodemon-5798c88b8f-k2sv4 2m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 1m 48Mi 19:01:45 DEBUG --- stderr --- 19:01:45 DEBUG 19:01:45 INFO 19:01:45 INFO [loop_until]: kubectl --namespace=xlou top node 19:01:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:01:45 INFO [loop_until]: OK (rc = 0) 19:01:45 DEBUG --- stdout --- 19:01:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1366Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 73m 0% 5512Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 3339Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 3487Mi 5% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 2646Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 128m 0% 2118Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 4617Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 67m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 85m 0% 10956Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 83m 0% 10945Mi 18% gke-xlou-cdm-ds-32e4dcb1-x4wx 65m 0% 10923Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 136m 0% 1747Mi 2% 19:01:45 DEBUG --- stderr --- 19:01:45 DEBUG 19:01:47 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: 
/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:01:47 WARNING Response is NONE 19:01:47 DEBUG Exception is preset. Setting retry_loop to true 19:01:47 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:01:48 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:01:48 WARNING Response is NONE 19:01:48 DEBUG Exception is preset. Setting retry_loop to true 19:01:48 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:01:49 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:01:49 WARNING Response is NONE 19:01:49 DEBUG Exception is preset. Setting retry_loop to true 19:01:49 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:01:52 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:01:52 WARNING Response is NONE 19:01:52 DEBUG Exception is preset. Setting retry_loop to true 19:01:52 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:01:52 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:01:52 WARNING Response is NONE 19:01:52 DEBUG Exception is preset. Setting retry_loop to true 19:01:52 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
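The recurring WARNING/DEBUG sequence (connection error, transient-error check, "Response is NONE", 10-second sleep) together with the "Hit retry pattern for a 5 time" message above suggests a retry loop that gives up after five attempts and then proceeds with an empty response; that empty response is what triggers the FailException/TypeError traceback immediately below. A rough sketch of that behaviour follows. Only the 10-second sleep and the five-attempt cap are taken from the log; every name here is hypothetical, and in the real code the equivalent loop appears to live in HttpCmd.request_cmd, which raises FailException('Failed to obtain response from server...') once the retries are exhausted.

import time
import urllib.error
import urllib.request

MAX_ATTEMPTS = 5     # "Hit retry pattern for a 5 time"
RETRY_SLEEP = 10     # "sleeping for 10 secs before retry"

def query_with_retries(url):
    """Hypothetical reconstruction of the transient-error retry loop."""
    response = None
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            response = urllib.request.urlopen(url, timeout=30)
            break                                   # got an answer, stop retrying
        except urllib.error.URLError as exc:
            print(f"WARNING Got connection reset error: {exc}. "
                  "Checking if error is transient one")
            print("WARNING Response is NONE")
            if attempt < MAX_ATTEMPTS:
                print("WARNING We received known exception. Trying to recover, "
                      f"sleeping for {RETRY_SLEEP} secs before retry...")
                time.sleep(RETRY_SLEEP)
            else:
                print(f"WARNING Hit retry pattern for a {MAX_ATTEMPTS} time. "
                      "Proceeding to check response anyway.")
    return response                                 # may still be None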
Exception in thread Thread-14: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 19:01:54 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:01:54 WARNING Response is NONE 19:01:54 DEBUG Exception is preset. Setting retry_loop to true 19:01:54 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-18: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 19:01:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:01:56 WARNING Response is NONE 19:01:56 DEBUG Exception is preset. 
Setting retry_loop to true 19:01:56 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-15: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 19:01:58 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:01:58 WARNING Response is NONE 19:01:58 DEBUG Exception is preset. Setting retry_loop to true 19:01:58 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-17: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 19:01:59 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:01:59 WARNING Response is NONE 19:01:59 DEBUG Exception is preset. Setting retry_loop to true 19:01:59 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:02:01 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:02:01 WARNING Response is NONE 19:02:01 DEBUG Exception is preset. Setting retry_loop to true 19:02:01 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:02:03 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:02:03 WARNING Response is NONE 19:02:03 DEBUG Exception is preset. Setting retry_loop to true 19:02:03 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:02:10 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:02:10 WARNING Response is NONE 19:02:10 DEBUG Exception is preset. Setting retry_loop to true 19:02:10 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
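The TypeError in these tracebacks is a secondary bug in the error handler itself: monitoring.py line 315 calls self.logger(...) as if the logger were a function, but a LodestarLogger instance is not callable, so the original FailException is masked and the monitoring thread dies. A minimal reproduction with the standard library, plus the probable fix, is below; the assumption that LodestarLogger exposes logging-style methods such as .error() is mine, not the log's.

import logging

logger = logging.getLogger("lodemon")   # stand-in for the internal LodestarLogger

try:
    raise RuntimeError("Failed to obtain response from server...")
except RuntimeError as e:
    # Buggy pattern from monitoring.py line 315 -- the logger object is called directly:
    #     self.logger(f'Query: {query} failed with: {e}')
    # which raises: TypeError: 'Logger' object is not callable
    #
    # Probable fix, assuming a logging-style method exists on LodestarLogger:
    logger.error("Query failed with: %s", e)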
19:02:10 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:02:10 WARNING Response is NONE 19:02:10 DEBUG Exception is preset. Setting retry_loop to true 19:02:10 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-29: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 19:02:11 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:02:11 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:02:11 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:02:11 WARNING Response is NONE 19:02:11 WARNING Response is NONE 19:02:11 WARNING Response is NONE 19:02:11 DEBUG Exception is preset. 
Setting retry_loop to true 19:02:11 DEBUG Exception is preset. Setting retry_loop to true 19:02:11 DEBUG Exception is preset. Setting retry_loop to true 19:02:11 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:02:11 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:02:11 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:02:12 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:02:12 WARNING Response is NONE 19:02:12 DEBUG Exception is preset. Setting retry_loop to true 19:02:12 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:02:14 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:02:14 WARNING Response is NONE 19:02:14 DEBUG Exception is preset. Setting retry_loop to true 19:02:14 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:02:21 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:02:21 WARNING Response is NONE 19:02:21 DEBUG Exception is preset. Setting retry_loop to true 19:02:21 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-21: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 19:02:22 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:02:22 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:02:22 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:02:22 WARNING Response is NONE 19:02:22 WARNING Response is NONE 19:02:22 WARNING Response is NONE 19:02:22 DEBUG Exception is preset. Setting retry_loop to true 19:02:22 DEBUG Exception is preset. Setting retry_loop to true 19:02:22 DEBUG Exception is preset. Setting retry_loop to true 19:02:22 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:02:22 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:02:22 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:02:23 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:02:23 WARNING Response is NONE 19:02:23 DEBUG Exception is preset. Setting retry_loop to true 19:02:23 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
Exception in thread Thread-6: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 19:02:25 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:02:25 WARNING Response is NONE 19:02:25 DEBUG Exception is preset. Setting retry_loop to true 19:02:25 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-5: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 19:02:33 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 19:02:33 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:02:33 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:02:33 WARNING Response is NONE 19:02:33 WARNING Response is NONE 19:02:33 WARNING Response is NONE 19:02:33 DEBUG Exception is preset. Setting retry_loop to true 19:02:33 DEBUG Exception is preset. Setting retry_loop to true 19:02:33 DEBUG Exception is preset. Setting retry_loop to true 19:02:33 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:02:33 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:02:33 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:02:44 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:02:44 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:02:44 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:02:44 WARNING Response is NONE 19:02:44 WARNING Response is NONE 19:02:44 WARNING Response is NONE 19:02:44 DEBUG Exception is preset. Setting retry_loop to true 19:02:44 DEBUG Exception is preset. Setting retry_loop to true 19:02:44 DEBUG Exception is preset. Setting retry_loop to true 19:02:44 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
19:02:44 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:02:44 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:02:45 INFO 19:02:45 INFO [loop_until]: kubectl --namespace=xlou top pods 19:02:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:02:45 INFO [loop_until]: OK (rc = 0) 19:02:45 DEBUG --- stdout --- 19:02:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 11m 2323Mi am-55f77847b7-dr27z 13m 2197Mi am-55f77847b7-fp459 12m 4476Mi ds-cts-0 8m 380Mi ds-cts-1 14m 370Mi ds-cts-2 6m 362Mi ds-idrepo-0 16m 10282Mi ds-idrepo-1 15m 10274Mi ds-idrepo-2 19m 10318Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 9m 3357Mi idm-65858d8c4c-4qc5l 7m 1322Mi lodemon-5798c88b8f-k2sv4 8m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 1m 98Mi 19:02:45 DEBUG --- stderr --- 19:02:45 DEBUG 19:02:45 INFO 19:02:45 INFO [loop_until]: kubectl --namespace=xlou top node 19:02:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:02:45 INFO [loop_until]: OK (rc = 0) 19:02:45 DEBUG --- stdout --- 19:02:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1366Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 5499Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 73m 0% 3345Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 68m 0% 3499Mi 5% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 2643Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2113Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 4620Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 72m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1094Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 74m 0% 10962Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 10941Mi 18% gke-xlou-cdm-ds-32e4dcb1-x4wx 66m 0% 10927Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 68m 0% 1629Mi 2% 19:02:45 DEBUG --- stderr --- 19:02:45 DEBUG 19:02:55 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:02:55 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:02:55 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
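The [loop_until] entries above (and throughout this log) appear to wrap a shell command and re-run it until its return code is in expected_rc or max_time elapses, polling every interval seconds. A sketch of such a helper follows; the parameter names max_time, interval and expected_rc come from the log, everything else is assumed.

import subprocess
import time

def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,)):
    """Hypothetical reconstruction of the [loop_until] helper seen in the log."""
    print(f"[loop_until]: {cmd}")
    print(f"[loop_until]: (max_time={max_time}, interval={interval}, "
          f"expected_rc={list(expected_rc)})")
    deadline = time.monotonic() + max_time
    while True:
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        if result.returncode in expected_rc:
            print(f"[loop_until]: OK (rc = {result.returncode})")
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"[loop_until]: rc {result.returncode} after {max_time}s")
        time.sleep(interval)

# Example corresponding to the entries above:
# loop_until("kubectl --namespace=xlou top pods")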
Checking if error is transient one
19:02:55 WARNING Response is NONE
19:02:55 WARNING Response is NONE
19:02:55 WARNING Response is NONE
19:02:55 DEBUG Exception is preset. Setting retry_loop to true
19:02:55 DEBUG Exception is preset. Setting retry_loop to true
19:02:55 DEBUG Exception is preset. Setting retry_loop to true
19:02:55 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
19:02:55 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
19:02:55 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-26:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
Exception in thread Thread-20:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
Exception in thread Thread-11:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
19:03:16 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one
19:03:16 WARNING Response is NONE
19:03:16 DEBUG Exception is preset. Setting retry_loop to true
19:03:16 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
19:03:27 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
19:03:27 WARNING Response is NONE
19:03:27 DEBUG Exception is preset. Setting retry_loop to true
19:03:27 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
19:03:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
19:03:38 WARNING Response is NONE
19:03:38 DEBUG Exception is preset. Setting retry_loop to true
19:03:38 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
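The thread names (Thread-5, Thread-6, Thread-11, ...) and the frames common to every traceback (threading -> lodemon_service.execute_monitoring_instance_in_loop -> instance.run()) indicate that each metric query runs in its own monitoring thread, which is also why the tracebacks interleave when several queries fail at the same moment. A minimal sketch of that structure follows; only the function name execute_monitoring_instance_in_loop and the instance.run() call are taken from the tracebacks, the rest is assumed.

import threading

class MonitoringInstance:
    """Stand-in for the per-metric monitoring objects implied by the tracebacks."""
    def __init__(self, query):
        self.query = query

    def run(self):
        # In the real code this issues the Prometheus query and handles retries.
        print(f"{threading.current_thread().name}: querying {self.query}")

def execute_monitoring_instance_in_loop(instance):
    # Name taken from lodemon_service.py line 152; the body here is assumed.
    instance.run()

queries = ["am_authentication_count", "container_fs_reads_total"]  # examples only
threads = [
    threading.Thread(target=execute_monitoring_instance_in_loop,
                     args=(MonitoringInstance(q),))
    for q in queries
]
for t in threads:
    t.start()
for t in threads:
    t.join()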
19:03:45 INFO 19:03:45 INFO [loop_until]: kubectl --namespace=xlou top pods 19:03:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:03:45 INFO [loop_until]: OK (rc = 0) 19:03:45 DEBUG --- stdout --- 19:03:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 11m 2338Mi am-55f77847b7-dr27z 6m 2197Mi am-55f77847b7-fp459 10m 4476Mi ds-cts-0 9m 381Mi ds-cts-1 10m 371Mi ds-cts-2 7m 362Mi ds-idrepo-0 25m 10279Mi ds-idrepo-1 31m 10276Mi ds-idrepo-2 19m 10320Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 9m 3357Mi idm-65858d8c4c-4qc5l 7m 1323Mi lodemon-5798c88b8f-k2sv4 2m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 1m 98Mi 19:03:45 DEBUG --- stderr --- 19:03:45 DEBUG 19:03:45 INFO 19:03:45 INFO [loop_until]: kubectl --namespace=xlou top node 19:03:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:03:45 INFO [loop_until]: OK (rc = 0) 19:03:45 DEBUG --- stdout --- 19:03:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1366Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 72m 0% 5502Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 3341Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 68m 0% 3512Mi 5% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 2645Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 125m 0% 2120Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 72m 0% 4616Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 67m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1091Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 69m 0% 10961Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 74m 0% 10947Mi 18% gke-xlou-cdm-ds-32e4dcb1-x4wx 68m 0% 10919Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 72m 0% 1630Mi 2% 19:03:45 DEBUG --- stderr --- 19:03:45 DEBUG 19:03:49 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:03:49 WARNING Response is NONE 19:03:49 DEBUG Exception is preset. Setting retry_loop to true 19:03:49 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 19:04:00 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691863125 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 19:04:00 WARNING Response is NONE 19:04:00 DEBUG Exception is preset. Setting retry_loop to true 19:04:00 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
Exception in thread Thread-19: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 19:04:45 INFO 19:04:45 INFO [loop_until]: kubectl --namespace=xlou top pods 19:04:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:04:45 INFO 19:04:45 INFO [loop_until]: kubectl --namespace=xlou top node 19:04:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:04:45 INFO [loop_until]: OK (rc = 0) 19:04:45 DEBUG --- stdout --- 19:04:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 15m 2603Mi am-55f77847b7-dr27z 14m 2270Mi am-55f77847b7-fp459 17m 4509Mi ds-cts-0 502m 383Mi ds-cts-1 347m 373Mi ds-cts-2 234m 363Mi ds-idrepo-0 3045m 13271Mi ds-idrepo-1 234m 10282Mi ds-idrepo-2 209m 10327Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 12m 3358Mi idm-65858d8c4c-4qc5l 9m 1356Mi lodemon-5798c88b8f-k2sv4 2m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 1057m 359Mi 19:04:45 DEBUG --- stderr --- 19:04:45 DEBUG 19:04:45 INFO [loop_until]: OK (rc = 0) 19:04:45 DEBUG --- stdout --- 19:04:45 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1365Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 78m 0% 5531Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 74m 0% 3415Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 75m 0% 3777Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 70m 0% 2680Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 127m 0% 2122Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 77m 0% 4617Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 318m 2% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 320m 2% 1094Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 484m 3% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 276m 1% 10972Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 154m 0% 10953Mi 18% gke-xlou-cdm-ds-32e4dcb1-x4wx 3081m 19% 13838Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1167m 7% 1887Mi 3% 19:04:45 DEBUG --- stderr --- 19:04:45 DEBUG 19:05:45 INFO 19:05:45 INFO [loop_until]: kubectl --namespace=xlou top pods 19:05:45 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:05:45 INFO [loop_until]: OK (rc = 0) 19:05:45 DEBUG --- stdout --- 19:05:45 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 11m 2618Mi am-55f77847b7-dr27z 19m 2266Mi am-55f77847b7-fp459 15m 4509Mi ds-cts-0 7m 383Mi ds-cts-1 11m 376Mi ds-cts-2 7m 365Mi ds-idrepo-0 2762m 13363Mi ds-idrepo-1 24m 10291Mi ds-idrepo-2 23m 10329Mi 
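The Thread-19 traceback above shows the underlying failure chain: HttpCmd.request_cmd raises FailException after exhausting its retries, and the exception handler at monitoring.py line 315 then calls the logger object directly (self.logger(f'...')), which fails with TypeError: 'LodestarLogger' object is not callable because the logger exposes level methods but defines no __call__. A minimal reproduction with a hypothetical stand-in class, plus the likely one-line fix:

import logging


class LodestarLoggerSketch:
    """Hypothetical stand-in for LodestarLogger: wraps a stdlib logger and
    exposes level methods, but deliberately defines no __call__."""

    def __init__(self, name="lodemon"):
        self._log = logging.getLogger(name)

    def warning(self, msg):
        self._log.warning(msg)


logger = LodestarLoggerSketch()
query, e = "sum(rate(am_cts_reaper_search_count[60s]))by(pod)", "connection refused"

# What the failing handler effectively does -- raises the TypeError seen above:
# logger(f'Query: {query} failed with: {e}')

# Likely fix: call a logging method instead of the object itself
# (or add a __call__ method to the logger class):
logger.warning(f'Query: {query} failed with: {e}')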
end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 11m 3359Mi idm-65858d8c4c-4qc5l 10m 1356Mi lodemon-5798c88b8f-k2sv4 2m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 1178m 359Mi 19:05:45 DEBUG --- stderr --- 19:05:45 DEBUG 19:05:46 INFO 19:05:46 INFO [loop_until]: kubectl --namespace=xlou top node 19:05:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:05:46 INFO [loop_until]: OK (rc = 0) 19:05:46 DEBUG --- stdout --- 19:05:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 70m 0% 1365Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 71m 0% 5531Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 78m 0% 3412Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 68m 0% 3786Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 77m 0% 2682Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 128m 0% 2125Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 72m 0% 4619Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 68m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1095Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 73m 0% 10973Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 75m 0% 10964Mi 18% gke-xlou-cdm-ds-32e4dcb1-x4wx 2936m 18% 13944Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1298m 8% 1884Mi 3% 19:05:46 DEBUG --- stderr --- 19:05:46 DEBUG 19:06:46 INFO 19:06:46 INFO [loop_until]: kubectl --namespace=xlou top pods 19:06:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:06:46 INFO [loop_until]: OK (rc = 0) 19:06:46 DEBUG --- stdout --- 19:06:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 11m 2625Mi am-55f77847b7-dr27z 12m 2266Mi am-55f77847b7-fp459 10m 4509Mi ds-cts-0 8m 383Mi ds-cts-1 10m 375Mi ds-cts-2 9m 364Mi ds-idrepo-0 2899m 13458Mi ds-idrepo-1 20m 10280Mi ds-idrepo-2 17m 10329Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 11m 3360Mi idm-65858d8c4c-4qc5l 14m 1356Mi lodemon-5798c88b8f-k2sv4 2m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 1259m 360Mi 19:06:46 DEBUG --- stderr --- 19:06:46 DEBUG 19:06:46 INFO 19:06:46 INFO [loop_until]: kubectl --namespace=xlou top node 19:06:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:06:46 INFO [loop_until]: OK (rc = 0) 19:06:46 DEBUG --- stdout --- 19:06:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1365Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 77m 0% 5532Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 3411Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 68m 0% 3798Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 79m 0% 2683Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 127m 0% 2127Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 73m 0% 4622Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 67m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1094Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 68m 0% 10975Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 68m 0% 10953Mi 18% gke-xlou-cdm-ds-32e4dcb1-x4wx 2916m 18% 14014Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1330m 8% 1887Mi 3% 19:06:46 DEBUG --- stderr --- 19:06:46 DEBUG 19:07:46 INFO 19:07:46 INFO [loop_until]: kubectl --namespace=xlou top pods 19:07:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:07:46 INFO [loop_until]: OK (rc = 0) 19:07:46 DEBUG --- stdout --- 19:07:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 12m 2637Mi am-55f77847b7-dr27z 16m 2266Mi am-55f77847b7-fp459 13m 4509Mi ds-cts-0 8m 383Mi 
ds-cts-1 11m 376Mi ds-cts-2 8m 364Mi ds-idrepo-0 2969m 13489Mi ds-idrepo-1 23m 10286Mi ds-idrepo-2 17m 10330Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 11m 3361Mi idm-65858d8c4c-4qc5l 10m 1356Mi lodemon-5798c88b8f-k2sv4 7m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 1336m 360Mi 19:07:46 DEBUG --- stderr --- 19:07:46 DEBUG 19:07:46 INFO 19:07:46 INFO [loop_until]: kubectl --namespace=xlou top node 19:07:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:07:46 INFO [loop_until]: OK (rc = 0) 19:07:46 DEBUG --- stdout --- 19:07:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 71m 0% 1368Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 71m 0% 5536Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 75m 0% 3412Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 3810Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 2680Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 124m 0% 2122Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 72m 0% 4625Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 70m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1095Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 68m 0% 10976Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 68m 0% 10961Mi 18% gke-xlou-cdm-ds-32e4dcb1-x4wx 2953m 18% 14042Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1393m 8% 1885Mi 3% 19:07:46 DEBUG --- stderr --- 19:07:46 DEBUG 19:08:46 INFO 19:08:46 INFO [loop_until]: kubectl --namespace=xlou top pods 19:08:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:08:46 INFO [loop_until]: OK (rc = 0) 19:08:46 DEBUG --- stdout --- 19:08:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 10m 2650Mi am-55f77847b7-dr27z 27m 2277Mi am-55f77847b7-fp459 11m 4510Mi ds-cts-0 9m 383Mi ds-cts-1 10m 376Mi ds-cts-2 11m 365Mi ds-idrepo-0 2995m 13639Mi ds-idrepo-1 16m 10288Mi ds-idrepo-2 16m 10332Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 11m 3361Mi idm-65858d8c4c-4qc5l 9m 1357Mi lodemon-5798c88b8f-k2sv4 6m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 1381m 360Mi 19:08:46 DEBUG --- stderr --- 19:08:46 DEBUG 19:08:46 INFO 19:08:46 INFO [loop_until]: kubectl --namespace=xlou top node 19:08:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:08:46 INFO [loop_until]: OK (rc = 0) 19:08:46 DEBUG --- stdout --- 19:08:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1366Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 5535Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 79m 0% 3422Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 3822Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 78m 0% 2682Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2123Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 68m 0% 4621Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 70m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1091Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 68m 0% 10977Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 10961Mi 18% gke-xlou-cdm-ds-32e4dcb1-x4wx 3086m 19% 14182Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1472m 9% 1888Mi 3% 19:08:46 DEBUG --- stderr --- 19:08:46 DEBUG 19:09:46 INFO 19:09:46 INFO [loop_until]: kubectl --namespace=xlou top pods 19:09:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:09:46 INFO [loop_until]: OK (rc = 0) 19:09:46 DEBUG --- stdout --- 19:09:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi 
am-55f77847b7-8t2dm 11m 2661Mi am-55f77847b7-dr27z 28m 2274Mi am-55f77847b7-fp459 13m 4511Mi ds-cts-0 7m 383Mi ds-cts-1 11m 376Mi ds-cts-2 7m 365Mi ds-idrepo-0 11m 13639Mi ds-idrepo-1 16m 10288Mi ds-idrepo-2 22m 10334Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 13m 3361Mi idm-65858d8c4c-4qc5l 7m 1357Mi lodemon-5798c88b8f-k2sv4 5m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 1m 98Mi 19:09:46 DEBUG --- stderr --- 19:09:46 DEBUG 19:09:46 INFO 19:09:46 INFO [loop_until]: kubectl --namespace=xlou top node 19:09:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:09:46 INFO [loop_until]: OK (rc = 0) 19:09:46 DEBUG --- stdout --- 19:09:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1366Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 75m 0% 5535Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 93m 0% 3415Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 3830Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 2683Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 129m 0% 2130Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 75m 0% 4624Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 72m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1094Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 74m 0% 10979Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 10961Mi 18% gke-xlou-cdm-ds-32e4dcb1-x4wx 61m 0% 14185Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 66m 0% 1628Mi 2% 19:09:46 DEBUG --- stderr --- 19:09:46 DEBUG 19:10:46 INFO 19:10:46 INFO [loop_until]: kubectl --namespace=xlou top pods 19:10:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:10:46 INFO [loop_until]: OK (rc = 0) 19:10:46 DEBUG --- stdout --- 19:10:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 8m 2669Mi am-55f77847b7-dr27z 18m 2278Mi am-55f77847b7-fp459 10m 4512Mi ds-cts-0 7m 384Mi ds-cts-1 11m 376Mi ds-cts-2 6m 365Mi ds-idrepo-0 14m 13639Mi ds-idrepo-1 2707m 12698Mi ds-idrepo-2 18m 10335Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 21m 3365Mi idm-65858d8c4c-4qc5l 7m 1354Mi lodemon-5798c88b8f-k2sv4 2m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 963m 366Mi 19:10:46 DEBUG --- stderr --- 19:10:46 DEBUG 19:10:46 INFO 19:10:46 INFO [loop_until]: kubectl --namespace=xlou top node 19:10:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:10:46 INFO [loop_until]: OK (rc = 0) 19:10:46 DEBUG --- stdout --- 19:10:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1366Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5539Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 73m 0% 3420Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 68m 0% 3841Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 2679Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2128Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 78m 0% 4618Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1092Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 74m 0% 10978Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 2671m 16% 13304Mi 22% gke-xlou-cdm-ds-32e4dcb1-x4wx 62m 0% 14188Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1126m 7% 1893Mi 3% 19:10:46 DEBUG --- stderr --- 19:10:46 DEBUG 19:11:46 INFO 19:11:46 INFO [loop_until]: kubectl --namespace=xlou top pods 19:11:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:11:46 INFO [loop_until]: OK (rc = 0) 19:11:46 
DEBUG --- stdout --- 19:11:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 9m 2681Mi am-55f77847b7-dr27z 16m 2285Mi am-55f77847b7-fp459 11m 4512Mi ds-cts-0 10m 383Mi ds-cts-1 10m 376Mi ds-cts-2 6m 366Mi ds-idrepo-0 12m 13639Mi ds-idrepo-1 2765m 13341Mi ds-idrepo-2 21m 10335Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 10m 3359Mi idm-65858d8c4c-4qc5l 8m 1362Mi lodemon-5798c88b8f-k2sv4 8m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 1129m 366Mi 19:11:46 DEBUG --- stderr --- 19:11:46 DEBUG 19:11:46 INFO 19:11:46 INFO [loop_until]: kubectl --namespace=xlou top node 19:11:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:11:46 INFO [loop_until]: OK (rc = 0) 19:11:46 DEBUG --- stdout --- 19:11:46 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1367Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 5540Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 76m 0% 3429Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 3850Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 2689Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2132Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 4621Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1094Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 73m 0% 10982Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 2684m 16% 13924Mi 23% gke-xlou-cdm-ds-32e4dcb1-x4wx 59m 0% 14189Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1208m 7% 1895Mi 3% 19:11:46 DEBUG --- stderr --- 19:11:46 DEBUG 19:12:46 INFO 19:12:46 INFO [loop_until]: kubectl --namespace=xlou top pods 19:12:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:12:46 INFO [loop_until]: OK (rc = 0) 19:12:46 DEBUG --- stdout --- 19:12:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 17m 2692Mi am-55f77847b7-dr27z 19m 2295Mi am-55f77847b7-fp459 13m 4512Mi ds-cts-0 6m 384Mi ds-cts-1 19m 376Mi ds-cts-2 12m 366Mi ds-idrepo-0 11m 13639Mi ds-idrepo-1 2627m 13343Mi ds-idrepo-2 32m 10338Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 11m 3359Mi idm-65858d8c4c-4qc5l 7m 1372Mi lodemon-5798c88b8f-k2sv4 6m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 1231m 368Mi 19:12:46 DEBUG --- stderr --- 19:12:46 DEBUG 19:12:46 INFO 19:12:46 INFO [loop_until]: kubectl --namespace=xlou top node 19:12:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:12:47 INFO [loop_until]: OK (rc = 0) 19:12:47 DEBUG --- stdout --- 19:12:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1368Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 72m 0% 5538Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 71m 0% 3442Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 68m 0% 3862Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 76m 0% 2698Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 129m 0% 2132Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 4622Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 67m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1091Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 82m 0% 10985Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 2726m 17% 13930Mi 23% gke-xlou-cdm-ds-32e4dcb1-x4wx 60m 0% 14187Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1315m 8% 1896Mi 3% 19:12:47 DEBUG --- stderr --- 19:12:47 DEBUG 19:13:46 INFO 19:13:46 INFO [loop_until]: kubectl --namespace=xlou top pods 19:13:46 
INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:13:46 INFO [loop_until]: OK (rc = 0) 19:13:46 DEBUG --- stdout --- 19:13:46 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 8m 2703Mi am-55f77847b7-dr27z 11m 2308Mi am-55f77847b7-fp459 14m 4513Mi ds-cts-0 6m 390Mi ds-cts-1 10m 376Mi ds-cts-2 15m 373Mi ds-idrepo-0 16m 13639Mi ds-idrepo-1 2947m 13490Mi ds-idrepo-2 23m 10342Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 10m 3359Mi idm-65858d8c4c-4qc5l 10m 1382Mi lodemon-5798c88b8f-k2sv4 2m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 1306m 369Mi 19:13:46 DEBUG --- stderr --- 19:13:46 DEBUG 19:13:47 INFO 19:13:47 INFO [loop_until]: kubectl --namespace=xlou top node 19:13:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:13:47 INFO [loop_until]: OK (rc = 0) 19:13:47 DEBUG --- stdout --- 19:13:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1364Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 72m 0% 5538Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 3453Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 3875Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 2712Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2131Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 4622Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 70m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 65m 0% 1096Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 81m 0% 10998Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 3031m 19% 14070Mi 23% gke-xlou-cdm-ds-32e4dcb1-x4wx 59m 0% 14185Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1388m 8% 1897Mi 3% 19:13:47 DEBUG --- stderr --- 19:13:47 DEBUG 19:14:46 INFO 19:14:46 INFO [loop_until]: kubectl --namespace=xlou top pods 19:14:46 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:14:47 INFO [loop_until]: OK (rc = 0) 19:14:47 DEBUG --- stdout --- 19:14:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 12m 2714Mi am-55f77847b7-dr27z 10m 2319Mi am-55f77847b7-fp459 10m 4512Mi ds-cts-0 7m 391Mi ds-cts-1 10m 378Mi ds-cts-2 7m 373Mi ds-idrepo-0 11m 13638Mi ds-idrepo-1 2927m 13506Mi ds-idrepo-2 14m 10342Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 13m 3359Mi idm-65858d8c4c-4qc5l 7m 1395Mi lodemon-5798c88b8f-k2sv4 6m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 1333m 369Mi 19:14:47 DEBUG --- stderr --- 19:14:47 DEBUG 19:14:47 INFO 19:14:47 INFO [loop_until]: kubectl --namespace=xlou top node 19:14:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:14:47 INFO [loop_until]: OK (rc = 0) 19:14:47 DEBUG --- stdout --- 19:14:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1364Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 5538Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 3464Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 3886Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 2723Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2132Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 74m 0% 4619Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 68m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 73m 0% 10984Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 3010m 18% 14087Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 59m 0% 14191Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1451m 9% 1896Mi 3% 19:14:47 DEBUG --- 
stderr --- 19:14:47 DEBUG 19:15:47 INFO 19:15:47 INFO [loop_until]: kubectl --namespace=xlou top pods 19:15:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:15:47 INFO [loop_until]: OK (rc = 0) 19:15:47 DEBUG --- stdout --- 19:15:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 8m 2727Mi am-55f77847b7-dr27z 10m 2327Mi am-55f77847b7-fp459 19m 4517Mi ds-cts-0 8m 390Mi ds-cts-1 10m 376Mi ds-cts-2 8m 374Mi ds-idrepo-0 11m 13639Mi ds-idrepo-1 12m 13506Mi ds-idrepo-2 13m 10342Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 9m 3360Mi idm-65858d8c4c-4qc5l 8m 1405Mi lodemon-5798c88b8f-k2sv4 2m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 1m 98Mi 19:15:47 DEBUG --- stderr --- 19:15:47 DEBUG 19:15:47 INFO 19:15:47 INFO [loop_until]: kubectl --namespace=xlou top node 19:15:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:15:47 INFO [loop_until]: OK (rc = 0) 19:15:47 DEBUG --- stdout --- 19:15:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1367Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 76m 0% 5543Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 3475Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 3898Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 2730Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 124m 0% 2129Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 4622Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 63m 0% 10986Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 14085Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 58m 0% 14187Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 65m 0% 1629Mi 2% 19:15:47 DEBUG --- stderr --- 19:15:47 DEBUG 19:16:47 INFO 19:16:47 INFO [loop_until]: kubectl --namespace=xlou top pods 19:16:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:16:47 INFO [loop_until]: OK (rc = 0) 19:16:47 DEBUG --- stdout --- 19:16:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 10m 2735Mi am-55f77847b7-dr27z 9m 2339Mi am-55f77847b7-fp459 10m 4517Mi ds-cts-0 7m 390Mi ds-cts-1 13m 376Mi ds-cts-2 7m 373Mi ds-idrepo-0 13m 13638Mi ds-idrepo-1 23m 13506Mi ds-idrepo-2 2193m 11730Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 16m 3357Mi idm-65858d8c4c-4qc5l 5m 1438Mi lodemon-5798c88b8f-k2sv4 2m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 825m 365Mi 19:16:47 DEBUG --- stderr --- 19:16:47 DEBUG 19:16:47 INFO 19:16:47 INFO [loop_until]: kubectl --namespace=xlou top node 19:16:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:16:47 INFO [loop_until]: OK (rc = 0) 19:16:47 DEBUG --- stdout --- 19:16:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1365Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5545Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 3485Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 3908Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 2762Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 122m 0% 2129Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 76m 0% 4621Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 70m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2150m 13% 12117Mi 20% gke-xlou-cdm-ds-32e4dcb1-n920 66m 0% 14085Mi 24% 
gke-xlou-cdm-ds-32e4dcb1-x4wx 64m 0% 14187Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 996m 6% 1890Mi 3% 19:16:47 DEBUG --- stderr --- 19:16:47 DEBUG 19:17:47 INFO 19:17:47 INFO [loop_until]: kubectl --namespace=xlou top pods 19:17:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:17:47 INFO [loop_until]: OK (rc = 0) 19:17:47 DEBUG --- stdout --- 19:17:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 10m 2748Mi am-55f77847b7-dr27z 8m 2348Mi am-55f77847b7-fp459 11m 4518Mi ds-cts-0 7m 391Mi ds-cts-1 10m 377Mi ds-cts-2 6m 373Mi ds-idrepo-0 14m 13639Mi ds-idrepo-1 13m 13507Mi ds-idrepo-2 2738m 13380Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 8m 3358Mi idm-65858d8c4c-4qc5l 7m 1439Mi lodemon-5798c88b8f-k2sv4 2m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 1207m 365Mi 19:17:47 DEBUG --- stderr --- 19:17:47 DEBUG 19:17:47 INFO 19:17:47 INFO [loop_until]: kubectl --namespace=xlou top node 19:17:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:17:47 INFO [loop_until]: OK (rc = 0) 19:17:47 DEBUG --- stdout --- 19:17:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1364Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 71m 0% 5544Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 3491Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 3920Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 2763Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 124m 0% 2129Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 4621Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2829m 17% 13950Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 14083Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 65m 0% 14187Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1266m 7% 1891Mi 3% 19:17:47 DEBUG --- stderr --- 19:17:47 DEBUG 19:18:47 INFO 19:18:47 INFO [loop_until]: kubectl --namespace=xlou top pods 19:18:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:18:47 INFO [loop_until]: OK (rc = 0) 19:18:47 DEBUG --- stdout --- 19:18:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 10m 2756Mi am-55f77847b7-dr27z 15m 2361Mi am-55f77847b7-fp459 9m 4521Mi ds-cts-0 7m 391Mi ds-cts-1 10m 377Mi ds-cts-2 6m 373Mi ds-idrepo-0 15m 13639Mi ds-idrepo-1 13m 13506Mi ds-idrepo-2 2694m 13314Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 9m 3358Mi idm-65858d8c4c-4qc5l 6m 1439Mi lodemon-5798c88b8f-k2sv4 2m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 1182m 368Mi 19:18:47 DEBUG --- stderr --- 19:18:47 DEBUG 19:18:47 INFO 19:18:47 INFO [loop_until]: kubectl --namespace=xlou top node 19:18:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:18:47 INFO [loop_until]: OK (rc = 0) 19:18:47 DEBUG --- stdout --- 19:18:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1365Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5546Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 76m 0% 3506Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 3928Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 76m 0% 2767Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 127m 0% 2130Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 4619Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 68m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1125Mi 
1% gke-xlou-cdm-ds-32e4dcb1-b374 2892m 18% 13921Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 14084Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 56m 0% 14189Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1294m 8% 1894Mi 3% 19:18:47 DEBUG --- stderr --- 19:18:47 DEBUG 19:19:47 INFO 19:19:47 INFO [loop_until]: kubectl --namespace=xlou top pods 19:19:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:19:47 INFO [loop_until]: OK (rc = 0) 19:19:47 DEBUG --- stdout --- 19:19:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 8m 2768Mi am-55f77847b7-dr27z 11m 2372Mi am-55f77847b7-fp459 10m 4521Mi ds-cts-0 8m 392Mi ds-cts-1 10m 377Mi ds-cts-2 6m 373Mi ds-idrepo-0 12m 13639Mi ds-idrepo-1 14m 13506Mi ds-idrepo-2 2734m 13428Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 14m 3358Mi idm-65858d8c4c-4qc5l 13m 1439Mi lodemon-5798c88b8f-k2sv4 2m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 1296m 368Mi 19:19:47 DEBUG --- stderr --- 19:19:47 DEBUG 19:19:47 INFO 19:19:47 INFO [loop_until]: kubectl --namespace=xlou top node 19:19:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:19:47 INFO [loop_until]: OK (rc = 0) 19:19:47 DEBUG --- stdout --- 19:19:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1366Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5546Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 71m 0% 3515Mi 5% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 3942Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 80m 0% 2763Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 128m 0% 2126Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 73m 0% 4620Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2870m 18% 13986Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 14087Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 59m 0% 14188Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1366m 8% 1895Mi 3% 19:19:47 DEBUG --- stderr --- 19:19:47 DEBUG 19:20:47 INFO 19:20:47 INFO [loop_until]: kubectl --namespace=xlou top pods 19:20:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:20:47 INFO [loop_until]: OK (rc = 0) 19:20:47 DEBUG --- stdout --- 19:20:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 10m 2807Mi am-55f77847b7-dr27z 8m 2381Mi am-55f77847b7-fp459 11m 4523Mi ds-cts-0 7m 392Mi ds-cts-1 9m 377Mi ds-cts-2 7m 373Mi ds-idrepo-0 11m 13639Mi ds-idrepo-1 19m 13507Mi ds-idrepo-2 3147m 13649Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 12m 3358Mi idm-65858d8c4c-4qc5l 6m 1439Mi lodemon-5798c88b8f-k2sv4 2m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 1320m 368Mi 19:20:47 DEBUG --- stderr --- 19:20:47 DEBUG 19:20:47 INFO 19:20:47 INFO [loop_until]: kubectl --namespace=xlou top node 19:20:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:20:47 INFO [loop_until]: OK (rc = 0) 19:20:47 DEBUG --- stdout --- 19:20:47 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1367Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 70m 0% 5549Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 3526Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 72m 0% 3977Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 68m 0% 2761Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 124m 0% 2128Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 75m 0% 4622Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 
69m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3127m 19% 14205Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 69m 0% 14089Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 57m 0% 14190Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1426m 8% 1896Mi 3% 19:20:47 DEBUG --- stderr --- 19:20:47 DEBUG 19:21:47 INFO 19:21:47 INFO [loop_until]: kubectl --namespace=xlou top pods 19:21:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:21:47 INFO [loop_until]: OK (rc = 0) 19:21:47 DEBUG --- stdout --- 19:21:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 8m 2823Mi am-55f77847b7-dr27z 9m 2396Mi am-55f77847b7-fp459 9m 4523Mi ds-cts-0 10m 392Mi ds-cts-1 10m 377Mi ds-cts-2 8m 373Mi ds-idrepo-0 11m 13639Mi ds-idrepo-1 10m 13506Mi ds-idrepo-2 628m 13681Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 16m 3360Mi idm-65858d8c4c-4qc5l 8m 1440Mi lodemon-5798c88b8f-k2sv4 6m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 400m 99Mi 19:21:47 DEBUG --- stderr --- 19:21:47 DEBUG 19:21:47 INFO 19:21:47 INFO [loop_until]: kubectl --namespace=xlou top node 19:21:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:21:48 INFO [loop_until]: OK (rc = 0) 19:21:48 DEBUG --- stdout --- 19:21:48 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1370Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 5548Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 3540Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 69m 0% 3993Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 2764Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 130m 0% 2130Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 75m 0% 4625Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 674m 4% 14232Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 14090Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 58m 0% 14189Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 576m 3% 1632Mi 2% 19:21:48 DEBUG --- stderr --- 19:21:48 DEBUG 19:22:47 INFO 19:22:47 INFO [loop_until]: kubectl --namespace=xlou top pods 19:22:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:22:47 INFO [loop_until]: OK (rc = 0) 19:22:47 DEBUG --- stdout --- 19:22:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 8m 2832Mi am-55f77847b7-dr27z 8m 2407Mi am-55f77847b7-fp459 9m 4523Mi ds-cts-0 7m 392Mi ds-cts-1 9m 377Mi ds-cts-2 9m 373Mi ds-idrepo-0 11m 13638Mi ds-idrepo-1 10m 13507Mi ds-idrepo-2 11m 13681Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 11m 3360Mi idm-65858d8c4c-4qc5l 6m 1440Mi lodemon-5798c88b8f-k2sv4 2m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 1208m 484Mi 19:22:47 DEBUG --- stderr --- 19:22:47 DEBUG 19:22:48 INFO 19:22:48 INFO [loop_until]: kubectl --namespace=xlou top node 19:22:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:22:48 INFO [loop_until]: OK (rc = 0) 19:22:48 DEBUG --- stdout --- 19:22:48 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1370Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 5549Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 3550Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 4003Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 2764Mi 4% gke-xlou-cdm-default-pool-f05840a3-h81k 132m 0% 
2129Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 4622Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 67m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 61m 0% 14229Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 14090Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 58m 0% 14188Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1622m 10% 2041Mi 3% 19:22:48 DEBUG --- stderr --- 19:22:48 DEBUG 19:23:47 INFO 19:23:47 INFO [loop_until]: kubectl --namespace=xlou top pods 19:23:47 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:23:47 INFO [loop_until]: OK (rc = 0) 19:23:47 DEBUG --- stdout --- 19:23:47 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 102m 3454Mi am-55f77847b7-dr27z 109m 3300Mi am-55f77847b7-fp459 83m 4567Mi ds-cts-0 7m 394Mi ds-cts-1 10m 378Mi ds-cts-2 7m 374Mi ds-idrepo-0 5399m 13641Mi ds-idrepo-1 1399m 13784Mi ds-idrepo-2 1191m 13624Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6645m 4243Mi idm-65858d8c4c-4qc5l 6502m 3662Mi lodemon-5798c88b8f-k2sv4 7m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 1051m 517Mi 19:23:47 DEBUG --- stderr --- 19:23:47 DEBUG 19:23:48 INFO 19:23:48 INFO [loop_until]: kubectl --namespace=xlou top node 19:23:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:23:48 INFO [loop_until]: OK (rc = 0) 19:23:48 DEBUG --- stdout --- 19:23:48 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1368Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 140m 0% 5598Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 169m 1% 4360Mi 7% gke-xlou-cdm-default-pool-f05840a3-9p4b 144m 0% 4781Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 6942m 43% 4983Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 1573m 9% 2166Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6978m 43% 5496Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 68m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1285m 8% 14172Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 1497m 9% 14357Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 5419m 34% 14186Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1064m 6% 2040Mi 3% 19:23:48 DEBUG --- stderr --- 19:23:48 DEBUG 19:24:48 INFO 19:24:48 INFO [loop_until]: kubectl --namespace=xlou top pods 19:24:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:24:48 INFO [loop_until]: OK (rc = 0) 19:24:48 DEBUG --- stdout --- 19:24:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 96m 4672Mi am-55f77847b7-dr27z 94m 4473Mi am-55f77847b7-fp459 79m 4748Mi ds-cts-0 6m 393Mi ds-cts-1 10m 378Mi ds-cts-2 7m 374Mi ds-idrepo-0 5967m 13817Mi ds-idrepo-1 1533m 13801Mi ds-idrepo-2 1444m 13632Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6137m 4257Mi idm-65858d8c4c-4qc5l 5602m 3747Mi lodemon-5798c88b8f-k2sv4 2m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 876m 517Mi 19:24:48 DEBUG --- stderr --- 19:24:48 DEBUG 19:24:48 INFO 19:24:48 INFO [loop_until]: kubectl --namespace=xlou top node 19:24:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:24:48 INFO [loop_until]: OK (rc = 0) 19:24:48 DEBUG --- stdout --- 19:24:48 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1368Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 139m 0% 5775Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 147m 0% 5581Mi 9% 
gke-xlou-cdm-default-pool-f05840a3-9p4b 150m 0% 5999Mi 10% gke-xlou-cdm-default-pool-f05840a3-bf2g 5678m 35% 4983Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 1621m 10% 2166Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6345m 39% 5510Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1775m 11% 14350Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 1616m 10% 14368Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 6119m 38% 14352Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 931m 5% 2040Mi 3% 19:24:48 DEBUG --- stderr --- 19:24:48 DEBUG 19:25:48 INFO 19:25:48 INFO [loop_until]: kubectl --namespace=xlou top pods 19:25:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:25:48 INFO [loop_until]: OK (rc = 0) 19:25:48 DEBUG --- stdout --- 19:25:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 87m 5720Mi am-55f77847b7-dr27z 92m 5651Mi am-55f77847b7-fp459 75m 4748Mi ds-cts-0 9m 393Mi ds-cts-1 6m 378Mi ds-cts-2 6m 374Mi ds-idrepo-0 6230m 13823Mi ds-idrepo-1 1376m 13823Mi ds-idrepo-2 1399m 13751Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6069m 4276Mi idm-65858d8c4c-4qc5l 5685m 3705Mi lodemon-5798c88b8f-k2sv4 2m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 862m 518Mi 19:25:48 DEBUG --- stderr --- 19:25:48 DEBUG 19:25:48 INFO 19:25:48 INFO [loop_until]: kubectl --namespace=xlou top node 19:25:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:25:48 INFO [loop_until]: OK (rc = 0) 19:25:48 DEBUG --- stdout --- 19:25:48 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1368Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 135m 0% 5773Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 156m 0% 6789Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 150m 0% 6887Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5912m 37% 5016Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 1556m 9% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6325m 39% 5530Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 53m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1318m 8% 14283Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 1419m 8% 14383Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 6068m 38% 14351Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 936m 5% 2043Mi 3% 19:25:48 DEBUG --- stderr --- 19:25:48 DEBUG 19:26:48 INFO 19:26:48 INFO [loop_until]: kubectl --namespace=xlou top pods 19:26:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:26:48 INFO [loop_until]: OK (rc = 0) 19:26:48 DEBUG --- stdout --- 19:26:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 73m 5724Mi am-55f77847b7-dr27z 71m 5654Mi am-55f77847b7-fp459 88m 4749Mi ds-cts-0 7m 393Mi ds-cts-1 8m 378Mi ds-cts-2 6m 374Mi ds-idrepo-0 6622m 13823Mi ds-idrepo-1 1654m 13813Mi ds-idrepo-2 1472m 13754Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6108m 4284Mi idm-65858d8c4c-4qc5l 5691m 3720Mi lodemon-5798c88b8f-k2sv4 2m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 855m 518Mi 19:26:48 DEBUG --- stderr --- 19:26:48 DEBUG 19:26:48 INFO 19:26:48 INFO [loop_until]: kubectl --namespace=xlou top node 19:26:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:26:48 INFO [loop_until]: OK (rc = 0) 19:26:48 DEBUG --- stdout --- 19:26:48 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% 
gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1372Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 139m 0% 5774Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 133m 0% 6791Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 130m 0% 6911Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5904m 37% 5034Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 1615m 10% 2154Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6185m 38% 5536Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 55m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1825m 11% 14252Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 1729m 10% 14353Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 6588m 41% 14347Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 904m 5% 2041Mi 3% 19:26:48 DEBUG --- stderr --- 19:26:48 DEBUG 19:27:48 INFO 19:27:48 INFO [loop_until]: kubectl --namespace=xlou top pods 19:27:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:27:48 INFO [loop_until]: OK (rc = 0) 19:27:48 DEBUG --- stdout --- 19:27:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 73m 5753Mi am-55f77847b7-dr27z 74m 5665Mi am-55f77847b7-fp459 89m 5590Mi ds-cts-0 7m 393Mi ds-cts-1 7m 378Mi ds-cts-2 6m 375Mi ds-idrepo-0 6289m 13821Mi ds-idrepo-1 1480m 13825Mi ds-idrepo-2 1462m 13823Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6028m 4300Mi idm-65858d8c4c-4qc5l 5771m 3734Mi lodemon-5798c88b8f-k2sv4 2m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 890m 518Mi 19:27:48 DEBUG --- stderr --- 19:27:48 DEBUG 19:27:48 INFO 19:27:48 INFO [loop_until]: kubectl --namespace=xlou top node 19:27:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:27:48 INFO [loop_until]: OK (rc = 0) 19:27:48 DEBUG --- stdout --- 19:27:48 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1368Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 150m 0% 6736Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 131m 0% 6804Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 131m 0% 6918Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6055m 38% 5050Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 1588m 9% 2163Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6374m 40% 5554Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 54m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1493m 9% 14350Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 1629m 10% 14374Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 6616m 41% 14362Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 953m 5% 2044Mi 3% 19:27:48 DEBUG --- stderr --- 19:27:48 DEBUG 19:28:48 INFO 19:28:48 INFO [loop_until]: kubectl --namespace=xlou top pods 19:28:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:28:48 INFO [loop_until]: OK (rc = 0) 19:28:48 DEBUG --- stdout --- 19:28:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 73m 5757Mi am-55f77847b7-dr27z 72m 5667Mi am-55f77847b7-fp459 73m 5835Mi ds-cts-0 7m 393Mi ds-cts-1 10m 379Mi ds-cts-2 6m 374Mi ds-idrepo-0 7229m 13833Mi ds-idrepo-1 2157m 13823Mi ds-idrepo-2 2500m 13830Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6143m 4312Mi idm-65858d8c4c-4qc5l 5498m 3750Mi lodemon-5798c88b8f-k2sv4 8m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 844m 519Mi 19:28:48 DEBUG --- stderr --- 19:28:48 DEBUG 19:28:48 INFO 19:28:48 INFO [loop_until]: kubectl --namespace=xlou top node 19:28:48 INFO [loop_until]: 
(max_time=180, interval=5, expected_rc=[0] 19:28:48 INFO [loop_until]: OK (rc = 0) 19:28:49 DEBUG --- stdout --- 19:28:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1371Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 132m 0% 6860Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 133m 0% 6804Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 129m 0% 6921Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5904m 37% 5065Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 1607m 10% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6340m 39% 5563Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2743m 17% 14355Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 2195m 13% 14372Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7007m 44% 14341Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 924m 5% 2043Mi 3% 19:28:49 DEBUG --- stderr --- 19:28:49 DEBUG 19:29:48 INFO 19:29:48 INFO [loop_until]: kubectl --namespace=xlou top pods 19:29:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:29:48 INFO [loop_until]: OK (rc = 0) 19:29:48 DEBUG --- stdout --- 19:29:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 73m 5793Mi am-55f77847b7-dr27z 82m 5698Mi am-55f77847b7-fp459 85m 5835Mi ds-cts-0 6m 393Mi ds-cts-1 17m 378Mi ds-cts-2 6m 374Mi ds-idrepo-0 6627m 13817Mi ds-idrepo-1 1658m 13764Mi ds-idrepo-2 1763m 13823Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6018m 4322Mi idm-65858d8c4c-4qc5l 5686m 3912Mi lodemon-5798c88b8f-k2sv4 5m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 853m 520Mi 19:29:48 DEBUG --- stderr --- 19:29:48 DEBUG 19:29:49 INFO 19:29:49 INFO [loop_until]: kubectl --namespace=xlou top node 19:29:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:29:49 INFO [loop_until]: OK (rc = 0) 19:29:49 DEBUG --- stdout --- 19:29:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1370Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 144m 0% 6856Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 144m 0% 6835Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 129m 0% 6956Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5925m 37% 5227Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 1555m 9% 2154Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6355m 39% 5572Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 76m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 52m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1910m 12% 14293Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 1823m 11% 14307Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 6737m 42% 14291Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 903m 5% 2043Mi 3% 19:29:49 DEBUG --- stderr --- 19:29:49 DEBUG 19:30:48 INFO 19:30:48 INFO [loop_until]: kubectl --namespace=xlou top pods 19:30:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:30:48 INFO [loop_until]: OK (rc = 0) 19:30:48 DEBUG --- stdout --- 19:30:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 68m 5793Mi am-55f77847b7-dr27z 71m 5698Mi am-55f77847b7-fp459 78m 5835Mi ds-cts-0 7m 393Mi ds-cts-1 11m 379Mi ds-cts-2 7m 374Mi ds-idrepo-0 6284m 13822Mi ds-idrepo-1 1563m 13828Mi ds-idrepo-2 1759m 13833Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6001m 4332Mi idm-65858d8c4c-4qc5l 5604m 3930Mi lodemon-5798c88b8f-k2sv4 2m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi 
overseer-0-58cf4b587d-f2ksh 838m 520Mi 19:30:48 DEBUG --- stderr --- 19:30:48 DEBUG 19:30:49 INFO 19:30:49 INFO [loop_until]: kubectl --namespace=xlou top node 19:30:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:30:49 INFO [loop_until]: OK (rc = 0) 19:30:49 DEBUG --- stdout --- 19:30:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1371Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 136m 0% 6866Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 130m 0% 6836Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 123m 0% 6956Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5949m 37% 5240Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 1609m 10% 2160Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6272m 39% 5596Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1775m 11% 14353Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 1747m 10% 14380Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 6554m 41% 14353Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 952m 5% 2046Mi 3% 19:30:49 DEBUG --- stderr --- 19:30:49 DEBUG 19:31:48 INFO 19:31:48 INFO [loop_until]: kubectl --namespace=xlou top pods 19:31:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:31:48 INFO [loop_until]: OK (rc = 0) 19:31:48 DEBUG --- stdout --- 19:31:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 69m 5793Mi am-55f77847b7-dr27z 71m 5698Mi am-55f77847b7-fp459 78m 5846Mi ds-cts-0 6m 393Mi ds-cts-1 9m 378Mi ds-cts-2 7m 376Mi ds-idrepo-0 7706m 13831Mi ds-idrepo-1 2447m 13838Mi ds-idrepo-2 2414m 13841Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 5998m 4341Mi idm-65858d8c4c-4qc5l 5848m 3959Mi lodemon-5798c88b8f-k2sv4 2m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 876m 520Mi 19:31:48 DEBUG --- stderr --- 19:31:48 DEBUG 19:31:49 INFO 19:31:49 INFO [loop_until]: kubectl --namespace=xlou top node 19:31:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:31:49 INFO [loop_until]: OK (rc = 0) 19:31:49 DEBUG --- stdout --- 19:31:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1371Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 137m 0% 6870Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 131m 0% 6835Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 128m 0% 6959Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5915m 37% 5269Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 1569m 9% 2163Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6417m 40% 5594Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2329m 14% 14355Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 2363m 14% 14367Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7874m 49% 14298Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 950m 5% 2046Mi 3% 19:31:49 DEBUG --- stderr --- 19:31:49 DEBUG 19:32:48 INFO 19:32:48 INFO [loop_until]: kubectl --namespace=xlou top pods 19:32:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:32:48 INFO [loop_until]: OK (rc = 0) 19:32:48 DEBUG --- stdout --- 19:32:48 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 74m 5793Mi am-55f77847b7-dr27z 72m 5698Mi am-55f77847b7-fp459 75m 5843Mi ds-cts-0 6m 393Mi ds-cts-1 10m 378Mi ds-cts-2 8m 374Mi ds-idrepo-0 6877m 13813Mi ds-idrepo-1 2087m 13826Mi ds-idrepo-2 1848m 13829Mi 
end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6087m 4352Mi idm-65858d8c4c-4qc5l 5781m 3975Mi lodemon-5798c88b8f-k2sv4 2m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 850m 521Mi 19:32:48 DEBUG --- stderr --- 19:32:48 DEBUG 19:32:49 INFO 19:32:49 INFO [loop_until]: kubectl --namespace=xlou top node 19:32:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:32:49 INFO [loop_until]: OK (rc = 0) 19:32:49 DEBUG --- stdout --- 19:32:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1365Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 137m 0% 6868Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 133m 0% 6834Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 132m 0% 6956Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6013m 37% 5289Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1621m 10% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6326m 39% 5603Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1890m 11% 14346Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 2190m 13% 14389Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7062m 44% 14334Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 916m 5% 2046Mi 3% 19:32:49 DEBUG --- stderr --- 19:32:49 DEBUG 19:33:48 INFO 19:33:48 INFO [loop_until]: kubectl --namespace=xlou top pods 19:33:48 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:33:49 INFO [loop_until]: OK (rc = 0) 19:33:49 DEBUG --- stdout --- 19:33:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 69m 5797Mi am-55f77847b7-dr27z 72m 5703Mi am-55f77847b7-fp459 77m 5844Mi ds-cts-0 6m 393Mi ds-cts-1 11m 378Mi ds-cts-2 6m 374Mi ds-idrepo-0 7268m 13798Mi ds-idrepo-1 2253m 13823Mi ds-idrepo-2 2384m 13843Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6239m 4363Mi idm-65858d8c4c-4qc5l 5663m 3996Mi lodemon-5798c88b8f-k2sv4 8m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 846m 522Mi 19:33:49 DEBUG --- stderr --- 19:33:49 DEBUG 19:33:49 INFO 19:33:49 INFO [loop_until]: kubectl --namespace=xlou top node 19:33:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:33:49 INFO [loop_until]: OK (rc = 0) 19:33:49 DEBUG --- stdout --- 19:33:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1370Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 132m 0% 6868Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 128m 0% 6841Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 126m 0% 6962Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5996m 37% 5319Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1615m 10% 2160Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6148m 38% 5617Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 68m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2673m 16% 14362Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 2545m 16% 14360Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7260m 45% 14327Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 938m 5% 2045Mi 3% 19:33:49 DEBUG --- stderr --- 19:33:49 DEBUG 19:34:49 INFO 19:34:49 INFO [loop_until]: kubectl --namespace=xlou top pods 19:34:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:34:49 INFO [loop_until]: OK (rc = 0) 19:34:49 DEBUG --- stdout --- 19:34:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 72m 5798Mi am-55f77847b7-dr27z 71m 5708Mi 
am-55f77847b7-fp459 79m 5848Mi ds-cts-0 6m 393Mi ds-cts-1 10m 379Mi ds-cts-2 6m 374Mi ds-idrepo-0 6895m 13816Mi ds-idrepo-1 1955m 13797Mi ds-idrepo-2 2294m 13793Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6111m 4374Mi idm-65858d8c4c-4qc5l 5685m 4013Mi lodemon-5798c88b8f-k2sv4 8m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 850m 522Mi 19:34:49 DEBUG --- stderr --- 19:34:49 DEBUG 19:34:49 INFO 19:34:49 INFO [loop_until]: kubectl --namespace=xlou top node 19:34:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:34:49 INFO [loop_until]: OK (rc = 0) 19:34:49 DEBUG --- stdout --- 19:34:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1369Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 140m 0% 6869Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 131m 0% 6847Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 128m 0% 6962Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5842m 36% 5325Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1625m 10% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6468m 40% 5628Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 52m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2132m 13% 14338Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 1987m 12% 14358Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7218m 45% 14344Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 929m 5% 2043Mi 3% 19:34:49 DEBUG --- stderr --- 19:34:49 DEBUG 19:35:49 INFO 19:35:49 INFO [loop_until]: kubectl --namespace=xlou top pods 19:35:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:35:49 INFO [loop_until]: OK (rc = 0) 19:35:49 DEBUG --- stdout --- 19:35:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 80m 5797Mi am-55f77847b7-dr27z 72m 5711Mi am-55f77847b7-fp459 74m 5850Mi ds-cts-0 7m 394Mi ds-cts-1 12m 379Mi ds-cts-2 5m 374Mi ds-idrepo-0 7369m 13829Mi ds-idrepo-1 2007m 13855Mi ds-idrepo-2 1923m 13851Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 5963m 4382Mi idm-65858d8c4c-4qc5l 5687m 4029Mi lodemon-5798c88b8f-k2sv4 2m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 854m 522Mi 19:35:49 DEBUG --- stderr --- 19:35:49 DEBUG 19:35:49 INFO 19:35:49 INFO [loop_until]: kubectl --namespace=xlou top node 19:35:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:35:49 INFO [loop_until]: OK (rc = 0) 19:35:49 DEBUG --- stdout --- 19:35:49 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1368Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 133m 0% 6871Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 134m 0% 6849Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 135m 0% 6961Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5939m 37% 5344Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1607m 10% 2168Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6391m 40% 5639Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1839m 11% 14348Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 1975m 12% 14407Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7400m 46% 14356Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 917m 5% 2046Mi 3% 19:35:49 DEBUG --- stderr --- 19:35:49 DEBUG 19:36:49 INFO 19:36:49 INFO [loop_until]: kubectl --namespace=xlou top pods 19:36:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:36:49 INFO [loop_until]: OK (rc = 0) 
19:36:49 DEBUG --- stdout --- 19:36:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 75m 5800Mi am-55f77847b7-dr27z 74m 5710Mi am-55f77847b7-fp459 79m 5851Mi ds-cts-0 6m 393Mi ds-cts-1 8m 379Mi ds-cts-2 6m 374Mi ds-idrepo-0 7299m 13827Mi ds-idrepo-1 2256m 13792Mi ds-idrepo-2 2190m 13792Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6211m 4395Mi idm-65858d8c4c-4qc5l 5745m 4052Mi lodemon-5798c88b8f-k2sv4 6m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 826m 523Mi 19:36:49 DEBUG --- stderr --- 19:36:49 DEBUG 19:36:49 INFO 19:36:49 INFO [loop_until]: kubectl --namespace=xlou top node 19:36:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:36:50 INFO [loop_until]: OK (rc = 0) 19:36:50 DEBUG --- stdout --- 19:36:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1367Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 140m 0% 6872Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 132m 0% 6846Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 133m 0% 6965Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6024m 37% 5363Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1605m 10% 2155Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6308m 39% 5650Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2181m 13% 14318Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 2239m 14% 14344Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7461m 46% 14343Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 916m 5% 2045Mi 3% 19:36:50 DEBUG --- stderr --- 19:36:50 DEBUG 19:37:49 INFO 19:37:49 INFO [loop_until]: kubectl --namespace=xlou top pods 19:37:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:37:49 INFO [loop_until]: OK (rc = 0) 19:37:49 DEBUG --- stdout --- 19:37:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 73m 5800Mi am-55f77847b7-dr27z 71m 5713Mi am-55f77847b7-fp459 76m 5851Mi ds-cts-0 8m 393Mi ds-cts-1 9m 379Mi ds-cts-2 6m 376Mi ds-idrepo-0 6886m 13818Mi ds-idrepo-1 2021m 13807Mi ds-idrepo-2 1858m 13806Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6389m 4405Mi idm-65858d8c4c-4qc5l 5723m 4067Mi lodemon-5798c88b8f-k2sv4 5m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 845m 523Mi 19:37:49 DEBUG --- stderr --- 19:37:49 DEBUG 19:37:50 INFO 19:37:50 INFO [loop_until]: kubectl --namespace=xlou top node 19:37:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:37:50 INFO [loop_until]: OK (rc = 0) 19:37:50 DEBUG --- stdout --- 19:37:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1368Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 130m 0% 6884Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 132m 0% 6849Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 130m 0% 6966Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6026m 37% 5381Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1636m 10% 2163Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6277m 39% 5660Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 67m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1874m 11% 14366Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 1985m 12% 14332Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7121m 44% 14347Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 934m 5% 2044Mi 3% 19:37:50 DEBUG --- stderr --- 19:37:50 DEBUG 19:38:49 INFO 19:38:49 
INFO [loop_until]: kubectl --namespace=xlou top pods 19:38:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:38:49 INFO [loop_until]: OK (rc = 0) 19:38:49 DEBUG --- stdout --- 19:38:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 74m 5800Mi am-55f77847b7-dr27z 72m 5713Mi am-55f77847b7-fp459 79m 5851Mi ds-cts-0 6m 393Mi ds-cts-1 9m 379Mi ds-cts-2 6m 374Mi ds-idrepo-0 6645m 13820Mi ds-idrepo-1 1579m 13812Mi ds-idrepo-2 1717m 13851Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 5778m 4414Mi idm-65858d8c4c-4qc5l 5695m 4083Mi lodemon-5798c88b8f-k2sv4 7m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 848m 524Mi 19:38:49 DEBUG --- stderr --- 19:38:49 DEBUG 19:38:50 INFO 19:38:50 INFO [loop_until]: kubectl --namespace=xlou top node 19:38:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:38:50 INFO [loop_until]: OK (rc = 0) 19:38:50 DEBUG --- stdout --- 19:38:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1367Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 140m 0% 6873Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 137m 0% 6857Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 132m 0% 6964Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5905m 37% 5398Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1593m 10% 2161Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6225m 39% 5668Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1779m 11% 14363Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 1567m 9% 14374Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 6585m 41% 14357Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 913m 5% 2044Mi 3% 19:38:50 DEBUG --- stderr --- 19:38:50 DEBUG 19:39:49 INFO 19:39:49 INFO [loop_until]: kubectl --namespace=xlou top pods 19:39:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:39:49 INFO [loop_until]: OK (rc = 0) 19:39:49 DEBUG --- stdout --- 19:39:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 72m 5800Mi am-55f77847b7-dr27z 76m 5713Mi am-55f77847b7-fp459 73m 5851Mi ds-cts-0 7m 394Mi ds-cts-1 9m 379Mi ds-cts-2 6m 374Mi ds-idrepo-0 6789m 13811Mi ds-idrepo-1 1587m 13848Mi ds-idrepo-2 1575m 13827Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 5895m 4422Mi idm-65858d8c4c-4qc5l 5638m 4101Mi lodemon-5798c88b8f-k2sv4 7m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 880m 523Mi 19:39:49 DEBUG --- stderr --- 19:39:49 DEBUG 19:39:50 INFO 19:39:50 INFO [loop_until]: kubectl --namespace=xlou top node 19:39:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:39:50 INFO [loop_until]: OK (rc = 0) 19:39:50 DEBUG --- stdout --- 19:39:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1365Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 134m 0% 6873Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 138m 0% 6848Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 131m 0% 6965Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5914m 37% 5417Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1604m 10% 2168Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6313m 39% 5677Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 67m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1683m 10% 14354Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 1586m 9% 14370Mi 24% 
gke-xlou-cdm-ds-32e4dcb1-x4wx 6975m 43% 14356Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 929m 5% 2047Mi 3% 19:39:50 DEBUG --- stderr --- 19:39:50 DEBUG 19:40:49 INFO 19:40:49 INFO [loop_until]: kubectl --namespace=xlou top pods 19:40:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:40:49 INFO [loop_until]: OK (rc = 0) 19:40:49 DEBUG --- stdout --- 19:40:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 73m 5800Mi am-55f77847b7-dr27z 73m 5714Mi am-55f77847b7-fp459 75m 5851Mi ds-cts-0 6m 394Mi ds-cts-1 15m 379Mi ds-cts-2 6m 374Mi ds-idrepo-0 7052m 13793Mi ds-idrepo-1 2084m 13816Mi ds-idrepo-2 2294m 13825Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6156m 4433Mi idm-65858d8c4c-4qc5l 5829m 4121Mi lodemon-5798c88b8f-k2sv4 4m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 830m 523Mi 19:40:49 DEBUG --- stderr --- 19:40:49 DEBUG 19:40:50 INFO 19:40:50 INFO [loop_until]: kubectl --namespace=xlou top node 19:40:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:40:50 INFO [loop_until]: OK (rc = 0) 19:40:50 DEBUG --- stdout --- 19:40:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1370Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 136m 0% 6872Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 133m 0% 6851Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 132m 0% 6964Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6053m 38% 5436Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1626m 10% 2170Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6439m 40% 5687Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 72m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2541m 15% 14338Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 2019m 12% 14382Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7022m 44% 14354Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 921m 5% 2047Mi 3% 19:40:50 DEBUG --- stderr --- 19:40:50 DEBUG 19:41:49 INFO 19:41:49 INFO [loop_until]: kubectl --namespace=xlou top pods 19:41:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:41:49 INFO [loop_until]: OK (rc = 0) 19:41:49 DEBUG --- stdout --- 19:41:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 71m 5801Mi am-55f77847b7-dr27z 73m 5714Mi am-55f77847b7-fp459 76m 5851Mi ds-cts-0 7m 390Mi ds-cts-1 10m 379Mi ds-cts-2 5m 374Mi ds-idrepo-0 7266m 13823Mi ds-idrepo-1 1700m 13840Mi ds-idrepo-2 1546m 13838Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6225m 4442Mi idm-65858d8c4c-4qc5l 5761m 4136Mi lodemon-5798c88b8f-k2sv4 2m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 887m 524Mi 19:41:49 DEBUG --- stderr --- 19:41:49 DEBUG 19:41:50 INFO 19:41:50 INFO [loop_until]: kubectl --namespace=xlou top node 19:41:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:41:50 INFO [loop_until]: OK (rc = 0) 19:41:50 DEBUG --- stdout --- 19:41:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1369Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 133m 0% 6875Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 128m 0% 6852Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 124m 0% 6967Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5841m 36% 5452Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1630m 10% 2166Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6293m 39% 5696Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 
0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1809m 11% 14376Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 1835m 11% 14397Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7005m 44% 14354Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 943m 5% 2048Mi 3% 19:41:50 DEBUG --- stderr --- 19:41:50 DEBUG 19:42:49 INFO 19:42:49 INFO [loop_until]: kubectl --namespace=xlou top pods 19:42:49 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:42:49 INFO [loop_until]: OK (rc = 0) 19:42:49 DEBUG --- stdout --- 19:42:49 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 72m 5801Mi am-55f77847b7-dr27z 86m 5714Mi am-55f77847b7-fp459 79m 5852Mi ds-cts-0 6m 390Mi ds-cts-1 9m 379Mi ds-cts-2 5m 374Mi ds-idrepo-0 7605m 13831Mi ds-idrepo-1 2210m 13828Mi ds-idrepo-2 2822m 13848Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6149m 4451Mi idm-65858d8c4c-4qc5l 5797m 4156Mi lodemon-5798c88b8f-k2sv4 6m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 837m 525Mi 19:42:49 DEBUG --- stderr --- 19:42:49 DEBUG 19:42:50 INFO 19:42:50 INFO [loop_until]: kubectl --namespace=xlou top node 19:42:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:42:50 INFO [loop_until]: OK (rc = 0) 19:42:50 DEBUG --- stdout --- 19:42:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1370Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 135m 0% 6870Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 144m 0% 6850Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 129m 0% 6963Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6026m 37% 5468Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1623m 10% 2162Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6378m 40% 5705Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3051m 19% 14324Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 2033m 12% 14407Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7683m 48% 14358Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 925m 5% 2046Mi 3% 19:42:50 DEBUG --- stderr --- 19:42:50 DEBUG 19:43:50 INFO 19:43:50 INFO [loop_until]: kubectl --namespace=xlou top pods 19:43:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:43:50 INFO [loop_until]: OK (rc = 0) 19:43:50 DEBUG --- stdout --- 19:43:50 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 72m 5801Mi am-55f77847b7-dr27z 73m 5714Mi am-55f77847b7-fp459 70m 5851Mi ds-cts-0 6m 390Mi ds-cts-1 17m 376Mi ds-cts-2 7m 374Mi ds-idrepo-0 6835m 13823Mi ds-idrepo-1 1715m 13838Mi ds-idrepo-2 1930m 13817Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6060m 4461Mi idm-65858d8c4c-4qc5l 5838m 4171Mi lodemon-5798c88b8f-k2sv4 6m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 845m 525Mi 19:43:50 DEBUG --- stderr --- 19:43:50 DEBUG 19:43:50 INFO 19:43:50 INFO [loop_until]: kubectl --namespace=xlou top node 19:43:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:43:50 INFO [loop_until]: OK (rc = 0) 19:43:50 DEBUG --- stdout --- 19:43:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1369Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 131m 0% 6874Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 133m 0% 6850Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 132m 0% 6965Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6007m 37% 5487Mi 9% 
gke-xlou-cdm-default-pool-f05840a3-h81k 1625m 10% 2158Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6356m 40% 5716Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1901m 11% 14354Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 2015m 12% 14384Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7144m 44% 14350Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 936m 5% 2048Mi 3% 19:43:50 DEBUG --- stderr --- 19:43:50 DEBUG 19:44:50 INFO 19:44:50 INFO [loop_until]: kubectl --namespace=xlou top pods 19:44:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:44:50 INFO [loop_until]: OK (rc = 0) 19:44:50 DEBUG --- stdout --- 19:44:50 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 71m 5801Mi am-55f77847b7-dr27z 72m 5714Mi am-55f77847b7-fp459 72m 5852Mi ds-cts-0 6m 390Mi ds-cts-1 8m 376Mi ds-cts-2 6m 374Mi ds-idrepo-0 7547m 13824Mi ds-idrepo-1 2151m 13842Mi ds-idrepo-2 1710m 13833Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 5918m 4473Mi idm-65858d8c4c-4qc5l 5737m 4190Mi lodemon-5798c88b8f-k2sv4 2m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 835m 526Mi 19:44:50 DEBUG --- stderr --- 19:44:50 DEBUG 19:44:50 INFO 19:44:50 INFO [loop_until]: kubectl --namespace=xlou top node 19:44:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:44:50 INFO [loop_until]: OK (rc = 0) 19:44:50 DEBUG --- stdout --- 19:44:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1363Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 132m 0% 6873Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 129m 0% 6850Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 127m 0% 6964Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6052m 38% 5504Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1627m 10% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6387m 40% 5728Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1695m 10% 14362Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 2041m 12% 14378Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7601m 47% 14356Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 937m 5% 2046Mi 3% 19:44:50 DEBUG --- stderr --- 19:44:50 DEBUG 19:45:50 INFO 19:45:50 INFO [loop_until]: kubectl --namespace=xlou top pods 19:45:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:45:50 INFO [loop_until]: OK (rc = 0) 19:45:50 DEBUG --- stdout --- 19:45:50 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 71m 5801Mi am-55f77847b7-dr27z 70m 5715Mi am-55f77847b7-fp459 81m 5852Mi ds-cts-0 10m 390Mi ds-cts-1 11m 376Mi ds-cts-2 6m 375Mi ds-idrepo-0 6888m 13845Mi ds-idrepo-1 2219m 13853Mi ds-idrepo-2 1545m 13837Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6237m 4481Mi idm-65858d8c4c-4qc5l 5685m 4208Mi lodemon-5798c88b8f-k2sv4 7m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 872m 526Mi 19:45:50 DEBUG --- stderr --- 19:45:50 DEBUG 19:45:50 INFO 19:45:50 INFO [loop_until]: kubectl --namespace=xlou top node 19:45:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:45:50 INFO [loop_until]: OK (rc = 0) 19:45:50 DEBUG --- stdout --- 19:45:50 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1366Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 140m 0% 6872Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-976h 125m 0% 6848Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 127m 0% 6966Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6031m 37% 5523Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1620m 10% 2171Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6159m 38% 5739Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1635m 10% 14361Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 2385m 15% 14366Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7449m 46% 14351Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 932m 5% 2045Mi 3% 19:45:50 DEBUG --- stderr --- 19:45:50 DEBUG 19:46:50 INFO 19:46:50 INFO [loop_until]: kubectl --namespace=xlou top pods 19:46:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:46:50 INFO [loop_until]: OK (rc = 0) 19:46:50 DEBUG --- stdout --- 19:46:50 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 69m 5801Mi am-55f77847b7-dr27z 70m 5715Mi am-55f77847b7-fp459 76m 5851Mi ds-cts-0 9m 390Mi ds-cts-1 9m 376Mi ds-cts-2 11m 375Mi ds-idrepo-0 7221m 13782Mi ds-idrepo-1 1905m 13762Mi ds-idrepo-2 1499m 13847Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6056m 4490Mi idm-65858d8c4c-4qc5l 5948m 4224Mi lodemon-5798c88b8f-k2sv4 8m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 849m 526Mi 19:46:50 DEBUG --- stderr --- 19:46:50 DEBUG 19:46:51 INFO 19:46:51 INFO [loop_until]: kubectl --namespace=xlou top node 19:46:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:46:51 INFO [loop_until]: OK (rc = 0) 19:46:51 DEBUG --- stdout --- 19:46:51 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1369Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 133m 0% 6875Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 131m 0% 6854Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 130m 0% 6968Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6000m 37% 5540Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1595m 10% 2166Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6215m 39% 5747Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 68m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1956m 12% 14374Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 2151m 13% 14336Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7206m 45% 14345Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 923m 5% 2045Mi 3% 19:46:51 DEBUG --- stderr --- 19:46:51 DEBUG 19:47:50 INFO 19:47:50 INFO [loop_until]: kubectl --namespace=xlou top pods 19:47:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:47:50 INFO [loop_until]: OK (rc = 0) 19:47:50 DEBUG --- stdout --- 19:47:50 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 71m 5801Mi am-55f77847b7-dr27z 74m 5715Mi am-55f77847b7-fp459 74m 5852Mi ds-cts-0 8m 390Mi ds-cts-1 8m 377Mi ds-cts-2 6m 375Mi ds-idrepo-0 6730m 13827Mi ds-idrepo-1 1559m 13835Mi ds-idrepo-2 2226m 13843Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6210m 4504Mi idm-65858d8c4c-4qc5l 5788m 4241Mi lodemon-5798c88b8f-k2sv4 5m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 860m 527Mi 19:47:50 DEBUG --- stderr --- 19:47:50 DEBUG 19:47:51 INFO 19:47:51 INFO [loop_until]: kubectl --namespace=xlou top node 19:47:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:47:51 INFO [loop_until]: OK (rc = 0) 19:47:51 DEBUG --- stdout --- 19:47:51 
DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1370Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 133m 0% 6877Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 132m 0% 6854Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 129m 0% 6964Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5967m 37% 5556Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1616m 10% 2169Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6388m 40% 5758Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2374m 14% 14374Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 1641m 10% 14398Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 6786m 42% 14351Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 935m 5% 2046Mi 3% 19:47:51 DEBUG --- stderr --- 19:47:51 DEBUG 19:48:50 INFO 19:48:50 INFO [loop_until]: kubectl --namespace=xlou top pods 19:48:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:48:50 INFO [loop_until]: OK (rc = 0) 19:48:50 DEBUG --- stdout --- 19:48:50 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 70m 5801Mi am-55f77847b7-dr27z 73m 5715Mi am-55f77847b7-fp459 78m 5853Mi ds-cts-0 7m 391Mi ds-cts-1 8m 376Mi ds-cts-2 6m 375Mi ds-idrepo-0 6557m 13824Mi ds-idrepo-1 1678m 13843Mi ds-idrepo-2 1581m 13848Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6020m 4512Mi idm-65858d8c4c-4qc5l 5862m 4257Mi lodemon-5798c88b8f-k2sv4 5m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 855m 528Mi 19:48:50 DEBUG --- stderr --- 19:48:50 DEBUG 19:48:51 INFO 19:48:51 INFO [loop_until]: kubectl --namespace=xlou top node 19:48:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:48:51 INFO [loop_until]: OK (rc = 0) 19:48:51 DEBUG --- stdout --- 19:48:51 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1367Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 135m 0% 6878Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 132m 0% 6854Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 129m 0% 6964Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5982m 37% 5572Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1622m 10% 2167Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6301m 39% 5767Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1637m 10% 14381Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 1797m 11% 14406Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 6589m 41% 14356Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 889m 5% 2048Mi 3% 19:48:51 DEBUG --- stderr --- 19:48:51 DEBUG 19:49:50 INFO 19:49:50 INFO [loop_until]: kubectl --namespace=xlou top pods 19:49:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:49:50 INFO [loop_until]: OK (rc = 0) 19:49:50 DEBUG --- stdout --- 19:49:50 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 76m 5802Mi am-55f77847b7-dr27z 70m 5715Mi am-55f77847b7-fp459 77m 5854Mi ds-cts-0 9m 390Mi ds-cts-1 8m 376Mi ds-cts-2 6m 375Mi ds-idrepo-0 6968m 13849Mi ds-idrepo-1 1760m 13849Mi ds-idrepo-2 1566m 13838Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6110m 4521Mi idm-65858d8c4c-4qc5l 5632m 4278Mi lodemon-5798c88b8f-k2sv4 2m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 853m 528Mi 19:49:50 DEBUG --- stderr --- 19:49:50 DEBUG 19:49:51 INFO 19:49:51 INFO [loop_until]: kubectl 
--namespace=xlou top node 19:49:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:49:51 INFO [loop_until]: OK (rc = 0) 19:49:51 DEBUG --- stdout --- 19:49:51 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1368Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 140m 0% 6875Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 133m 0% 6850Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 130m 0% 6967Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5866m 36% 5590Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1612m 10% 2167Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6431m 40% 5777Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 67m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 52m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1884m 11% 14366Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 1658m 10% 14407Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 6827m 42% 14385Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 928m 5% 2050Mi 3% 19:49:51 DEBUG --- stderr --- 19:49:51 DEBUG 19:50:50 INFO 19:50:50 INFO [loop_until]: kubectl --namespace=xlou top pods 19:50:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:50:50 INFO [loop_until]: OK (rc = 0) 19:50:50 DEBUG --- stdout --- 19:50:50 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 71m 5802Mi am-55f77847b7-dr27z 73m 5715Mi am-55f77847b7-fp459 75m 5854Mi ds-cts-0 7m 395Mi ds-cts-1 9m 376Mi ds-cts-2 6m 375Mi ds-idrepo-0 7405m 13824Mi ds-idrepo-1 1767m 13831Mi ds-idrepo-2 2037m 13853Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6397m 4531Mi idm-65858d8c4c-4qc5l 5782m 4297Mi lodemon-5798c88b8f-k2sv4 7m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 848m 528Mi 19:50:50 DEBUG --- stderr --- 19:50:50 DEBUG 19:50:51 INFO 19:50:51 INFO [loop_until]: kubectl --namespace=xlou top node 19:50:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:50:51 INFO [loop_until]: OK (rc = 0) 19:50:51 DEBUG --- stdout --- 19:50:51 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1370Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 134m 0% 6876Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 130m 0% 6851Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 127m 0% 6965Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5968m 37% 5613Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1614m 10% 2167Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6412m 40% 5787Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1857m 11% 14368Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 1808m 11% 14391Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7291m 45% 14354Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 917m 5% 2051Mi 3% 19:50:51 DEBUG --- stderr --- 19:50:51 DEBUG 19:51:50 INFO 19:51:50 INFO [loop_until]: kubectl --namespace=xlou top pods 19:51:50 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:51:51 INFO [loop_until]: OK (rc = 0) 19:51:51 DEBUG --- stdout --- 19:51:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 72m 5802Mi am-55f77847b7-dr27z 72m 5715Mi am-55f77847b7-fp459 74m 5854Mi ds-cts-0 6m 395Mi ds-cts-1 9m 376Mi ds-cts-2 5m 375Mi ds-idrepo-0 7304m 13859Mi ds-idrepo-1 2028m 13834Mi ds-idrepo-2 1579m 13846Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6546m 4539Mi idm-65858d8c4c-4qc5l 5825m 4314Mi lodemon-5798c88b8f-k2sv4 7m 65Mi 
login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 824m 528Mi 19:51:51 DEBUG --- stderr --- 19:51:51 DEBUG 19:51:51 INFO 19:51:51 INFO [loop_until]: kubectl --namespace=xlou top node 19:51:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:51:51 INFO [loop_until]: OK (rc = 0) 19:51:51 DEBUG --- stdout --- 19:51:51 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1366Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 133m 0% 6877Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 133m 0% 6850Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 128m 0% 6966Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6037m 37% 5626Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1624m 10% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6471m 40% 5792Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1658m 10% 14381Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 2144m 13% 14411Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7166m 45% 14360Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 886m 5% 2046Mi 3% 19:51:51 DEBUG --- stderr --- 19:51:51 DEBUG 19:52:51 INFO 19:52:51 INFO [loop_until]: kubectl --namespace=xlou top pods 19:52:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:52:51 INFO [loop_until]: OK (rc = 0) 19:52:51 DEBUG --- stdout --- 19:52:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 65m 5802Mi am-55f77847b7-dr27z 69m 5716Mi am-55f77847b7-fp459 72m 5854Mi ds-cts-0 7m 395Mi ds-cts-1 9m 376Mi ds-cts-2 6m 375Mi ds-idrepo-0 6403m 13843Mi ds-idrepo-1 1514m 13847Mi ds-idrepo-2 1735m 13850Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 5573m 4549Mi idm-65858d8c4c-4qc5l 5500m 4327Mi lodemon-5798c88b8f-k2sv4 6m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 818m 529Mi 19:52:51 DEBUG --- stderr --- 19:52:51 DEBUG 19:52:51 INFO 19:52:51 INFO [loop_until]: kubectl --namespace=xlou top node 19:52:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:52:51 INFO [loop_until]: OK (rc = 0) 19:52:51 DEBUG --- stdout --- 19:52:51 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1365Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 134m 0% 6875Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 119m 0% 6849Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 114m 0% 6968Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 5369m 33% 5642Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1417m 8% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 5749m 36% 5802Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1733m 10% 14396Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 1596m 10% 14408Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 6537m 41% 14386Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 899m 5% 2048Mi 3% 19:52:51 DEBUG --- stderr --- 19:52:51 DEBUG 19:53:51 INFO 19:53:51 INFO [loop_until]: kubectl --namespace=xlou top pods 19:53:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:53:51 INFO [loop_until]: OK (rc = 0) 19:53:51 DEBUG --- stdout --- 19:53:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 7m 5802Mi am-55f77847b7-dr27z 7m 5716Mi am-55f77847b7-fp459 10m 5854Mi ds-cts-0 6m 395Mi ds-cts-1 9m 376Mi ds-cts-2 6m 376Mi ds-idrepo-0 10m 13850Mi ds-idrepo-1 10m 13847Mi 
ds-idrepo-2 10m 13833Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 8m 4549Mi idm-65858d8c4c-4qc5l 8m 4329Mi lodemon-5798c88b8f-k2sv4 8m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 21m 108Mi 19:53:51 DEBUG --- stderr --- 19:53:51 DEBUG 19:53:51 INFO 19:53:51 INFO [loop_until]: kubectl --namespace=xlou top node 19:53:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:53:51 INFO [loop_until]: OK (rc = 0) 19:53:51 DEBUG --- stdout --- 19:53:51 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1363Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 6877Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 6850Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 6968Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 5642Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 119m 0% 2161Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 5804Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 67m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 52m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 58m 0% 14370Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 14410Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 62m 0% 14386Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 100m 0% 1637Mi 2% 19:53:51 DEBUG --- stderr --- 19:53:51 DEBUG 127.0.0.1 - - [12/Aug/2023 19:54:35] "GET /monitoring/average?start_time=23-08-12_18:24:03&stop_time=23-08-12_18:52:34 HTTP/1.1" 200 - 19:54:51 INFO 19:54:51 INFO [loop_until]: kubectl --namespace=xlou top pods 19:54:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:54:51 INFO [loop_until]: OK (rc = 0) 19:54:51 DEBUG --- stdout --- 19:54:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 6m 5802Mi am-55f77847b7-dr27z 8m 5716Mi am-55f77847b7-fp459 9m 5854Mi ds-cts-0 6m 395Mi ds-cts-1 9m 376Mi ds-cts-2 8m 375Mi ds-idrepo-0 10m 13850Mi ds-idrepo-1 9m 13847Mi ds-idrepo-2 9m 13833Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 8m 4548Mi idm-65858d8c4c-4qc5l 7m 4328Mi lodemon-5798c88b8f-k2sv4 7m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 1m 108Mi 19:54:51 DEBUG --- stderr --- 19:54:51 DEBUG 19:54:52 INFO 19:54:52 INFO [loop_until]: kubectl --namespace=xlou top node 19:54:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:54:52 INFO [loop_until]: OK (rc = 0) 19:54:52 DEBUG --- stdout --- 19:54:52 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1367Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 6879Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 6851Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 6967Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 5641Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 118m 0% 2160Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 5804Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 60m 0% 14370Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 14409Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 61m 0% 14385Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 66m 0% 1637Mi 2% 19:54:52 DEBUG --- stderr --- 19:54:52 DEBUG 19:55:51 INFO 19:55:51 INFO [loop_until]: kubectl --namespace=xlou top pods 19:55:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:55:51 INFO [loop_until]: OK (rc = 0) 19:55:51 DEBUG --- stdout --- 19:55:51 DEBUG NAME CPU(cores) MEMORY(bytes) 
admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 43m 5802Mi am-55f77847b7-dr27z 82m 5725Mi am-55f77847b7-fp459 35m 5854Mi ds-cts-0 11m 397Mi ds-cts-1 10m 376Mi ds-cts-2 8m 375Mi ds-idrepo-0 1607m 13855Mi ds-idrepo-1 170m 13853Mi ds-idrepo-2 216m 13836Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 1599m 4569Mi idm-65858d8c4c-4qc5l 1733m 4361Mi lodemon-5798c88b8f-k2sv4 2m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 1165m 531Mi 19:55:51 DEBUG --- stderr --- 19:55:51 DEBUG 19:55:52 INFO 19:55:52 INFO [loop_until]: kubectl --namespace=xlou top node 19:55:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:55:52 INFO [loop_until]: OK (rc = 0) 19:55:52 DEBUG --- stdout --- 19:55:52 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1369Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 94m 0% 6879Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 145m 0% 6860Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 84m 0% 6967Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3369m 21% 5681Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 519m 3% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3256m 20% 5835Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1134Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 709m 4% 14384Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 856m 5% 14417Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 3145m 19% 14390Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1367m 8% 2051Mi 3% 19:55:52 DEBUG --- stderr --- 19:55:52 DEBUG 19:56:51 INFO 19:56:51 INFO [loop_until]: kubectl --namespace=xlou top pods 19:56:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:56:51 INFO [loop_until]: OK (rc = 0) 19:56:51 DEBUG --- stdout --- 19:56:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 81m 5802Mi am-55f77847b7-dr27z 79m 5725Mi am-55f77847b7-fp459 83m 5855Mi ds-cts-0 6m 396Mi ds-cts-1 8m 376Mi ds-cts-2 13m 371Mi ds-idrepo-0 7935m 13855Mi ds-idrepo-1 2024m 13852Mi ds-idrepo-2 2081m 13851Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6594m 4602Mi idm-65858d8c4c-4qc5l 6132m 4393Mi lodemon-5798c88b8f-k2sv4 1m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 960m 537Mi 19:56:51 DEBUG --- stderr --- 19:56:51 DEBUG 19:56:52 INFO 19:56:52 INFO [loop_until]: kubectl --namespace=xlou top node 19:56:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:56:52 INFO [loop_until]: OK (rc = 0) 19:56:52 DEBUG --- stdout --- 19:56:52 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1370Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 143m 0% 6878Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 136m 0% 6862Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 145m 0% 6977Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6456m 40% 5704Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1878m 11% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6885m 43% 5857Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2359m 14% 14388Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 1908m 12% 14411Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7499m 47% 14382Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1046m 6% 2056Mi 3% 19:56:52 DEBUG --- stderr --- 19:56:52 DEBUG 19:57:51 INFO 19:57:51 INFO [loop_until]: kubectl --namespace=xlou top pods 19:57:51 INFO [loop_until]: 
(max_time=180, interval=5, expected_rc=[0] 19:57:51 INFO [loop_until]: OK (rc = 0) 19:57:51 DEBUG --- stdout --- 19:57:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 78m 5802Mi am-55f77847b7-dr27z 80m 5725Mi am-55f77847b7-fp459 81m 5855Mi ds-cts-0 7m 396Mi ds-cts-1 8m 376Mi ds-cts-2 8m 371Mi ds-idrepo-0 8605m 13798Mi ds-idrepo-1 1805m 13835Mi ds-idrepo-2 2746m 13819Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6279m 4623Mi idm-65858d8c4c-4qc5l 6233m 4415Mi lodemon-5798c88b8f-k2sv4 8m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 923m 540Mi 19:57:51 DEBUG --- stderr --- 19:57:51 DEBUG 19:57:52 INFO 19:57:52 INFO [loop_until]: kubectl --namespace=xlou top node 19:57:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:57:52 INFO [loop_until]: OK (rc = 0) 19:57:52 DEBUG --- stdout --- 19:57:52 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1365Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 141m 0% 6876Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 137m 0% 6862Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 137m 0% 6968Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6345m 39% 5727Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1868m 11% 2166Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6832m 42% 5881Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 67m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3034m 19% 14371Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 2169m 13% 14376Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 9300m 58% 14397Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1016m 6% 2060Mi 3% 19:57:52 DEBUG --- stderr --- 19:57:52 DEBUG 19:58:51 INFO 19:58:51 INFO [loop_until]: kubectl --namespace=xlou top pods 19:58:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:58:51 INFO [loop_until]: OK (rc = 0) 19:58:51 DEBUG --- stdout --- 19:58:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 78m 5803Mi am-55f77847b7-dr27z 80m 5725Mi am-55f77847b7-fp459 82m 5855Mi ds-cts-0 6m 396Mi ds-cts-1 9m 377Mi ds-cts-2 5m 371Mi ds-idrepo-0 7904m 13811Mi ds-idrepo-1 2108m 13823Mi ds-idrepo-2 2594m 13774Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6607m 4653Mi idm-65858d8c4c-4qc5l 6106m 4438Mi lodemon-5798c88b8f-k2sv4 1m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 976m 545Mi 19:58:51 DEBUG --- stderr --- 19:58:51 DEBUG 19:58:52 INFO 19:58:52 INFO [loop_until]: kubectl --namespace=xlou top node 19:58:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:58:52 INFO [loop_until]: OK (rc = 0) 19:58:52 DEBUG --- stdout --- 19:58:52 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1366Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 141m 0% 6878Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 142m 0% 6862Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 134m 0% 6970Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6313m 39% 5752Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1840m 11% 2163Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6836m 43% 5910Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2493m 15% 14318Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 1914m 12% 14355Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 8155m 51% 14354Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 
1037m 6% 2066Mi 3% 19:58:52 DEBUG --- stderr --- 19:58:52 DEBUG 19:59:51 INFO 19:59:51 INFO [loop_until]: kubectl --namespace=xlou top pods 19:59:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:59:51 INFO [loop_until]: OK (rc = 0) 19:59:51 DEBUG --- stdout --- 19:59:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 78m 5803Mi am-55f77847b7-dr27z 81m 5725Mi am-55f77847b7-fp459 85m 5855Mi ds-cts-0 7m 395Mi ds-cts-1 10m 377Mi ds-cts-2 6m 371Mi ds-idrepo-0 7089m 13857Mi ds-idrepo-1 1928m 13852Mi ds-idrepo-2 2387m 13699Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6598m 4675Mi idm-65858d8c4c-4qc5l 6008m 4454Mi lodemon-5798c88b8f-k2sv4 7m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 1051m 548Mi 19:59:51 DEBUG --- stderr --- 19:59:51 DEBUG 19:59:52 INFO 19:59:52 INFO [loop_until]: kubectl --namespace=xlou top node 19:59:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 19:59:52 INFO [loop_until]: OK (rc = 0) 19:59:52 DEBUG --- stdout --- 19:59:52 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1367Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 145m 0% 6876Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 144m 0% 6864Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 135m 0% 6971Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6451m 40% 5773Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1864m 11% 2170Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6879m 43% 5931Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2426m 15% 14237Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 1646m 10% 14426Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7042m 44% 14396Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1093m 6% 2072Mi 3% 19:59:52 DEBUG --- stderr --- 19:59:52 DEBUG 20:00:51 INFO 20:00:51 INFO [loop_until]: kubectl --namespace=xlou top pods 20:00:51 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:00:51 INFO [loop_until]: OK (rc = 0) 20:00:51 DEBUG --- stdout --- 20:00:51 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 78m 5803Mi am-55f77847b7-dr27z 82m 5725Mi am-55f77847b7-fp459 80m 5855Mi ds-cts-0 9m 396Mi ds-cts-1 9m 377Mi ds-cts-2 6m 372Mi ds-idrepo-0 8036m 13785Mi ds-idrepo-1 2360m 13422Mi ds-idrepo-2 2403m 13611Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6770m 4698Mi idm-65858d8c4c-4qc5l 6320m 4475Mi lodemon-5798c88b8f-k2sv4 2m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 1009m 555Mi 20:00:51 DEBUG --- stderr --- 20:00:51 DEBUG 20:00:52 INFO 20:00:52 INFO [loop_until]: kubectl --namespace=xlou top node 20:00:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:00:52 INFO [loop_until]: OK (rc = 0) 20:00:52 DEBUG --- stdout --- 20:00:52 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1368Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 139m 0% 6876Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 143m 0% 6865Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 138m 0% 6967Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6463m 40% 5792Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1877m 11% 2171Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6986m 43% 5956Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 65m 0% 1140Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-b374 2717m 17% 14129Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 2299m 14% 13999Mi 23% gke-xlou-cdm-ds-32e4dcb1-x4wx 8141m 51% 14341Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1065m 6% 2075Mi 3% 20:00:52 DEBUG --- stderr --- 20:00:52 DEBUG 20:01:52 INFO 20:01:52 INFO [loop_until]: kubectl --namespace=xlou top pods 20:01:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:01:52 INFO [loop_until]: OK (rc = 0) 20:01:52 DEBUG --- stdout --- 20:01:52 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 77m 5803Mi am-55f77847b7-dr27z 77m 5727Mi am-55f77847b7-fp459 81m 5855Mi ds-cts-0 7m 392Mi ds-cts-1 10m 377Mi ds-cts-2 7m 372Mi ds-idrepo-0 7997m 13681Mi ds-idrepo-1 2194m 13560Mi ds-idrepo-2 1979m 13676Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6559m 4722Mi idm-65858d8c4c-4qc5l 6017m 4494Mi lodemon-5798c88b8f-k2sv4 6m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 975m 559Mi 20:01:52 DEBUG --- stderr --- 20:01:52 DEBUG 20:01:52 INFO 20:01:52 INFO [loop_until]: kubectl --namespace=xlou top node 20:01:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:01:53 INFO [loop_until]: OK (rc = 0) 20:01:53 DEBUG --- stdout --- 20:01:53 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1368Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 142m 0% 6878Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 141m 0% 6864Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 136m 0% 6966Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6472m 40% 5810Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1891m 11% 2168Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6973m 43% 5977Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2028m 12% 14235Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 2256m 14% 14140Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 8073m 50% 14278Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1059m 6% 2078Mi 3% 20:01:53 DEBUG --- stderr --- 20:01:53 DEBUG 20:02:52 INFO 20:02:52 INFO [loop_until]: kubectl --namespace=xlou top pods 20:02:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:02:52 INFO [loop_until]: OK (rc = 0) 20:02:52 DEBUG --- stdout --- 20:02:52 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 78m 5803Mi am-55f77847b7-dr27z 82m 5727Mi am-55f77847b7-fp459 80m 5855Mi ds-cts-0 7m 391Mi ds-cts-1 9m 376Mi ds-cts-2 6m 372Mi ds-idrepo-0 7694m 13685Mi ds-idrepo-1 2240m 13382Mi ds-idrepo-2 2003m 13582Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6521m 4746Mi idm-65858d8c4c-4qc5l 6167m 4520Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 982m 563Mi 20:02:52 DEBUG --- stderr --- 20:02:52 DEBUG 20:02:53 INFO 20:02:53 INFO [loop_until]: kubectl --namespace=xlou top node 20:02:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:02:53 INFO [loop_until]: OK (rc = 0) 20:02:53 DEBUG --- stdout --- 20:02:53 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1372Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 137m 0% 6879Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 146m 0% 6863Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 135m 0% 6969Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6399m 40% 5831Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1861m 11% 2168Mi 3% 
gke-xlou-cdm-default-pool-f05840a3-tnc9 6859m 43% 6001Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2048m 12% 14147Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 2248m 14% 13957Mi 23% gke-xlou-cdm-ds-32e4dcb1-x4wx 7838m 49% 14259Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1054m 6% 2083Mi 3% 20:02:53 DEBUG --- stderr --- 20:02:53 DEBUG 20:03:52 INFO 20:03:52 INFO [loop_until]: kubectl --namespace=xlou top pods 20:03:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:03:52 INFO [loop_until]: OK (rc = 0) 20:03:52 DEBUG --- stdout --- 20:03:52 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 79m 5803Mi am-55f77847b7-dr27z 84m 5727Mi am-55f77847b7-fp459 81m 5855Mi ds-cts-0 6m 391Mi ds-cts-1 9m 377Mi ds-cts-2 6m 373Mi ds-idrepo-0 6838m 13800Mi ds-idrepo-1 1564m 13461Mi ds-idrepo-2 1449m 13650Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6402m 4766Mi idm-65858d8c4c-4qc5l 6018m 4540Mi lodemon-5798c88b8f-k2sv4 6m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 995m 568Mi 20:03:52 DEBUG --- stderr --- 20:03:52 DEBUG 20:03:53 INFO 20:03:53 INFO [loop_until]: kubectl --namespace=xlou top node 20:03:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:03:53 INFO [loop_until]: OK (rc = 0) 20:03:53 DEBUG --- stdout --- 20:03:53 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1365Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 140m 0% 6879Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 141m 0% 6866Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 134m 0% 6970Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6373m 40% 5852Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 1871m 11% 2159Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6668m 41% 6017Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1642m 10% 14228Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 1792m 11% 14034Mi 23% gke-xlou-cdm-ds-32e4dcb1-x4wx 6950m 43% 14345Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1056m 6% 2086Mi 3% 20:03:53 DEBUG --- stderr --- 20:03:53 DEBUG 20:04:52 INFO 20:04:52 INFO [loop_until]: kubectl --namespace=xlou top pods 20:04:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:04:52 INFO [loop_until]: OK (rc = 0) 20:04:52 DEBUG --- stdout --- 20:04:52 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 80m 5803Mi am-55f77847b7-dr27z 88m 5728Mi am-55f77847b7-fp459 85m 5855Mi ds-cts-0 6m 391Mi ds-cts-1 8m 377Mi ds-cts-2 5m 372Mi ds-idrepo-0 7733m 13741Mi ds-idrepo-1 1820m 13519Mi ds-idrepo-2 2378m 13650Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6744m 4790Mi idm-65858d8c4c-4qc5l 6300m 4560Mi lodemon-5798c88b8f-k2sv4 6m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 977m 571Mi 20:04:52 DEBUG --- stderr --- 20:04:52 DEBUG 20:04:53 INFO 20:04:53 INFO [loop_until]: kubectl --namespace=xlou top node 20:04:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:04:53 INFO [loop_until]: OK (rc = 0) 20:04:53 DEBUG --- stdout --- 20:04:53 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1368Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 146m 0% 6878Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 150m 0% 6865Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-9p4b 137m 0% 6970Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6198m 39% 5875Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1877m 11% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6914m 43% 6041Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2435m 15% 14221Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 1881m 11% 14085Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7647m 48% 14285Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1047m 6% 2091Mi 3% 20:04:53 DEBUG --- stderr --- 20:04:53 DEBUG 20:05:52 INFO 20:05:52 INFO [loop_until]: kubectl --namespace=xlou top pods 20:05:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:05:52 INFO [loop_until]: OK (rc = 0) 20:05:52 DEBUG --- stdout --- 20:05:52 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 81m 5803Mi am-55f77847b7-dr27z 83m 5728Mi am-55f77847b7-fp459 82m 5855Mi ds-cts-0 7m 392Mi ds-cts-1 8m 377Mi ds-cts-2 5m 372Mi ds-idrepo-0 7676m 13660Mi ds-idrepo-1 1865m 13533Mi ds-idrepo-2 2289m 13569Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6843m 4809Mi idm-65858d8c4c-4qc5l 6251m 4579Mi lodemon-5798c88b8f-k2sv4 5m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 971m 575Mi 20:05:52 DEBUG --- stderr --- 20:05:52 DEBUG 20:05:53 INFO 20:05:53 INFO [loop_until]: kubectl --namespace=xlou top node 20:05:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:05:53 INFO [loop_until]: OK (rc = 0) 20:05:53 DEBUG --- stdout --- 20:05:53 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1368Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 140m 0% 6879Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 148m 0% 6864Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 135m 0% 6968Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6546m 41% 5896Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1883m 11% 2169Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6926m 43% 6063Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2189m 13% 14143Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 1919m 12% 14117Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7865m 49% 14198Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1068m 6% 2107Mi 3% 20:05:53 DEBUG --- stderr --- 20:05:53 DEBUG 20:06:52 INFO 20:06:52 INFO [loop_until]: kubectl --namespace=xlou top pods 20:06:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:06:52 INFO [loop_until]: OK (rc = 0) 20:06:52 DEBUG --- stdout --- 20:06:52 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 75m 5803Mi am-55f77847b7-dr27z 85m 5728Mi am-55f77847b7-fp459 84m 5856Mi ds-cts-0 6m 391Mi ds-cts-1 8m 378Mi ds-cts-2 7m 372Mi ds-idrepo-0 7797m 13762Mi ds-idrepo-1 1958m 13623Mi ds-idrepo-2 2147m 13679Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6806m 4828Mi idm-65858d8c4c-4qc5l 6245m 4599Mi lodemon-5798c88b8f-k2sv4 7m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 978m 579Mi 20:06:52 DEBUG --- stderr --- 20:06:52 DEBUG 20:06:53 INFO 20:06:53 INFO [loop_until]: kubectl --namespace=xlou top node 20:06:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:06:53 INFO [loop_until]: OK (rc = 0) 20:06:53 DEBUG --- stdout --- 20:06:53 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% 
gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1372Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 144m 0% 6881Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 143m 0% 6866Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 134m 0% 6968Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6321m 39% 5911Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1881m 11% 2171Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7084m 44% 6087Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2135m 13% 14247Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 2238m 14% 14199Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7715m 48% 14321Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1054m 6% 2094Mi 3% 20:06:53 DEBUG --- stderr --- 20:06:53 DEBUG 20:07:52 INFO 20:07:52 INFO [loop_until]: kubectl --namespace=xlou top pods 20:07:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:07:52 INFO [loop_until]: OK (rc = 0) 20:07:52 DEBUG --- stdout --- 20:07:52 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 84m 5803Mi am-55f77847b7-dr27z 85m 5729Mi am-55f77847b7-fp459 84m 5856Mi ds-cts-0 6m 391Mi ds-cts-1 9m 377Mi ds-cts-2 5m 372Mi ds-idrepo-0 7167m 13860Mi ds-idrepo-1 2041m 13701Mi ds-idrepo-2 1919m 13742Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6603m 4852Mi idm-65858d8c4c-4qc5l 6139m 4619Mi lodemon-5798c88b8f-k2sv4 5m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 971m 583Mi 20:07:52 DEBUG --- stderr --- 20:07:52 DEBUG 20:07:53 INFO 20:07:53 INFO [loop_until]: kubectl --namespace=xlou top node 20:07:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:07:53 INFO [loop_until]: OK (rc = 0) 20:07:53 DEBUG --- stdout --- 20:07:53 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1369Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 143m 0% 6879Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 148m 0% 6866Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 142m 0% 6971Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6209m 39% 5928Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1826m 11% 2171Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6949m 43% 6120Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 49m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2052m 12% 14326Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 2096m 13% 14281Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7100m 44% 14414Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1035m 6% 2102Mi 3% 20:07:53 DEBUG --- stderr --- 20:07:53 DEBUG 20:08:52 INFO 20:08:52 INFO [loop_until]: kubectl --namespace=xlou top pods 20:08:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:08:52 INFO [loop_until]: OK (rc = 0) 20:08:52 DEBUG --- stdout --- 20:08:52 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 82m 5804Mi am-55f77847b7-dr27z 84m 5729Mi am-55f77847b7-fp459 85m 5855Mi ds-cts-0 6m 391Mi ds-cts-1 14m 377Mi ds-cts-2 12m 372Mi ds-idrepo-0 6831m 13859Mi ds-idrepo-1 2005m 13762Mi ds-idrepo-2 1854m 13823Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6772m 4872Mi idm-65858d8c4c-4qc5l 6182m 4638Mi lodemon-5798c88b8f-k2sv4 2m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 959m 587Mi 20:08:52 DEBUG --- stderr --- 20:08:52 DEBUG 20:08:53 INFO 20:08:53 INFO [loop_until]: kubectl --namespace=xlou top node 20:08:53 INFO [loop_until]: 
(max_time=180, interval=5, expected_rc=[0] 20:08:53 INFO [loop_until]: OK (rc = 0) 20:08:53 DEBUG --- stdout --- 20:08:53 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1369Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 145m 0% 6879Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 146m 0% 6865Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 138m 0% 6968Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6414m 40% 5949Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1863m 11% 2169Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6944m 43% 6132Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1958m 12% 14406Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 1952m 12% 14358Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7111m 44% 14417Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1046m 6% 2106Mi 3% 20:08:53 DEBUG --- stderr --- 20:08:53 DEBUG 20:09:52 INFO 20:09:52 INFO [loop_until]: kubectl --namespace=xlou top pods 20:09:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:09:52 INFO [loop_until]: OK (rc = 0) 20:09:52 DEBUG --- stdout --- 20:09:52 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 78m 5803Mi am-55f77847b7-dr27z 83m 5729Mi am-55f77847b7-fp459 85m 5856Mi ds-cts-0 6m 391Mi ds-cts-1 12m 377Mi ds-cts-2 6m 372Mi ds-idrepo-0 6899m 13859Mi ds-idrepo-1 1905m 13839Mi ds-idrepo-2 1939m 13822Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6637m 4900Mi idm-65858d8c4c-4qc5l 6208m 4655Mi lodemon-5798c88b8f-k2sv4 7m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 978m 591Mi 20:09:52 DEBUG --- stderr --- 20:09:52 DEBUG 20:09:53 INFO 20:09:53 INFO [loop_until]: kubectl --namespace=xlou top node 20:09:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:09:53 INFO [loop_until]: OK (rc = 0) 20:09:53 DEBUG --- stdout --- 20:09:53 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1370Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 141m 0% 6878Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 144m 0% 6867Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 138m 0% 6968Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6390m 40% 5971Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1827m 11% 2173Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7064m 44% 6154Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 67m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2013m 12% 14406Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 1823m 11% 14431Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 6978m 43% 14420Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1064m 6% 2109Mi 3% 20:09:53 DEBUG --- stderr --- 20:09:53 DEBUG 20:10:52 INFO 20:10:52 INFO [loop_until]: kubectl --namespace=xlou top pods 20:10:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:10:53 INFO [loop_until]: OK (rc = 0) 20:10:53 DEBUG --- stdout --- 20:10:53 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 94m 5804Mi am-55f77847b7-dr27z 92m 5731Mi am-55f77847b7-fp459 84m 5868Mi ds-cts-0 6m 392Mi ds-cts-1 18m 377Mi ds-cts-2 5m 373Mi ds-idrepo-0 6893m 13858Mi ds-idrepo-1 1609m 13858Mi ds-idrepo-2 1616m 13847Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6656m 4920Mi idm-65858d8c4c-4qc5l 6379m 4678Mi lodemon-5798c88b8f-k2sv4 6m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi 
overseer-0-58cf4b587d-f2ksh 967m 595Mi 20:10:53 DEBUG --- stderr --- 20:10:53 DEBUG 20:10:54 INFO 20:10:54 INFO [loop_until]: kubectl --namespace=xlou top node 20:10:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:10:54 INFO [loop_until]: OK (rc = 0) 20:10:54 DEBUG --- stdout --- 20:10:54 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1369Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 143m 0% 6887Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 149m 0% 6869Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 149m 0% 6966Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6522m 41% 5987Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1893m 11% 2172Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6886m 43% 6173Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 71m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1858m 11% 14427Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 1936m 12% 14419Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7023m 44% 14418Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1062m 6% 2115Mi 3% 20:10:54 DEBUG --- stderr --- 20:10:54 DEBUG 20:11:53 INFO 20:11:53 INFO [loop_until]: kubectl --namespace=xlou top pods 20:11:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:11:53 INFO [loop_until]: OK (rc = 0) 20:11:53 DEBUG --- stdout --- 20:11:53 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 78m 5804Mi am-55f77847b7-dr27z 78m 5731Mi am-55f77847b7-fp459 81m 5868Mi ds-cts-0 6m 391Mi ds-cts-1 10m 377Mi ds-cts-2 7m 372Mi ds-idrepo-0 7796m 13839Mi ds-idrepo-1 1818m 13859Mi ds-idrepo-2 2565m 13823Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6630m 4940Mi idm-65858d8c4c-4qc5l 6103m 4697Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 950m 599Mi 20:11:53 DEBUG --- stderr --- 20:11:53 DEBUG 20:11:54 INFO 20:11:54 INFO [loop_until]: kubectl --namespace=xlou top node 20:11:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:11:54 INFO [loop_until]: OK (rc = 0) 20:11:54 DEBUG --- stdout --- 20:11:54 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 85m 0% 1370Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 141m 0% 6888Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 137m 0% 6869Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 133m 0% 6970Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6181m 38% 6010Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1818m 11% 2159Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6911m 43% 6197Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1828m 11% 14404Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 1597m 10% 14445Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7767m 48% 14402Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1037m 6% 2121Mi 3% 20:11:54 DEBUG --- stderr --- 20:11:54 DEBUG 20:12:53 INFO 20:12:53 INFO [loop_until]: kubectl --namespace=xlou top pods 20:12:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:12:53 INFO [loop_until]: OK (rc = 0) 20:12:53 DEBUG --- stdout --- 20:12:53 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 77m 5804Mi am-55f77847b7-dr27z 80m 5731Mi am-55f77847b7-fp459 83m 5868Mi ds-cts-0 6m 391Mi ds-cts-1 11m 377Mi ds-cts-2 6m 372Mi ds-idrepo-0 7113m 13866Mi ds-idrepo-1 1913m 13858Mi ds-idrepo-2 2013m 13845Mi 
end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6408m 4963Mi idm-65858d8c4c-4qc5l 6155m 4716Mi lodemon-5798c88b8f-k2sv4 6m 65Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 956m 603Mi 20:12:53 DEBUG --- stderr --- 20:12:53 DEBUG 20:12:54 INFO 20:12:54 INFO [loop_until]: kubectl --namespace=xlou top node 20:12:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:12:54 INFO [loop_until]: OK (rc = 0) 20:12:54 DEBUG --- stdout --- 20:12:54 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1371Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 144m 0% 6887Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 143m 0% 6863Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 136m 0% 6971Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6497m 40% 6030Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1876m 11% 2169Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6848m 43% 6219Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 67m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2032m 12% 14431Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 2081m 13% 14449Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7323m 46% 14433Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1058m 6% 2125Mi 3% 20:12:54 DEBUG --- stderr --- 20:12:54 DEBUG 20:13:53 INFO 20:13:53 INFO [loop_until]: kubectl --namespace=xlou top pods 20:13:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:13:53 INFO [loop_until]: OK (rc = 0) 20:13:53 DEBUG --- stdout --- 20:13:53 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 80m 5804Mi am-55f77847b7-dr27z 82m 5731Mi am-55f77847b7-fp459 82m 5868Mi ds-cts-0 6m 391Mi ds-cts-1 10m 378Mi ds-cts-2 8m 372Mi ds-idrepo-0 7256m 13822Mi ds-idrepo-1 1887m 13843Mi ds-idrepo-2 2098m 13836Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6754m 4983Mi idm-65858d8c4c-4qc5l 6325m 4729Mi lodemon-5798c88b8f-k2sv4 6m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 1018m 608Mi 20:13:53 DEBUG --- stderr --- 20:13:53 DEBUG 20:13:54 INFO 20:13:54 INFO [loop_until]: kubectl --namespace=xlou top node 20:13:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:13:54 INFO [loop_until]: OK (rc = 0) 20:13:54 DEBUG --- stdout --- 20:13:54 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1369Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 139m 0% 6903Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 146m 0% 6865Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 138m 0% 6970Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6565m 41% 6050Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1836m 11% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 7015m 44% 6238Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1898m 11% 14443Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 2106m 13% 14447Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7395m 46% 14405Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1035m 6% 2129Mi 3% 20:13:54 DEBUG --- stderr --- 20:13:54 DEBUG 20:14:53 INFO 20:14:53 INFO [loop_until]: kubectl --namespace=xlou top pods 20:14:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:14:53 INFO [loop_until]: OK (rc = 0) 20:14:53 DEBUG --- stdout --- 20:14:53 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 80m 5804Mi am-55f77847b7-dr27z 85m 
5731Mi am-55f77847b7-fp459 81m 5868Mi ds-cts-0 6m 391Mi ds-cts-1 9m 377Mi ds-cts-2 6m 372Mi ds-idrepo-0 6974m 13781Mi ds-idrepo-1 1743m 13757Mi ds-idrepo-2 1964m 13753Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6455m 5001Mi idm-65858d8c4c-4qc5l 6027m 4755Mi lodemon-5798c88b8f-k2sv4 7m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 971m 612Mi 20:14:53 DEBUG --- stderr --- 20:14:53 DEBUG 20:14:54 INFO 20:14:54 INFO [loop_until]: kubectl --namespace=xlou top node 20:14:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:14:54 INFO [loop_until]: OK (rc = 0) 20:14:54 DEBUG --- stdout --- 20:14:54 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1370Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 140m 0% 6891Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 153m 0% 6864Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 134m 0% 6970Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6359m 40% 6069Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1869m 11% 2167Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6560m 41% 6259Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2095m 13% 14340Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 1805m 11% 14360Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 6903m 43% 14336Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1042m 6% 2132Mi 3% 20:14:54 DEBUG --- stderr --- 20:14:54 DEBUG 20:15:53 INFO 20:15:53 INFO [loop_until]: kubectl --namespace=xlou top pods 20:15:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:15:53 INFO [loop_until]: OK (rc = 0) 20:15:53 DEBUG --- stdout --- 20:15:53 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 80m 5804Mi am-55f77847b7-dr27z 85m 5731Mi am-55f77847b7-fp459 85m 5868Mi ds-cts-0 6m 393Mi ds-cts-1 8m 377Mi ds-cts-2 7m 373Mi ds-idrepo-0 6657m 13847Mi ds-idrepo-1 1684m 13815Mi ds-idrepo-2 1642m 13822Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6607m 5025Mi idm-65858d8c4c-4qc5l 6154m 4774Mi lodemon-5798c88b8f-k2sv4 6m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 994m 616Mi 20:15:53 DEBUG --- stderr --- 20:15:53 DEBUG 20:15:54 INFO 20:15:54 INFO [loop_until]: kubectl --namespace=xlou top node 20:15:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:15:54 INFO [loop_until]: OK (rc = 0) 20:15:54 DEBUG --- stdout --- 20:15:54 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1373Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 144m 0% 6891Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 147m 0% 6867Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 136m 0% 6971Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6495m 40% 6088Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1813m 11% 2167Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6836m 43% 6277Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1619m 10% 14403Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 1584m 9% 14423Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 6835m 43% 14417Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1064m 6% 2134Mi 3% 20:15:54 DEBUG --- stderr --- 20:15:54 DEBUG 20:16:53 INFO 20:16:53 INFO [loop_until]: kubectl --namespace=xlou top pods 20:16:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:16:53 INFO [loop_until]: OK (rc = 
0) 20:16:53 DEBUG --- stdout --- 20:16:53 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 79m 5804Mi am-55f77847b7-dr27z 84m 5731Mi am-55f77847b7-fp459 79m 5868Mi ds-cts-0 6m 391Mi ds-cts-1 8m 378Mi ds-cts-2 6m 372Mi ds-idrepo-0 6642m 13857Mi ds-idrepo-1 1626m 13877Mi ds-idrepo-2 1517m 13865Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6720m 5044Mi idm-65858d8c4c-4qc5l 6218m 4792Mi lodemon-5798c88b8f-k2sv4 11m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 965m 619Mi 20:16:53 DEBUG --- stderr --- 20:16:53 DEBUG 20:16:54 INFO 20:16:54 INFO [loop_until]: kubectl --namespace=xlou top node 20:16:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:16:54 INFO [loop_until]: OK (rc = 0) 20:16:54 DEBUG --- stdout --- 20:16:54 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1370Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 139m 0% 6893Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 146m 0% 6870Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 139m 0% 6972Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6514m 40% 6111Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1882m 11% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6799m 42% 6300Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1831m 11% 14463Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 1666m 10% 14473Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 6855m 43% 14421Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1017m 6% 2137Mi 3% 20:16:54 DEBUG --- stderr --- 20:16:54 DEBUG 20:17:53 INFO 20:17:53 INFO [loop_until]: kubectl --namespace=xlou top pods 20:17:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:17:53 INFO [loop_until]: OK (rc = 0) 20:17:53 DEBUG --- stdout --- 20:17:53 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 84m 5804Mi am-55f77847b7-dr27z 84m 5732Mi am-55f77847b7-fp459 79m 5868Mi ds-cts-0 7m 391Mi ds-cts-1 11m 378Mi ds-cts-2 6m 372Mi ds-idrepo-0 7274m 13859Mi ds-idrepo-1 1959m 13851Mi ds-idrepo-2 1947m 13851Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6586m 5068Mi idm-65858d8c4c-4qc5l 6251m 4815Mi lodemon-5798c88b8f-k2sv4 8m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 991m 624Mi 20:17:53 DEBUG --- stderr --- 20:17:53 DEBUG 20:17:54 INFO 20:17:54 INFO [loop_until]: kubectl --namespace=xlou top node 20:17:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:17:54 INFO [loop_until]: OK (rc = 0) 20:17:54 DEBUG --- stdout --- 20:17:54 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1370Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 141m 0% 6893Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 141m 0% 6868Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 140m 0% 6970Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6617m 41% 6129Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1894m 11% 2163Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6925m 43% 6321Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 53m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1630m 10% 14449Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 2031m 12% 14462Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7275m 45% 14425Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1061m 6% 2138Mi 3% 20:17:54 DEBUG --- stderr --- 20:17:54 DEBUG 20:18:53 INFO 
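The snapshots above and below come from a simple polling loop: roughly once a minute the monitor shells out to "kubectl --namespace=xlou top pods" and "kubectl --namespace=xlou top node" and records stdout/stderr. A minimal sketch of that pattern follows; the namespace constant, the 60-second interval, and the poll_top()/main() helper names are assumptions for illustration only, not the lodemon implementation or its loop_until helper.

#!/usr/bin/env python3
# Illustrative sketch only: periodically capture `kubectl top` output.
# Assumptions (not taken from lodemon): the 60 s interval and the
# poll_top()/main() helper names are invented for this example.
import subprocess
import time

NAMESPACE = "xlou"         # namespace used throughout the log above
INTERVAL_SECONDS = 60      # assumed; the log shows roughly one snapshot per minute

def poll_top(resource: str) -> str:
    """Run `kubectl top <resource>` in the namespace and return its stdout."""
    cmd = ["kubectl", f"--namespace={NAMESPACE}", "top", resource]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout

def main() -> None:
    while True:
        for resource in ("pods", "node"):
            try:
                print(poll_top(resource))
            except subprocess.CalledProcessError as exc:
                # A production monitor would retry with a timeout instead of
                # just logging the failure.
                print(f"kubectl top {resource} failed: {exc.stderr}")
        time.sleep(INTERVAL_SECONDS)

if __name__ == "__main__":
    main()
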
20:18:53 INFO [loop_until]: kubectl --namespace=xlou top pods 20:18:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:18:53 INFO [loop_until]: OK (rc = 0) 20:18:53 DEBUG --- stdout --- 20:18:53 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 78m 5805Mi am-55f77847b7-dr27z 80m 5731Mi am-55f77847b7-fp459 81m 5868Mi ds-cts-0 6m 391Mi ds-cts-1 8m 377Mi ds-cts-2 7m 372Mi ds-idrepo-0 7214m 13581Mi ds-idrepo-1 1989m 13642Mi ds-idrepo-2 2070m 13598Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6418m 5089Mi idm-65858d8c4c-4qc5l 6216m 4828Mi lodemon-5798c88b8f-k2sv4 7m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 964m 627Mi 20:18:53 DEBUG --- stderr --- 20:18:53 DEBUG 20:18:55 INFO 20:18:55 INFO [loop_until]: kubectl --namespace=xlou top node 20:18:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:18:55 INFO [loop_until]: OK (rc = 0) 20:18:55 DEBUG --- stdout --- 20:18:55 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1371Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 142m 0% 6891Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 143m 0% 6865Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 135m 0% 6971Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6499m 40% 6148Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1880m 11% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6616m 41% 6343Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1990m 12% 14195Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 1988m 12% 14248Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7397m 46% 14140Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1055m 6% 2146Mi 3% 20:18:55 DEBUG --- stderr --- 20:18:55 DEBUG 20:19:53 INFO 20:19:53 INFO [loop_until]: kubectl --namespace=xlou top pods 20:19:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:19:54 INFO [loop_until]: OK (rc = 0) 20:19:54 DEBUG --- stdout --- 20:19:54 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 80m 5805Mi am-55f77847b7-dr27z 84m 5732Mi am-55f77847b7-fp459 87m 5868Mi ds-cts-0 6m 392Mi ds-cts-1 8m 377Mi ds-cts-2 6m 372Mi ds-idrepo-0 6889m 13647Mi ds-idrepo-1 1606m 13705Mi ds-idrepo-2 1649m 13671Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6625m 5105Mi idm-65858d8c4c-4qc5l 5966m 4849Mi lodemon-5798c88b8f-k2sv4 7m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 977m 631Mi 20:19:54 DEBUG --- stderr --- 20:19:54 DEBUG 20:19:55 INFO 20:19:55 INFO [loop_until]: kubectl --namespace=xlou top node 20:19:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:19:55 INFO [loop_until]: OK (rc = 0) 20:19:55 DEBUG --- stdout --- 20:19:55 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1370Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 149m 0% 6891Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 145m 0% 6866Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 137m 0% 6971Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6399m 40% 6167Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1878m 11% 2162Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6985m 43% 6363Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1593m 10% 14255Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 1763m 11% 14302Mi 
24% gke-xlou-cdm-ds-32e4dcb1-x4wx 6787m 42% 14211Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1068m 6% 2149Mi 3% 20:19:55 DEBUG --- stderr --- 20:19:55 DEBUG 20:20:54 INFO 20:20:54 INFO [loop_until]: kubectl --namespace=xlou top pods 20:20:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:20:54 INFO [loop_until]: OK (rc = 0) 20:20:54 DEBUG --- stdout --- 20:20:54 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 81m 5804Mi am-55f77847b7-dr27z 86m 5731Mi am-55f77847b7-fp459 82m 5869Mi ds-cts-0 9m 392Mi ds-cts-1 9m 377Mi ds-cts-2 12m 376Mi ds-idrepo-0 7195m 13710Mi ds-idrepo-1 2379m 13750Mi ds-idrepo-2 1699m 13734Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6716m 5129Mi idm-65858d8c4c-4qc5l 6492m 4871Mi lodemon-5798c88b8f-k2sv4 6m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 976m 635Mi 20:20:54 DEBUG --- stderr --- 20:20:54 DEBUG 20:20:55 INFO 20:20:55 INFO [loop_until]: kubectl --namespace=xlou top node 20:20:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:20:55 INFO [loop_until]: OK (rc = 0) 20:20:55 DEBUG --- stdout --- 20:20:55 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1373Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 138m 0% 6891Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 147m 0% 6867Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 141m 0% 6973Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6496m 40% 6188Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1902m 11% 2167Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6979m 43% 6387Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1732m 10% 14310Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 2082m 13% 14370Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7405m 46% 14299Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1070m 6% 2152Mi 3% 20:20:55 DEBUG --- stderr --- 20:20:55 DEBUG 20:21:54 INFO 20:21:54 INFO [loop_until]: kubectl --namespace=xlou top pods 20:21:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:21:54 INFO [loop_until]: OK (rc = 0) 20:21:54 DEBUG --- stdout --- 20:21:54 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 77m 5805Mi am-55f77847b7-dr27z 83m 5732Mi am-55f77847b7-fp459 79m 5869Mi ds-cts-0 6m 395Mi ds-cts-1 8m 377Mi ds-cts-2 6m 376Mi ds-idrepo-0 7427m 13797Mi ds-idrepo-1 2292m 13809Mi ds-idrepo-2 2075m 13815Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6512m 5151Mi idm-65858d8c4c-4qc5l 6062m 4887Mi lodemon-5798c88b8f-k2sv4 8m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 954m 637Mi 20:21:54 DEBUG --- stderr --- 20:21:54 DEBUG 20:21:55 INFO 20:21:55 INFO [loop_until]: kubectl --namespace=xlou top node 20:21:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:21:55 INFO [loop_until]: OK (rc = 0) 20:21:55 DEBUG --- stdout --- 20:21:55 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1371Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 141m 0% 6892Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 146m 0% 6870Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 134m 0% 6973Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6251m 39% 6209Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1873m 11% 2166Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6842m 43% 6410Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1117Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 52m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1820m 11% 14424Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 2535m 15% 14423Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7366m 46% 14389Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1041m 6% 2154Mi 3% 20:21:55 DEBUG --- stderr --- 20:21:55 DEBUG 20:22:54 INFO 20:22:54 INFO [loop_until]: kubectl --namespace=xlou top pods 20:22:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:22:54 INFO [loop_until]: OK (rc = 0) 20:22:54 DEBUG --- stdout --- 20:22:54 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 80m 5805Mi am-55f77847b7-dr27z 83m 5732Mi am-55f77847b7-fp459 85m 5869Mi ds-cts-0 7m 395Mi ds-cts-1 15m 378Mi ds-cts-2 9m 376Mi ds-idrepo-0 7492m 13821Mi ds-idrepo-1 1904m 13503Mi ds-idrepo-2 1765m 13824Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6338m 5176Mi idm-65858d8c4c-4qc5l 6222m 4909Mi lodemon-5798c88b8f-k2sv4 9m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 959m 637Mi 20:22:54 DEBUG --- stderr --- 20:22:54 DEBUG 20:22:55 INFO 20:22:55 INFO [loop_until]: kubectl --namespace=xlou top node 20:22:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:22:55 INFO [loop_until]: OK (rc = 0) 20:22:55 DEBUG --- stdout --- 20:22:55 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1375Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 145m 0% 6893Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 145m 0% 6871Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 136m 0% 6972Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6513m 40% 6228Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1877m 11% 2166Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6841m 43% 6431Mi 10% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1134Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1691m 10% 14423Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 1992m 12% 14122Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7339m 46% 14401Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1036m 6% 2154Mi 3% 20:22:55 DEBUG --- stderr --- 20:22:55 DEBUG 20:23:54 INFO 20:23:54 INFO [loop_until]: kubectl --namespace=xlou top pods 20:23:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:23:54 INFO [loop_until]: OK (rc = 0) 20:23:54 DEBUG --- stdout --- 20:23:54 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 81m 5805Mi am-55f77847b7-dr27z 85m 5732Mi am-55f77847b7-fp459 80m 5869Mi ds-cts-0 6m 395Mi ds-cts-1 9m 378Mi ds-cts-2 6m 376Mi ds-idrepo-0 8337m 13810Mi ds-idrepo-1 1701m 13587Mi ds-idrepo-2 2661m 13824Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6633m 5202Mi idm-65858d8c4c-4qc5l 6079m 4930Mi lodemon-5798c88b8f-k2sv4 7m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 997m 639Mi 20:23:54 DEBUG --- stderr --- 20:23:54 DEBUG 20:23:55 INFO 20:23:55 INFO [loop_until]: kubectl --namespace=xlou top node 20:23:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:23:55 INFO [loop_until]: OK (rc = 0) 20:23:55 DEBUG --- stdout --- 20:23:55 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1370Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 139m 0% 6895Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 145m 0% 6867Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 134m 0% 6973Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6483m 40% 6251Mi 10% 
gke-xlou-cdm-default-pool-f05840a3-h81k 1873m 11% 2167Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6885m 43% 6458Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3058m 19% 14406Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 1719m 10% 14207Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 8154m 51% 14386Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1059m 6% 2155Mi 3% 20:23:55 DEBUG --- stderr --- 20:23:55 DEBUG 20:24:54 INFO 20:24:54 INFO [loop_until]: kubectl --namespace=xlou top pods 20:24:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:24:54 INFO [loop_until]: OK (rc = 0) 20:24:54 DEBUG --- stdout --- 20:24:54 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 80m 5805Mi am-55f77847b7-dr27z 79m 5732Mi am-55f77847b7-fp459 80m 5869Mi ds-cts-0 6m 395Mi ds-cts-1 8m 378Mi ds-cts-2 6m 377Mi ds-idrepo-0 8100m 13823Mi ds-idrepo-1 1852m 13670Mi ds-idrepo-2 1967m 13824Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6785m 5228Mi idm-65858d8c4c-4qc5l 6062m 4952Mi lodemon-5798c88b8f-k2sv4 5m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 961m 639Mi 20:24:54 DEBUG --- stderr --- 20:24:54 DEBUG 20:24:55 INFO 20:24:55 INFO [loop_until]: kubectl --namespace=xlou top node 20:24:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:24:55 INFO [loop_until]: OK (rc = 0) 20:24:55 DEBUG --- stdout --- 20:24:55 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1369Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 143m 0% 6893Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 136m 0% 6871Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 136m 0% 6970Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 6194m 38% 6267Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1877m 11% 2168Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 6968m 43% 6483Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2070m 13% 14432Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 1734m 10% 14291Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7915m 49% 14383Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1058m 6% 2152Mi 3% 20:24:55 DEBUG --- stderr --- 20:24:55 DEBUG 20:25:54 INFO 20:25:54 INFO [loop_until]: kubectl --namespace=xlou top pods 20:25:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:25:54 INFO [loop_until]: OK (rc = 0) 20:25:54 DEBUG --- stdout --- 20:25:54 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 7m 5805Mi am-55f77847b7-dr27z 60m 5763Mi am-55f77847b7-fp459 16m 5869Mi ds-cts-0 7m 396Mi ds-cts-1 11m 378Mi ds-cts-2 7m 377Mi ds-idrepo-0 317m 13813Mi ds-idrepo-1 149m 13656Mi ds-idrepo-2 183m 13774Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 9m 5240Mi idm-65858d8c4c-4qc5l 817m 4960Mi lodemon-5798c88b8f-k2sv4 6m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 188m 641Mi 20:25:54 DEBUG --- stderr --- 20:25:54 DEBUG 20:25:55 INFO 20:25:55 INFO [loop_until]: kubectl --namespace=xlou top node 20:25:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:25:55 INFO [loop_until]: OK (rc = 0) 20:25:55 DEBUG --- stdout --- 20:25:55 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1367Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 79m 0% 6891Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-976h 122m 0% 6898Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 6971Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 6277Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 254m 1% 2165Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 288m 1% 6492Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 277m 1% 14392Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 381m 2% 14289Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 886m 5% 14401Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 389m 2% 2157Mi 3% 20:25:55 DEBUG --- stderr --- 20:25:55 DEBUG 20:26:54 INFO 20:26:54 INFO [loop_until]: kubectl --namespace=xlou top pods 20:26:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:26:54 INFO [loop_until]: OK (rc = 0) 20:26:54 DEBUG --- stdout --- 20:26:54 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 6m 5805Mi am-55f77847b7-dr27z 8m 5763Mi am-55f77847b7-fp459 10m 5869Mi ds-cts-0 7m 396Mi ds-cts-1 14m 378Mi ds-cts-2 5m 376Mi ds-idrepo-0 11m 13770Mi ds-idrepo-1 270m 13704Mi ds-idrepo-2 72m 13728Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 9m 5239Mi idm-65858d8c4c-4qc5l 7m 4960Mi lodemon-5798c88b8f-k2sv4 12m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 1m 207Mi 20:26:54 DEBUG --- stderr --- 20:26:54 DEBUG 20:26:55 INFO 20:26:55 INFO [loop_until]: kubectl --namespace=xlou top node 20:26:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:26:56 INFO [loop_until]: OK (rc = 0) 20:26:56 DEBUG --- stdout --- 20:26:56 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1367Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 6894Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 6898Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 6970Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 6278Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 128m 0% 2166Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 73m 0% 6496Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 68m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1135Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 65m 0% 14344Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 14322Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 56m 0% 14360Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 68m 0% 1731Mi 2% 20:26:56 DEBUG --- stderr --- 20:26:56 DEBUG 127.0.0.1 - - [12/Aug/2023 20:27:06] "GET /monitoring/average?start_time=23-08-12_18:56:35&stop_time=23-08-12_19:25:05 HTTP/1.1" 200 - 20:27:54 INFO 20:27:54 INFO [loop_until]: kubectl --namespace=xlou top pods 20:27:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:27:54 INFO [loop_until]: OK (rc = 0) 20:27:54 DEBUG --- stdout --- 20:27:54 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 7m 5805Mi am-55f77847b7-dr27z 8m 5763Mi am-55f77847b7-fp459 10m 5869Mi ds-cts-0 6m 397Mi ds-cts-1 8m 377Mi ds-cts-2 6m 376Mi ds-idrepo-0 9m 13770Mi ds-idrepo-1 10m 13704Mi ds-idrepo-2 10m 13727Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 8m 5239Mi idm-65858d8c4c-4qc5l 7m 4960Mi lodemon-5798c88b8f-k2sv4 8m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 1288m 504Mi 20:27:54 DEBUG --- stderr --- 20:27:54 DEBUG 20:27:56 INFO 20:27:56 INFO [loop_until]: kubectl --namespace=xlou top node 20:27:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 
20:27:56 INFO [loop_until]: OK (rc = 0) 20:27:56 DEBUG --- stdout --- 20:27:56 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1369Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 110m 0% 6891Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 120m 0% 6901Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 116m 0% 6970Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1662m 10% 6317Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 558m 3% 2204Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1550m 9% 6535Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 73m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 70m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1639m 10% 14473Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 1941m 12% 14438Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 2576m 16% 14411Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1271m 7% 2132Mi 3% 20:27:56 DEBUG --- stderr --- 20:27:56 DEBUG 20:28:54 INFO 20:28:54 INFO [loop_until]: kubectl --namespace=xlou top pods 20:28:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:28:55 INFO [loop_until]: OK (rc = 0) 20:28:55 DEBUG --- stdout --- 20:28:55 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 74m 5805Mi am-55f77847b7-dr27z 82m 5762Mi am-55f77847b7-fp459 82m 5870Mi ds-cts-0 9m 394Mi ds-cts-1 9m 378Mi ds-cts-2 7m 376Mi ds-idrepo-0 5429m 13835Mi ds-idrepo-1 3394m 13797Mi ds-idrepo-2 3213m 13837Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 2412m 5338Mi idm-65858d8c4c-4qc5l 2331m 5045Mi lodemon-5798c88b8f-k2sv4 14m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 992m 857Mi 20:28:55 DEBUG --- stderr --- 20:28:55 DEBUG 20:28:56 INFO 20:28:56 INFO [loop_until]: kubectl --namespace=xlou top node 20:28:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:28:56 INFO [loop_until]: OK (rc = 0) 20:28:56 DEBUG --- stdout --- 20:28:56 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1373Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 132m 0% 6893Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 146m 0% 6897Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 138m 0% 6970Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2522m 15% 6385Mi 10% gke-xlou-cdm-default-pool-f05840a3-h81k 1387m 8% 2753Mi 4% gke-xlou-cdm-default-pool-f05840a3-tnc9 2532m 15% 6629Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 77m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3270m 20% 14457Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 3701m 23% 14482Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 5444m 34% 14396Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 843m 5% 2516Mi 4% 20:28:56 DEBUG --- stderr --- 20:28:56 DEBUG 20:29:55 INFO 20:29:55 INFO [loop_until]: kubectl --namespace=xlou top pods 20:29:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:29:55 INFO [loop_until]: OK (rc = 0) 20:29:55 DEBUG --- stdout --- 20:29:55 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 84m 5805Mi am-55f77847b7-dr27z 90m 5763Mi am-55f77847b7-fp459 89m 5870Mi ds-cts-0 8m 393Mi ds-cts-1 9m 378Mi ds-cts-2 6m 376Mi ds-idrepo-0 6386m 13810Mi ds-idrepo-1 3651m 13823Mi ds-idrepo-2 3673m 13832Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 2668m 5446Mi idm-65858d8c4c-4qc5l 2564m 5130Mi lodemon-5798c88b8f-k2sv4 5m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 1107m 1153Mi 20:29:55 DEBUG --- stderr 
--- 20:29:55 DEBUG 20:29:56 INFO 20:29:56 INFO [loop_until]: kubectl --namespace=xlou top node 20:29:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:29:56 INFO [loop_until]: OK (rc = 0) 20:29:56 DEBUG --- stdout --- 20:29:56 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1368Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 113m 0% 6893Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 107m 0% 6902Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 102m 0% 6969Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1907m 12% 6511Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1383m 8% 2188Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1331m 8% 6704Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 67m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3248m 20% 14468Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 1992m 12% 14431Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 3955m 24% 14401Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1093m 6% 2638Mi 4% 20:29:56 DEBUG --- stderr --- 20:29:56 DEBUG 20:30:55 INFO 20:30:55 INFO [loop_until]: kubectl --namespace=xlou top pods 20:30:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:30:55 INFO [loop_until]: OK (rc = 0) 20:30:55 DEBUG --- stdout --- 20:30:55 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 96m 5805Mi am-55f77847b7-dr27z 75m 5763Mi am-55f77847b7-fp459 71m 5871Mi ds-cts-0 7m 393Mi ds-cts-1 8m 378Mi ds-cts-2 6m 376Mi ds-idrepo-0 5737m 13800Mi ds-idrepo-1 2718m 13807Mi ds-idrepo-2 5907m 13765Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 3354m 5450Mi idm-65858d8c4c-4qc5l 2413m 5221Mi lodemon-5798c88b8f-k2sv4 6m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 772m 1133Mi 20:30:55 DEBUG --- stderr --- 20:30:55 DEBUG 20:30:56 INFO 20:30:56 INFO [loop_until]: kubectl --namespace=xlou top node 20:30:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:30:56 INFO [loop_until]: OK (rc = 0) 20:30:56 DEBUG --- stdout --- 20:30:56 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1372Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 165m 1% 6891Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 167m 1% 6902Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 161m 1% 6974Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3587m 22% 6545Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1410m 8% 2221Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3861m 24% 6703Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 68m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 62m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 5478m 34% 14472Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 6779m 42% 14469Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7968m 50% 14414Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 827m 5% 2636Mi 4% 20:30:56 DEBUG --- stderr --- 20:30:56 DEBUG 20:31:55 INFO 20:31:55 INFO [loop_until]: kubectl --namespace=xlou top pods 20:31:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:31:55 INFO [loop_until]: OK (rc = 0) 20:31:55 DEBUG --- stdout --- 20:31:55 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 109m 5805Mi am-55f77847b7-dr27z 112m 5763Mi am-55f77847b7-fp459 117m 5873Mi ds-cts-0 6m 393Mi ds-cts-1 8m 378Mi ds-cts-2 6m 376Mi ds-idrepo-0 9026m 13818Mi ds-idrepo-1 6008m 13823Mi ds-idrepo-2 6460m 13834Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 3510m 
5447Mi idm-65858d8c4c-4qc5l 3483m 5255Mi lodemon-5798c88b8f-k2sv4 6m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 728m 1134Mi 20:31:55 DEBUG --- stderr --- 20:31:55 DEBUG 20:31:56 INFO 20:31:56 INFO [loop_until]: kubectl --namespace=xlou top node 20:31:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:31:56 INFO [loop_until]: OK (rc = 0) 20:31:56 DEBUG --- stdout --- 20:31:56 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1372Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 169m 1% 6895Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 166m 1% 6902Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 165m 1% 6975Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3656m 23% 6575Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1395m 8% 2184Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3873m 24% 6704Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 6768m 42% 14478Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 6677m 42% 14448Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 8050m 50% 14349Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 810m 5% 2638Mi 4% 20:31:56 DEBUG --- stderr --- 20:31:56 DEBUG 20:32:55 INFO 20:32:55 INFO [loop_until]: kubectl --namespace=xlou top pods 20:32:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:32:55 INFO [loop_until]: OK (rc = 0) 20:32:55 DEBUG --- stdout --- 20:32:55 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 106m 5805Mi am-55f77847b7-dr27z 118m 5763Mi am-55f77847b7-fp459 113m 5872Mi ds-cts-0 8m 393Mi ds-cts-1 8m 378Mi ds-cts-2 7m 376Mi ds-idrepo-0 7115m 13823Mi ds-idrepo-1 5665m 13824Mi ds-idrepo-2 4603m 13823Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 3484m 5446Mi idm-65858d8c4c-4qc5l 3367m 5271Mi lodemon-5798c88b8f-k2sv4 8m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 717m 1134Mi 20:32:55 DEBUG --- stderr --- 20:32:55 DEBUG 20:32:56 INFO 20:32:56 INFO [loop_until]: kubectl --namespace=xlou top node 20:32:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:32:56 INFO [loop_until]: OK (rc = 0) 20:32:56 DEBUG --- stdout --- 20:32:56 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1369Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 170m 1% 6895Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 167m 1% 6903Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 166m 1% 6976Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3617m 22% 6599Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1393m 8% 2185Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3835m 24% 6703Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 67m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 64m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 6164m 38% 14490Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 6409m 40% 14376Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 8644m 54% 14413Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 805m 5% 2635Mi 4% 20:32:56 DEBUG --- stderr --- 20:32:56 DEBUG 20:33:55 INFO 20:33:55 INFO [loop_until]: kubectl --namespace=xlou top pods 20:33:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:33:55 INFO [loop_until]: OK (rc = 0) 20:33:55 DEBUG --- stdout --- 20:33:55 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 107m 5806Mi am-55f77847b7-dr27z 111m 5763Mi am-55f77847b7-fp459 114m 5873Mi ds-cts-0 8m 394Mi 
ds-cts-1 8m 378Mi ds-cts-2 6m 376Mi ds-idrepo-0 8876m 13802Mi ds-idrepo-1 7480m 13874Mi ds-idrepo-2 6916m 13722Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 3659m 5447Mi idm-65858d8c4c-4qc5l 3364m 5291Mi lodemon-5798c88b8f-k2sv4 6m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 723m 1134Mi 20:33:55 DEBUG --- stderr --- 20:33:55 DEBUG 20:33:56 INFO 20:33:56 INFO [loop_until]: kubectl --namespace=xlou top node 20:33:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:33:56 INFO [loop_until]: OK (rc = 0) 20:33:56 DEBUG --- stdout --- 20:33:56 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1374Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 169m 1% 6893Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 169m 1% 6901Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 160m 1% 6975Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3622m 22% 6614Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1373m 8% 2178Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3805m 23% 6701Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 67m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 5276m 33% 14491Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 4634m 29% 14436Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7662m 48% 14445Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 797m 5% 2635Mi 4% 20:33:56 DEBUG --- stderr --- 20:33:56 DEBUG 20:34:55 INFO 20:34:55 INFO [loop_until]: kubectl --namespace=xlou top pods 20:34:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:34:55 INFO [loop_until]: OK (rc = 0) 20:34:55 DEBUG --- stdout --- 20:34:55 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 110m 5806Mi am-55f77847b7-dr27z 113m 5763Mi am-55f77847b7-fp459 112m 5873Mi ds-cts-0 6m 393Mi ds-cts-1 8m 379Mi ds-cts-2 5m 378Mi ds-idrepo-0 7223m 13806Mi ds-idrepo-1 6033m 13782Mi ds-idrepo-2 6091m 13816Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 3580m 5446Mi idm-65858d8c4c-4qc5l 3360m 5299Mi lodemon-5798c88b8f-k2sv4 6m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 721m 1135Mi 20:34:55 DEBUG --- stderr --- 20:34:55 DEBUG 20:34:56 INFO 20:34:56 INFO [loop_until]: kubectl --namespace=xlou top node 20:34:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:34:57 INFO [loop_until]: OK (rc = 0) 20:34:57 DEBUG --- stdout --- 20:34:57 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1375Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 163m 1% 6895Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 171m 1% 6901Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 172m 1% 6973Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3575m 22% 6614Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1343m 8% 2183Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3876m 24% 6703Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 72m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 6490m 40% 14481Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 6033m 37% 14454Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 9478m 59% 14380Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 802m 5% 2636Mi 4% 20:34:57 DEBUG --- stderr --- 20:34:57 DEBUG 20:35:55 INFO 20:35:55 INFO [loop_until]: kubectl --namespace=xlou top pods 20:35:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:35:55 INFO [loop_until]: OK (rc = 0) 20:35:55 DEBUG --- stdout --- 20:35:55 DEBUG NAME 
CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 108m 5806Mi am-55f77847b7-dr27z 115m 5763Mi am-55f77847b7-fp459 114m 5873Mi ds-cts-0 6m 394Mi ds-cts-1 9m 378Mi ds-cts-2 6m 376Mi ds-idrepo-0 7374m 13868Mi ds-idrepo-1 4959m 13824Mi ds-idrepo-2 5501m 13793Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 3580m 5446Mi idm-65858d8c4c-4qc5l 3422m 5298Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 694m 1135Mi 20:35:55 DEBUG --- stderr --- 20:35:55 DEBUG 20:35:57 INFO 20:35:57 INFO [loop_until]: kubectl --namespace=xlou top node 20:35:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:35:57 INFO [loop_until]: OK (rc = 0) 20:35:57 DEBUG --- stdout --- 20:35:57 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1373Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 172m 1% 6895Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 172m 1% 6902Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 158m 0% 6973Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3586m 22% 6614Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1378m 8% 2182Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3852m 24% 6699Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 70m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 5405m 34% 14431Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 5196m 32% 14452Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7307m 45% 14393Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 790m 4% 2637Mi 4% 20:35:57 DEBUG --- stderr --- 20:35:57 DEBUG 20:36:55 INFO 20:36:55 INFO [loop_until]: kubectl --namespace=xlou top pods 20:36:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:36:55 INFO [loop_until]: OK (rc = 0) 20:36:55 DEBUG --- stdout --- 20:36:55 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 109m 5806Mi am-55f77847b7-dr27z 108m 5763Mi am-55f77847b7-fp459 111m 5873Mi ds-cts-0 6m 393Mi ds-cts-1 8m 378Mi ds-cts-2 6m 376Mi ds-idrepo-0 7661m 13824Mi ds-idrepo-1 4613m 13853Mi ds-idrepo-2 5843m 13822Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 3616m 5455Mi idm-65858d8c4c-4qc5l 3346m 5299Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 709m 1136Mi 20:36:55 DEBUG --- stderr --- 20:36:55 DEBUG 20:36:57 INFO 20:36:57 INFO [loop_until]: kubectl --namespace=xlou top node 20:36:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:36:57 INFO [loop_until]: OK (rc = 0) 20:36:57 DEBUG --- stdout --- 20:36:57 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1372Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 170m 1% 6896Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 171m 1% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 167m 1% 6971Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3512m 22% 6616Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1329m 8% 2176Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3875m 24% 6711Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 5510m 34% 14365Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 5400m 33% 14451Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7502m 47% 14386Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 766m 4% 2650Mi 4% 20:36:57 DEBUG --- stderr --- 20:36:57 DEBUG 20:37:56 INFO 20:37:56 INFO [loop_until]: kubectl 
--namespace=xlou top pods 20:37:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:37:56 INFO [loop_until]: OK (rc = 0) 20:37:56 DEBUG --- stdout --- 20:37:56 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 112m 5806Mi am-55f77847b7-dr27z 113m 5763Mi am-55f77847b7-fp459 112m 5873Mi ds-cts-0 6m 393Mi ds-cts-1 8m 378Mi ds-cts-2 6m 376Mi ds-idrepo-0 6396m 13829Mi ds-idrepo-1 4193m 13785Mi ds-idrepo-2 4951m 13683Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 3577m 5454Mi idm-65858d8c4c-4qc5l 3347m 5298Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 728m 1136Mi 20:37:56 DEBUG --- stderr --- 20:37:56 DEBUG 20:37:57 INFO 20:37:57 INFO [loop_until]: kubectl --namespace=xlou top node 20:37:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:37:57 INFO [loop_until]: OK (rc = 0) 20:37:57 DEBUG --- stdout --- 20:37:57 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1373Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 172m 1% 6897Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 173m 1% 6898Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 165m 1% 6974Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3545m 22% 6611Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1369m 8% 2182Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3854m 24% 6713Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 71m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4933m 31% 14360Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 4832m 30% 14471Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7040m 44% 14309Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 787m 4% 2637Mi 4% 20:37:57 DEBUG --- stderr --- 20:37:57 DEBUG 20:38:56 INFO 20:38:56 INFO [loop_until]: kubectl --namespace=xlou top pods 20:38:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:38:56 INFO [loop_until]: OK (rc = 0) 20:38:56 DEBUG --- stdout --- 20:38:56 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 108m 5806Mi am-55f77847b7-dr27z 114m 5763Mi am-55f77847b7-fp459 115m 5873Mi ds-cts-0 6m 393Mi ds-cts-1 9m 379Mi ds-cts-2 7m 376Mi ds-idrepo-0 6755m 13829Mi ds-idrepo-1 4861m 13772Mi ds-idrepo-2 4934m 13827Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 3459m 5455Mi idm-65858d8c4c-4qc5l 3252m 5299Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 703m 1136Mi 20:38:56 DEBUG --- stderr --- 20:38:56 DEBUG 20:38:57 INFO 20:38:57 INFO [loop_until]: kubectl --namespace=xlou top node 20:38:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:38:57 INFO [loop_until]: OK (rc = 0) 20:38:57 DEBUG --- stdout --- 20:38:57 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1376Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 170m 1% 6895Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 164m 1% 6902Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 164m 1% 6979Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3577m 22% 6614Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1329m 8% 2182Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3791m 23% 6711Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 86m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4149m 26% 14462Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 5147m 32% 14416Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 
6752m 42% 14403Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 785m 4% 2639Mi 4% 20:38:57 DEBUG --- stderr --- 20:38:57 DEBUG 20:39:56 INFO 20:39:56 INFO [loop_until]: kubectl --namespace=xlou top pods 20:39:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:39:56 INFO [loop_until]: OK (rc = 0) 20:39:56 DEBUG --- stdout --- 20:39:56 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 108m 5806Mi am-55f77847b7-dr27z 109m 5763Mi am-55f77847b7-fp459 113m 5873Mi ds-cts-0 6m 394Mi ds-cts-1 8m 379Mi ds-cts-2 6m 378Mi ds-idrepo-0 7010m 13814Mi ds-idrepo-1 4463m 13821Mi ds-idrepo-2 6216m 13817Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 3510m 5454Mi idm-65858d8c4c-4qc5l 3278m 5298Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 689m 1136Mi 20:39:56 DEBUG --- stderr --- 20:39:56 DEBUG 20:39:57 INFO 20:39:57 INFO [loop_until]: kubectl --namespace=xlou top node 20:39:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:39:57 INFO [loop_until]: OK (rc = 0) 20:39:57 DEBUG --- stdout --- 20:39:57 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1371Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 172m 1% 6897Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 168m 1% 6901Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 164m 1% 6972Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3566m 22% 6612Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1369m 8% 2182Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3637m 22% 6708Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 71m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 5538m 34% 14473Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 4445m 27% 14487Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7515m 47% 14400Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 777m 4% 2639Mi 4% 20:39:57 DEBUG --- stderr --- 20:39:57 DEBUG 20:40:56 INFO 20:40:56 INFO [loop_until]: kubectl --namespace=xlou top pods 20:40:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:40:56 INFO [loop_until]: OK (rc = 0) 20:40:56 DEBUG --- stdout --- 20:40:56 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 109m 5806Mi am-55f77847b7-dr27z 103m 5763Mi am-55f77847b7-fp459 110m 5873Mi ds-cts-0 6m 393Mi ds-cts-1 8m 378Mi ds-cts-2 9m 376Mi ds-idrepo-0 8205m 13840Mi ds-idrepo-1 5589m 13847Mi ds-idrepo-2 5141m 13875Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 3473m 5454Mi idm-65858d8c4c-4qc5l 3322m 5298Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 704m 1137Mi 20:40:56 DEBUG --- stderr --- 20:40:56 DEBUG 20:40:57 INFO 20:40:57 INFO [loop_until]: kubectl --namespace=xlou top node 20:40:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:40:57 INFO [loop_until]: OK (rc = 0) 20:40:57 DEBUG --- stdout --- 20:40:57 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1371Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 164m 1% 6898Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 166m 1% 6902Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 160m 1% 6974Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3553m 22% 6612Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1321m 8% 2181Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3753m 23% 6711Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1105Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 5385m 33% 14433Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 5689m 35% 14455Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 9044m 56% 14406Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 775m 4% 2640Mi 4% 20:40:57 DEBUG --- stderr --- 20:40:57 DEBUG 20:41:56 INFO 20:41:56 INFO [loop_until]: kubectl --namespace=xlou top pods 20:41:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:41:56 INFO [loop_until]: OK (rc = 0) 20:41:56 DEBUG --- stdout --- 20:41:56 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 111m 5807Mi am-55f77847b7-dr27z 114m 5763Mi am-55f77847b7-fp459 112m 5873Mi ds-cts-0 5m 393Mi ds-cts-1 8m 378Mi ds-cts-2 8m 376Mi ds-idrepo-0 7317m 13834Mi ds-idrepo-1 5452m 13826Mi ds-idrepo-2 5752m 13834Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 3672m 5455Mi idm-65858d8c4c-4qc5l 3316m 5299Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 690m 1137Mi 20:41:56 DEBUG --- stderr --- 20:41:56 DEBUG 20:41:57 INFO 20:41:57 INFO [loop_until]: kubectl --namespace=xlou top node 20:41:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:41:57 INFO [loop_until]: OK (rc = 0) 20:41:57 DEBUG --- stdout --- 20:41:57 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1383Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 165m 1% 6893Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 172m 1% 6898Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 167m 1% 6975Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3589m 22% 6611Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1389m 8% 2182Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3841m 24% 6710Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 78m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4648m 29% 14448Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 4987m 31% 14462Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7832m 49% 14421Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 778m 4% 2638Mi 4% 20:41:57 DEBUG --- stderr --- 20:41:57 DEBUG 20:42:56 INFO 20:42:56 INFO [loop_until]: kubectl --namespace=xlou top pods 20:42:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:42:56 INFO [loop_until]: OK (rc = 0) 20:42:56 DEBUG --- stdout --- 20:42:56 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 104m 5807Mi am-55f77847b7-dr27z 103m 5763Mi am-55f77847b7-fp459 110m 5874Mi ds-cts-0 6m 393Mi ds-cts-1 8m 378Mi ds-cts-2 14m 377Mi ds-idrepo-0 8550m 13790Mi ds-idrepo-1 5252m 13802Mi ds-idrepo-2 6503m 13689Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 3567m 5456Mi idm-65858d8c4c-4qc5l 3415m 5299Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 703m 1137Mi 20:42:56 DEBUG --- stderr --- 20:42:56 DEBUG 20:42:57 INFO 20:42:57 INFO [loop_until]: kubectl --namespace=xlou top node 20:42:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:42:58 INFO [loop_until]: OK (rc = 0) 20:42:58 DEBUG --- stdout --- 20:42:58 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1374Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 166m 1% 6897Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 166m 1% 6898Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 157m 0% 6974Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3682m 23% 6609Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 
1386m 8% 2215Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3599m 22% 6709Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 70m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 6283m 39% 14362Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 5292m 33% 14424Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 8874m 55% 14390Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 795m 5% 2641Mi 4% 20:42:58 DEBUG --- stderr --- 20:42:58 DEBUG 20:43:56 INFO 20:43:56 INFO [loop_until]: kubectl --namespace=xlou top pods 20:43:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:43:56 INFO [loop_until]: OK (rc = 0) 20:43:56 DEBUG --- stdout --- 20:43:56 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 110m 5806Mi am-55f77847b7-dr27z 111m 5763Mi am-55f77847b7-fp459 116m 5873Mi ds-cts-0 6m 393Mi ds-cts-1 8m 378Mi ds-cts-2 7m 376Mi ds-idrepo-0 7757m 13828Mi ds-idrepo-1 5647m 13764Mi ds-idrepo-2 5192m 13805Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 3426m 5455Mi idm-65858d8c4c-4qc5l 3333m 5298Mi lodemon-5798c88b8f-k2sv4 1m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 713m 1138Mi 20:43:56 DEBUG --- stderr --- 20:43:56 DEBUG 20:43:58 INFO 20:43:58 INFO [loop_until]: kubectl --namespace=xlou top node 20:43:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:43:58 INFO [loop_until]: OK (rc = 0) 20:43:58 DEBUG --- stdout --- 20:43:58 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1374Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 173m 1% 6896Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 173m 1% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 168m 1% 6974Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3637m 22% 6609Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1372m 8% 2167Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3755m 23% 6722Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 70m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 5323m 33% 14424Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 5707m 35% 14390Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7455m 46% 14305Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 782m 4% 2640Mi 4% 20:43:58 DEBUG --- stderr --- 20:43:58 DEBUG 20:44:56 INFO 20:44:56 INFO [loop_until]: kubectl --namespace=xlou top pods 20:44:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:44:56 INFO [loop_until]: OK (rc = 0) 20:44:56 DEBUG --- stdout --- 20:44:56 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 107m 5807Mi am-55f77847b7-dr27z 114m 5763Mi am-55f77847b7-fp459 114m 5873Mi ds-cts-0 5m 393Mi ds-cts-1 8m 378Mi ds-cts-2 5m 377Mi ds-idrepo-0 7953m 13802Mi ds-idrepo-1 6321m 13743Mi ds-idrepo-2 5681m 13732Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 3529m 5455Mi idm-65858d8c4c-4qc5l 3322m 5305Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 735m 1138Mi 20:44:56 DEBUG --- stderr --- 20:44:56 DEBUG 20:44:58 INFO 20:44:58 INFO [loop_until]: kubectl --namespace=xlou top node 20:44:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:44:58 INFO [loop_until]: OK (rc = 0) 20:44:58 DEBUG --- stdout --- 20:44:58 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1371Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 163m 1% 6896Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 174m 1% 
6901Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 163m 1% 6976Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3621m 22% 6614Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1332m 8% 2188Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3785m 23% 6713Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 67m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 6408m 40% 14480Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 6739m 42% 14424Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 8022m 50% 14307Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 796m 5% 2641Mi 4% 20:44:58 DEBUG --- stderr --- 20:44:58 DEBUG 20:45:57 INFO 20:45:57 INFO [loop_until]: kubectl --namespace=xlou top pods 20:45:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:45:57 INFO [loop_until]: OK (rc = 0) 20:45:57 DEBUG --- stdout --- 20:45:57 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 107m 5807Mi am-55f77847b7-dr27z 113m 5763Mi am-55f77847b7-fp459 117m 5874Mi ds-cts-0 6m 393Mi ds-cts-1 8m 378Mi ds-cts-2 6m 377Mi ds-idrepo-0 7128m 13799Mi ds-idrepo-1 5586m 13773Mi ds-idrepo-2 5249m 13780Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 3603m 5455Mi idm-65858d8c4c-4qc5l 3261m 5305Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 714m 1138Mi 20:45:57 DEBUG --- stderr --- 20:45:57 DEBUG 20:45:58 INFO 20:45:58 INFO [loop_until]: kubectl --namespace=xlou top node 20:45:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:45:58 INFO [loop_until]: OK (rc = 0) 20:45:58 DEBUG --- stdout --- 20:45:58 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1372Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 172m 1% 6897Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 171m 1% 6899Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 164m 1% 6977Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3568m 22% 6615Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1381m 8% 2179Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3816m 24% 6714Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 71m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4550m 28% 14429Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 6046m 38% 14406Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7270m 45% 14392Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 779m 4% 2640Mi 4% 20:45:58 DEBUG --- stderr --- 20:45:58 DEBUG 20:46:57 INFO 20:46:57 INFO [loop_until]: kubectl --namespace=xlou top pods 20:46:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:46:57 INFO [loop_until]: OK (rc = 0) 20:46:57 DEBUG --- stdout --- 20:46:57 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 107m 5807Mi am-55f77847b7-dr27z 104m 5764Mi am-55f77847b7-fp459 109m 5874Mi ds-cts-0 6m 393Mi ds-cts-1 9m 378Mi ds-cts-2 6m 377Mi ds-idrepo-0 7719m 13823Mi ds-idrepo-1 5216m 13693Mi ds-idrepo-2 5970m 13835Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 3542m 5455Mi idm-65858d8c4c-4qc5l 3332m 5305Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 715m 1138Mi 20:46:57 DEBUG --- stderr --- 20:46:57 DEBUG 20:46:58 INFO 20:46:58 INFO [loop_until]: kubectl --namespace=xlou top node 20:46:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:46:58 INFO [loop_until]: OK (rc = 0) 20:46:58 DEBUG --- stdout --- 20:46:58 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) 
MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1365Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 174m 1% 6898Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 165m 1% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 163m 1% 6976Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3603m 22% 6619Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1350m 8% 2178Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3829m 24% 6714Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 6253m 39% 14452Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 5642m 35% 14282Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7885m 49% 14412Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 806m 5% 2637Mi 4% 20:46:58 DEBUG --- stderr --- 20:46:58 DEBUG 20:47:57 INFO 20:47:57 INFO [loop_until]: kubectl --namespace=xlou top pods 20:47:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:47:57 INFO [loop_until]: OK (rc = 0) 20:47:57 DEBUG --- stdout --- 20:47:57 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 108m 5807Mi am-55f77847b7-dr27z 110m 5763Mi am-55f77847b7-fp459 115m 5874Mi ds-cts-0 6m 393Mi ds-cts-1 8m 378Mi ds-cts-2 5m 377Mi ds-idrepo-0 7157m 13768Mi ds-idrepo-1 5065m 13875Mi ds-idrepo-2 6203m 13869Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 3726m 5455Mi idm-65858d8c4c-4qc5l 3292m 5306Mi lodemon-5798c88b8f-k2sv4 1m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 722m 1139Mi 20:47:57 DEBUG --- stderr --- 20:47:57 DEBUG 20:47:58 INFO 20:47:58 INFO [loop_until]: kubectl --namespace=xlou top node 20:47:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:47:58 INFO [loop_until]: OK (rc = 0) 20:47:58 DEBUG --- stdout --- 20:47:58 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1367Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 172m 1% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 169m 1% 6902Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 165m 1% 6977Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3514m 22% 6620Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1392m 8% 2179Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3988m 25% 6710Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 72m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 6113m 38% 14449Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 5204m 32% 14421Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7785m 48% 14314Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 810m 5% 2640Mi 4% 20:47:58 DEBUG --- stderr --- 20:47:58 DEBUG 20:48:57 INFO 20:48:57 INFO [loop_until]: kubectl --namespace=xlou top pods 20:48:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:48:57 INFO [loop_until]: OK (rc = 0) 20:48:57 DEBUG --- stdout --- 20:48:57 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 110m 5807Mi am-55f77847b7-dr27z 107m 5763Mi am-55f77847b7-fp459 113m 5874Mi ds-cts-0 6m 393Mi ds-cts-1 8m 379Mi ds-cts-2 6m 377Mi ds-idrepo-0 7736m 13859Mi ds-idrepo-1 4971m 13825Mi ds-idrepo-2 5358m 13768Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 3517m 5455Mi idm-65858d8c4c-4qc5l 3296m 5306Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 704m 1139Mi 20:48:57 DEBUG --- stderr --- 20:48:57 DEBUG 20:48:58 INFO 20:48:58 INFO [loop_until]: kubectl --namespace=xlou top node 20:48:58 INFO 
[loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:48:58 INFO [loop_until]: OK (rc = 0) 20:48:58 DEBUG --- stdout --- 20:48:58 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1368Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 179m 1% 6910Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 166m 1% 6916Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 166m 1% 6974Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3602m 22% 6620Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1336m 8% 2175Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3755m 23% 6711Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 68m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 5737m 36% 14367Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 4818m 30% 14409Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7536m 47% 14412Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 800m 5% 2641Mi 4% 20:48:58 DEBUG --- stderr --- 20:48:58 DEBUG 20:49:57 INFO 20:49:57 INFO [loop_until]: kubectl --namespace=xlou top pods 20:49:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:49:57 INFO [loop_until]: OK (rc = 0) 20:49:57 DEBUG --- stdout --- 20:49:57 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 110m 5807Mi am-55f77847b7-dr27z 109m 5763Mi am-55f77847b7-fp459 113m 5874Mi ds-cts-0 6m 393Mi ds-cts-1 8m 378Mi ds-cts-2 6m 377Mi ds-idrepo-0 7948m 13724Mi ds-idrepo-1 5260m 13824Mi ds-idrepo-2 4847m 13684Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 3558m 5455Mi idm-65858d8c4c-4qc5l 3386m 5306Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 709m 1139Mi 20:49:57 DEBUG --- stderr --- 20:49:57 DEBUG 20:49:58 INFO 20:49:58 INFO [loop_until]: kubectl --namespace=xlou top node 20:49:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:49:58 INFO [loop_until]: OK (rc = 0) 20:49:58 DEBUG --- stdout --- 20:49:58 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1371Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 175m 1% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 168m 1% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 159m 1% 6976Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3634m 22% 6622Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1400m 8% 2184Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3910m 24% 6712Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 6718m 42% 14448Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 5314m 33% 14461Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 8599m 54% 14306Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 789m 4% 2641Mi 4% 20:49:58 DEBUG --- stderr --- 20:49:58 DEBUG 20:50:57 INFO 20:50:57 INFO [loop_until]: kubectl --namespace=xlou top pods 20:50:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:50:57 INFO [loop_until]: OK (rc = 0) 20:50:57 DEBUG --- stdout --- 20:50:57 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 107m 5808Mi am-55f77847b7-dr27z 111m 5763Mi am-55f77847b7-fp459 114m 5874Mi ds-cts-0 6m 393Mi ds-cts-1 8m 378Mi ds-cts-2 6m 377Mi ds-idrepo-0 8003m 13836Mi ds-idrepo-1 6029m 13738Mi ds-idrepo-2 5385m 13821Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 3663m 5455Mi idm-65858d8c4c-4qc5l 3220m 5306Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi 
overseer-0-58cf4b587d-f2ksh 689m 1140Mi 20:50:57 DEBUG --- stderr --- 20:50:57 DEBUG 20:50:58 INFO 20:50:58 INFO [loop_until]: kubectl --namespace=xlou top node 20:50:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:50:59 INFO [loop_until]: OK (rc = 0) 20:50:59 DEBUG --- stdout --- 20:50:59 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1369Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 175m 1% 6901Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 171m 1% 6899Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 167m 1% 6978Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3466m 21% 6618Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1326m 8% 2179Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3820m 24% 6712Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 5554m 34% 14475Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 6076m 38% 14365Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 8183m 51% 14415Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 738m 4% 2642Mi 4% 20:50:59 DEBUG --- stderr --- 20:50:59 DEBUG 20:51:57 INFO 20:51:57 INFO [loop_until]: kubectl --namespace=xlou top pods 20:51:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:51:57 INFO [loop_until]: OK (rc = 0) 20:51:57 DEBUG --- stdout --- 20:51:57 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 102m 5808Mi am-55f77847b7-dr27z 108m 5763Mi am-55f77847b7-fp459 116m 5874Mi ds-cts-0 6m 393Mi ds-cts-1 8m 378Mi ds-cts-2 6m 377Mi ds-idrepo-0 6727m 13825Mi ds-idrepo-1 5656m 13721Mi ds-idrepo-2 6669m 13800Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 3599m 5454Mi idm-65858d8c4c-4qc5l 3248m 5305Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 713m 1140Mi 20:51:57 DEBUG --- stderr --- 20:51:57 DEBUG 20:51:59 INFO 20:51:59 INFO [loop_until]: kubectl --namespace=xlou top node 20:51:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:51:59 INFO [loop_until]: OK (rc = 0) 20:51:59 DEBUG --- stdout --- 20:51:59 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1371Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 174m 1% 6895Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 168m 1% 6899Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 161m 1% 6975Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3528m 22% 6616Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1373m 8% 2179Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3842m 24% 6711Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 72m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 6427m 40% 14439Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 5700m 35% 14354Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 6552m 41% 14408Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 800m 5% 2641Mi 4% 20:51:59 DEBUG --- stderr --- 20:51:59 DEBUG 20:52:57 INFO 20:52:57 INFO [loop_until]: kubectl --namespace=xlou top pods 20:52:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:52:57 INFO [loop_until]: OK (rc = 0) 20:52:57 DEBUG --- stdout --- 20:52:57 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 110m 5808Mi am-55f77847b7-dr27z 114m 5763Mi am-55f77847b7-fp459 119m 5874Mi ds-cts-0 15m 399Mi ds-cts-1 8m 378Mi ds-cts-2 6m 377Mi ds-idrepo-0 7746m 13777Mi ds-idrepo-1 6544m 13801Mi ds-idrepo-2 5557m 
13790Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 3651m 5454Mi idm-65858d8c4c-4qc5l 3259m 5305Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 720m 1140Mi 20:52:57 DEBUG --- stderr --- 20:52:57 DEBUG 20:52:59 INFO 20:52:59 INFO [loop_until]: kubectl --namespace=xlou top node 20:52:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:52:59 INFO [loop_until]: OK (rc = 0) 20:52:59 DEBUG --- stdout --- 20:52:59 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1375Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 177m 1% 6899Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 171m 1% 6902Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 170m 1% 6976Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3548m 22% 6616Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1330m 8% 2171Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3889m 24% 6713Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 66m 0% 1135Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 5817m 36% 14449Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 6286m 39% 14434Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7967m 50% 14373Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 778m 4% 2637Mi 4% 20:52:59 DEBUG --- stderr --- 20:52:59 DEBUG 20:53:57 INFO 20:53:57 INFO [loop_until]: kubectl --namespace=xlou top pods 20:53:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:53:57 INFO [loop_until]: OK (rc = 0) 20:53:57 DEBUG --- stdout --- 20:53:57 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 107m 5808Mi am-55f77847b7-dr27z 109m 5764Mi am-55f77847b7-fp459 113m 5874Mi ds-cts-0 7m 393Mi ds-cts-1 8m 378Mi ds-cts-2 6m 377Mi ds-idrepo-0 7784m 13839Mi ds-idrepo-1 5482m 13827Mi ds-idrepo-2 4323m 13798Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 3532m 5456Mi idm-65858d8c4c-4qc5l 3363m 5307Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 742m 1141Mi 20:53:57 DEBUG --- stderr --- 20:53:57 DEBUG 20:53:59 INFO 20:53:59 INFO [loop_until]: kubectl --namespace=xlou top node 20:53:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:53:59 INFO [loop_until]: OK (rc = 0) 20:53:59 DEBUG --- stdout --- 20:53:59 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1373Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 170m 1% 6898Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 169m 1% 6901Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 158m 0% 6976Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3489m 21% 6619Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1382m 8% 2176Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3785m 23% 6712Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4134m 26% 14456Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 5661m 35% 14455Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7690m 48% 14424Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 776m 4% 2640Mi 4% 20:53:59 DEBUG --- stderr --- 20:53:59 DEBUG 20:54:57 INFO 20:54:57 INFO [loop_until]: kubectl --namespace=xlou top pods 20:54:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:54:58 INFO [loop_until]: OK (rc = 0) 20:54:58 DEBUG --- stdout --- 20:54:58 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 108m 5808Mi 
am-55f77847b7-dr27z 108m 5764Mi am-55f77847b7-fp459 114m 5875Mi ds-cts-0 6m 393Mi ds-cts-1 8m 379Mi ds-cts-2 7m 377Mi ds-idrepo-0 8637m 13848Mi ds-idrepo-1 5884m 13677Mi ds-idrepo-2 4714m 13813Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 3701m 5456Mi idm-65858d8c4c-4qc5l 3173m 5307Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 730m 1141Mi 20:54:58 DEBUG --- stderr --- 20:54:58 DEBUG 20:54:59 INFO 20:54:59 INFO [loop_until]: kubectl --namespace=xlou top node 20:54:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:54:59 INFO [loop_until]: OK (rc = 0) 20:54:59 DEBUG --- stdout --- 20:54:59 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1372Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 172m 1% 6899Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 175m 1% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 164m 1% 6972Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3545m 22% 6619Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1353m 8% 2179Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3883m 24% 6714Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 75m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4892m 30% 14430Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 5760m 36% 14330Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 8642m 54% 14441Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 801m 5% 2640Mi 4% 20:54:59 DEBUG --- stderr --- 20:54:59 DEBUG 20:55:58 INFO 20:55:58 INFO [loop_until]: kubectl --namespace=xlou top pods 20:55:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:55:58 INFO [loop_until]: OK (rc = 0) 20:55:58 DEBUG --- stdout --- 20:55:58 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 111m 5808Mi am-55f77847b7-dr27z 112m 5764Mi am-55f77847b7-fp459 113m 5875Mi ds-cts-0 7m 393Mi ds-cts-1 9m 379Mi ds-cts-2 5m 377Mi ds-idrepo-0 7748m 13765Mi ds-idrepo-1 4702m 13767Mi ds-idrepo-2 4655m 13855Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 3614m 5457Mi idm-65858d8c4c-4qc5l 3445m 5308Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 748m 1142Mi 20:55:58 DEBUG --- stderr --- 20:55:58 DEBUG 20:55:59 INFO 20:55:59 INFO [loop_until]: kubectl --namespace=xlou top node 20:55:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:55:59 INFO [loop_until]: OK (rc = 0) 20:55:59 DEBUG --- stdout --- 20:55:59 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1369Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 170m 1% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 165m 1% 6903Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 166m 1% 6972Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3627m 22% 6619Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1388m 8% 2176Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3826m 24% 6713Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 70m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4673m 29% 14455Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 4851m 30% 14418Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7454m 46% 14365Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 809m 5% 2640Mi 4% 20:55:59 DEBUG --- stderr --- 20:55:59 DEBUG 20:56:58 INFO 20:56:58 INFO [loop_until]: kubectl --namespace=xlou top pods 20:56:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:56:58 
INFO [loop_until]: OK (rc = 0) 20:56:58 DEBUG --- stdout --- 20:56:58 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 108m 5808Mi am-55f77847b7-dr27z 110m 5764Mi am-55f77847b7-fp459 106m 5875Mi ds-cts-0 7m 395Mi ds-cts-1 9m 379Mi ds-cts-2 6m 377Mi ds-idrepo-0 7734m 13872Mi ds-idrepo-1 4530m 13766Mi ds-idrepo-2 5003m 13792Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 3382m 5456Mi idm-65858d8c4c-4qc5l 3291m 5308Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 750m 1141Mi 20:56:58 DEBUG --- stderr --- 20:56:58 DEBUG 20:56:59 INFO 20:56:59 INFO [loop_until]: kubectl --namespace=xlou top node 20:56:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:56:59 INFO [loop_until]: OK (rc = 0) 20:56:59 DEBUG --- stdout --- 20:56:59 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1370Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 168m 1% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 173m 1% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 161m 1% 6976Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 3444m 21% 6618Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1354m 8% 2177Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3604m 22% 6714Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 5050m 31% 14454Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 5151m 32% 14449Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7362m 46% 14421Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 778m 4% 2644Mi 4% 20:56:59 DEBUG --- stderr --- 20:56:59 DEBUG 20:57:58 INFO 20:57:58 INFO [loop_until]: kubectl --namespace=xlou top pods 20:57:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:57:58 INFO [loop_until]: OK (rc = 0) 20:57:58 DEBUG --- stdout --- 20:57:58 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 33m 5809Mi am-55f77847b7-dr27z 34m 5764Mi am-55f77847b7-fp459 28m 5875Mi ds-cts-0 6m 394Mi ds-cts-1 9m 379Mi ds-cts-2 7m 377Mi ds-idrepo-0 2140m 13812Mi ds-idrepo-1 3159m 13737Mi ds-idrepo-2 2560m 13798Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 688m 5456Mi idm-65858d8c4c-4qc5l 796m 5307Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 292m 1141Mi 20:57:58 DEBUG --- stderr --- 20:57:58 DEBUG 20:57:59 INFO 20:57:59 INFO [loop_until]: kubectl --namespace=xlou top node 20:57:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:57:59 INFO [loop_until]: OK (rc = 0) 20:57:59 DEBUG --- stdout --- 20:57:59 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1367Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 86m 0% 6898Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 6904Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 6973Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 6619Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 419m 2% 2176Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 332m 2% 6713Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 73m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1961m 12% 14440Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 1782m 11% 14387Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 1924m 12% 14404Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 308m 1% 2643Mi 4% 20:57:59 DEBUG --- stderr --- 20:57:59 
DEBUG 20:58:58 INFO 20:58:58 INFO [loop_until]: kubectl --namespace=xlou top pods 20:58:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:58:58 INFO [loop_until]: OK (rc = 0) 20:58:58 DEBUG --- stdout --- 20:58:58 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 10m 5809Mi am-55f77847b7-dr27z 11m 5765Mi am-55f77847b7-fp459 9m 5875Mi ds-cts-0 12m 393Mi ds-cts-1 8m 378Mi ds-cts-2 5m 377Mi ds-idrepo-0 11m 13753Mi ds-idrepo-1 11m 13731Mi ds-idrepo-2 10m 13797Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 7m 5455Mi idm-65858d8c4c-4qc5l 6m 5306Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 1m 223Mi 20:58:58 DEBUG --- stderr --- 20:58:58 DEBUG 20:59:00 INFO 20:59:00 INFO [loop_until]: kubectl --namespace=xlou top node 20:59:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:59:00 INFO [loop_until]: OK (rc = 0) 20:59:00 DEBUG --- stdout --- 20:59:00 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1372Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 6900Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 6903Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 6976Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 6622Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 124m 0% 2177Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 72m 0% 6712Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 68m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 64m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 60m 0% 14429Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 14379Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 59m 0% 14342Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 67m 0% 1733Mi 2% 20:59:00 DEBUG --- stderr --- 20:59:00 DEBUG 127.0.0.1 - - [12/Aug/2023 20:59:44] "GET /monitoring/average?start_time=23-08-12_19:29:06&stop_time=23-08-12_19:57:44 HTTP/1.1" 200 - 20:59:58 INFO 20:59:58 INFO [loop_until]: kubectl --namespace=xlou top pods 20:59:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 20:59:58 INFO [loop_until]: OK (rc = 0) 20:59:58 DEBUG --- stdout --- 20:59:58 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 8m 5808Mi am-55f77847b7-dr27z 9m 5764Mi am-55f77847b7-fp459 9m 5875Mi ds-cts-0 6m 393Mi ds-cts-1 8m 378Mi ds-cts-2 6m 377Mi ds-idrepo-0 9m 13753Mi ds-idrepo-1 10m 13731Mi ds-idrepo-2 10m 13798Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 6m 5455Mi idm-65858d8c4c-4qc5l 6m 5306Mi lodemon-5798c88b8f-k2sv4 3m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 528m 579Mi 20:59:58 DEBUG --- stderr --- 20:59:58 DEBUG 21:00:00 INFO 21:00:00 INFO [loop_until]: kubectl --namespace=xlou top node 21:00:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:00:00 INFO [loop_until]: OK (rc = 0) 21:00:00 DEBUG --- stdout --- 21:00:00 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1376Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 6899Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 6903Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 6973Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 6621Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 121m 0% 2178Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 6714Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 71m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1131Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-b374 61m 0% 14427Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 14382Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 55m 0% 14344Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 265m 1% 1937Mi 3% 21:00:00 DEBUG --- stderr --- 21:00:00 DEBUG 21:00:58 INFO 21:00:58 INFO [loop_until]: kubectl --namespace=xlou top pods 21:00:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:00:58 INFO [loop_until]: OK (rc = 0) 21:00:58 DEBUG --- stdout --- 21:00:58 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 282m 5811Mi am-55f77847b7-dr27z 306m 5767Mi am-55f77847b7-fp459 285m 5860Mi ds-cts-0 6m 393Mi ds-cts-1 8m 380Mi ds-cts-2 5m 377Mi ds-idrepo-0 7891m 13823Mi ds-idrepo-1 3756m 13848Mi ds-idrepo-2 4553m 13824Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 2213m 5473Mi idm-65858d8c4c-4qc5l 2234m 5324Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 1040m 820Mi 21:00:58 DEBUG --- stderr --- 21:00:58 DEBUG 21:01:00 INFO 21:01:00 INFO [loop_until]: kubectl --namespace=xlou top node 21:01:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:01:00 INFO [loop_until]: OK (rc = 0) 21:01:00 DEBUG --- stdout --- 21:01:00 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1367Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 374m 2% 6885Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 360m 2% 6905Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 348m 2% 6979Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2557m 16% 6637Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1190m 7% 2177Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2634m 16% 6729Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4768m 30% 14463Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 3909m 24% 14479Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7378m 46% 14437Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1128m 7% 2322Mi 3% 21:01:00 DEBUG --- stderr --- 21:01:00 DEBUG 21:01:58 INFO 21:01:58 INFO [loop_until]: kubectl --namespace=xlou top pods 21:01:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:01:58 INFO [loop_until]: OK (rc = 0) 21:01:58 DEBUG --- stdout --- 21:01:58 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 291m 5811Mi am-55f77847b7-dr27z 304m 5768Mi am-55f77847b7-fp459 346m 5872Mi ds-cts-0 6m 394Mi ds-cts-1 8m 378Mi ds-cts-2 8m 377Mi ds-idrepo-0 9793m 13822Mi ds-idrepo-1 6179m 13865Mi ds-idrepo-2 6005m 13815Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 2526m 5473Mi idm-65858d8c4c-4qc5l 2370m 5320Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 758m 835Mi 21:01:58 DEBUG --- stderr --- 21:01:58 DEBUG 21:02:00 INFO 21:02:00 INFO [loop_until]: kubectl --namespace=xlou top node 21:02:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:02:00 INFO [loop_until]: OK (rc = 0) 21:02:00 DEBUG --- stdout --- 21:02:00 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1370Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 397m 2% 6896Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 363m 2% 6903Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 365m 2% 6978Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2578m 16% 6638Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1356m 8% 2179Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 
2748m 17% 6729Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 70m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 5282m 33% 14468Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 6082m 38% 14479Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 9869m 62% 14392Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 811m 5% 2335Mi 3% 21:02:00 DEBUG --- stderr --- 21:02:00 DEBUG 21:02:58 INFO 21:02:58 INFO [loop_until]: kubectl --namespace=xlou top pods 21:02:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:02:58 INFO [loop_until]: OK (rc = 0) 21:02:58 DEBUG --- stdout --- 21:02:58 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 287m 5811Mi am-55f77847b7-dr27z 287m 5768Mi am-55f77847b7-fp459 289m 5872Mi ds-cts-0 6m 393Mi ds-cts-1 8m 379Mi ds-cts-2 6m 377Mi ds-idrepo-0 8493m 13836Mi ds-idrepo-1 5699m 13773Mi ds-idrepo-2 4553m 13823Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 2439m 5473Mi idm-65858d8c4c-4qc5l 2321m 5321Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 730m 837Mi 21:02:58 DEBUG --- stderr --- 21:02:58 DEBUG 21:03:00 INFO 21:03:00 INFO [loop_until]: kubectl --namespace=xlou top node 21:03:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:03:00 INFO [loop_until]: OK (rc = 0) 21:03:00 DEBUG --- stdout --- 21:03:00 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1371Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 348m 2% 6897Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 342m 2% 6908Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 342m 2% 6979Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2528m 15% 6639Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1367m 8% 2181Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2696m 16% 6728Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 70m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4909m 30% 14473Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 5671m 35% 14491Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 8361m 52% 14431Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 835m 5% 2339Mi 3% 21:03:00 DEBUG --- stderr --- 21:03:00 DEBUG 21:03:58 INFO 21:03:58 INFO [loop_until]: kubectl --namespace=xlou top pods 21:03:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:03:59 INFO [loop_until]: OK (rc = 0) 21:03:59 DEBUG --- stdout --- 21:03:59 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 279m 5815Mi am-55f77847b7-dr27z 291m 5772Mi am-55f77847b7-fp459 291m 5872Mi ds-cts-0 6m 394Mi ds-cts-1 10m 379Mi ds-cts-2 5m 377Mi ds-idrepo-0 8919m 13822Mi ds-idrepo-1 5096m 13829Mi ds-idrepo-2 6684m 13711Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 2497m 5473Mi idm-65858d8c4c-4qc5l 2265m 5321Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 765m 837Mi 21:03:59 DEBUG --- stderr --- 21:03:59 DEBUG 21:04:00 INFO 21:04:00 INFO [loop_until]: kubectl --namespace=xlou top node 21:04:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:04:00 INFO [loop_until]: OK (rc = 0) 21:04:00 DEBUG --- stdout --- 21:04:00 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1372Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 356m 2% 6892Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 351m 2% 6911Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 339m 2% 
6983Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2542m 15% 6634Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1346m 8% 2174Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2778m 17% 6731Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 71m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 6493m 40% 14382Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 5111m 32% 14465Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 9062m 57% 14437Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 827m 5% 2340Mi 3% 21:04:00 DEBUG --- stderr --- 21:04:00 DEBUG 21:04:59 INFO 21:04:59 INFO [loop_until]: kubectl --namespace=xlou top pods 21:04:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:04:59 INFO [loop_until]: OK (rc = 0) 21:04:59 DEBUG --- stdout --- 21:04:59 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 31m 5815Mi am-55f77847b7-dr27z 47m 5772Mi am-55f77847b7-fp459 38m 5882Mi ds-cts-0 6m 393Mi ds-cts-1 8m 378Mi ds-cts-2 6m 377Mi ds-idrepo-0 8510m 13823Mi ds-idrepo-1 4730m 13821Mi ds-idrepo-2 5955m 13808Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 2583m 5475Mi idm-65858d8c4c-4qc5l 2460m 5323Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 846m 864Mi 21:04:59 DEBUG --- stderr --- 21:04:59 DEBUG 21:05:00 INFO 21:05:00 INFO [loop_until]: kubectl --namespace=xlou top node 21:05:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:05:00 INFO [loop_until]: OK (rc = 0) 21:05:00 DEBUG --- stdout --- 21:05:00 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1373Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 102m 0% 6905Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 104m 0% 6910Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 94m 0% 6980Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2629m 16% 6633Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1428m 8% 2193Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2762m 17% 6728Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 5589m 35% 14481Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 4589m 28% 14487Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 8257m 51% 14390Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 892m 5% 2362Mi 4% 21:05:00 DEBUG --- stderr --- 21:05:00 DEBUG 21:05:59 INFO 21:05:59 INFO [loop_until]: kubectl --namespace=xlou top pods 21:05:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:05:59 INFO [loop_until]: OK (rc = 0) 21:05:59 DEBUG --- stdout --- 21:05:59 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 221m 5817Mi am-55f77847b7-dr27z 247m 5775Mi am-55f77847b7-fp459 274m 5883Mi ds-cts-0 6m 394Mi ds-cts-1 8m 379Mi ds-cts-2 6m 377Mi ds-idrepo-0 9560m 13667Mi ds-idrepo-1 5201m 13870Mi ds-idrepo-2 5199m 13823Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 2334m 5475Mi idm-65858d8c4c-4qc5l 2347m 5323Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 812m 866Mi 21:05:59 DEBUG --- stderr --- 21:05:59 DEBUG 21:06:00 INFO 21:06:00 INFO [loop_until]: kubectl --namespace=xlou top node 21:06:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:06:01 INFO [loop_until]: OK (rc = 0) 21:06:01 DEBUG --- stdout --- 21:06:01 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1369Mi 2% 
gke-xlou-cdm-default-pool-f05840a3-5pbc 247m 1% 6909Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 282m 1% 6912Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 199m 1% 6979Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2609m 16% 6636Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1337m 8% 2227Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2610m 16% 6729Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 70m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1134Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4833m 30% 14469Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 5269m 33% 14546Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 9806m 61% 14285Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 863m 5% 2365Mi 4% 21:06:01 DEBUG --- stderr --- 21:06:01 DEBUG 21:06:59 INFO 21:06:59 INFO [loop_until]: kubectl --namespace=xlou top pods 21:06:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:06:59 INFO [loop_until]: OK (rc = 0) 21:06:59 DEBUG --- stdout --- 21:06:59 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 301m 5819Mi am-55f77847b7-dr27z 275m 5775Mi am-55f77847b7-fp459 248m 5883Mi ds-cts-0 6m 394Mi ds-cts-1 12m 378Mi ds-cts-2 5m 377Mi ds-idrepo-0 9295m 13771Mi ds-idrepo-1 5422m 13881Mi ds-idrepo-2 5362m 13849Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 2406m 5475Mi idm-65858d8c4c-4qc5l 2390m 5323Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 784m 865Mi 21:06:59 DEBUG --- stderr --- 21:06:59 DEBUG 21:07:01 INFO 21:07:01 INFO [loop_until]: kubectl --namespace=xlou top node 21:07:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:07:01 INFO [loop_until]: OK (rc = 0) 21:07:01 DEBUG --- stdout --- 21:07:01 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1369Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 328m 2% 6908Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 321m 2% 6913Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 391m 2% 6982Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2683m 16% 6637Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1358m 8% 2184Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2556m 16% 6728Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 73m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 5662m 35% 14470Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 5365m 33% 14554Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 9241m 58% 14352Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 817m 5% 2364Mi 4% 21:07:01 DEBUG --- stderr --- 21:07:01 DEBUG 21:07:59 INFO 21:07:59 INFO [loop_until]: kubectl --namespace=xlou top pods 21:07:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:07:59 INFO [loop_until]: OK (rc = 0) 21:07:59 DEBUG --- stdout --- 21:07:59 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 114m 5819Mi am-55f77847b7-dr27z 109m 5777Mi am-55f77847b7-fp459 191m 5883Mi ds-cts-0 6m 394Mi ds-cts-1 8m 378Mi ds-cts-2 6m 377Mi ds-idrepo-0 9378m 13829Mi ds-idrepo-1 4834m 13862Mi ds-idrepo-2 5065m 13779Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 2391m 5476Mi idm-65858d8c4c-4qc5l 2465m 5323Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 779m 868Mi 21:07:59 DEBUG --- stderr --- 21:07:59 DEBUG 21:08:01 INFO 21:08:01 INFO [loop_until]: kubectl --namespace=xlou top node 21:08:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:08:01 INFO 
[loop_until]: OK (rc = 0) 21:08:01 DEBUG --- stdout --- 21:08:01 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1372Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 186m 1% 6907Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 216m 1% 6914Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 225m 1% 6984Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2666m 16% 6638Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1355m 8% 2218Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2736m 17% 6729Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 72m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 5159m 32% 14462Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 4364m 27% 14506Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 9763m 61% 14443Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 871m 5% 2375Mi 4% 21:08:01 DEBUG --- stderr --- 21:08:01 DEBUG 21:08:59 INFO 21:08:59 INFO [loop_until]: kubectl --namespace=xlou top pods 21:08:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:08:59 INFO [loop_until]: OK (rc = 0) 21:08:59 DEBUG --- stdout --- 21:08:59 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 321m 5819Mi am-55f77847b7-dr27z 270m 5777Mi am-55f77847b7-fp459 342m 5887Mi ds-cts-0 6m 394Mi ds-cts-1 8m 378Mi ds-cts-2 5m 377Mi ds-idrepo-0 8563m 13751Mi ds-idrepo-1 6059m 13822Mi ds-idrepo-2 5483m 13822Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 2554m 5475Mi idm-65858d8c4c-4qc5l 2333m 5323Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 770m 868Mi 21:08:59 DEBUG --- stderr --- 21:08:59 DEBUG 21:09:01 INFO 21:09:01 INFO [loop_until]: kubectl --namespace=xlou top node 21:09:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:09:01 INFO [loop_until]: OK (rc = 0) 21:09:01 DEBUG --- stdout --- 21:09:01 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1371Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 372m 2% 6910Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 355m 2% 6913Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 368m 2% 6997Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2612m 16% 6637Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1364m 8% 2178Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2734m 17% 6728Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 5615m 35% 14484Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 6012m 37% 14487Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 8477m 53% 14392Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 867m 5% 2368Mi 4% 21:09:01 DEBUG --- stderr --- 21:09:01 DEBUG 21:09:59 INFO 21:09:59 INFO [loop_until]: kubectl --namespace=xlou top pods 21:09:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:09:59 INFO [loop_until]: OK (rc = 0) 21:09:59 DEBUG --- stdout --- 21:09:59 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 317m 5820Mi am-55f77847b7-dr27z 329m 5780Mi am-55f77847b7-fp459 289m 5886Mi ds-cts-0 6m 393Mi ds-cts-1 8m 378Mi ds-cts-2 6m 377Mi ds-idrepo-0 8351m 13759Mi ds-idrepo-1 5926m 13761Mi ds-idrepo-2 6100m 13653Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 2411m 5475Mi idm-65858d8c4c-4qc5l 2241m 5323Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 810m 869Mi 21:09:59 DEBUG --- stderr --- 
21:09:59 DEBUG 21:10:01 INFO 21:10:01 INFO [loop_until]: kubectl --namespace=xlou top node 21:10:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:10:01 INFO [loop_until]: OK (rc = 0) 21:10:01 DEBUG --- stdout --- 21:10:01 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1369Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 347m 2% 6906Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 331m 2% 6913Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 379m 2% 6985Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2542m 15% 6637Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1331m 8% 2177Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2703m 17% 6726Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 68m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 6358m 40% 14386Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 5624m 35% 14480Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 9137m 57% 14259Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 834m 5% 2370Mi 4% 21:10:01 DEBUG --- stderr --- 21:10:01 DEBUG 21:10:59 INFO 21:10:59 INFO [loop_until]: kubectl --namespace=xlou top pods 21:10:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:10:59 INFO [loop_until]: OK (rc = 0) 21:10:59 DEBUG --- stdout --- 21:10:59 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 255m 5819Mi am-55f77847b7-dr27z 242m 5779Mi am-55f77847b7-fp459 326m 5888Mi ds-cts-0 6m 397Mi ds-cts-1 8m 379Mi ds-cts-2 6m 377Mi ds-idrepo-0 10447m 13740Mi ds-idrepo-1 4198m 13808Mi ds-idrepo-2 6134m 13848Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 2521m 5475Mi idm-65858d8c4c-4qc5l 2451m 5323Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 738m 869Mi 21:10:59 DEBUG --- stderr --- 21:10:59 DEBUG 21:11:01 INFO 21:11:01 INFO [loop_until]: kubectl --namespace=xlou top node 21:11:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:11:01 INFO [loop_until]: OK (rc = 0) 21:11:01 DEBUG --- stdout --- 21:11:01 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1372Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 390m 2% 6909Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 304m 1% 6917Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 301m 1% 6987Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2686m 16% 6638Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1382m 8% 2182Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2726m 17% 6728Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 6125m 38% 14448Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 4134m 26% 14510Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 9950m 62% 14411Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 826m 5% 2366Mi 4% 21:11:01 DEBUG --- stderr --- 21:11:01 DEBUG 21:11:59 INFO 21:11:59 INFO [loop_until]: kubectl --namespace=xlou top pods 21:11:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:11:59 INFO [loop_until]: OK (rc = 0) 21:11:59 DEBUG --- stdout --- 21:11:59 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 290m 5819Mi am-55f77847b7-dr27z 295m 5779Mi am-55f77847b7-fp459 301m 5888Mi ds-cts-0 5m 396Mi ds-cts-1 9m 378Mi ds-cts-2 5m 377Mi ds-idrepo-0 7354m 13854Mi ds-idrepo-1 5985m 13658Mi ds-idrepo-2 5884m 13777Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 2476m 
5475Mi idm-65858d8c4c-4qc5l 2295m 5323Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 730m 870Mi 21:11:59 DEBUG --- stderr --- 21:11:59 DEBUG 21:12:01 INFO 21:12:01 INFO [loop_until]: kubectl --namespace=xlou top node 21:12:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:12:01 INFO [loop_until]: OK (rc = 0) 21:12:01 DEBUG --- stdout --- 21:12:01 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1372Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 358m 2% 6912Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 352m 2% 6916Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 335m 2% 6988Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2534m 15% 6648Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1335m 8% 2180Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2722m 17% 6728Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 68m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1136Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 5748m 36% 14509Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 5336m 33% 14430Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7493m 47% 14475Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 803m 5% 2367Mi 4% 21:12:01 DEBUG --- stderr --- 21:12:01 DEBUG 21:12:59 INFO 21:12:59 INFO [loop_until]: kubectl --namespace=xlou top pods 21:12:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:13:00 INFO [loop_until]: OK (rc = 0) 21:13:00 DEBUG --- stdout --- 21:13:00 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 267m 5821Mi am-55f77847b7-dr27z 303m 5781Mi am-55f77847b7-fp459 282m 5888Mi ds-cts-0 6m 396Mi ds-cts-1 9m 378Mi ds-cts-2 8m 378Mi ds-idrepo-0 8793m 13798Mi ds-idrepo-1 6493m 13697Mi ds-idrepo-2 5348m 13764Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 2440m 5476Mi idm-65858d8c4c-4qc5l 2283m 5323Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 735m 870Mi 21:13:00 DEBUG --- stderr --- 21:13:00 DEBUG 21:13:01 INFO 21:13:01 INFO [loop_until]: kubectl --namespace=xlou top node 21:13:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:13:01 INFO [loop_until]: OK (rc = 0) 21:13:01 DEBUG --- stdout --- 21:13:01 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1380Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 345m 2% 6912Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 321m 2% 6917Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 326m 2% 6988Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2533m 15% 6636Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1385m 8% 2177Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2772m 17% 6728Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 71m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4835m 30% 14432Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 5976m 37% 14420Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 8894m 55% 14440Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 794m 4% 2366Mi 4% 21:13:01 DEBUG --- stderr --- 21:13:01 DEBUG 21:14:00 INFO 21:14:00 INFO [loop_until]: kubectl --namespace=xlou top pods 21:14:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:14:00 INFO [loop_until]: OK (rc = 0) 21:14:00 DEBUG --- stdout --- 21:14:00 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 272m 5821Mi am-55f77847b7-dr27z 318m 5781Mi am-55f77847b7-fp459 323m 5891Mi ds-cts-0 6m 396Mi 
ds-cts-1 9m 379Mi ds-cts-2 5m 377Mi ds-idrepo-0 8289m 13743Mi ds-idrepo-1 5087m 13859Mi ds-idrepo-2 3894m 13749Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 2492m 5475Mi idm-65858d8c4c-4qc5l 2242m 5324Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 732m 870Mi 21:14:00 DEBUG --- stderr --- 21:14:00 DEBUG 21:14:01 INFO 21:14:01 INFO [loop_until]: kubectl --namespace=xlou top node 21:14:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:14:01 INFO [loop_until]: OK (rc = 0) 21:14:01 DEBUG --- stdout --- 21:14:01 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1373Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 375m 2% 6914Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 341m 2% 6918Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 334m 2% 6987Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2557m 16% 6633Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1316m 8% 2180Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2658m 16% 6729Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 70m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1134Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4111m 25% 14387Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 4625m 29% 14494Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 8469m 53% 14407Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 798m 5% 2366Mi 4% 21:14:01 DEBUG --- stderr --- 21:14:01 DEBUG 21:15:00 INFO 21:15:00 INFO [loop_until]: kubectl --namespace=xlou top pods 21:15:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:15:00 INFO [loop_until]: OK (rc = 0) 21:15:00 DEBUG --- stdout --- 21:15:00 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 196m 5822Mi am-55f77847b7-dr27z 191m 5781Mi am-55f77847b7-fp459 218m 5890Mi ds-cts-0 6m 396Mi ds-cts-1 8m 378Mi ds-cts-2 6m 377Mi ds-idrepo-0 9703m 13824Mi ds-idrepo-1 4659m 13783Mi ds-idrepo-2 4547m 13811Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 2449m 5477Mi idm-65858d8c4c-4qc5l 2273m 5324Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 730m 872Mi 21:15:00 DEBUG --- stderr --- 21:15:00 DEBUG 21:15:01 INFO 21:15:01 INFO [loop_until]: kubectl --namespace=xlou top node 21:15:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:15:02 INFO [loop_until]: OK (rc = 0) 21:15:02 DEBUG --- stdout --- 21:15:02 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1372Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 190m 1% 6912Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 216m 1% 6921Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 244m 1% 6988Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2505m 15% 6636Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1390m 8% 2201Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2675m 16% 6730Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 70m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1135Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4353m 27% 14497Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 4830m 30% 14478Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 9861m 62% 14451Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 810m 5% 2370Mi 4% 21:15:02 DEBUG --- stderr --- 21:15:02 DEBUG 21:16:00 INFO 21:16:00 INFO [loop_until]: kubectl --namespace=xlou top pods 21:16:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:16:00 INFO [loop_until]: OK (rc = 0) 21:16:00 DEBUG --- stdout --- 21:16:00 DEBUG NAME 
CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 378m 5822Mi am-55f77847b7-dr27z 372m 5783Mi am-55f77847b7-fp459 418m 5892Mi ds-cts-0 5m 396Mi ds-cts-1 8m 379Mi ds-cts-2 6m 378Mi ds-idrepo-0 8611m 13838Mi ds-idrepo-1 6133m 13755Mi ds-idrepo-2 6086m 13802Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 2476m 5476Mi idm-65858d8c4c-4qc5l 2291m 5324Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 750m 871Mi 21:16:00 DEBUG --- stderr --- 21:16:00 DEBUG 21:16:02 INFO 21:16:02 INFO [loop_until]: kubectl --namespace=xlou top node 21:16:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:16:02 INFO [loop_until]: OK (rc = 0) 21:16:02 DEBUG --- stdout --- 21:16:02 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1371Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 488m 3% 6916Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 436m 2% 6921Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 448m 2% 6991Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2460m 15% 6636Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1340m 8% 2168Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2735m 17% 6729Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1135Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 5878m 36% 14518Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 6217m 39% 14440Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 9006m 56% 14460Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 799m 5% 2371Mi 4% 21:16:02 DEBUG --- stderr --- 21:16:02 DEBUG 21:17:00 INFO 21:17:00 INFO [loop_until]: kubectl --namespace=xlou top pods 21:17:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:17:00 INFO [loop_until]: OK (rc = 0) 21:17:00 DEBUG --- stdout --- 21:17:00 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 279m 5822Mi am-55f77847b7-dr27z 278m 5783Mi am-55f77847b7-fp459 284m 5892Mi ds-cts-0 6m 396Mi ds-cts-1 9m 378Mi ds-cts-2 6m 378Mi ds-idrepo-0 6738m 13808Mi ds-idrepo-1 4322m 13810Mi ds-idrepo-2 4376m 13824Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 2442m 5476Mi idm-65858d8c4c-4qc5l 2318m 5324Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 720m 871Mi 21:17:00 DEBUG --- stderr --- 21:17:00 DEBUG 21:17:02 INFO 21:17:02 INFO [loop_until]: kubectl --namespace=xlou top node 21:17:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:17:02 INFO [loop_until]: OK (rc = 0) 21:17:02 DEBUG --- stdout --- 21:17:02 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1373Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 352m 2% 6914Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 330m 2% 6920Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 342m 2% 6990Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2508m 15% 6636Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1378m 8% 2179Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2719m 17% 6730Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 67m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4842m 30% 14494Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 5396m 33% 14507Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 8352m 52% 14460Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 791m 4% 2366Mi 4% 21:17:02 DEBUG --- stderr --- 21:17:02 DEBUG 21:18:00 INFO 21:18:00 INFO [loop_until]: kubectl --namespace=xlou 
top pods 21:18:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:18:00 INFO [loop_until]: OK (rc = 0) 21:18:00 DEBUG --- stdout --- 21:18:00 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 282m 5824Mi am-55f77847b7-dr27z 272m 5784Mi am-55f77847b7-fp459 288m 5892Mi ds-cts-0 6m 397Mi ds-cts-1 8m 379Mi ds-cts-2 5m 378Mi ds-idrepo-0 9073m 13831Mi ds-idrepo-1 3601m 13823Mi ds-idrepo-2 3493m 13814Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 2437m 5476Mi idm-65858d8c4c-4qc5l 2278m 5324Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 693m 871Mi 21:18:00 DEBUG --- stderr --- 21:18:00 DEBUG 21:18:02 INFO 21:18:02 INFO [loop_until]: kubectl --namespace=xlou top node 21:18:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:18:02 INFO [loop_until]: OK (rc = 0) 21:18:02 DEBUG --- stdout --- 21:18:02 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1374Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 394m 2% 6918Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 339m 2% 6925Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 335m 2% 6992Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2520m 15% 6637Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1378m 8% 2176Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2740m 17% 6740Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 74m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3827m 24% 14514Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 4016m 25% 14569Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 8257m 51% 14463Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 784m 4% 2369Mi 4% 21:18:02 DEBUG --- stderr --- 21:18:02 DEBUG 21:19:00 INFO 21:19:00 INFO [loop_until]: kubectl --namespace=xlou top pods 21:19:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:19:00 INFO [loop_until]: OK (rc = 0) 21:19:00 DEBUG --- stdout --- 21:19:00 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 241m 5824Mi am-55f77847b7-dr27z 275m 5784Mi am-55f77847b7-fp459 261m 5894Mi ds-cts-0 6m 396Mi ds-cts-1 8m 378Mi ds-cts-2 6m 378Mi ds-idrepo-0 10141m 13857Mi ds-idrepo-1 4079m 13824Mi ds-idrepo-2 3444m 13851Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 2445m 5476Mi idm-65858d8c4c-4qc5l 2330m 5324Mi lodemon-5798c88b8f-k2sv4 1m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 708m 871Mi 21:19:00 DEBUG --- stderr --- 21:19:00 DEBUG 21:19:02 INFO 21:19:02 INFO [loop_until]: kubectl --namespace=xlou top node 21:19:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:19:02 INFO [loop_until]: OK (rc = 0) 21:19:02 DEBUG --- stdout --- 21:19:02 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1373Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 334m 2% 6918Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 317m 1% 6926Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 294m 1% 6994Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2584m 16% 6639Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1363m 8% 2181Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2699m 16% 6728Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 70m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3570m 22% 14524Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 4345m 27% 14525Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 9890m 62% 14461Mi 
24% gke-xlou-cdm-frontend-a8771548-k40m 750m 4% 2368Mi 4% 21:19:02 DEBUG --- stderr --- 21:19:02 DEBUG 21:20:00 INFO 21:20:00 INFO [loop_until]: kubectl --namespace=xlou top pods 21:20:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:20:00 INFO [loop_until]: OK (rc = 0) 21:20:00 DEBUG --- stdout --- 21:20:00 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 278m 5826Mi am-55f77847b7-dr27z 376m 5786Mi am-55f77847b7-fp459 303m 5894Mi ds-cts-0 5m 396Mi ds-cts-1 8m 378Mi ds-cts-2 7m 378Mi ds-idrepo-0 8134m 13837Mi ds-idrepo-1 5729m 13742Mi ds-idrepo-2 5722m 13808Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 2397m 5476Mi idm-65858d8c4c-4qc5l 2305m 5324Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 705m 872Mi 21:20:00 DEBUG --- stderr --- 21:20:00 DEBUG 21:20:02 INFO 21:20:02 INFO [loop_until]: kubectl --namespace=xlou top node 21:20:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:20:02 INFO [loop_until]: OK (rc = 0) 21:20:02 DEBUG --- stdout --- 21:20:02 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1371Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 352m 2% 6916Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 389m 2% 6924Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 334m 2% 6994Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2511m 15% 6639Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1361m 8% 2174Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2673m 16% 6729Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 72m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1135Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 5788m 36% 14514Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 5850m 36% 14302Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7654m 48% 14464Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 772m 4% 2371Mi 4% 21:20:02 DEBUG --- stderr --- 21:20:02 DEBUG 21:21:00 INFO 21:21:00 INFO [loop_until]: kubectl --namespace=xlou top pods 21:21:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:21:00 INFO [loop_until]: OK (rc = 0) 21:21:00 DEBUG --- stdout --- 21:21:00 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 266m 5826Mi am-55f77847b7-dr27z 280m 5786Mi am-55f77847b7-fp459 346m 5894Mi ds-cts-0 5m 396Mi ds-cts-1 8m 378Mi ds-cts-2 6m 378Mi ds-idrepo-0 9740m 13805Mi ds-idrepo-1 4387m 13728Mi ds-idrepo-2 3554m 13866Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 2495m 5476Mi idm-65858d8c4c-4qc5l 2346m 5324Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 694m 872Mi 21:21:00 DEBUG --- stderr --- 21:21:00 DEBUG 21:21:02 INFO 21:21:02 INFO [loop_until]: kubectl --namespace=xlou top node 21:21:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:21:02 INFO [loop_until]: OK (rc = 0) 21:21:02 DEBUG --- stdout --- 21:21:02 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1371Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 316m 1% 6918Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 335m 2% 6920Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 331m 2% 6992Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2581m 16% 6638Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1387m 8% 2180Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2733m 17% 6729Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 
60m 0% 1135Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3826m 24% 14515Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 4635m 29% 14454Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 9731m 61% 14294Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 774m 4% 2370Mi 4% 21:21:02 DEBUG --- stderr --- 21:21:02 DEBUG 21:22:00 INFO 21:22:00 INFO [loop_until]: kubectl --namespace=xlou top pods 21:22:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:22:00 INFO [loop_until]: OK (rc = 0) 21:22:00 DEBUG --- stdout --- 21:22:00 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 306m 5828Mi am-55f77847b7-dr27z 280m 5786Mi am-55f77847b7-fp459 295m 5894Mi ds-cts-0 6m 396Mi ds-cts-1 8m 378Mi ds-cts-2 6m 374Mi ds-idrepo-0 9104m 13716Mi ds-idrepo-1 5460m 13577Mi ds-idrepo-2 4027m 13873Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 2438m 5477Mi idm-65858d8c4c-4qc5l 2303m 5324Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 696m 872Mi 21:22:00 DEBUG --- stderr --- 21:22:00 DEBUG 21:22:02 INFO 21:22:02 INFO [loop_until]: kubectl --namespace=xlou top node 21:22:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:22:02 INFO [loop_until]: OK (rc = 0) 21:22:02 DEBUG --- stdout --- 21:22:02 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1371Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 349m 2% 6917Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 385m 2% 6923Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 371m 2% 6995Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2558m 16% 6637Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1377m 8% 2178Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2713m 17% 6733Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 74m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1134Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4162m 26% 14378Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 5227m 32% 14248Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 9283m 58% 14224Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 755m 4% 2372Mi 4% 21:22:02 DEBUG --- stderr --- 21:22:02 DEBUG 21:23:00 INFO 21:23:00 INFO [loop_until]: kubectl --namespace=xlou top pods 21:23:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:23:01 INFO [loop_until]: OK (rc = 0) 21:23:01 DEBUG --- stdout --- 21:23:01 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 281m 5827Mi am-55f77847b7-dr27z 283m 5786Mi am-55f77847b7-fp459 345m 5896Mi ds-cts-0 6m 396Mi ds-cts-1 8m 378Mi ds-cts-2 6m 373Mi ds-idrepo-0 6856m 13822Mi ds-idrepo-1 3945m 13824Mi ds-idrepo-2 5526m 13833Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 2497m 5477Mi idm-65858d8c4c-4qc5l 2220m 5325Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 695m 873Mi 21:23:01 DEBUG --- stderr --- 21:23:01 DEBUG 21:23:02 INFO 21:23:02 INFO [loop_until]: kubectl --namespace=xlou top node 21:23:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:23:03 INFO [loop_until]: OK (rc = 0) 21:23:03 DEBUG --- stdout --- 21:23:03 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1373Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 400m 2% 6920Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 341m 2% 6924Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 336m 2% 6994Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2458m 15% 6638Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1375m 8% 2174Mi 3% 
gke-xlou-cdm-default-pool-f05840a3-tnc9 2739m 17% 6732Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 75m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 5665m 35% 14450Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 4189m 26% 14515Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 6972m 43% 14468Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 774m 4% 2370Mi 4% 21:23:03 DEBUG --- stderr --- 21:23:03 DEBUG 21:24:01 INFO 21:24:01 INFO [loop_until]: kubectl --namespace=xlou top pods 21:24:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:24:01 INFO [loop_until]: OK (rc = 0) 21:24:01 DEBUG --- stdout --- 21:24:01 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 272m 5827Mi am-55f77847b7-dr27z 288m 5786Mi am-55f77847b7-fp459 291m 5896Mi ds-cts-0 6m 397Mi ds-cts-1 8m 378Mi ds-cts-2 5m 373Mi ds-idrepo-0 8466m 13821Mi ds-idrepo-1 4598m 13823Mi ds-idrepo-2 3420m 13823Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 2488m 5476Mi idm-65858d8c4c-4qc5l 2270m 5324Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 709m 873Mi 21:24:01 DEBUG --- stderr --- 21:24:01 DEBUG 21:24:03 INFO 21:24:03 INFO [loop_until]: kubectl --namespace=xlou top node 21:24:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:24:03 INFO [loop_until]: OK (rc = 0) 21:24:03 DEBUG --- stdout --- 21:24:03 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1373Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 351m 2% 6920Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 339m 2% 6938Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 339m 2% 6993Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2525m 15% 6640Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1365m 8% 2178Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2605m 16% 6733Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 74m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1134Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4356m 27% 14503Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 4716m 29% 14501Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 8420m 52% 14472Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 769m 4% 2373Mi 4% 21:24:03 DEBUG --- stderr --- 21:24:03 DEBUG 21:25:01 INFO 21:25:01 INFO [loop_until]: kubectl --namespace=xlou top pods 21:25:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:25:01 INFO [loop_until]: OK (rc = 0) 21:25:01 DEBUG --- stdout --- 21:25:01 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 368m 5829Mi am-55f77847b7-dr27z 347m 5787Mi am-55f77847b7-fp459 416m 5896Mi ds-cts-0 6m 396Mi ds-cts-1 8m 379Mi ds-cts-2 6m 373Mi ds-idrepo-0 8972m 13824Mi ds-idrepo-1 7197m 13792Mi ds-idrepo-2 5767m 13823Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 2645m 5478Mi idm-65858d8c4c-4qc5l 2442m 5326Mi lodemon-5798c88b8f-k2sv4 1m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 687m 877Mi 21:25:01 DEBUG --- stderr --- 21:25:01 DEBUG 21:25:03 INFO 21:25:03 INFO [loop_until]: kubectl --namespace=xlou top node 21:25:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:25:03 INFO [loop_until]: OK (rc = 0) 21:25:03 DEBUG --- stdout --- 21:25:03 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1373Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 450m 2% 6929Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 477m 3% 6926Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-9p4b 426m 2% 6996Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2514m 15% 6639Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1386m 8% 2180Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2838m 17% 6731Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 74m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 5615m 35% 14507Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 7776m 48% 14500Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7575m 47% 14471Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 759m 4% 2372Mi 4% 21:25:03 DEBUG --- stderr --- 21:25:03 DEBUG 21:26:01 INFO 21:26:01 INFO [loop_until]: kubectl --namespace=xlou top pods 21:26:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:26:01 INFO [loop_until]: OK (rc = 0) 21:26:01 DEBUG --- stdout --- 21:26:01 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 264m 5829Mi am-55f77847b7-dr27z 269m 5787Mi am-55f77847b7-fp459 263m 5896Mi ds-cts-0 5m 398Mi ds-cts-1 9m 380Mi ds-cts-2 6m 373Mi ds-idrepo-0 8728m 13541Mi ds-idrepo-1 3275m 13880Mi ds-idrepo-2 5539m 13718Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 2508m 5477Mi idm-65858d8c4c-4qc5l 2275m 5325Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 703m 877Mi 21:26:01 DEBUG --- stderr --- 21:26:01 DEBUG 21:26:03 INFO 21:26:03 INFO [loop_until]: kubectl --namespace=xlou top node 21:26:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:26:03 INFO [loop_until]: OK (rc = 0) 21:26:03 DEBUG --- stdout --- 21:26:03 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1368Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 333m 2% 6919Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 326m 2% 6923Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 313m 1% 6995Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2510m 15% 6636Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1372m 8% 2179Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2713m 17% 6732Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 73m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1137Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4877m 30% 14479Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 3372m 21% 14566Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 8677m 54% 14221Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 780m 4% 2372Mi 4% 21:26:03 DEBUG --- stderr --- 21:26:03 DEBUG 21:27:01 INFO 21:27:01 INFO [loop_until]: kubectl --namespace=xlou top pods 21:27:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:27:01 INFO [loop_until]: OK (rc = 0) 21:27:01 DEBUG --- stdout --- 21:27:01 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 306m 5829Mi am-55f77847b7-dr27z 355m 5787Mi am-55f77847b7-fp459 311m 5896Mi ds-cts-0 6m 398Mi ds-cts-1 9m 381Mi ds-cts-2 6m 373Mi ds-idrepo-0 7002m 13576Mi ds-idrepo-1 3825m 13809Mi ds-idrepo-2 5592m 13856Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 2518m 5477Mi idm-65858d8c4c-4qc5l 2167m 5326Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 697m 878Mi 21:27:01 DEBUG --- stderr --- 21:27:01 DEBUG 21:27:03 INFO 21:27:03 INFO [loop_until]: kubectl --namespace=xlou top node 21:27:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:27:03 INFO [loop_until]: OK (rc = 0) 21:27:03 DEBUG --- stdout --- 21:27:03 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% 
gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1371Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 375m 2% 6919Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 398m 2% 6925Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 409m 2% 6996Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2460m 15% 6641Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1377m 8% 2174Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2649m 16% 6733Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 75m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1136Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 5542m 34% 14384Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 4101m 25% 14516Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 7336m 46% 14274Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 783m 4% 2377Mi 4% 21:27:03 DEBUG --- stderr --- 21:27:03 DEBUG 21:28:01 INFO 21:28:01 INFO [loop_until]: kubectl --namespace=xlou top pods 21:28:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:28:01 INFO [loop_until]: OK (rc = 0) 21:28:01 DEBUG --- stdout --- 21:28:01 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 272m 5829Mi am-55f77847b7-dr27z 275m 5787Mi am-55f77847b7-fp459 329m 5897Mi ds-cts-0 5m 397Mi ds-cts-1 9m 380Mi ds-cts-2 5m 374Mi ds-idrepo-0 7988m 13655Mi ds-idrepo-1 5661m 13600Mi ds-idrepo-2 4929m 13786Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 2485m 5476Mi idm-65858d8c4c-4qc5l 2245m 5325Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 721m 877Mi 21:28:01 DEBUG --- stderr --- 21:28:01 DEBUG 21:28:03 INFO 21:28:03 INFO [loop_until]: kubectl --namespace=xlou top node 21:28:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:28:03 INFO [loop_until]: OK (rc = 0) 21:28:03 DEBUG --- stdout --- 21:28:03 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1374Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 377m 2% 6921Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 335m 2% 6928Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 333m 2% 6999Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2498m 15% 6639Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1353m 8% 2175Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2660m 16% 6732Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 77m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1134Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4735m 29% 14485Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 5568m 35% 14366Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 8035m 50% 14268Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 763m 4% 2373Mi 4% 21:28:03 DEBUG --- stderr --- 21:28:03 DEBUG 21:29:01 INFO 21:29:01 INFO [loop_until]: kubectl --namespace=xlou top pods 21:29:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:29:01 INFO [loop_until]: OK (rc = 0) 21:29:01 DEBUG --- stdout --- 21:29:01 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 281m 5829Mi am-55f77847b7-dr27z 278m 5788Mi am-55f77847b7-fp459 297m 5897Mi ds-cts-0 5m 398Mi ds-cts-1 8m 380Mi ds-cts-2 6m 374Mi ds-idrepo-0 8237m 13820Mi ds-idrepo-1 4288m 13806Mi ds-idrepo-2 6333m 13785Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 2422m 5477Mi idm-65858d8c4c-4qc5l 2269m 5325Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 701m 878Mi 21:29:01 DEBUG --- stderr --- 21:29:01 DEBUG 21:29:03 INFO 21:29:03 INFO [loop_until]: kubectl --namespace=xlou top node 21:29:03 INFO [loop_until]: 
(max_time=180, interval=5, expected_rc=[0] 21:29:03 INFO [loop_until]: OK (rc = 0) 21:29:03 DEBUG --- stdout --- 21:29:03 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1368Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 353m 2% 6919Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 348m 2% 6927Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 337m 2% 6999Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2546m 16% 6639Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1380m 8% 2179Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2690m 16% 6737Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 73m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1136Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 6170m 38% 14503Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 4317m 27% 14530Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 8628m 54% 14447Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 778m 4% 2375Mi 4% 21:29:03 DEBUG --- stderr --- 21:29:03 DEBUG 21:30:01 INFO 21:30:01 INFO [loop_until]: kubectl --namespace=xlou top pods 21:30:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:30:01 INFO [loop_until]: OK (rc = 0) 21:30:01 DEBUG --- stdout --- 21:30:01 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 5m 5830Mi am-55f77847b7-dr27z 5m 5788Mi am-55f77847b7-fp459 9m 5897Mi ds-cts-0 6m 398Mi ds-cts-1 9m 380Mi ds-cts-2 6m 373Mi ds-idrepo-0 995m 13824Mi ds-idrepo-1 330m 13742Mi ds-idrepo-2 476m 13826Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 471m 5477Mi idm-65858d8c4c-4qc5l 455m 5326Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 696m 878Mi 21:30:01 DEBUG --- stderr --- 21:30:01 DEBUG 21:30:03 INFO 21:30:03 INFO [loop_until]: kubectl --namespace=xlou top node 21:30:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:30:04 INFO [loop_until]: OK (rc = 0) 21:30:04 DEBUG --- stdout --- 21:30:04 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1371Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 6919Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 6926Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 61m 0% 7000Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 561m 3% 6638Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 1367m 8% 2169Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 589m 3% 6735Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 74m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1132Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 467m 2% 14511Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 353m 2% 14449Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 1026m 6% 14297Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 772m 4% 2377Mi 4% 21:30:04 DEBUG --- stderr --- 21:30:04 DEBUG 21:31:01 INFO 21:31:01 INFO [loop_until]: kubectl --namespace=xlou top pods 21:31:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:31:01 INFO [loop_until]: OK (rc = 0) 21:31:01 DEBUG --- stdout --- 21:31:01 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 5m 5830Mi am-55f77847b7-dr27z 5m 5788Mi am-55f77847b7-fp459 10m 5897Mi ds-cts-0 6m 399Mi ds-cts-1 8m 381Mi ds-cts-2 5m 374Mi ds-idrepo-0 11m 13647Mi ds-idrepo-1 10m 13742Mi ds-idrepo-2 10m 13824Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 11m 5476Mi idm-65858d8c4c-4qc5l 6m 5325Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 58m 230Mi 21:31:01 DEBUG --- 
stderr --- 21:31:01 DEBUG 21:31:04 INFO 21:31:04 INFO [loop_until]: kubectl --namespace=xlou top node 21:31:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:31:04 INFO [loop_until]: OK (rc = 0) 21:31:04 DEBUG --- stdout --- 21:31:04 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1376Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 6921Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 62m 0% 6925Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 60m 0% 6994Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 6640Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2174Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 76m 0% 6734Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 78m 0% 1135Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1135Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 62m 0% 14516Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 14449Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 61m 0% 14299Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 138m 0% 1734Mi 2% 21:31:04 DEBUG --- stderr --- 21:31:04 DEBUG 21:32:02 INFO 21:32:02 INFO [loop_until]: kubectl --namespace=xlou top pods 21:32:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:32:02 INFO [loop_until]: OK (rc = 0) 21:32:02 DEBUG --- stdout --- 21:32:02 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 7m 5830Mi am-55f77847b7-dr27z 5m 5788Mi am-55f77847b7-fp459 11m 5897Mi ds-cts-0 5m 398Mi ds-cts-1 11m 381Mi ds-cts-2 5m 374Mi ds-idrepo-0 9m 13647Mi ds-idrepo-1 10m 13742Mi ds-idrepo-2 21m 13825Mi end-user-ui-6845bc78c7-tln2q 1m 4Mi idm-65858d8c4c-2grp9 7m 5476Mi idm-65858d8c4c-4qc5l 6m 5325Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 1m 230Mi 21:32:02 DEBUG --- stderr --- 21:32:02 DEBUG 21:32:04 INFO 21:32:04 INFO [loop_until]: kubectl --namespace=xlou top node 21:32:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:32:04 INFO [loop_until]: OK (rc = 0) 21:32:04 DEBUG --- stdout --- 21:32:04 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1375Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 6918Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 6929Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 6994Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 6642Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 125m 0% 2173Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 73m 0% 6735Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 70m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1135Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 71m 0% 14512Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 14448Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 58m 0% 14297Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 70m 0% 1738Mi 2% 21:32:04 DEBUG --- stderr --- 21:32:04 DEBUG 127.0.0.1 - - [12/Aug/2023 21:32:15] "GET /monitoring/average?start_time=23-08-12_20:01:44&stop_time=23-08-12_20:30:15 HTTP/1.1" 200 - 21:33:02 INFO 21:33:02 INFO [loop_until]: kubectl --namespace=xlou top pods 21:33:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:33:02 INFO [loop_until]: OK (rc = 0) 21:33:02 DEBUG --- stdout --- 21:33:02 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 4Mi am-55f77847b7-8t2dm 7m 5830Mi am-55f77847b7-dr27z 16m 5795Mi am-55f77847b7-fp459 9m 5897Mi ds-cts-0 5m 397Mi ds-cts-1 90m 380Mi ds-cts-2 7m 374Mi ds-idrepo-0 164m 13647Mi ds-idrepo-1 126m 13742Mi ds-idrepo-2 
130m 13810Mi end-user-ui-6845bc78c7-tln2q 1m 5Mi idm-65858d8c4c-2grp9 6m 5475Mi idm-65858d8c4c-4qc5l 6m 5324Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 123m 380Mi 21:33:02 DEBUG --- stderr --- 21:33:02 DEBUG 21:33:04 INFO 21:33:04 INFO [loop_until]: kubectl --namespace=xlou top node 21:33:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:33:04 INFO [loop_until]: OK (rc = 0) 21:33:04 DEBUG --- stdout --- 21:33:04 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 89m 0% 1375Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 72m 0% 6918Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 83m 0% 6935Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 68m 0% 6995Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 78m 0% 6640Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 142m 0% 2170Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 75m 0% 6732Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 154m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 127m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 99m 0% 14568Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 245m 1% 14451Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 183m 1% 14296Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 554m 3% 1898Mi 3% 21:33:04 DEBUG --- stderr --- 21:33:04 DEBUG 21:34:02 INFO 21:34:02 INFO [loop_until]: kubectl --namespace=xlou top pods 21:34:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:34:02 INFO [loop_until]: OK (rc = 0) 21:34:02 DEBUG --- stdout --- 21:34:02 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 5Mi am-55f77847b7-8t2dm 6m 5830Mi am-55f77847b7-dr27z 6m 5795Mi am-55f77847b7-fp459 10m 5897Mi ds-cts-0 5m 398Mi ds-cts-1 8m 380Mi ds-cts-2 6m 374Mi ds-idrepo-0 70m 13648Mi ds-idrepo-1 20m 13742Mi ds-idrepo-2 10m 13810Mi end-user-ui-6845bc78c7-tln2q 1m 5Mi idm-65858d8c4c-2grp9 6m 5475Mi idm-65858d8c4c-4qc5l 6m 5324Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 1003m 707Mi 21:34:02 DEBUG --- stderr --- 21:34:02 DEBUG 21:34:04 INFO 21:34:04 INFO [loop_until]: kubectl --namespace=xlou top node 21:34:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:34:04 INFO [loop_until]: OK (rc = 0) 21:34:04 DEBUG --- stdout --- 21:34:04 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1376Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 6920Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 62m 0% 6936Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 6994Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 77m 0% 6639Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 120m 0% 2171Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 6733Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 71m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1136Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 118m 0% 14502Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 14455Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 121m 0% 14297Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1080m 6% 2204Mi 3% 21:34:04 DEBUG --- stderr --- 21:34:04 DEBUG 21:35:02 INFO 21:35:02 INFO [loop_until]: kubectl --namespace=xlou top pods 21:35:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:35:02 INFO [loop_until]: OK (rc = 0) 21:35:02 DEBUG --- stdout --- 21:35:02 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 5Mi am-55f77847b7-8t2dm 6m 5830Mi am-55f77847b7-dr27z 5m 5795Mi am-55f77847b7-fp459 13m 5897Mi 
ds-cts-0 5m 398Mi ds-cts-1 9m 380Mi ds-cts-2 6m 374Mi ds-idrepo-0 9m 13647Mi ds-idrepo-1 11m 13679Mi ds-idrepo-2 10m 13811Mi end-user-ui-6845bc78c7-tln2q 1m 5Mi idm-65858d8c4c-2grp9 6m 5475Mi idm-65858d8c4c-4qc5l 6m 5324Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 966m 805Mi 21:35:02 DEBUG --- stderr --- 21:35:02 DEBUG 21:35:04 INFO 21:35:04 INFO [loop_until]: kubectl --namespace=xlou top node 21:35:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:35:04 INFO [loop_until]: OK (rc = 0) 21:35:04 DEBUG --- stdout --- 21:35:04 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1376Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 69m 0% 6919Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 6934Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 60m 0% 6997Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 76m 0% 6637Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 117m 0% 2172Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 6732Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 85m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1137Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 63m 0% 14504Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 14392Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 59m 0% 14299Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1165m 7% 2340Mi 3% 21:35:04 DEBUG --- stderr --- 21:35:04 DEBUG 21:36:02 INFO 21:36:02 INFO [loop_until]: kubectl --namespace=xlou top pods 21:36:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:36:02 INFO [loop_until]: OK (rc = 0) 21:36:02 DEBUG --- stdout --- 21:36:02 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 5Mi am-55f77847b7-8t2dm 6m 5830Mi am-55f77847b7-dr27z 7m 5795Mi am-55f77847b7-fp459 9m 5897Mi ds-cts-0 6m 398Mi ds-cts-1 8m 380Mi ds-cts-2 7m 374Mi ds-idrepo-0 9m 13647Mi ds-idrepo-1 10m 13679Mi ds-idrepo-2 10m 13810Mi end-user-ui-6845bc78c7-tln2q 1m 5Mi idm-65858d8c4c-2grp9 6m 5475Mi idm-65858d8c4c-4qc5l 6m 5324Mi lodemon-5798c88b8f-k2sv4 2m 67Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 580m 842Mi 21:36:02 DEBUG --- stderr --- 21:36:02 DEBUG 21:36:04 INFO 21:36:04 INFO [loop_until]: kubectl --namespace=xlou top node 21:36:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:36:04 INFO [loop_until]: OK (rc = 0) 21:36:04 DEBUG --- stdout --- 21:36:04 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1377Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 6919Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 6932Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 58m 0% 6998Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 6640Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2171Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 68m 0% 6730Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 71m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1134Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 61m 0% 14505Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 14391Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 63m 0% 14309Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 681m 4% 2446Mi 4% 21:36:04 DEBUG --- stderr --- 21:36:04 DEBUG 21:37:02 INFO 21:37:02 INFO [loop_until]: kubectl --namespace=xlou top pods 21:37:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:37:02 INFO [loop_until]: OK (rc = 0) 21:37:02 DEBUG --- stdout --- 21:37:02 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 
5Mi am-55f77847b7-8t2dm 6m 5830Mi am-55f77847b7-dr27z 6m 5795Mi am-55f77847b7-fp459 9m 5897Mi ds-cts-0 6m 398Mi ds-cts-1 9m 381Mi ds-cts-2 6m 374Mi ds-idrepo-0 9m 13647Mi ds-idrepo-1 10m 13679Mi ds-idrepo-2 9m 13810Mi end-user-ui-6845bc78c7-tln2q 1m 5Mi idm-65858d8c4c-2grp9 6m 5475Mi idm-65858d8c4c-4qc5l 6m 5324Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 615m 881Mi 21:37:02 DEBUG --- stderr --- 21:37:02 DEBUG 21:37:04 INFO 21:37:04 INFO [loop_until]: kubectl --namespace=xlou top node 21:37:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:37:04 INFO [loop_until]: OK (rc = 0) 21:37:04 DEBUG --- stdout --- 21:37:04 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1377Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 6919Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 6935Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 60m 0% 6998Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 69m 0% 6641Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 128m 0% 2173Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 6733Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 72m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1136Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 60m 0% 14506Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 14390Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 57m 0% 14299Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 733m 4% 2386Mi 4% 21:37:04 DEBUG --- stderr --- 21:37:04 DEBUG 21:38:02 INFO 21:38:02 INFO [loop_until]: kubectl --namespace=xlou top pods 21:38:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:38:02 INFO [loop_until]: OK (rc = 0) 21:38:02 DEBUG --- stdout --- 21:38:02 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-n6t4z 1m 5Mi am-55f77847b7-8t2dm 6m 5830Mi am-55f77847b7-dr27z 6m 5795Mi am-55f77847b7-fp459 9m 5897Mi ds-cts-0 5m 398Mi ds-cts-1 8m 380Mi ds-cts-2 7m 374Mi ds-idrepo-0 9m 13647Mi ds-idrepo-1 10m 13679Mi ds-idrepo-2 10m 13810Mi end-user-ui-6845bc78c7-tln2q 1m 5Mi idm-65858d8c4c-2grp9 6m 5474Mi idm-65858d8c4c-4qc5l 6m 5323Mi lodemon-5798c88b8f-k2sv4 2m 66Mi login-ui-74d6fb46c-k74fv 1m 3Mi overseer-0-58cf4b587d-f2ksh 1001m 1304Mi 21:38:02 DEBUG --- stderr --- 21:38:02 DEBUG 21:38:04 INFO 21:38:04 INFO [loop_until]: kubectl --namespace=xlou top node 21:38:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 21:38:05 INFO [loop_until]: OK (rc = 0) 21:38:05 DEBUG --- stdout --- 21:38:05 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1377Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 6923Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 6934Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 59m 0% 6998Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 70m 0% 6637Mi 11% gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2169Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 6733Mi 11% gke-xlou-cdm-ds-32e4dcb1-1l6p 73m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1135Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 60m 0% 14506Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 66m 0% 14391Mi 24% gke-xlou-cdm-ds-32e4dcb1-x4wx 58m 0% 14304Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1032m 6% 2568Mi 4% 21:38:05 DEBUG --- stderr --- 21:38:05 DEBUG 21:38:11 INFO Finished: True 21:38:11 INFO Waiting for threads to register finish flag 21:39:05 INFO Done. Have a nice day! 
:)
127.0.0.1 - - [12/Aug/2023 21:39:05] "GET /monitoring/stop HTTP/1.1" 200 -
21:39:08 INFO File /tmp/lodemon_data-23-08-12_18:58:06/Cpu_cores_used_per_pod.json does not exist. Skipping...
21:39:08 INFO File /tmp/lodemon_data-23-08-12_18:58:06/Memory_usage_per_pod.json does not exist. Skipping...
21:39:08 INFO File /tmp/lodemon_data-23-08-12_18:58:06/Disk_tps_read_per_pod.json does not exist. Skipping...
21:39:08 INFO File /tmp/lodemon_data-23-08-12_18:58:06/Disk_tps_writes_per_pod.json does not exist. Skipping...
21:39:08 INFO File /tmp/lodemon_data-23-08-12_18:58:06/Cpu_cores_used_per_node.json does not exist. Skipping...
21:39:08 INFO File /tmp/lodemon_data-23-08-12_18:58:06/Memory_usage_used_per_node.json does not exist. Skipping...
21:39:08 INFO File /tmp/lodemon_data-23-08-12_18:58:06/Cpu_iowait_per_node.json does not exist. Skipping...
21:39:08 INFO File /tmp/lodemon_data-23-08-12_18:58:06/Network_receive_per_node.json does not exist. Skipping...
21:39:08 INFO File /tmp/lodemon_data-23-08-12_18:58:06/Network_transmit_per_node.json does not exist. Skipping...
21:39:08 INFO File /tmp/lodemon_data-23-08-12_18:58:06/am_cts_task_count_token_session.json does not exist. Skipping...
21:39:08 INFO File /tmp/lodemon_data-23-08-12_18:58:06/am_authentication_rate.json does not exist. Skipping...
21:39:08 INFO File /tmp/lodemon_data-23-08-12_18:58:06/ds_db_cache_misses_internal_nodes(backend=amCts).json does not exist. Skipping...
21:39:08 INFO File /tmp/lodemon_data-23-08-12_18:58:06/ds_db_cache_misses_internal_nodes(backend=amIdentityStore).json does not exist. Skipping...
21:39:08 INFO File /tmp/lodemon_data-23-08-12_18:58:06/ds_db_cache_misses_internal_nodes(backend=cfgStore).json does not exist. Skipping...
21:39:08 INFO File /tmp/lodemon_data-23-08-12_18:58:06/ds_db_cache_misses_internal_nodes(backend=idmRepo).json does not exist. Skipping...
21:39:08 INFO File /tmp/lodemon_data-23-08-12_18:58:06/am_authentication_count_per_pod.json does not exist. Skipping...
21:39:08 INFO File /tmp/lodemon_data-23-08-12_18:58:06/Cts_reaper_Deletion_count.json does not exist. Skipping...
21:39:08 INFO File /tmp/lodemon_data-23-08-12_18:58:06/AM_oauth2_authorization_codes.json does not exist. Skipping...
21:39:08 INFO File /tmp/lodemon_data-23-08-12_18:58:06/ds_backend_entries_deleted_amCts.json does not exist. Skipping...
21:39:08 INFO File /tmp/lodemon_data-23-08-12_18:58:06/ds_pods_replication_delay.json does not exist. Skipping...
21:39:08 INFO File /tmp/lodemon_data-23-08-12_18:58:06/am_cts_reaper_cache_size.json does not exist. Skipping...
21:39:08 INFO File /tmp/lodemon_data-23-08-12_18:58:06/am_cts_reaper_search_seconds_total.json does not exist. Skipping...
21:39:08 INFO File /tmp/lodemon_data-23-08-12_18:58:06/ds_replication_replica_replayed_updates_conflicts_resolved.json does not exist. Skipping...
21:39:08 INFO File /tmp/lodemon_data-23-08-12_18:58:06/node_disk_read_bytes_total.json does not exist. Skipping...
21:39:08 INFO File /tmp/lodemon_data-23-08-12_18:58:06/node_disk_written_bytes_total.json does not exist. Skipping...
21:39:08 INFO File /tmp/lodemon_data-23-08-12_18:58:06/ds_backend_entry_count.json does not exist. Skipping...
21:39:08 INFO File /tmp/lodemon_data-23-08-12_18:58:06/node_disk_io_time_seconds_total.json does not exist. Skipping...
127.0.0.1 - - [12/Aug/2023 21:39:10] "GET /monitoring/process HTTP/1.1" 200 -
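
Note on the "[loop_until]" entries above: each one reruns a shell command on a fixed interval until its return code (and, for the grep-style checks, an expected pattern in stdout) is seen, or max_time elapses. The following is a minimal Python sketch of that behaviour, written for illustration only; it is not taken from lodemon_run.py, and the function name and signature are assumptions.

import subprocess, time

def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,), pattern=None):
    # Rerun `cmd` every `interval` seconds until the return code is acceptable
    # (and the optional pattern appears in stdout), or `max_time` is exceeded.
    deadline = time.monotonic() + max_time
    while True:
        proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        done = proc.returncode in expected_rc and (pattern is None or pattern in proc.stdout)
        if done:
            return proc  # caller can inspect proc.stdout / proc.stderr
        if time.monotonic() >= deadline:
            raise TimeoutError(f"gave up after {max_time}s: {cmd}")
        time.sleep(interval)

# Example matching the log: loop_until("kubectl --namespace=xlou top pods", max_time=180, interval=5)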
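
The per-minute "kubectl top pods" snapshots above are what feed the per-pod CPU/memory series. A small sketch of how one snapshot could be turned into numeric samples is shown below, assuming the plain column layout in the log ("255m" millicores, "5819Mi" mebibytes); the helper names are hypothetical and the real collector may differ.

import subprocess

def parse_cpu(value):
    # "255m" -> 0.255 cores; "2" -> 2.0 cores
    return float(value[:-1]) / 1000 if value.endswith("m") else float(value)

def parse_mem_mib(value):
    # "5819Mi" -> 5819.0 MiB; "2Gi" -> 2048.0 MiB
    units = {"Ki": 1.0 / 1024, "Mi": 1.0, "Gi": 1024.0}
    for suffix, factor in units.items():
        if value.endswith(suffix):
            return float(value[: -len(suffix)]) * factor
    return float(value) / (1024 * 1024)  # assume bare bytes

def top_pods(namespace="xlou"):
    out = subprocess.run(
        ["kubectl", "--namespace", namespace, "top", "pods"],
        check=True, capture_output=True, text=True,
    ).stdout
    samples = {}
    for line in out.splitlines():
        cols = line.split()
        if not cols or cols[0] == "NAME":
            continue  # skip the header row
        samples[cols[0]] = {"cpu_cores": parse_cpu(cols[1]), "memory_mib": parse_mem_mib(cols[2])}
    return samples

# e.g. top_pods("xlou") -> {"am-55f77847b7-8t2dm": {"cpu_cores": 0.255, "memory_mib": 5819.0}, ...}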
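
Finally, the "does not exist. Skipping..." messages show the post-run processing only handling metric files that were actually written to the run's data directory. A minimal sketch of that skip-if-missing pass follows; the function name is an assumption and the downstream processing of each loaded file is deliberately left out.

import json, os

def load_metric_files(data_dir, metric_names):
    # Load only the per-metric JSON files that exist; log and skip the rest.
    results = {}
    for name in metric_names:
        path = os.path.join(data_dir, f"{name}.json")
        if not os.path.exists(path):
            print(f"File {path} does not exist. Skipping...")
            continue
        with open(path) as fh:
            results[name] = json.load(fh)
    return results

# e.g. load_metric_files("/tmp/lodemon_data-23-08-12_18:58:06",
#                        ["Cpu_cores_used_per_pod", "Memory_usage_per_pod"])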