====================================================================================================
========================================= Pod describe =========================================
====================================================================================================
Name:             lodemon-7b659c988b-78sgh
Namespace:        xlou
Priority:         0
Node:             gke-xlou-cdm-default-pool-f05840a3-2nsn/10.142.0.46
Start Time:       Sat, 12 Aug 2023 00:43:25 +0000
Labels:           app=lodemon
                  app.kubernetes.io/name=lodemon
                  pod-template-hash=7b659c988b
                  skaffold.dev/run-id=31edc3b4-6e41-4017-a7fb-783ef8cb74d4
Annotations:      <none>
Status:           Running
IP:               10.106.45.35
IPs:
  IP:  10.106.45.35
Controlled By:    ReplicaSet/lodemon-7b659c988b
Containers:
  lodemon:
    Container ID:   containerd://da09d982506419d390a68fa3369dad14a2d3ca9b0a55cbfbe1171d1148efdf7a
    Image:          gcr.io/engineeringpit/lodestar-images/lodestarbox:6c23848450de3f8e82f0a619a86abcd91fc890c6
    Image ID:       gcr.io/engineeringpit/lodestar-images/lodestarbox@sha256:f419b98ce988c016f788d178b318b601ed56b4ebb6e1a8df68b3ff2a986af79d
    Port:           8080/TCP
    Host Port:      0/TCP
    Command:
      python3
    Args:
      /lodestar/scripts/lodemon_run.py
      -W
      default
    State:          Running
      Started:      Sat, 12 Aug 2023 00:43:26 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  2Gi
    Requests:
      cpu:     1
      memory:  1Gi
    Liveness:   exec [cat /tmp/lodemon_alive] delay=20s timeout=1s period=10s #success=1 #failure=3
    Readiness:  exec [cat /tmp/lodemon_alive] delay=20s timeout=1s period=10s #success=1 #failure=3
    Environment:
      SKAFFOLD_PROFILE:  medium
    Mounts:
      /lodestar/config/config.yaml from config (rw,path="config.yaml")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tsxzj (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      lodemon-config
    Optional:  false
  kube-api-access-tsxzj:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:        Burstable
Node-Selectors:   <none>
Tolerations:      node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                  node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:           <none>
====================================================================================================
=========================================== Pod logs ===========================================
====================================================================================================
01:43:27 INFO 01:43:27 INFO --------------------- Get expected number of pods --------------------- 01:43:27 INFO 01:43:27 INFO [loop_until]: kubectl --namespace=xlou get deployments --selector app=am --output jsonpath={.items[*].spec.replicas} 01:43:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:27 INFO [loop_until]: OK (rc = 0) 01:43:27 DEBUG --- stdout --- 01:43:27 DEBUG 3 01:43:27 DEBUG --- stderr --- 01:43:27 DEBUG 01:43:27 INFO 01:43:27 INFO ---------------------------- Get pod list ---------------------------- 01:43:27 INFO 01:43:27 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=am --output jsonpath={.items[*].metadata.name} 01:43:27 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0] 01:43:27 INFO [loop_until]: OK (rc = 0) 01:43:27 DEBUG --- stdout --- 01:43:27 DEBUG am-55f77847b7-d6t28 am-55f77847b7-l482k am-55f77847b7-qhqgg 01:43:27 DEBUG --- stderr --- 01:43:27 DEBUG 01:43:27 INFO 01:43:27 INFO -------------- Check pod 
am-55f77847b7-d6t28 is running -------------- 01:43:27 INFO 01:43:27 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-d6t28 -o=jsonpath={.status.phase} | grep "Running" 01:43:27 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 01:43:27 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 01:43:27 INFO [loop_until]: OK (rc = 0) 01:43:27 DEBUG --- stdout --- 01:43:27 DEBUG Running 01:43:27 DEBUG --- stderr --- 01:43:27 DEBUG 01:43:27 INFO 01:43:27 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-d6t28 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 01:43:27 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 01:43:27 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 01:43:27 INFO [loop_until]: OK (rc = 0) 01:43:27 DEBUG --- stdout --- 01:43:27 DEBUG true 01:43:27 DEBUG --- stderr --- 01:43:27 DEBUG 01:43:27 INFO 01:43:27 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-d6t28 --output jsonpath={.status.startTime} 01:43:27 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:28 INFO [loop_until]: OK (rc = 0) 01:43:28 DEBUG --- stdout --- 01:43:28 DEBUG 2023-08-12T00:33:57Z 01:43:28 DEBUG --- stderr --- 01:43:28 DEBUG 01:43:28 INFO 01:43:28 INFO ------- Check pod am-55f77847b7-d6t28 filesystem is accessible ------- 01:43:28 INFO 01:43:28 INFO [loop_until]: kubectl --namespace=xlou exec am-55f77847b7-d6t28 --container openam -- ls / | grep "bin" 01:43:28 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 01:43:28 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 01:43:28 INFO [loop_until]: OK (rc = 0) 01:43:28 DEBUG --- stdout --- 01:43:28 DEBUG bin boot dev entrypoint.sh etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var 01:43:28 DEBUG --- stderr --- 01:43:28 DEBUG 01:43:28 INFO 01:43:28 INFO ------------- Check pod am-55f77847b7-d6t28 restart count ------------- 01:43:28 INFO 01:43:28 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-d6t28 --output jsonpath={.status.containerStatuses[*].restartCount} 01:43:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:28 INFO [loop_until]: OK (rc = 0) 01:43:28 DEBUG --- stdout --- 01:43:28 DEBUG 0 01:43:28 DEBUG --- stderr --- 01:43:28 DEBUG 01:43:28 INFO Pod am-55f77847b7-d6t28 has been restarted 0 times. 
01:43:28 INFO 01:43:28 INFO -------------- Check pod am-55f77847b7-l482k is running -------------- 01:43:28 INFO 01:43:28 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-l482k -o=jsonpath={.status.phase} | grep "Running" 01:43:28 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 01:43:28 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 01:43:28 INFO [loop_until]: OK (rc = 0) 01:43:28 DEBUG --- stdout --- 01:43:28 DEBUG Running 01:43:28 DEBUG --- stderr --- 01:43:28 DEBUG 01:43:28 INFO 01:43:28 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-l482k -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 01:43:28 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 01:43:28 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 01:43:28 INFO [loop_until]: OK (rc = 0) 01:43:28 DEBUG --- stdout --- 01:43:28 DEBUG true 01:43:28 DEBUG --- stderr --- 01:43:28 DEBUG 01:43:28 INFO 01:43:28 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-l482k --output jsonpath={.status.startTime} 01:43:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:28 INFO [loop_until]: OK (rc = 0) 01:43:28 DEBUG --- stdout --- 01:43:28 DEBUG 2023-08-12T00:33:57Z 01:43:28 DEBUG --- stderr --- 01:43:28 DEBUG 01:43:28 INFO 01:43:28 INFO ------- Check pod am-55f77847b7-l482k filesystem is accessible ------- 01:43:28 INFO 01:43:28 INFO [loop_until]: kubectl --namespace=xlou exec am-55f77847b7-l482k --container openam -- ls / | grep "bin" 01:43:28 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 01:43:28 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 01:43:28 INFO [loop_until]: OK (rc = 0) 01:43:28 DEBUG --- stdout --- 01:43:28 DEBUG bin boot dev entrypoint.sh etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var 01:43:28 DEBUG --- stderr --- 01:43:28 DEBUG 01:43:28 INFO 01:43:28 INFO ------------- Check pod am-55f77847b7-l482k restart count ------------- 01:43:28 INFO 01:43:28 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-l482k --output jsonpath={.status.containerStatuses[*].restartCount} 01:43:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:28 INFO [loop_until]: OK (rc = 0) 01:43:28 DEBUG --- stdout --- 01:43:28 DEBUG 0 01:43:28 DEBUG --- stderr --- 01:43:28 DEBUG 01:43:28 INFO Pod am-55f77847b7-l482k has been restarted 0 times. 
01:43:28 INFO 01:43:28 INFO -------------- Check pod am-55f77847b7-qhqgg is running -------------- 01:43:28 INFO 01:43:28 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-qhqgg -o=jsonpath={.status.phase} | grep "Running" 01:43:28 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 01:43:28 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 01:43:28 INFO [loop_until]: OK (rc = 0) 01:43:28 DEBUG --- stdout --- 01:43:28 DEBUG Running 01:43:28 DEBUG --- stderr --- 01:43:28 DEBUG 01:43:28 INFO 01:43:28 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-qhqgg -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 01:43:28 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 01:43:28 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 01:43:28 INFO [loop_until]: OK (rc = 0) 01:43:28 DEBUG --- stdout --- 01:43:28 DEBUG true 01:43:28 DEBUG --- stderr --- 01:43:28 DEBUG 01:43:28 INFO 01:43:28 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-qhqgg --output jsonpath={.status.startTime} 01:43:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:28 INFO [loop_until]: OK (rc = 0) 01:43:28 DEBUG --- stdout --- 01:43:28 DEBUG 2023-08-12T00:33:57Z 01:43:28 DEBUG --- stderr --- 01:43:28 DEBUG 01:43:28 INFO 01:43:28 INFO ------- Check pod am-55f77847b7-qhqgg filesystem is accessible ------- 01:43:28 INFO 01:43:28 INFO [loop_until]: kubectl --namespace=xlou exec am-55f77847b7-qhqgg --container openam -- ls / | grep "bin" 01:43:28 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 01:43:28 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 01:43:28 INFO [loop_until]: OK (rc = 0) 01:43:28 DEBUG --- stdout --- 01:43:28 DEBUG bin boot dev entrypoint.sh etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var 01:43:28 DEBUG --- stderr --- 01:43:28 DEBUG 01:43:28 INFO 01:43:28 INFO ------------- Check pod am-55f77847b7-qhqgg restart count ------------- 01:43:28 INFO 01:43:28 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-qhqgg --output jsonpath={.status.containerStatuses[*].restartCount} 01:43:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:28 INFO [loop_until]: OK (rc = 0) 01:43:28 DEBUG --- stdout --- 01:43:28 DEBUG 0 01:43:28 DEBUG --- stderr --- 01:43:28 DEBUG 01:43:28 INFO Pod am-55f77847b7-qhqgg has been restarted 0 times. 
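[Editor's note] The [loop_until] entries above all follow the same pattern: run a kubectl probe (optionally piped through grep), accept it only when the return code is in expected_rc, and otherwise retry every interval seconds until max_time is exhausted. A minimal sketch of that polling loop, using a hypothetical loop_until helper rather than the tool's actual implementation:

    import subprocess
    import time

    def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,)):
        """Re-run `cmd` until its return code is in `expected_rc` or until
        `max_time` seconds have elapsed. Hypothetical sketch of the
        [loop_until] pattern seen in this log, not the tool's real code."""
        deadline = time.monotonic() + max_time
        while True:
            proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
            if proc.returncode in expected_rc:
                return proc.stdout  # e.g. "Running" for the phase check
            if time.monotonic() >= deadline:
                raise TimeoutError(f"{cmd!r} still failing after {max_time}s")
            time.sleep(interval)

    # Example: the phase check run against each am pod above.
    loop_until(
        'kubectl --namespace=xlou get pods am-55f77847b7-d6t28 '
        '-o=jsonpath={.status.phase} | grep "Running"',
        max_time=360, interval=5)
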
01:43:28 INFO 01:43:28 INFO --------------------- Get expected number of pods --------------------- 01:43:28 INFO 01:43:28 INFO [loop_until]: kubectl --namespace=xlou get deployment --selector app=idm --output jsonpath={.items[*].spec.replicas} 01:43:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:28 INFO [loop_until]: OK (rc = 0) 01:43:28 DEBUG --- stdout --- 01:43:28 DEBUG 2 01:43:28 DEBUG --- stderr --- 01:43:28 DEBUG 01:43:28 INFO 01:43:28 INFO ---------------------------- Get pod list ---------------------------- 01:43:28 INFO 01:43:28 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=idm --output jsonpath={.items[*].metadata.name} 01:43:28 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0] 01:43:28 INFO [loop_until]: OK (rc = 0) 01:43:28 DEBUG --- stdout --- 01:43:28 DEBUG idm-65858d8c4c-4x2bg idm-65858d8c4c-vdncx 01:43:28 DEBUG --- stderr --- 01:43:28 DEBUG 01:43:28 INFO 01:43:28 INFO -------------- Check pod idm-65858d8c4c-4x2bg is running -------------- 01:43:28 INFO 01:43:28 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-4x2bg -o=jsonpath={.status.phase} | grep "Running" 01:43:28 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 01:43:28 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 01:43:28 INFO [loop_until]: OK (rc = 0) 01:43:28 DEBUG --- stdout --- 01:43:28 DEBUG Running 01:43:28 DEBUG --- stderr --- 01:43:28 DEBUG 01:43:28 INFO 01:43:28 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-4x2bg -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 01:43:28 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 01:43:28 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 01:43:28 INFO [loop_until]: OK (rc = 0) 01:43:28 DEBUG --- stdout --- 01:43:28 DEBUG true 01:43:28 DEBUG --- stderr --- 01:43:28 DEBUG 01:43:28 INFO 01:43:28 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-4x2bg --output jsonpath={.status.startTime} 01:43:28 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:29 INFO [loop_until]: OK (rc = 0) 01:43:29 DEBUG --- stdout --- 01:43:29 DEBUG 2023-08-12T00:33:57Z 01:43:29 DEBUG --- stderr --- 01:43:29 DEBUG 01:43:29 INFO 01:43:29 INFO ------- Check pod idm-65858d8c4c-4x2bg filesystem is accessible ------- 01:43:29 INFO 01:43:29 INFO [loop_until]: kubectl --namespace=xlou exec idm-65858d8c4c-4x2bg --container openidm -- ls / | grep "bin" 01:43:29 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 01:43:29 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 01:43:29 INFO [loop_until]: OK (rc = 0) 01:43:29 DEBUG --- stdout --- 01:43:29 DEBUG Dockerfile.java-11 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 01:43:29 DEBUG --- stderr --- 01:43:29 DEBUG 01:43:29 INFO 01:43:29 INFO ------------ Check pod idm-65858d8c4c-4x2bg restart count ------------ 01:43:29 INFO 01:43:29 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-4x2bg --output jsonpath={.status.containerStatuses[*].restartCount} 01:43:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:29 INFO [loop_until]: OK (rc = 0) 01:43:29 DEBUG --- stdout --- 01:43:29 DEBUG 0 01:43:29 DEBUG --- stderr --- 01:43:29 DEBUG 01:43:29 INFO Pod idm-65858d8c4c-4x2bg has been restarted 0 times. 
01:43:29 INFO 01:43:29 INFO -------------- Check pod idm-65858d8c4c-vdncx is running -------------- 01:43:29 INFO 01:43:29 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-vdncx -o=jsonpath={.status.phase} | grep "Running" 01:43:29 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 01:43:29 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 01:43:29 INFO [loop_until]: OK (rc = 0) 01:43:29 DEBUG --- stdout --- 01:43:29 DEBUG Running 01:43:29 DEBUG --- stderr --- 01:43:29 DEBUG 01:43:29 INFO 01:43:29 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-vdncx -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 01:43:29 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 01:43:29 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 01:43:29 INFO [loop_until]: OK (rc = 0) 01:43:29 DEBUG --- stdout --- 01:43:29 DEBUG true 01:43:29 DEBUG --- stderr --- 01:43:29 DEBUG 01:43:29 INFO 01:43:29 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-vdncx --output jsonpath={.status.startTime} 01:43:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:29 INFO [loop_until]: OK (rc = 0) 01:43:29 DEBUG --- stdout --- 01:43:29 DEBUG 2023-08-12T00:33:57Z 01:43:29 DEBUG --- stderr --- 01:43:29 DEBUG 01:43:29 INFO 01:43:29 INFO ------- Check pod idm-65858d8c4c-vdncx filesystem is accessible ------- 01:43:29 INFO 01:43:29 INFO [loop_until]: kubectl --namespace=xlou exec idm-65858d8c4c-vdncx --container openidm -- ls / | grep "bin" 01:43:29 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 01:43:29 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 01:43:29 INFO [loop_until]: OK (rc = 0) 01:43:29 DEBUG --- stdout --- 01:43:29 DEBUG Dockerfile.java-11 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 01:43:29 DEBUG --- stderr --- 01:43:29 DEBUG 01:43:29 INFO 01:43:29 INFO ------------ Check pod idm-65858d8c4c-vdncx restart count ------------ 01:43:29 INFO 01:43:29 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-vdncx --output jsonpath={.status.containerStatuses[*].restartCount} 01:43:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:29 INFO [loop_until]: OK (rc = 0) 01:43:29 DEBUG --- stdout --- 01:43:29 DEBUG 0 01:43:29 DEBUG --- stderr --- 01:43:29 DEBUG 01:43:29 INFO Pod idm-65858d8c4c-vdncx has been restarted 0 times. 
01:43:29 INFO 01:43:29 INFO --------------------- Get expected number of pods --------------------- 01:43:29 INFO 01:43:29 INFO [loop_until]: kubectl --namespace=xlou get statefulsets --selector app=ds-idrepo --output jsonpath={.items[*].spec.replicas} 01:43:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:29 INFO [loop_until]: OK (rc = 0) 01:43:29 DEBUG --- stdout --- 01:43:29 DEBUG 3 01:43:29 DEBUG --- stderr --- 01:43:29 DEBUG 01:43:29 INFO 01:43:29 INFO ---------------------------- Get pod list ---------------------------- 01:43:29 INFO 01:43:29 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=ds-idrepo --output jsonpath={.items[*].metadata.name} 01:43:29 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0] 01:43:29 INFO [loop_until]: OK (rc = 0) 01:43:29 DEBUG --- stdout --- 01:43:29 DEBUG ds-idrepo-0 ds-idrepo-1 ds-idrepo-2 01:43:29 DEBUG --- stderr --- 01:43:29 DEBUG 01:43:29 INFO 01:43:29 INFO ------------------ Check pod ds-idrepo-0 is running ------------------ 01:43:29 INFO 01:43:29 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.phase} | grep "Running" 01:43:29 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 01:43:29 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 01:43:29 INFO [loop_until]: OK (rc = 0) 01:43:29 DEBUG --- stdout --- 01:43:29 DEBUG Running 01:43:29 DEBUG --- stderr --- 01:43:29 DEBUG 01:43:29 INFO 01:43:29 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 01:43:29 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 01:43:29 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 01:43:29 INFO [loop_until]: OK (rc = 0) 01:43:29 DEBUG --- stdout --- 01:43:29 DEBUG true 01:43:29 DEBUG --- stderr --- 01:43:29 DEBUG 01:43:29 INFO 01:43:29 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 --output jsonpath={.status.startTime} 01:43:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:29 INFO [loop_until]: OK (rc = 0) 01:43:29 DEBUG --- stdout --- 01:43:29 DEBUG 2023-08-12T00:00:55Z 01:43:29 DEBUG --- stderr --- 01:43:29 DEBUG 01:43:29 INFO 01:43:29 INFO ----------- Check pod ds-idrepo-0 filesystem is accessible ----------- 01:43:29 INFO 01:43:29 INFO [loop_until]: kubectl --namespace=xlou exec ds-idrepo-0 --container ds -- ls / | grep "bin" 01:43:29 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 01:43:29 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 01:43:29 INFO [loop_until]: OK (rc = 0) 01:43:29 DEBUG --- stdout --- 01:43:29 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 01:43:29 DEBUG --- stderr --- 01:43:29 DEBUG 01:43:29 INFO 01:43:29 INFO ----------------- Check pod ds-idrepo-0 restart count ----------------- 01:43:29 INFO 01:43:29 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 --output jsonpath={.status.containerStatuses[*].restartCount} 01:43:29 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:29 INFO [loop_until]: OK (rc = 0) 01:43:29 DEBUG --- stdout --- 01:43:29 DEBUG 0 01:43:29 DEBUG --- stderr --- 01:43:29 DEBUG 01:43:29 INFO Pod ds-idrepo-0 has been restarted 0 times. 
01:43:29 INFO 01:43:29 INFO ------------------ Check pod ds-idrepo-1 is running ------------------ 01:43:29 INFO 01:43:29 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.phase} | grep "Running" 01:43:29 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 01:43:29 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 01:43:29 INFO [loop_until]: OK (rc = 0) 01:43:29 DEBUG --- stdout --- 01:43:29 DEBUG Running 01:43:29 DEBUG --- stderr --- 01:43:29 DEBUG 01:43:29 INFO 01:43:29 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 01:43:29 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 01:43:30 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 01:43:30 INFO [loop_until]: OK (rc = 0) 01:43:30 DEBUG --- stdout --- 01:43:30 DEBUG true 01:43:30 DEBUG --- stderr --- 01:43:30 DEBUG 01:43:30 INFO 01:43:30 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 --output jsonpath={.status.startTime} 01:43:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:30 INFO [loop_until]: OK (rc = 0) 01:43:30 DEBUG --- stdout --- 01:43:30 DEBUG 2023-08-12T00:12:05Z 01:43:30 DEBUG --- stderr --- 01:43:30 DEBUG 01:43:30 INFO 01:43:30 INFO ----------- Check pod ds-idrepo-1 filesystem is accessible ----------- 01:43:30 INFO 01:43:30 INFO [loop_until]: kubectl --namespace=xlou exec ds-idrepo-1 --container ds -- ls / | grep "bin" 01:43:30 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 01:43:30 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 01:43:30 INFO [loop_until]: OK (rc = 0) 01:43:30 DEBUG --- stdout --- 01:43:30 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 01:43:30 DEBUG --- stderr --- 01:43:30 DEBUG 01:43:30 INFO 01:43:30 INFO ----------------- Check pod ds-idrepo-1 restart count ----------------- 01:43:30 INFO 01:43:30 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 --output jsonpath={.status.containerStatuses[*].restartCount} 01:43:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:30 INFO [loop_until]: OK (rc = 0) 01:43:30 DEBUG --- stdout --- 01:43:30 DEBUG 0 01:43:30 DEBUG --- stderr --- 01:43:30 DEBUG 01:43:30 INFO Pod ds-idrepo-1 has been restarted 0 times. 
01:43:30 INFO 01:43:30 INFO ------------------ Check pod ds-idrepo-2 is running ------------------ 01:43:30 INFO 01:43:30 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.phase} | grep "Running" 01:43:30 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 01:43:30 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 01:43:30 INFO [loop_until]: OK (rc = 0) 01:43:30 DEBUG --- stdout --- 01:43:30 DEBUG Running 01:43:30 DEBUG --- stderr --- 01:43:30 DEBUG 01:43:30 INFO 01:43:30 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 01:43:30 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 01:43:30 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 01:43:30 INFO [loop_until]: OK (rc = 0) 01:43:30 DEBUG --- stdout --- 01:43:30 DEBUG true 01:43:30 DEBUG --- stderr --- 01:43:30 DEBUG 01:43:30 INFO 01:43:30 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 --output jsonpath={.status.startTime} 01:43:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:30 INFO [loop_until]: OK (rc = 0) 01:43:30 DEBUG --- stdout --- 01:43:30 DEBUG 2023-08-12T00:22:59Z 01:43:30 DEBUG --- stderr --- 01:43:30 DEBUG 01:43:30 INFO 01:43:30 INFO ----------- Check pod ds-idrepo-2 filesystem is accessible ----------- 01:43:30 INFO 01:43:30 INFO [loop_until]: kubectl --namespace=xlou exec ds-idrepo-2 --container ds -- ls / | grep "bin" 01:43:30 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 01:43:30 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 01:43:30 INFO [loop_until]: OK (rc = 0) 01:43:30 DEBUG --- stdout --- 01:43:30 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 01:43:30 DEBUG --- stderr --- 01:43:30 DEBUG 01:43:30 INFO 01:43:30 INFO ----------------- Check pod ds-idrepo-2 restart count ----------------- 01:43:30 INFO 01:43:30 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 --output jsonpath={.status.containerStatuses[*].restartCount} 01:43:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:30 INFO [loop_until]: OK (rc = 0) 01:43:30 DEBUG --- stdout --- 01:43:30 DEBUG 0 01:43:30 DEBUG --- stderr --- 01:43:30 DEBUG 01:43:30 INFO Pod ds-idrepo-2 has been restarted 0 times. 
01:43:30 INFO 01:43:30 INFO --------------------- Get expected number of pods --------------------- 01:43:30 INFO 01:43:30 INFO [loop_until]: kubectl --namespace=xlou get statefulsets --selector app=ds-cts --output jsonpath={.items[*].spec.replicas} 01:43:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:30 INFO [loop_until]: OK (rc = 0) 01:43:30 DEBUG --- stdout --- 01:43:30 DEBUG 3 01:43:30 DEBUG --- stderr --- 01:43:30 DEBUG 01:43:30 INFO 01:43:30 INFO ---------------------------- Get pod list ---------------------------- 01:43:30 INFO 01:43:30 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=ds-cts --output jsonpath={.items[*].metadata.name} 01:43:30 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0] 01:43:30 INFO [loop_until]: OK (rc = 0) 01:43:30 DEBUG --- stdout --- 01:43:30 DEBUG ds-cts-0 ds-cts-1 ds-cts-2 01:43:30 DEBUG --- stderr --- 01:43:30 DEBUG 01:43:30 INFO 01:43:30 INFO -------------------- Check pod ds-cts-0 is running -------------------- 01:43:30 INFO 01:43:30 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.phase} | grep "Running" 01:43:30 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 01:43:30 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 01:43:30 INFO [loop_until]: OK (rc = 0) 01:43:30 DEBUG --- stdout --- 01:43:30 DEBUG Running 01:43:30 DEBUG --- stderr --- 01:43:30 DEBUG 01:43:30 INFO 01:43:30 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 01:43:30 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 01:43:30 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 01:43:30 INFO [loop_until]: OK (rc = 0) 01:43:30 DEBUG --- stdout --- 01:43:30 DEBUG true 01:43:30 DEBUG --- stderr --- 01:43:30 DEBUG 01:43:30 INFO 01:43:30 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-0 --output jsonpath={.status.startTime} 01:43:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:30 INFO [loop_until]: OK (rc = 0) 01:43:30 DEBUG --- stdout --- 01:43:30 DEBUG 2023-08-12T00:00:55Z 01:43:30 DEBUG --- stderr --- 01:43:30 DEBUG 01:43:30 INFO 01:43:30 INFO ------------- Check pod ds-cts-0 filesystem is accessible ------------- 01:43:30 INFO 01:43:30 INFO [loop_until]: kubectl --namespace=xlou exec ds-cts-0 --container ds -- ls / | grep "bin" 01:43:30 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 01:43:30 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 01:43:30 INFO [loop_until]: OK (rc = 0) 01:43:30 DEBUG --- stdout --- 01:43:30 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 01:43:30 DEBUG --- stderr --- 01:43:30 DEBUG 01:43:30 INFO 01:43:30 INFO ------------------ Check pod ds-cts-0 restart count ------------------ 01:43:30 INFO 01:43:30 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-0 --output jsonpath={.status.containerStatuses[*].restartCount} 01:43:30 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:30 INFO [loop_until]: OK (rc = 0) 01:43:30 DEBUG --- stdout --- 01:43:30 DEBUG 0 01:43:30 DEBUG --- stderr --- 01:43:30 DEBUG 01:43:30 INFO Pod ds-cts-0 has been restarted 0 times. 
01:43:30 INFO 01:43:30 INFO -------------------- Check pod ds-cts-1 is running -------------------- 01:43:30 INFO 01:43:30 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.phase} | grep "Running" 01:43:30 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 01:43:30 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 01:43:30 INFO [loop_until]: OK (rc = 0) 01:43:30 DEBUG --- stdout --- 01:43:30 DEBUG Running 01:43:30 DEBUG --- stderr --- 01:43:30 DEBUG 01:43:30 INFO 01:43:30 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 01:43:30 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 01:43:31 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 01:43:31 INFO [loop_until]: OK (rc = 0) 01:43:31 DEBUG --- stdout --- 01:43:31 DEBUG true 01:43:31 DEBUG --- stderr --- 01:43:31 DEBUG 01:43:31 INFO 01:43:31 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-1 --output jsonpath={.status.startTime} 01:43:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:31 INFO [loop_until]: OK (rc = 0) 01:43:31 DEBUG --- stdout --- 01:43:31 DEBUG 2023-08-12T00:01:23Z 01:43:31 DEBUG --- stderr --- 01:43:31 DEBUG 01:43:31 INFO 01:43:31 INFO ------------- Check pod ds-cts-1 filesystem is accessible ------------- 01:43:31 INFO 01:43:31 INFO [loop_until]: kubectl --namespace=xlou exec ds-cts-1 --container ds -- ls / | grep "bin" 01:43:31 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 01:43:31 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 01:43:31 INFO [loop_until]: OK (rc = 0) 01:43:31 DEBUG --- stdout --- 01:43:31 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 01:43:31 DEBUG --- stderr --- 01:43:31 DEBUG 01:43:31 INFO 01:43:31 INFO ------------------ Check pod ds-cts-1 restart count ------------------ 01:43:31 INFO 01:43:31 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-1 --output jsonpath={.status.containerStatuses[*].restartCount} 01:43:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:31 INFO [loop_until]: OK (rc = 0) 01:43:31 DEBUG --- stdout --- 01:43:31 DEBUG 0 01:43:31 DEBUG --- stderr --- 01:43:31 DEBUG 01:43:31 INFO Pod ds-cts-1 has been restarted 0 times. 
01:43:31 INFO 01:43:31 INFO -------------------- Check pod ds-cts-2 is running -------------------- 01:43:31 INFO 01:43:31 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.phase} | grep "Running" 01:43:31 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 01:43:31 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 01:43:31 INFO [loop_until]: OK (rc = 0) 01:43:31 DEBUG --- stdout --- 01:43:31 DEBUG Running 01:43:31 DEBUG --- stderr --- 01:43:31 DEBUG 01:43:31 INFO 01:43:31 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 01:43:31 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 01:43:31 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 01:43:31 INFO [loop_until]: OK (rc = 0) 01:43:31 DEBUG --- stdout --- 01:43:31 DEBUG true 01:43:31 DEBUG --- stderr --- 01:43:31 DEBUG 01:43:31 INFO 01:43:31 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-2 --output jsonpath={.status.startTime} 01:43:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:31 INFO [loop_until]: OK (rc = 0) 01:43:31 DEBUG --- stdout --- 01:43:31 DEBUG 2023-08-12T00:01:48Z 01:43:31 DEBUG --- stderr --- 01:43:31 DEBUG 01:43:31 INFO 01:43:31 INFO ------------- Check pod ds-cts-2 filesystem is accessible ------------- 01:43:31 INFO 01:43:31 INFO [loop_until]: kubectl --namespace=xlou exec ds-cts-2 --container ds -- ls / | grep "bin" 01:43:31 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 01:43:31 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 01:43:31 INFO [loop_until]: OK (rc = 0) 01:43:31 DEBUG --- stdout --- 01:43:31 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 01:43:31 DEBUG --- stderr --- 01:43:31 DEBUG 01:43:31 INFO 01:43:31 INFO ------------------ Check pod ds-cts-2 restart count ------------------ 01:43:31 INFO 01:43:31 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-2 --output jsonpath={.status.containerStatuses[*].restartCount} 01:43:31 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:31 INFO [loop_until]: OK (rc = 0) 01:43:31 DEBUG --- stdout --- 01:43:31 DEBUG 0 01:43:31 DEBUG --- stderr --- 01:43:31 DEBUG 01:43:31 INFO Pod ds-cts-2 has been restarted 0 times. * Serving Flask app 'lodemon_run' * Debug mode: off WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead. 
* Running on all addresses (0.0.0.0) * Running on http://127.0.0.1:8080 * Running on http://10.106.45.35:8080 Press CTRL+C to quit 01:43:52 INFO 01:43:52 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:43:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:52 INFO [loop_until]: OK (rc = 0) 01:43:52 DEBUG --- stdout --- 01:43:52 DEBUG {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:43:52 DEBUG --- stderr --- 01:43:52 DEBUG 01:43:52 INFO 01:43:52 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:43:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:52 INFO [loop_until]: OK (rc = 0) 01:43:52 DEBUG --- stdout --- 01:43:52 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:43:52 DEBUG --- stderr --- 01:43:52 DEBUG 01:43:52 INFO 01:43:52 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:43:52 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:53 INFO [loop_until]: OK (rc = 0) 01:43:53 DEBUG --- stdout --- 01:43:53 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:43:53 DEBUG --- stderr --- 01:43:53 DEBUG 01:43:53 INFO 01:43:53 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:43:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:53 INFO [loop_until]: OK (rc = 0) 01:43:53 DEBUG --- stdout --- 01:43:53 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:43:53 DEBUG --- stderr --- 01:43:53 DEBUG 01:43:53 INFO 01:43:53 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:43:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:53 INFO [loop_until]: OK (rc = 0) 01:43:53 DEBUG --- stdout --- 01:43:53 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:43:53 DEBUG --- stderr --- 01:43:53 DEBUG 01:43:53 INFO 01:43:53 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:43:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:53 INFO [loop_until]: OK (rc = 0) 01:43:53 DEBUG --- stdout --- 01:43:53 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:43:53 DEBUG --- stderr --- 01:43:53 DEBUG 01:43:53 INFO 01:43:53 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:43:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:53 INFO [loop_until]: OK (rc = 0) 01:43:53 DEBUG --- stdout --- 01:43:53 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:43:53 DEBUG --- stderr --- 01:43:53 DEBUG 01:43:53 INFO 01:43:53 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:43:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:53 INFO [loop_until]: OK (rc = 0) 01:43:53 DEBUG --- stdout --- 01:43:53 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:43:53 DEBUG --- stderr --- 01:43:53 DEBUG 01:43:53 INFO 01:43:53 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:43:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:53 INFO [loop_until]: OK (rc = 0) 01:43:53 DEBUG --- stdout --- 01:43:53 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:43:53 DEBUG --- stderr --- 01:43:53 DEBUG 01:43:53 INFO 01:43:53 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:43:53 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:53 INFO [loop_until]: OK (rc = 0) 01:43:53 DEBUG --- stdout --- 01:43:53 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:43:53 DEBUG --- stderr --- 01:43:53 DEBUG 01:43:54 INFO 01:43:54 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:43:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:54 INFO [loop_until]: OK (rc = 0) 01:43:54 DEBUG --- stdout --- 01:43:54 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:43:54 DEBUG --- stderr --- 01:43:54 DEBUG 01:43:54 INFO 01:43:54 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:43:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:54 INFO [loop_until]: OK (rc = 0) 01:43:54 DEBUG --- stdout --- 01:43:54 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:43:54 DEBUG --- stderr --- 01:43:54 DEBUG 01:43:54 INFO 01:43:54 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:43:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:54 INFO [loop_until]: OK (rc = 0) 01:43:54 DEBUG --- stdout --- 01:43:54 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:43:54 DEBUG --- stderr --- 01:43:54 DEBUG 01:43:54 INFO 01:43:54 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:43:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:54 INFO [loop_until]: OK (rc = 0) 01:43:54 DEBUG --- stdout --- 01:43:54 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:43:54 DEBUG --- stderr --- 01:43:54 DEBUG 01:43:54 INFO 01:43:54 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:43:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:54 INFO [loop_until]: OK (rc = 0) 01:43:54 DEBUG --- stdout --- 01:43:54 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:43:54 DEBUG --- stderr --- 01:43:54 DEBUG 01:43:54 INFO 01:43:54 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:43:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:54 INFO [loop_until]: OK (rc = 0) 01:43:54 DEBUG --- stdout --- 01:43:54 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:43:54 DEBUG --- stderr --- 01:43:54 DEBUG 01:43:54 INFO 01:43:54 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:43:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:54 INFO [loop_until]: OK (rc = 0) 01:43:54 DEBUG --- stdout --- 01:43:54 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:43:54 DEBUG --- stderr --- 01:43:54 DEBUG 01:43:54 INFO 01:43:54 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:43:54 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:55 INFO [loop_until]: OK (rc = 0) 01:43:55 DEBUG --- stdout --- 01:43:55 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:43:55 DEBUG --- stderr --- 01:43:55 DEBUG 01:43:55 INFO 01:43:55 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:43:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:55 INFO [loop_until]: OK (rc = 0) 01:43:55 DEBUG --- stdout --- 01:43:55 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:43:55 DEBUG --- stderr --- 01:43:55 DEBUG 01:43:55 INFO 01:43:55 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:43:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:55 INFO [loop_until]: OK (rc = 0) 01:43:55 DEBUG --- stdout --- 01:43:55 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:43:55 DEBUG --- stderr --- 01:43:55 DEBUG 01:43:55 INFO 01:43:55 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:43:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:55 INFO [loop_until]: OK (rc = 0) 01:43:55 DEBUG --- stdout --- 01:43:55 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:43:55 DEBUG --- stderr --- 01:43:55 DEBUG 01:43:55 INFO 01:43:55 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:43:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:55 INFO [loop_until]: OK (rc = 0) 01:43:55 DEBUG --- stdout --- 01:43:55 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:43:55 DEBUG --- stderr --- 01:43:55 DEBUG 01:43:55 INFO 01:43:55 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:43:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:55 INFO [loop_until]: OK (rc = 0) 01:43:55 DEBUG --- stdout --- 01:43:55 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:43:55 DEBUG --- stderr --- 01:43:55 DEBUG 01:43:55 INFO 01:43:55 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:43:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:55 INFO [loop_until]: OK (rc = 0) 01:43:55 DEBUG --- stdout --- 01:43:55 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:43:55 DEBUG --- stderr --- 01:43:55 DEBUG 01:43:55 INFO 01:43:55 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:43:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:55 INFO [loop_until]: OK (rc = 0) 01:43:55 DEBUG --- stdout --- 01:43:55 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:43:55 DEBUG --- stderr --- 01:43:55 DEBUG 01:43:55 INFO 01:43:55 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:43:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:56 INFO [loop_until]: OK (rc = 0) 01:43:56 DEBUG --- stdout --- 01:43:56 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:43:56 DEBUG --- stderr --- 01:43:56 DEBUG 01:43:56 INFO 01:43:56 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 01:43:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:56 INFO [loop_until]: OK (rc = 0) 01:43:56 DEBUG --- stdout --- 01:43:56 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 01:43:56 DEBUG --- stderr --- 01:43:56 DEBUG 01:43:56 INFO Initializing monitoring instance threads 01:43:56 DEBUG Monitoring instance thread list: [, , , , , , , , , , , , , , , , , , , , , , , , , , , , ] 01:43:56 INFO Starting instance threads 01:43:56 INFO 01:43:56 INFO Thread started 01:43:56 INFO [loop_until]: kubectl --namespace=xlou top node 01:43:56 INFO 01:43:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:56 INFO Thread started 01:43:56 INFO [loop_until]: kubectl --namespace=xlou top pods 01:43:56 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036" 01:43:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:43:56 INFO Thread started 01:43:56 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036" 01:43:56 INFO Thread started 01:43:56 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036" 01:43:56 INFO Thread started 01:43:56 INFO [http_cmd]: curl --insecure -L --request GET 
"http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036" 01:43:56 INFO Thread started 01:43:56 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691801036" 01:43:56 INFO Thread started 01:43:56 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691801036" 01:43:56 INFO Thread started 01:43:56 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691801036" 01:43:56 INFO Thread started 01:43:56 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691801036" 01:43:56 INFO Thread started 01:43:56 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691801036" 01:43:56 INFO Thread started 01:43:56 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036" 01:43:56 INFO Thread started 01:43:56 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036" 01:43:56 INFO Thread started 01:43:56 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036" 01:43:56 INFO Thread started 01:43:56 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036" 01:43:56 INFO Thread started 01:43:56 INFO [http_cmd]: curl --insecure -L --request GET 
"http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036" 01:43:56 INFO Thread started 01:43:56 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036" 01:43:56 INFO Thread started 01:43:56 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036" 01:43:56 INFO Thread started 01:43:56 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036" 01:43:56 INFO Thread started 01:43:56 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036" 01:43:56 INFO Thread started 01:43:56 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036" 01:43:56 INFO Thread started 01:43:56 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036" 01:43:56 INFO Thread started Exception in thread Thread-23: 01:43:56 INFO Thread started Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner Exception in thread Thread-24: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner 01:43:56 INFO Thread started self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run Exception in thread Thread-25: Traceback (most recent call last): 01:43:56 INFO Thread started File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner 01:43:56 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691801036" 01:43:56 INFO Thread started self._target(*self._args, **self._kwargs) self.run() 01:43:56 INFO [http_cmd]: curl --insecure -L --request GET 
"http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691801036" File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop 01:43:56 INFO Thread started self.run() 01:43:56 INFO Thread started Exception in thread Thread-28: 01:43:56 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691801036" File "/usr/local/lib/python3.9/threading.py", line 910, in run 01:43:56 INFO Thread started 01:43:56 INFO All threads has been started File "/usr/local/lib/python3.9/threading.py", line 910, in run instance.run() Traceback (most recent call last): self._target(*self._args, **self._kwargs) 127.0.0.1 - - [12/Aug/2023 01:43:56] "GET /monitoring/start HTTP/1.1" 200 - File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run self._target(*self._args, **self._kwargs) instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop self.run() if self.prom_data['functions']: File "/usr/local/lib/python3.9/threading.py", line 910, in run File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run KeyError: 'functions' instance.run() if self.prom_data['functions']: File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run self._target(*self._args, **self._kwargs) KeyError: 'functions' File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop if self.prom_data['functions']: KeyError: 'functions' instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run if self.prom_data['functions']: KeyError: 'functions' 01:43:56 INFO [loop_until]: OK (rc = 0) 01:43:56 DEBUG --- stdout --- 01:43:56 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 15m 3403Mi am-55f77847b7-l482k 21m 3971Mi am-55f77847b7-qhqgg 14m 3595Mi ds-cts-0 8m 373Mi ds-cts-1 9m 377Mi ds-cts-2 8m 352Mi ds-idrepo-0 28m 10291Mi ds-idrepo-1 17m 10365Mi ds-idrepo-2 43m 10284Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 7m 3436Mi idm-65858d8c4c-vdncx 9m 1293Mi lodemon-7b659c988b-78sgh 394m 60Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 1m 15Mi 01:43:56 DEBUG --- stderr --- 01:43:56 DEBUG 01:43:56 INFO [loop_until]: OK (rc = 0) 01:43:56 DEBUG --- stdout --- 01:43:56 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 466m 2% 1322Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 77m 0% 4418Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 73m 0% 4720Mi 8% gke-xlou-cdm-default-pool-f05840a3-9p4b 69m 0% 5076Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 82m 0% 4740Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 127m 0% 2102Mi 3% 
gke-xlou-cdm-default-pool-f05840a3-tnc9 76m 0% 2541Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 81m 0% 10942Mi 18% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1070Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 1069Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 66m 0% 10998Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 92m 0% 10906Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 65m 0% 1623Mi 2% 01:43:56 DEBUG --- stderr --- 01:43:56 DEBUG 01:43:57 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:43:57 WARNING Response is NONE 01:43:57 DEBUG Exception is preset. Setting retry_loop to true 01:43:57 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:43:59 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:43:59 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:43:59 WARNING Response is NONE 01:43:59 WARNING Response is NONE 01:43:59 DEBUG Exception is preset. Setting retry_loop to true 01:43:59 DEBUG Exception is preset. Setting retry_loop to true 01:43:59 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:43:59 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:44:03 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:44:03 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 01:44:03 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:44:03 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:44:03 WARNING Response is NONE 01:44:03 WARNING Response is NONE 01:44:03 WARNING Response is NONE 01:44:03 WARNING Response is NONE 01:44:03 DEBUG Exception is preset. Setting retry_loop to true 01:44:03 DEBUG Exception is preset. Setting retry_loop to true 01:44:03 DEBUG Exception is preset. Setting retry_loop to true 01:44:03 DEBUG Exception is preset. Setting retry_loop to true 01:44:03 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:44:03 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:44:03 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:44:03 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:44:08 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:44:08 WARNING Response is NONE 01:44:08 DEBUG Exception is preset. Setting retry_loop to true 01:44:08 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:44:10 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:44:10 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 01:44:10 WARNING Response is NONE 01:44:10 WARNING Response is NONE 01:44:10 DEBUG Exception is preset. Setting retry_loop to true 01:44:10 DEBUG Exception is preset. Setting retry_loop to true 01:44:10 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:44:10 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:44:11 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:44:11 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:44:11 WARNING Response is NONE 01:44:11 WARNING Response is NONE 01:44:11 DEBUG Exception is preset. Setting retry_loop to true 01:44:11 DEBUG Exception is preset. Setting retry_loop to true 01:44:11 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:44:11 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:44:14 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:44:14 WARNING Response is NONE 01:44:14 DEBUG Exception is preset. Setting retry_loop to true 01:44:14 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:44:16 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:44:16 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 01:44:16 WARNING Response is NONE 01:44:16 WARNING Response is NONE 01:44:16 DEBUG Exception is preset. Setting retry_loop to true 01:44:16 DEBUG Exception is preset. Setting retry_loop to true 01:44:16 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:44:16 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:44:19 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:44:19 WARNING Response is NONE 01:44:19 DEBUG Exception is preset. Setting retry_loop to true 01:44:19 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:44:21 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:44:21 WARNING Response is NONE 01:44:21 DEBUG Exception is preset. Setting retry_loop to true 01:44:21 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:44:22 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:44:22 WARNING Response is NONE 01:44:22 DEBUG Exception is preset. Setting retry_loop to true 01:44:22 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:44:23 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:44:23 WARNING Response is NONE 01:44:23 DEBUG Exception is preset. Setting retry_loop to true 01:44:23 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
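The warnings above are all failures of Prometheus instant queries: each URL hits /api/v1/query on prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090 with a URL-encoded PromQL expression and an evaluation timestamp (time=1691801036), and the TCP connection is refused before any response arrives. A minimal sketch of that kind of request is below, assuming only the standard Prometheus HTTP API; the helper name and the use of requests are illustrative and are not the monitor's actual HttpCmd wrapper.

```python
# Sketch of the instant queries the monitor is issuing (illustrative only;
# the real code goes through shared.lib.utils.HttpCmd, not requests).
import requests

PROM = "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090"

def instant_query(promql: str, ts: int, timeout: float = 10.0) -> list:
    """Run a PromQL instant query evaluated at the given UNIX timestamp."""
    resp = requests.get(
        f"{PROM}/api/v1/query",
        params={"query": promql, "time": ts},  # requests URL-encodes the PromQL
        timeout=timeout,
    )
    resp.raise_for_status()
    body = resp.json()
    if body.get("status") != "success":
        raise RuntimeError(f"Prometheus returned {body.get('status')}: {body.get('error')}")
    return body["data"]["result"]

if __name__ == "__main__":
    # One of the queries seen in the log, evaluated at time=1691801036.
    q = "sum(rate(am_authentication_count{namespace='xlou'}[60s]))by(pod)"
    print(instant_query(q, 1691801036))
```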
01:44:25 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:44:25 WARNING Response is NONE 01:44:25 DEBUG Exception is preset. Setting retry_loop to true 01:44:25 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:44:27 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:44:27 WARNING Response is NONE 01:44:27 DEBUG Exception is preset. Setting retry_loop to true 01:44:27 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:44:28 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:44:28 WARNING Response is NONE 01:44:28 DEBUG Exception is preset. Setting retry_loop to true 01:44:28 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:44:30 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:44:30 WARNING Response is NONE 01:44:30 DEBUG Exception is preset. Setting retry_loop to true 01:44:30 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:44:32 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:44:32 WARNING Response is NONE 01:44:32 DEBUG Exception is preset. Setting retry_loop to true 01:44:32 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
01:44:34 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:44:34 WARNING Response is NONE 01:44:34 DEBUG Exception is preset. Setting retry_loop to true 01:44:34 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:44:36 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:44:36 WARNING Response is NONE 01:44:36 DEBUG Exception is preset. Setting retry_loop to true 01:44:36 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:44:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:44:38 WARNING Response is NONE 01:44:38 DEBUG Exception is preset. Setting retry_loop to true 01:44:38 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:44:39 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:44:39 WARNING Response is NONE 01:44:39 DEBUG Exception is preset. Setting retry_loop to true 01:44:39 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:44:41 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:44:41 WARNING Response is NONE 01:44:41 DEBUG Exception is preset. Setting retry_loop to true 01:44:41 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
01:44:43 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:44:43 WARNING Response is NONE 01:44:43 DEBUG Exception is preset. Setting retry_loop to true 01:44:43 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:44:45 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:44:45 WARNING Response is NONE 01:44:45 DEBUG Exception is preset. Setting retry_loop to true 01:44:45 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:44:47 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:44:47 WARNING Response is NONE 01:44:47 DEBUG Exception is preset. Setting retry_loop to true 01:44:47 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:44:49 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:44:49 WARNING Response is NONE 01:44:49 DEBUG Exception is preset. Setting retry_loop to true 01:44:49 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:44:50 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:44:50 WARNING Response is NONE 01:44:50 DEBUG Exception is preset. Setting retry_loop to true 01:44:50 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
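The repeating "Checking if error is transient one" / "sleeping for 10 secs before retry" cycle shows how the HTTP wrapper handles these failures: the connection error is classified as a known, transient exception, the request is retried after a 10-second pause, and after the fifth attempt the code gives up and proceeds to check the (missing) response. The following is a rough reconstruction of that behaviour using only the limits printed in the log; it is a sketch, not the real HttpCmd.request_cmd.

```python
# Rough reconstruction of the retry behaviour visible in the log: treat connection
# errors as transient, sleep 10s between attempts, give up after 5 retries.
# Names (fetch) and structure are illustrative, not the harness's actual code.
import time
import requests

MAX_RETRIES = 5          # "Hit retry pattern for a 5 time" in the log
RETRY_SLEEP_SECS = 10    # "sleeping for 10 secs before retry"

def fetch(url: str) -> requests.Response:
    last_exc = None
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            return requests.get(url, timeout=10)
        except requests.exceptions.ConnectionError as exc:  # ECONNREFUSED / ETIMEDOUT
            last_exc = exc
            print(f"Got connection reset error: {exc}. Checking if error is transient one")
            if attempt < MAX_RETRIES:
                print("We received known exception. Trying to recover, "
                      f"sleeping for {RETRY_SLEEP_SECS} secs before retry...")
                time.sleep(RETRY_SLEEP_SECS)
    # "Proceeding to check response anyway" -> there is no response, so fail here.
    raise RuntimeError("Failed to obtain response from server...") from last_exc
```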
01:44:52 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:44:52 WARNING Response is NONE 01:44:52 DEBUG Exception is preset. Setting retry_loop to true 01:44:52 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-29: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 01:44:54 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:44:54 WARNING Response is NONE 01:44:54 DEBUG Exception is preset. Setting retry_loop to true 01:44:54 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-7: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 01:44:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:44:56 WARNING Response is NONE 01:44:56 DEBUG Exception is preset. Setting retry_loop to true 01:44:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:44:56 INFO 01:44:56 INFO [loop_until]: kubectl --namespace=xlou top pods 01:44:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:44:56 INFO 01:44:56 INFO [loop_until]: kubectl --namespace=xlou top node 01:44:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:44:56 INFO [loop_until]: OK (rc = 0) 01:44:56 DEBUG --- stdout ---
01:44:56 DEBUG NAME                          CPU(cores)  MEMORY(bytes)
               admin-ui-587fc66dd5-5thnv     1m          4Mi
               am-55f77847b7-d6t28           12m         3407Mi
               am-55f77847b7-l482k           21m         3972Mi
               am-55f77847b7-qhqgg           15m         3607Mi
               ds-cts-0                      8m          387Mi
               ds-cts-1                      7m          384Mi
               ds-cts-2                      7m          353Mi
               ds-idrepo-0                   539m        10306Mi
               ds-idrepo-1                   20m         10367Mi
               ds-idrepo-2                   89m         10292Mi
               end-user-ui-6845bc78c7-5gwx2  1m          4Mi
               idm-65858d8c4c-4x2bg          6m          3436Mi
               idm-65858d8c4c-vdncx          9m          1306Mi
               lodemon-7b659c988b-78sgh      3m          66Mi
               login-ui-74d6fb46c-m9zk9      1m          3Mi
               overseer-0-88bc47db9-ntwzj    180m        48Mi
01:44:56 DEBUG --- stderr --- 01:44:56 DEBUG 01:44:56 INFO [loop_until]: OK (rc = 0) 01:44:56 DEBUG --- stdout ---
01:44:56 DEBUG NAME                                     CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
               gke-xlou-cdm-default-pool-f05840a3-2nsn  81m         0%    1326Mi         2%
               gke-xlou-cdm-default-pool-f05840a3-5pbc  68m         0%    4422Mi         7%
               gke-xlou-cdm-default-pool-f05840a3-976h  69m         0%    4731Mi         8%
               gke-xlou-cdm-default-pool-f05840a3-9p4b  71m         0%    5069Mi         8%
               gke-xlou-cdm-default-pool-f05840a3-bf2g  80m         0%    4743Mi         8%
               gke-xlou-cdm-default-pool-f05840a3-h81k  121m        0%    2102Mi         3%
               gke-xlou-cdm-default-pool-f05840a3-tnc9  73m         0%    2556Mi         4%
               gke-xlou-cdm-ds-32e4dcb1-1l6p            328m        2%    10960Mi        18%
               gke-xlou-cdm-ds-32e4dcb1-4z9d            60m         0%    1077Mi         1%
               gke-xlou-cdm-ds-32e4dcb1-8bsn            58m         0%    1068Mi         1%
               gke-xlou-cdm-ds-32e4dcb1-b374            63m         0%    11000Mi        18%
               gke-xlou-cdm-ds-32e4dcb1-n920            55m         0%    1104Mi         1%
               gke-xlou-cdm-ds-32e4dcb1-x4wx            71m         0%    10913Mi        18%
               gke-xlou-cdm-frontend-a8771548-k40m      262m        1%    1624Mi         2%
01:44:56 DEBUG --- stderr --- 01:44:56 DEBUG 01:44:58 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to
establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:44:58 WARNING Response is NONE 01:44:58 DEBUG Exception is preset. Setting retry_loop to true 01:44:58 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-21: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 01:45:00 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:45:00 WARNING Response is NONE 01:45:00 DEBUG Exception is preset. Setting retry_loop to true 01:45:00 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-17: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 01:45:01 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:45:01 WARNING Response is NONE 01:45:01 DEBUG Exception is preset. Setting retry_loop to true 01:45:01 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:45:04 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:45:04 WARNING Response is NONE 01:45:04 DEBUG Exception is preset. Setting retry_loop to true 01:45:04 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:45:07 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:45:07 WARNING Response is NONE 01:45:07 DEBUG Exception is preset. Setting retry_loop to true 01:45:07 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:45:12 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:45:12 WARNING Response is NONE 01:45:12 DEBUG Exception is preset. Setting retry_loop to true 01:45:12 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
Exception in thread Thread-5: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 01:45:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:45:15 WARNING Response is NONE 01:45:15 DEBUG Exception is preset. Setting retry_loop to true 01:45:15 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:45:16 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:45:16 WARNING Response is NONE 01:45:16 DEBUG Exception is preset. Setting retry_loop to true 01:45:16 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:45:18 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:45:18 WARNING Response is NONE 01:45:18 DEBUG Exception is preset. Setting retry_loop to true 01:45:18 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
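Each traceback pair above shows the same two-stage failure: the primary error is the expected FailException('Failed to obtain response from server...') after five retries, but the handler at monitoring.py line 315 then calls self.logger(...) as if the logger were a function, and LodestarLogger is not callable, so the thread dies with a TypeError instead of logging the query failure. A minimal reproduction and one plausible fix are sketched below, assuming LodestarLogger wraps a standard logging.Logger; everything beyond the failing call pattern taken from the traceback is hypothetical.

```python
# Minimal reproduction of the secondary failure in the tracebacks, plus one possible fix.
# LodestarLogger internals are hypothetical; only the failing call is from the log.
import logging

class LodestarLogger:
    def __init__(self, name: str = "lodemon"):
        self._log = logging.getLogger(name)

    def warning(self, msg: str) -> None:
        self._log.warning(msg)

logger = LodestarLogger()
query = "sum(rate(am_authentication_count{namespace='xlou'}[60s]))by(pod)"
e = "Failed to obtain response from server..."

# What monitoring.py line 315 effectively does today -> TypeError, killing the thread:
try:
    logger(f'Query: {query} failed with: {e}')  # 'LodestarLogger' object is not callable
except TypeError as err:
    print(f"reproduced: {err}")

# Possible fix A: call a logging method instead of the object itself.
logger.warning(f'Query: {query} failed with: {e}')

# Possible fix B: make the wrapper callable so existing call sites keep working.
# class LodestarLogger:
#     def __call__(self, msg: str) -> None:
#         self._log.warning(msg)
```

Either variant keeps the underlying query failure visible in the lodemon log instead of replacing it with an unrelated TypeError that terminates the polling thread.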
01:45:26 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:45:26 WARNING Response is NONE 01:45:26 DEBUG Exception is preset. Setting retry_loop to true 01:45:26 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:45:27 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:45:27 WARNING Response is NONE 01:45:27 DEBUG Exception is preset. Setting retry_loop to true 01:45:27 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-18: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 01:45:29 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:45:29 WARNING Response is NONE 01:45:29 DEBUG Exception is preset. Setting retry_loop to true 01:45:29 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
Exception in thread Thread-22: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 01:45:30 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:45:30 WARNING Response is NONE 01:45:30 DEBUG Exception is preset. Setting retry_loop to true 01:45:30 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:45:37 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:45:37 WARNING Response is NONE 01:45:37 DEBUG Exception is preset. Setting retry_loop to true 01:45:37 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-8: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 01:45:41 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:45:41 WARNING Response is NONE 01:45:41 DEBUG Exception is preset. Setting retry_loop to true 01:45:41 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:45:52 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:45:52 WARNING Response is NONE 01:45:52 DEBUG Exception is preset. Setting retry_loop to true 01:45:52 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
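The [loop_until] entries that bracket the kubectl top pods / kubectl top node snapshots (one pair appears just below) poll a shell command until it exits with an expected return code or a time budget runs out, printing the command, its limits (max_time, interval, expected_rc) and the captured stdout/stderr. A small sketch of that polling pattern follows; the helper name and return value are illustrative, and only the parameters printed in the log are assumed.

```python
# Sketch of the [loop_until] polling pattern printed in the log: rerun a command
# every `interval` seconds until its return code is in `expected_rc` or `max_time`
# elapses. Helper name and return shape are illustrative, not the harness's API.
import subprocess
import time

def loop_until(cmd: str, max_time: int = 180, interval: int = 5, expected_rc=(0,)):
    deadline = time.monotonic() + max_time
    while True:
        proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        if proc.returncode in expected_rc:
            print(f"[loop_until]: OK (rc = {proc.returncode})")
            return proc.stdout, proc.stderr
        if time.monotonic() >= deadline:
            raise TimeoutError(f"[loop_until]: gave up after {max_time}s: {cmd}")
        time.sleep(interval)

# Example matching the log (requires kubectl access to the cluster):
# stdout, _ = loop_until("kubectl --namespace=xlou top pods", max_time=180, interval=5)
```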
01:45:56 INFO 01:45:56 INFO [loop_until]: kubectl --namespace=xlou top pods 01:45:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:45:56 INFO 01:45:56 INFO [loop_until]: kubectl --namespace=xlou top node 01:45:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:45:56 INFO [loop_until]: OK (rc = 0) 01:45:56 DEBUG --- stdout ---
01:45:56 DEBUG NAME                          CPU(cores)  MEMORY(bytes)
               admin-ui-587fc66dd5-5thnv     1m          4Mi
               am-55f77847b7-d6t28           15m         3408Mi
               am-55f77847b7-l482k           21m         3972Mi
               am-55f77847b7-qhqgg           12m         3618Mi
               ds-cts-0                      7m          387Mi
               ds-cts-1                      7m          385Mi
               ds-cts-2                      9m          353Mi
               ds-idrepo-0                   22m         10308Mi
               ds-idrepo-1                   17m         10371Mi
               ds-idrepo-2                   25m         10293Mi
               end-user-ui-6845bc78c7-5gwx2  1m          4Mi
               idm-65858d8c4c-4x2bg          7m          3437Mi
               idm-65858d8c4c-vdncx          10m         1317Mi
               lodemon-7b659c988b-78sgh      2m          66Mi
               login-ui-74d6fb46c-m9zk9      1m          3Mi
               overseer-0-88bc47db9-ntwzj    1m          48Mi
01:45:56 DEBUG --- stderr --- 01:45:56 DEBUG 01:45:56 INFO [loop_until]: OK (rc = 0) 01:45:56 DEBUG --- stdout ---
01:45:56 DEBUG NAME                                     CPU(cores)  CPU%  MEMORY(bytes)  MEMORY%
               gke-xlou-cdm-default-pool-f05840a3-2nsn  77m         0%    1324Mi         2%
               gke-xlou-cdm-default-pool-f05840a3-5pbc  67m         0%    4420Mi         7%
               gke-xlou-cdm-default-pool-f05840a3-976h  70m         0%    4745Mi         8%
               gke-xlou-cdm-default-pool-f05840a3-9p4b  69m         0%    5071Mi         8%
               gke-xlou-cdm-default-pool-f05840a3-bf2g  74m         0%    4739Mi         8%
               gke-xlou-cdm-default-pool-f05840a3-h81k  124m        0%    2106Mi         3%
               gke-xlou-cdm-default-pool-f05840a3-tnc9  74m         0%    2565Mi         4%
               gke-xlou-cdm-ds-32e4dcb1-1l6p            75m         0%    10960Mi        18%
               gke-xlou-cdm-ds-32e4dcb1-4z9d            62m         0%    1077Mi         1%
               gke-xlou-cdm-ds-32e4dcb1-8bsn            59m         0%    1070Mi         1%
               gke-xlou-cdm-ds-32e4dcb1-b374            66m         0%    11000Mi        18%
               gke-xlou-cdm-ds-32e4dcb1-n920            59m         0%    1109Mi         1%
               gke-xlou-cdm-ds-32e4dcb1-x4wx            74m         0%    10917Mi        18%
               gke-xlou-cdm-frontend-a8771548-k40m      64m         0%    1623Mi         2%
01:45:56 DEBUG --- stderr --- 01:45:56 DEBUG 01:46:03 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:46:03 WARNING Response is NONE 01:46:03 DEBUG Exception is preset. Setting retry_loop to true 01:46:03 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-15: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server...
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 01:46:05 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:46:05 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 01:46:05 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 01:46:05 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 01:46:05 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). 
Checking if error is transient one 01:46:05 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:46:05 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:46:05 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 01:46:05 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 01:46:05 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 01:46:05 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 01:46:05 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). 
Checking if error is transient one 01:46:05 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 01:46:05 WARNING Response is NONE 01:46:05 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:46:05 WARNING Response is NONE 01:46:05 WARNING Response is NONE 01:46:05 WARNING Response is NONE 01:46:05 WARNING Response is NONE 01:46:05 WARNING Response is NONE 01:46:05 WARNING Response is NONE 01:46:05 WARNING Response is NONE 01:46:05 WARNING Response is NONE 01:46:05 WARNING Response is NONE 01:46:05 WARNING Response is NONE 01:46:05 WARNING Response is NONE 01:46:05 WARNING Response is NONE 01:46:05 DEBUG Exception is preset. Setting retry_loop to true 01:46:05 WARNING Response is NONE 01:46:05 DEBUG Exception is preset. Setting retry_loop to true 01:46:05 DEBUG Exception is preset. Setting retry_loop to true 01:46:05 DEBUG Exception is preset. Setting retry_loop to true 01:46:05 DEBUG Exception is preset. Setting retry_loop to true 01:46:05 DEBUG Exception is preset. Setting retry_loop to true 01:46:05 DEBUG Exception is preset. Setting retry_loop to true 01:46:05 DEBUG Exception is preset. Setting retry_loop to true 01:46:05 DEBUG Exception is preset. Setting retry_loop to true 01:46:05 DEBUG Exception is preset. Setting retry_loop to true 01:46:05 DEBUG Exception is preset. Setting retry_loop to true 01:46:05 DEBUG Exception is preset. Setting retry_loop to true 01:46:05 DEBUG Exception is preset. Setting retry_loop to true 01:46:05 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:46:05 DEBUG Exception is preset. Setting retry_loop to true 01:46:05 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:46:05 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:46:05 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:46:05 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:46:05 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:46:05 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:46:05 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:46:05 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:46:05 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:46:05 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:46:05 WARNING We received known exception. 
Trying to recover, sleeping for 10 secs before retry... 01:46:05 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:46:05 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:46:16 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:46:16 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:46:16 WARNING Response is NONE 01:46:16 WARNING Response is NONE 01:46:16 DEBUG Exception is preset. Setting retry_loop to true 01:46:16 DEBUG Exception is preset. Setting retry_loop to true 01:46:16 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:46:16 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:46:18 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:46:18 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:46:18 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:46:18 WARNING Response is NONE 01:46:18 WARNING Response is NONE 01:46:18 WARNING Response is NONE 01:46:18 DEBUG Exception is preset. Setting retry_loop to true 01:46:18 DEBUG Exception is preset. Setting retry_loop to true 01:46:18 DEBUG Exception is preset. Setting retry_loop to true 01:46:18 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
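The thread names in the tracebacks (Thread-5, Thread-7, Thread-15, Thread-17, Thread-18, Thread-21, Thread-22, Thread-29), together with the burst of near-simultaneous warnings at 01:46:05, suggest that lodemon_service.execute_monitoring_instance_in_loop runs one polling thread per PromQL expression, so a Prometheus outage is reported once per metric and each TypeError above silently removes a single metric from collection while the remaining threads keep retrying. A schematic of that fan-out follows; every name beyond those appearing in the tracebacks is illustrative.

```python
# Schematic of the per-query fan-out implied by the thread names in the tracebacks:
# one polling thread per PromQL expression, each looping independently.
# Everything except the module/function names from the traceback is illustrative.
import threading
import time

QUERIES = [
    "sum(rate(am_authentication_count{namespace='xlou'}[60s]))by(pod)",
    "sum(node_namespace_pod_container:container_memory_working_set_bytes)by(node)",
    "sum(rate(node_cpu_seconds_total{mode='iowait'}[60s]))by(instance)",
]

def execute_monitoring_instance_in_loop(query: str, period: int = 60) -> None:
    """Poll one PromQL expression forever; errors are logged and the loop continues."""
    while True:
        try:
            print(f"polling: {query}")  # instance.run() would query Prometheus here
        except Exception as exc:
            # monitoring.py intends to log and keep going; in the real code this
            # logging call itself raised TypeError, which escaped and killed the thread.
            print(f"Query: {query} failed with: {exc}")
        time.sleep(period)

threads = [
    threading.Thread(target=execute_monitoring_instance_in_loop, args=(q,), daemon=True)
    for q in QUERIES
]
for t in threads:
    t.start()
```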
01:46:18 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:46:18 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:46:22 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:46:22 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:46:22 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:46:22 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:46:22 WARNING Response is NONE 01:46:22 WARNING Response is NONE 01:46:22 WARNING Response is NONE 01:46:22 WARNING Response is NONE 01:46:22 DEBUG Exception is preset. Setting retry_loop to true 01:46:22 DEBUG Exception is preset. Setting retry_loop to true 01:46:22 DEBUG Exception is preset. Setting retry_loop to true 01:46:22 DEBUG Exception is preset. Setting retry_loop to true 01:46:22 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:46:22 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:46:22 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:46:22 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:46:27 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 01:46:27 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:46:27 WARNING Response is NONE 01:46:27 WARNING Response is NONE 01:46:27 DEBUG Exception is preset. Setting retry_loop to true 01:46:27 DEBUG Exception is preset. Setting retry_loop to true 01:46:27 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:46:27 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:46:29 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:46:29 WARNING Response is NONE 01:46:29 DEBUG Exception is preset. Setting retry_loop to true 01:46:29 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:46:31 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:46:31 WARNING Response is NONE 01:46:31 DEBUG Exception is preset. Setting retry_loop to true 01:46:31 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:46:31 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:46:31 WARNING Response is NONE 01:46:31 DEBUG Exception is preset. Setting retry_loop to true 01:46:31 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:46:33 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:46:33 WARNING Response is NONE 01:46:33 DEBUG Exception is preset. Setting retry_loop to true 01:46:33 WARNING We received known exception. 
Trying to recover, sleeping for 10 secs before retry... 01:46:35 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:46:35 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:46:35 WARNING Response is NONE 01:46:35 WARNING Response is NONE 01:46:35 DEBUG Exception is preset. Setting retry_loop to true 01:46:35 DEBUG Exception is preset. Setting retry_loop to true 01:46:35 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:46:35 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:46:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:46:38 WARNING Response is NONE 01:46:38 DEBUG Exception is preset. Setting retry_loop to true 01:46:38 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:46:40 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:46:40 WARNING Response is NONE 01:46:40 DEBUG Exception is preset. Setting retry_loop to true 01:46:40 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:46:42 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:46:42 WARNING Response is NONE 01:46:42 DEBUG Exception is preset. Setting retry_loop to true 01:46:42 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
01:46:44 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:46:44 WARNING Response is NONE 01:46:44 DEBUG Exception is preset. Setting retry_loop to true 01:46:44 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:46:46 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:46:46 WARNING Response is NONE 01:46:46 DEBUG Exception is preset. Setting retry_loop to true 01:46:46 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:46:48 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:46:48 WARNING Response is NONE 01:46:48 DEBUG Exception is preset. Setting retry_loop to true 01:46:48 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:46:49 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:46:49 WARNING Response is NONE 01:46:49 DEBUG Exception is preset. Setting retry_loop to true 01:46:49 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:46:51 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:46:51 WARNING Response is NONE 01:46:51 DEBUG Exception is preset. Setting retry_loop to true 01:46:51 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
01:46:54 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:46:54 WARNING Response is NONE 01:46:54 DEBUG Exception is preset. Setting retry_loop to true 01:46:54 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:46:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:46:56 WARNING Response is NONE 01:46:56 DEBUG Exception is preset. Setting retry_loop to true 01:46:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:46:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:46:56 WARNING Response is NONE 01:46:56 DEBUG Exception is preset. Setting retry_loop to true 01:46:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
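
The cadence of the warnings above (check whether the error is transient, sleep 10 seconds, retry, and after the fifth attempt "proceed to check response anyway") suggests a recovery loop roughly like the sketch below. This is only an illustration of the behaviour the log describes, not the monitoring.py implementation; the function and parameter names are invented.

import time
import requests

def query_with_recovery(url, params, attempts=5, pause=10):
    """Sketch of the retry behaviour seen in the log: on a connection error,
    sleep and retry; after the last attempt, fall through with whatever we have."""
    response = None
    for attempt in range(1, attempts + 1):
        try:
            response = requests.get(url, params=params, timeout=10)
            break
        except requests.exceptions.ConnectionError:
            # "Got connection reset error ... Checking if error is transient one"
            if attempt == attempts:
                # "Hit retry pattern ... Proceeding to check response anyway."
                break
            # "We received known exception. Trying to recover, sleeping for 10 secs before retry..."
            time.sleep(pause)
    return response  # may be None, which is what "Response is NONE" reports
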
01:46:56 INFO 01:46:56 INFO 01:46:56 INFO [loop_until]: kubectl --namespace=xlou top pods 01:46:56 INFO [loop_until]: kubectl --namespace=xlou top node 01:46:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:46:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:46:56 INFO [loop_until]: OK (rc = 0) 01:46:56 DEBUG --- stdout --- 01:46:56 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 19m 3408Mi am-55f77847b7-l482k 15m 3972Mi am-55f77847b7-qhqgg 12m 3631Mi ds-cts-0 11m 388Mi ds-cts-1 11m 385Mi ds-cts-2 8m 353Mi ds-idrepo-0 21m 10308Mi ds-idrepo-1 50m 10367Mi ds-idrepo-2 39m 10290Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 5m 3437Mi idm-65858d8c4c-vdncx 8m 1326Mi lodemon-7b659c988b-78sgh 4m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 369m 99Mi 01:46:56 DEBUG --- stderr --- 01:46:56 DEBUG 01:46:56 INFO [loop_until]: OK (rc = 0) 01:46:56 DEBUG --- stdout --- 01:46:56 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1324Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 74m 0% 4419Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 71m 0% 4754Mi 8% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 5076Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 4741Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 120m 0% 2105Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 65m 0% 2576Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 76m 0% 10961Mi 18% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1078Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1071Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 82m 0% 11001Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 77m 0% 10912Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 361m 2% 1625Mi 2% 01:46:56 DEBUG --- stderr --- 01:46:56 DEBUG 01:46:58 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:46:58 WARNING Response is NONE 01:46:58 DEBUG Exception is preset. Setting retry_loop to true 01:46:58 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:46:59 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:46:59 WARNING Response is NONE 01:46:59 DEBUG Exception is preset. Setting retry_loop to true 01:46:59 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
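
The [loop_until] lines wrapping the kubectl top calls above record the polling parameters (max_time=180, interval=5, expected_rc=[0]). A rough stand-in for such a wrapper, purely for illustration since the real loop_until helper is not shown in this log, could look like:

import subprocess
import time

def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,)):
    """Re-run cmd until its return code is acceptable or max_time elapses."""
    deadline = time.monotonic() + max_time
    while True:
        proc = subprocess.run(cmd, capture_output=True, text=True)
        if proc.returncode in expected_rc:
            return proc.stdout
        if time.monotonic() >= deadline:
            raise TimeoutError(f"{cmd!r} did not succeed within {max_time}s")
        time.sleep(interval)

top_pods = loop_until(["kubectl", "--namespace=xlou", "top", "pods"])
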
01:47:01 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:47:01 WARNING Response is NONE 01:47:01 DEBUG Exception is preset. Setting retry_loop to true 01:47:01 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-10: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 01:47:03 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:47:03 WARNING Response is NONE 01:47:03 DEBUG Exception is preset. Setting retry_loop to true 01:47:03 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-11: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 01:47:05 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:47:05 WARNING Response is NONE 01:47:05 DEBUG Exception is preset. Setting retry_loop to true 01:47:05 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-9: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 01:47:07 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:47:07 WARNING Response is NONE 01:47:07 DEBUG Exception is preset. Setting retry_loop to true 01:47:07 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:47:09 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 01:47:09 WARNING Response is NONE 01:47:09 DEBUG Exception is preset. Setting retry_loop to true 01:47:09 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-12: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 01:47:10 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:47:10 WARNING Response is NONE 01:47:10 DEBUG Exception is preset. Setting retry_loop to true 01:47:10 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:47:11 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:47:11 WARNING Response is NONE 01:47:11 DEBUG Exception is preset. Setting retry_loop to true 01:47:11 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-26: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 01:47:13 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:47:13 WARNING Response is NONE 01:47:13 DEBUG Exception is preset. Setting retry_loop to true 01:47:13 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:47:17 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:47:17 WARNING Response is NONE 01:47:17 DEBUG Exception is preset. Setting retry_loop to true 01:47:17 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:47:18 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:47:18 WARNING Response is NONE 01:47:18 DEBUG Exception is preset. Setting retry_loop to true 01:47:18 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:47:19 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:47:19 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 01:47:19 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:47:19 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:47:19 WARNING Response is NONE 01:47:19 WARNING Response is NONE 01:47:19 WARNING Response is NONE 01:47:19 WARNING Response is NONE 01:47:19 DEBUG Exception is preset. Setting retry_loop to true 01:47:19 DEBUG Exception is preset. Setting retry_loop to true 01:47:19 DEBUG Exception is preset. Setting retry_loop to true 01:47:19 DEBUG Exception is preset. Setting retry_loop to true 01:47:19 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:47:19 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:47:19 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:47:19 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:47:21 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:47:21 WARNING Response is NONE 01:47:21 DEBUG Exception is preset. Setting retry_loop to true 01:47:21 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-14: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 01:47:24 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:47:24 WARNING Response is NONE 01:47:24 DEBUG Exception is preset. Setting retry_loop to true 01:47:24 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:47:28 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:47:28 WARNING Response is NONE 01:47:28 DEBUG Exception is preset. Setting retry_loop to true 01:47:28 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:47:30 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:47:30 WARNING Response is NONE 01:47:30 DEBUG Exception is preset. Setting retry_loop to true 01:47:30 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:47:32 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:47:32 WARNING Response is NONE 01:47:32 DEBUG Exception is preset. Setting retry_loop to true 01:47:32 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
Exception in thread Thread-3: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 01:47:33 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:47:33 WARNING Response is NONE 01:47:33 DEBUG Exception is preset. Setting retry_loop to true 01:47:33 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:47:35 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:47:35 WARNING Response is NONE 01:47:35 DEBUG Exception is preset. Setting retry_loop to true 01:47:35 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-20: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 01:47:37 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:47:37 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:47:37 WARNING Response is NONE 01:47:37 WARNING Response is NONE 01:47:37 DEBUG Exception is preset. Setting retry_loop to true 01:47:37 DEBUG Exception is preset. Setting retry_loop to true 01:47:37 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:47:37 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:47:39 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:47:39 WARNING Response is NONE 01:47:39 DEBUG Exception is preset. Setting retry_loop to true 01:47:39 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-6: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 01:47:42 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:47:42 WARNING Response is NONE 01:47:42 DEBUG Exception is preset. Setting retry_loop to true 01:47:42 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:47:44 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:47:44 WARNING Response is NONE 01:47:44 DEBUG Exception is preset. Setting retry_loop to true 01:47:44 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:47:48 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:47:48 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:47:48 WARNING Response is NONE 01:47:48 WARNING Response is NONE 01:47:48 DEBUG Exception is preset. Setting retry_loop to true 01:47:48 DEBUG Exception is preset. Setting retry_loop to true 01:47:48 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:47:48 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
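
The repeated "TypeError: 'LodestarLogger' object is not callable" comes from the error handler itself: while handling the FailException, monitoring.py line 315 invokes self.logger(...) as if it were a function, so the fallback logging call crashes the monitoring thread. A plausible minimal reproduction and fix is sketched below; the warning() method on the stand-in class is an assumption modelled on the standard logging interface, since the real LodestarLogger API is not shown in this log.

import logging

class LodestarLogger:
    """Stand-in for the real logger wrapper; only what this sketch needs."""
    def __init__(self):
        self._log = logging.getLogger("lodemon")

    def warning(self, msg):
        self._log.warning(msg)

logger = LodestarLogger()
query = "sum(rate(node_cpu_seconds_total{mode='iowait'}[60s]))by(instance)"
err = "Failed to obtain response from server..."

# What the traceback shows today -> TypeError: 'LodestarLogger' object is not callable
# logger(f'Query: {query} failed with: {err}')

# Likely intent: call a logging method on the object rather than the object itself.
logger.warning(f'Query: {query} failed with: {err}')
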
01:47:53 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:47:53 WARNING Response is NONE 01:47:53 DEBUG Exception is preset. Setting retry_loop to true 01:47:53 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:47:55 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:47:55 WARNING Response is NONE 01:47:55 DEBUG Exception is preset. Setting retry_loop to true 01:47:55 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:47:56 INFO 01:47:56 INFO [loop_until]: kubectl --namespace=xlou top pods 01:47:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:47:56 INFO [loop_until]: OK (rc = 0) 01:47:56 DEBUG --- stdout --- 01:47:56 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 13m 3408Mi am-55f77847b7-l482k 17m 3973Mi am-55f77847b7-qhqgg 11m 3643Mi ds-cts-0 9m 389Mi ds-cts-1 7m 385Mi ds-cts-2 10m 353Mi ds-idrepo-0 31m 10308Mi ds-idrepo-1 22m 10371Mi ds-idrepo-2 24m 10292Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 6m 3437Mi idm-65858d8c4c-vdncx 10m 1336Mi lodemon-7b659c988b-78sgh 3m 65Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 1m 98Mi 01:47:56 DEBUG --- stderr --- 01:47:56 DEBUG 01:47:56 INFO 01:47:56 INFO [loop_until]: kubectl --namespace=xlou top node 01:47:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:47:57 INFO [loop_until]: OK (rc = 0) 01:47:57 DEBUG --- stdout --- 01:47:57 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1327Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 4418Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 4764Mi 8% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 5072Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 70m 0% 4741Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 120m 0% 2102Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 2588Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 92m 0% 10961Mi 18% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 63m 0% 1071Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 74m 0% 11005Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 67m 0% 10913Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 69m 0% 1623Mi 2% 01:47:57 DEBUG --- stderr --- 01:47:57 DEBUG 01:47:59 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: 
/api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:47:59 WARNING Response is NONE 01:47:59 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:47:59 DEBUG Exception is preset. Setting retry_loop to true 01:47:59 WARNING Response is NONE 01:47:59 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:47:59 DEBUG Exception is preset. Setting retry_loop to true 01:47:59 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:48:04 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:48:04 WARNING Response is NONE 01:48:04 DEBUG Exception is preset. Setting retry_loop to true 01:48:04 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-16: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 01:48:06 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 01:48:06 WARNING Response is NONE 01:48:06 DEBUG Exception is preset. Setting retry_loop to true 01:48:06 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-13: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 01:48:10 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:48:10 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:48:10 WARNING Response is NONE 01:48:10 WARNING Response is NONE 01:48:10 DEBUG Exception is preset. Setting retry_loop to true 01:48:10 DEBUG Exception is preset. Setting retry_loop to true 01:48:10 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-4: 01:48:10 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable Exception in thread Thread-19: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 01:48:37 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 01:48:37 WARNING Response is NONE 01:48:37 DEBUG Exception is preset. Setting retry_loop to true 01:48:37 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 01:48:48 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')).
01:48:37 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
01:48:37 WARNING Response is NONE
01:48:37 DEBUG Exception is preset. Setting retry_loop to true
01:48:37 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
01:48:48 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
01:48:48 WARNING Response is NONE
01:48:48 DEBUG Exception is preset. Setting retry_loop to true
01:48:48 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
01:48:57 INFO
01:48:57 INFO [loop_until]: kubectl --namespace=xlou top pods
01:48:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
01:48:57 INFO [loop_until]: OK (rc = 0)
01:48:57 DEBUG --- stdout ---
01:48:57 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-5thnv 1m 4Mi
am-55f77847b7-d6t28 103m 3451Mi
am-55f77847b7-l482k 72m 4003Mi
am-55f77847b7-qhqgg 119m 3877Mi
ds-cts-0 82m 389Mi
ds-cts-1 88m 386Mi
ds-cts-2 71m 354Mi
ds-idrepo-0 327m 10310Mi
ds-idrepo-1 90m 10374Mi
ds-idrepo-2 117m 10301Mi
end-user-ui-6845bc78c7-5gwx2 1m 4Mi
idm-65858d8c4c-4x2bg 91m 3464Mi
idm-65858d8c4c-vdncx 8m 1347Mi
lodemon-7b659c988b-78sgh 2m 65Mi
login-ui-74d6fb46c-m9zk9 1m 3Mi
overseer-0-88bc47db9-ntwzj 611m 364Mi
01:48:57 DEBUG --- stderr ---
01:48:57 DEBUG
01:48:57 INFO
01:48:57 INFO [loop_until]: kubectl --namespace=xlou top node
01:48:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
01:48:57 INFO [loop_until]: OK (rc = 0)
01:48:57 DEBUG --- stdout ---
01:48:57 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1330Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 204m 1% 4462Mi 7%
gke-xlou-cdm-default-pool-f05840a3-976h 186m 1% 5000Mi 8%
gke-xlou-cdm-default-pool-f05840a3-9p4b 139m 0% 5108Mi 8%
gke-xlou-cdm-default-pool-f05840a3-bf2g 137m 0% 4767Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 129m 0% 2109Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 85m 0% 2618Mi 4%
gke-xlou-cdm-ds-32e4dcb1-1l6p 743m 4% 11075Mi 18%
gke-xlou-cdm-ds-32e4dcb1-4z9d 139m 0% 1078Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 137m 0% 1071Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 157m 0% 11007Mi 18%
gke-xlou-cdm-ds-32e4dcb1-n920 132m 0% 1104Mi 1%
gke-xlou-cdm-ds-32e4dcb1-x4wx 161m 1% 10920Mi 18%
gke-xlou-cdm-frontend-a8771548-k40m 769m 4% 1887Mi 3%
01:48:57 DEBUG --- stderr ---
01:48:57 DEBUG
01:48:59 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
01:48:59 WARNING Response is NONE
01:48:59 DEBUG Exception is preset. Setting retry_loop to true
01:48:59 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
01:49:10 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691801036 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
01:49:10 WARNING Response is NONE
01:49:10 DEBUG Exception is preset. Setting retry_loop to true
01:49:10 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
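The URL in the repeated connection-reset warnings is percent-encoded, which makes the failing request hard to read. Decoding it (a small Python snippet; the encoded query string is copied verbatim from the warnings above) shows that lodemon is asking Prometheus for the 60-second average of the node_disk_written_bytes_total counter on node-exporter disk devices, summed per node; the request never reaches Prometheus because the connection is refused:

import urllib.parse

encoded = ("sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2C"
           "device%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D"
           "%5B60s%5D%29%29by%28node%29")
print(urllib.parse.unquote(encoded))
# -> sum(avg_over_time(node_disk_written_bytes_total{job='node-exporter',
#        device=~'nvme.+|rbd.+|sd.+|vd.+|xvd.+|dasd.+'}[60s]))by(node)
# The accompanying time=1691801036 parameter is the evaluation timestamp in
# Unix epoch seconds, i.e. 2023-08-12T00:43:56Z.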
Exception in thread Thread-27:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable

01:49:57 INFO
01:49:57 INFO [loop_until]: kubectl --namespace=xlou top pods
01:49:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
01:49:57 INFO [loop_until]: OK (rc = 0)
01:49:57 DEBUG --- stdout ---
01:49:57 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-5thnv 1m 4Mi
am-55f77847b7-d6t28 21m 3454Mi
am-55f77847b7-l482k 13m 4004Mi
am-55f77847b7-qhqgg 15m 3886Mi
ds-cts-0 258m 400Mi
ds-cts-1 145m 384Mi
ds-cts-2 121m 356Mi
ds-idrepo-0 3446m 13263Mi
ds-idrepo-1 151m 10375Mi
ds-idrepo-2 126m 10304Mi
end-user-ui-6845bc78c7-5gwx2 1m 4Mi
idm-65858d8c4c-4x2bg 8m 3464Mi
idm-65858d8c4c-vdncx 10m 1373Mi
lodemon-7b659c988b-78sgh 2m 65Mi
login-ui-74d6fb46c-m9zk9 1m 3Mi
overseer-0-88bc47db9-ntwzj 1093m 363Mi
01:49:57 DEBUG --- stderr ---
01:49:57 DEBUG
01:49:57 INFO
01:49:57 INFO [loop_until]: kubectl --namespace=xlou top node
01:49:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
01:49:57 INFO [loop_until]: OK (rc = 0)
01:49:57 DEBUG --- stdout ---
01:49:57 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1325Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 72m 0% 4466Mi 7%
gke-xlou-cdm-default-pool-f05840a3-976h 70m 0% 5009Mi 8%
gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5107Mi 8%
gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 4771Mi 8%
gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2106Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 73m 0% 2621Mi 4%
gke-xlou-cdm-ds-32e4dcb1-1l6p 3450m 21% 13833Mi 23%
gke-xlou-cdm-ds-32e4dcb1-4z9d 77m 0% 1077Mi 1%
gke-xlou-cdm-ds-32e4dcb1-8bsn 138m 0% 1074Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 137m 0% 11009Mi 18%
gke-xlou-cdm-ds-32e4dcb1-n920 275m 1% 1110Mi 1%
gke-xlou-cdm-ds-32e4dcb1-x4wx 176m 1% 10924Mi 18%
gke-xlou-cdm-frontend-a8771548-k40m 1141m 7% 1886Mi 3%
01:49:57 DEBUG --- stderr ---
01:49:57 DEBUG
01:50:57 INFO
01:50:57 INFO [loop_until]: kubectl --namespace=xlou top pods
01:50:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
01:50:57 INFO [loop_until]: OK (rc = 0)
01:50:57 DEBUG --- stdout ---
01:50:57 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 12m 3454Mi am-55f77847b7-l482k 9m 4004Mi am-55f77847b7-qhqgg 20m 3899Mi ds-cts-0 6m 392Mi ds-cts-1 9m 385Mi ds-cts-2 8m 356Mi ds-idrepo-0 2755m 13323Mi ds-idrepo-1 34m 10376Mi ds-idrepo-2 20m 10307Mi end-user-ui-6845bc78c7-5gwx2 1m
4Mi idm-65858d8c4c-4x2bg 9m 3465Mi idm-65858d8c4c-vdncx 8m 1385Mi lodemon-7b659c988b-78sgh 2m 65Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 1177m 363Mi 01:50:57 DEBUG --- stderr --- 01:50:57 DEBUG 01:50:57 INFO 01:50:57 INFO [loop_until]: kubectl --namespace=xlou top node 01:50:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:50:57 INFO [loop_until]: OK (rc = 0) 01:50:57 DEBUG --- stdout --- 01:50:57 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1327Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 73m 0% 4465Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 5017Mi 8% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5108Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 79m 0% 4771Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 119m 0% 2108Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 74m 0% 2647Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 2901m 18% 13895Mi 23% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1077Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1076Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 78m 0% 11010Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 72m 0% 10926Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1255m 7% 1885Mi 3% 01:50:57 DEBUG --- stderr --- 01:50:57 DEBUG 01:51:57 INFO 01:51:57 INFO [loop_until]: kubectl --namespace=xlou top pods 01:51:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:51:57 INFO [loop_until]: OK (rc = 0) 01:51:57 DEBUG --- stdout --- 01:51:57 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 22m 3454Mi am-55f77847b7-l482k 13m 4005Mi am-55f77847b7-qhqgg 15m 3908Mi ds-cts-0 6m 392Mi ds-cts-1 7m 385Mi ds-cts-2 9m 356Mi ds-idrepo-0 3164m 13323Mi ds-idrepo-1 19m 10378Mi ds-idrepo-2 24m 10310Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 9m 3467Mi idm-65858d8c4c-vdncx 8m 1397Mi lodemon-7b659c988b-78sgh 2m 65Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 1198m 365Mi 01:51:57 DEBUG --- stderr --- 01:51:57 DEBUG 01:51:57 INFO 01:51:57 INFO [loop_until]: kubectl --namespace=xlou top node 01:51:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:51:57 INFO [loop_until]: OK (rc = 0) 01:51:57 DEBUG --- stdout --- 01:51:57 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1328Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 72m 0% 4465Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 5030Mi 8% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5110Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 4773Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 117m 0% 2111Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 2649Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 3245m 20% 13892Mi 23% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1073Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 62m 0% 1074Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 66m 0% 11012Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 73m 0% 10930Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1304m 8% 1886Mi 3% 01:51:57 DEBUG --- stderr --- 01:51:57 DEBUG 01:52:57 INFO 01:52:57 INFO [loop_until]: kubectl --namespace=xlou top pods 01:52:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:52:57 INFO [loop_until]: OK (rc = 0) 01:52:57 DEBUG --- stdout --- 01:52:57 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 17m 3448Mi am-55f77847b7-l482k 9m 4006Mi am-55f77847b7-qhqgg 13m 3919Mi ds-cts-0 6m 392Mi ds-cts-1 7m 385Mi ds-cts-2 7m 356Mi 
ds-idrepo-0 3129m 13462Mi ds-idrepo-1 19m 10384Mi ds-idrepo-2 22m 10312Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 7m 3467Mi idm-65858d8c4c-vdncx 10m 1406Mi lodemon-7b659c988b-78sgh 2m 65Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 1318m 366Mi 01:52:57 DEBUG --- stderr --- 01:52:57 DEBUG 01:52:57 INFO 01:52:57 INFO [loop_until]: kubectl --namespace=xlou top node 01:52:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:52:57 INFO [loop_until]: OK (rc = 0) 01:52:57 DEBUG --- stdout --- 01:52:57 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1325Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 76m 0% 4459Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 5040Mi 8% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5109Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 4773Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 125m 0% 2110Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 72m 0% 2658Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 3308m 20% 14033Mi 23% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1078Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 1073Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 67m 0% 11018Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 78m 0% 10932Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1369m 8% 1888Mi 3% 01:52:57 DEBUG --- stderr --- 01:52:57 DEBUG 01:53:57 INFO 01:53:57 INFO [loop_until]: kubectl --namespace=xlou top pods 01:53:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:53:57 INFO [loop_until]: OK (rc = 0) 01:53:57 DEBUG --- stdout --- 01:53:57 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 12m 3448Mi am-55f77847b7-l482k 8m 4006Mi am-55f77847b7-qhqgg 11m 3934Mi ds-cts-0 8m 392Mi ds-cts-1 9m 385Mi ds-cts-2 7m 357Mi ds-idrepo-0 3383m 13477Mi ds-idrepo-1 12m 10386Mi ds-idrepo-2 18m 10312Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 12m 3467Mi idm-65858d8c4c-vdncx 11m 1417Mi lodemon-7b659c988b-78sgh 2m 65Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 1337m 365Mi 01:53:57 DEBUG --- stderr --- 01:53:57 DEBUG 01:53:57 INFO 01:53:57 INFO [loop_until]: kubectl --namespace=xlou top node 01:53:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:53:57 INFO [loop_until]: OK (rc = 0) 01:53:57 DEBUG --- stdout --- 01:53:57 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1322Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 71m 0% 4457Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 5054Mi 8% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5107Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 86m 0% 4779Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 142m 0% 2112Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 2670Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 3237m 20% 14036Mi 23% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1073Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 65m 0% 11019Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 70m 0% 10934Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1383m 8% 1622Mi 2% 01:53:57 DEBUG --- stderr --- 01:53:57 DEBUG 01:54:57 INFO 01:54:57 INFO [loop_until]: kubectl --namespace=xlou top pods 01:54:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:54:57 INFO [loop_until]: OK (rc = 0) 01:54:57 DEBUG --- stdout --- 01:54:57 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 11m 3449Mi 
am-55f77847b7-l482k 8m 4006Mi am-55f77847b7-qhqgg 13m 3947Mi ds-cts-0 7m 392Mi ds-cts-1 7m 385Mi ds-cts-2 6m 357Mi ds-idrepo-0 12m 13477Mi ds-idrepo-1 13m 10386Mi ds-idrepo-2 16m 10306Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 13m 3476Mi idm-65858d8c4c-vdncx 7m 1428Mi lodemon-7b659c988b-78sgh 1m 65Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 2m 99Mi 01:54:57 DEBUG --- stderr --- 01:54:57 DEBUG 01:54:57 INFO 01:54:57 INFO [loop_until]: kubectl --namespace=xlou top node 01:54:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:54:57 INFO [loop_until]: OK (rc = 0) 01:54:57 DEBUG --- stdout --- 01:54:57 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1323Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 4459Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 5069Mi 8% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 5110Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 4773Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 118m 0% 2114Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 2680Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 14037Mi 23% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1077Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1074Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 203m 1% 11021Mi 18% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 68m 0% 10929Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 69m 0% 1623Mi 2% 01:54:57 DEBUG --- stderr --- 01:54:57 DEBUG 01:55:57 INFO 01:55:57 INFO [loop_until]: kubectl --namespace=xlou top pods 01:55:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:55:57 INFO [loop_until]: OK (rc = 0) 01:55:57 DEBUG --- stdout --- 01:55:57 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 10m 3450Mi am-55f77847b7-l482k 12m 4007Mi am-55f77847b7-qhqgg 11m 3960Mi ds-cts-0 8m 393Mi ds-cts-1 6m 385Mi ds-cts-2 7m 358Mi ds-idrepo-0 17m 13477Mi ds-idrepo-1 2645m 12661Mi ds-idrepo-2 17m 10310Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 7m 3464Mi idm-65858d8c4c-vdncx 9m 1437Mi lodemon-7b659c988b-78sgh 2m 65Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 1038m 376Mi 01:55:57 DEBUG --- stderr --- 01:55:57 DEBUG 01:55:57 INFO 01:55:57 INFO [loop_until]: kubectl --namespace=xlou top node 01:55:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:55:58 INFO [loop_until]: OK (rc = 0) 01:55:58 DEBUG --- stdout --- 01:55:58 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1325Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 4461Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 5083Mi 8% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5112Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 4768Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 121m 0% 2115Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 72m 0% 2689Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 71m 0% 14038Mi 23% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1077Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 1076Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2784m 17% 13233Mi 22% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 70m 0% 10931Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1104m 6% 1894Mi 3% 01:55:58 DEBUG --- stderr --- 01:55:58 DEBUG 01:56:57 INFO 01:56:57 INFO [loop_until]: kubectl --namespace=xlou top pods 01:56:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:56:57 INFO [loop_until]: OK (rc = 0) 01:56:57 DEBUG --- stdout --- 01:56:57 DEBUG 
NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 12m 3450Mi am-55f77847b7-l482k 12m 4008Mi am-55f77847b7-qhqgg 20m 3970Mi ds-cts-0 7m 393Mi ds-cts-1 7m 385Mi ds-cts-2 7m 357Mi ds-idrepo-0 15m 13477Mi ds-idrepo-1 2693m 13367Mi ds-idrepo-2 16m 10310Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 6m 3464Mi idm-65858d8c4c-vdncx 7m 1448Mi lodemon-7b659c988b-78sgh 2m 65Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 1073m 376Mi 01:56:57 DEBUG --- stderr --- 01:56:57 DEBUG 01:56:58 INFO 01:56:58 INFO [loop_until]: kubectl --namespace=xlou top node 01:56:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:56:58 INFO [loop_until]: OK (rc = 0) 01:56:58 DEBUG --- stdout --- 01:56:58 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1322Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 4461Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 77m 0% 5091Mi 8% gke-xlou-cdm-default-pool-f05840a3-9p4b 68m 0% 5109Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 4771Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2119Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 67m 0% 2703Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 74m 0% 14035Mi 23% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1078Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2808m 17% 13914Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 66m 0% 10930Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1155m 7% 1897Mi 3% 01:56:58 DEBUG --- stderr --- 01:56:58 DEBUG 01:57:57 INFO 01:57:57 INFO [loop_until]: kubectl --namespace=xlou top pods 01:57:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:57:57 INFO [loop_until]: OK (rc = 0) 01:57:57 DEBUG --- stdout --- 01:57:57 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 19m 3451Mi am-55f77847b7-l482k 19m 4004Mi am-55f77847b7-qhqgg 12m 3981Mi ds-cts-0 7m 393Mi ds-cts-1 13m 385Mi ds-cts-2 10m 357Mi ds-idrepo-0 83m 13492Mi ds-idrepo-1 2724m 13345Mi ds-idrepo-2 15m 10310Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 6m 3464Mi idm-65858d8c4c-vdncx 8m 1457Mi lodemon-7b659c988b-78sgh 2m 65Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 1194m 378Mi 01:57:57 DEBUG --- stderr --- 01:57:57 DEBUG 01:57:58 INFO 01:57:58 INFO [loop_until]: kubectl --namespace=xlou top node 01:57:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:57:58 INFO [loop_until]: OK (rc = 0) 01:57:58 DEBUG --- stdout --- 01:57:58 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1326Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 80m 0% 4473Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 5103Mi 8% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 5102Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 4766Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2117Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 72m 0% 2711Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 79m 0% 14054Mi 23% gke-xlou-cdm-ds-32e4dcb1-4z9d 68m 0% 1077Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 63m 0% 1074Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2862m 18% 13898Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 71m 0% 10941Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1255m 7% 1898Mi 3% 01:57:58 DEBUG --- stderr --- 01:57:58 DEBUG 01:58:58 INFO 01:58:58 INFO [loop_until]: kubectl --namespace=xlou top pods 01:58:58 INFO [loop_until]: (max_time=180, 
interval=5, expected_rc=[0] 01:58:58 INFO [loop_until]: OK (rc = 0) 01:58:58 DEBUG --- stdout --- 01:58:58 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 12m 3451Mi am-55f77847b7-l482k 8m 4004Mi am-55f77847b7-qhqgg 15m 3992Mi ds-cts-0 14m 395Mi ds-cts-1 6m 385Mi ds-cts-2 7m 357Mi ds-idrepo-0 18m 13497Mi ds-idrepo-1 3046m 13513Mi ds-idrepo-2 24m 10310Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 6m 3464Mi idm-65858d8c4c-vdncx 9m 1474Mi lodemon-7b659c988b-78sgh 2m 65Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 1279m 378Mi 01:58:58 DEBUG --- stderr --- 01:58:58 DEBUG 01:58:58 INFO 01:58:58 INFO [loop_until]: kubectl --namespace=xlou top node 01:58:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:58:58 INFO [loop_until]: OK (rc = 0) 01:58:58 DEBUG --- stdout --- 01:58:58 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1329Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 71m 0% 4465Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 73m 0% 5109Mi 8% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 5105Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 4771Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 125m 0% 2118Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 73m 0% 2724Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 75m 0% 14059Mi 23% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3057m 19% 14056Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 68m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 71m 0% 10934Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1351m 8% 1898Mi 3% 01:58:58 DEBUG --- stderr --- 01:58:58 DEBUG 01:59:58 INFO 01:59:58 INFO [loop_until]: kubectl --namespace=xlou top pods 01:59:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:59:58 INFO [loop_until]: OK (rc = 0) 01:59:58 DEBUG --- stdout --- 01:59:58 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 10m 3451Mi am-55f77847b7-l482k 9m 4004Mi am-55f77847b7-qhqgg 10m 4003Mi ds-cts-0 7m 395Mi ds-cts-1 7m 385Mi ds-cts-2 7m 356Mi ds-idrepo-0 23m 13502Mi ds-idrepo-1 2956m 13562Mi ds-idrepo-2 14m 10311Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 8m 3466Mi idm-65858d8c4c-vdncx 25m 1511Mi lodemon-7b659c988b-78sgh 2m 65Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 1262m 378Mi 01:59:58 DEBUG --- stderr --- 01:59:58 DEBUG 01:59:58 INFO 01:59:58 INFO [loop_until]: kubectl --namespace=xlou top node 01:59:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 01:59:58 INFO [loop_until]: OK (rc = 0) 01:59:58 DEBUG --- stdout --- 01:59:58 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1323Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 72m 0% 4464Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 5123Mi 8% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 5103Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 77m 0% 4767Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 121m 0% 2119Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 87m 0% 2763Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 82m 0% 14064Mi 23% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1074Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3128m 19% 14105Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 65m 0% 10935Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1355m 8% 1899Mi 3% 01:59:58 DEBUG --- stderr --- 01:59:58 DEBUG 02:00:58 INFO 
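Each of the resource snapshots in this log is produced by the same [loop_until] wrapper: it reruns a kubectl command every `interval` seconds until the return code is one of `expected_rc` or `max_time` elapses, then dumps stdout and stderr at DEBUG level. The Lodestar helper itself is not shown here, so the following is only an illustrative sketch of that polling pattern under the parameters echoed in the log, together with a hypothetical parse_top_pods() helper for the NAME / CPU(cores) / MEMORY(bytes) columns, so that the per-pod spikes (for example the rotating ds-idrepo CPU bursts above) can be tracked numerically instead of read out of the raw dumps:

import subprocess
import time

def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,)):
    # Illustrative only -- not the Lodestar implementation: rerun `cmd` every
    # `interval` seconds until its return code is in `expected_rc` or `max_time` elapses.
    deadline = time.monotonic() + max_time
    while True:
        proc = subprocess.run(cmd, capture_output=True, text=True)
        if proc.returncode in expected_rc:
            return proc.stdout
        if time.monotonic() >= deadline:
            raise TimeoutError(f"{' '.join(cmd)} did not return rc in {expected_rc} within {max_time}s")
        time.sleep(interval)

def parse_top_pods(stdout):
    # Parse `kubectl top pods` output into {pod: (cpu_millicores, memory_Mi)}.
    usage = {}
    for line in stdout.strip().splitlines()[1:]:   # skip the header row
        name, cpu, mem = line.split()
        usage[name] = (int(cpu.rstrip("m")), int(mem.rstrip("Mi")))
    return usage

# Example: one snapshot like the ones logged every minute above.
out = loop_until(["kubectl", "--namespace=xlou", "top", "pods"])
for pod, (cpu_m, mem_mi) in sorted(parse_top_pods(out).items()):
    print(f"{pod:45s} {cpu_m:>6d}m {mem_mi:>8d}Mi")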
02:00:58 INFO [loop_until]: kubectl --namespace=xlou top pods 02:00:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:00:58 INFO [loop_until]: OK (rc = 0) 02:00:58 DEBUG --- stdout --- 02:00:58 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 8m 3452Mi am-55f77847b7-l482k 9m 4004Mi am-55f77847b7-qhqgg 10m 4014Mi ds-cts-0 5m 396Mi ds-cts-1 7m 385Mi ds-cts-2 6m 356Mi ds-idrepo-0 17m 13506Mi ds-idrepo-1 10m 13562Mi ds-idrepo-2 14m 10311Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 9m 3466Mi idm-65858d8c4c-vdncx 6m 1512Mi lodemon-7b659c988b-78sgh 2m 65Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 1m 99Mi 02:00:58 DEBUG --- stderr --- 02:00:58 DEBUG 02:00:58 INFO 02:00:58 INFO [loop_until]: kubectl --namespace=xlou top node 02:00:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:00:58 INFO [loop_until]: OK (rc = 0) 02:00:58 DEBUG --- stdout --- 02:00:58 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1321Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 60m 0% 4465Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 5136Mi 8% gke-xlou-cdm-default-pool-f05840a3-9p4b 61m 0% 5102Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 4772Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 125m 0% 2115Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 66m 0% 2763Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 73m 0% 14070Mi 23% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1077Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 63m 0% 14103Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 64m 0% 10937Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 69m 0% 1624Mi 2% 02:00:58 DEBUG --- stderr --- 02:00:58 DEBUG 02:01:58 INFO 02:01:58 INFO [loop_until]: kubectl --namespace=xlou top pods 02:01:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:01:58 INFO [loop_until]: OK (rc = 0) 02:01:58 DEBUG --- stdout --- 02:01:58 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 9m 3452Mi am-55f77847b7-l482k 9m 4004Mi am-55f77847b7-qhqgg 12m 4028Mi ds-cts-0 6m 396Mi ds-cts-1 6m 385Mi ds-cts-2 7m 356Mi ds-idrepo-0 23m 13511Mi ds-idrepo-1 10m 13562Mi ds-idrepo-2 2468m 12062Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 6m 3466Mi idm-65858d8c4c-vdncx 6m 1512Mi lodemon-7b659c988b-78sgh 2m 65Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 1149m 363Mi 02:01:58 DEBUG --- stderr --- 02:01:58 DEBUG 02:01:58 INFO 02:01:58 INFO [loop_until]: kubectl --namespace=xlou top node 02:01:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:01:58 INFO [loop_until]: OK (rc = 0) 02:01:58 DEBUG --- stdout --- 02:01:58 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1327Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 4465Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 5147Mi 8% gke-xlou-cdm-default-pool-f05840a3-9p4b 68m 0% 5106Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 70m 0% 4772Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 122m 0% 2114Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 2765Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 80m 0% 14075Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 68m 0% 14101Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2570m 16% 12619Mi 21% 
gke-xlou-cdm-frontend-a8771548-k40m 1013m 6% 1882Mi 3% 02:01:58 DEBUG --- stderr --- 02:01:58 DEBUG 02:02:58 INFO 02:02:58 INFO [loop_until]: kubectl --namespace=xlou top pods 02:02:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:02:58 INFO [loop_until]: OK (rc = 0) 02:02:58 DEBUG --- stdout --- 02:02:58 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 11m 3452Mi am-55f77847b7-l482k 7m 4004Mi am-55f77847b7-qhqgg 10m 4040Mi ds-cts-0 6m 395Mi ds-cts-1 8m 386Mi ds-cts-2 6m 356Mi ds-idrepo-0 17m 13516Mi ds-idrepo-1 11m 13556Mi ds-idrepo-2 2663m 13314Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 7m 3466Mi idm-65858d8c4c-vdncx 7m 1512Mi lodemon-7b659c988b-78sgh 2m 65Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 1125m 363Mi 02:02:58 DEBUG --- stderr --- 02:02:58 DEBUG 02:02:58 INFO 02:02:58 INFO [loop_until]: kubectl --namespace=xlou top node 02:02:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:02:58 INFO [loop_until]: OK (rc = 0) 02:02:58 DEBUG --- stdout --- 02:02:58 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1327Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 4467Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 5159Mi 8% gke-xlou-cdm-default-pool-f05840a3-9p4b 61m 0% 5108Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 4770Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 122m 0% 2113Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 68m 0% 2764Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 78m 0% 14080Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1077Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 64m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 62m 0% 14098Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2668m 16% 13938Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1156m 7% 1884Mi 3% 02:02:58 DEBUG --- stderr --- 02:02:58 DEBUG 02:03:58 INFO 02:03:58 INFO [loop_until]: kubectl --namespace=xlou top pods 02:03:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:03:58 INFO [loop_until]: OK (rc = 0) 02:03:58 DEBUG --- stdout --- 02:03:58 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 9m 3452Mi am-55f77847b7-l482k 9m 4004Mi am-55f77847b7-qhqgg 11m 4050Mi ds-cts-0 6m 396Mi ds-cts-1 7m 386Mi ds-cts-2 8m 357Mi ds-idrepo-0 20m 13521Mi ds-idrepo-1 10m 13561Mi ds-idrepo-2 2847m 13388Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 10m 3467Mi idm-65858d8c4c-vdncx 5m 1513Mi lodemon-7b659c988b-78sgh 2m 65Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 1092m 365Mi 02:03:58 DEBUG --- stderr --- 02:03:58 DEBUG 02:03:58 INFO 02:03:58 INFO [loop_until]: kubectl --namespace=xlou top node 02:03:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:03:59 INFO [loop_until]: OK (rc = 0) 02:03:59 DEBUG --- stdout --- 02:03:59 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1328Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 4466Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 5174Mi 8% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5107Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 78m 0% 4773Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 128m 0% 2126Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 2768Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 75m 0% 14086Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1072Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 64m 0% 14103Mi 
24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2892m 18% 13931Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1177m 7% 1888Mi 3% 02:03:59 DEBUG --- stderr --- 02:03:59 DEBUG 02:04:58 INFO 02:04:58 INFO [loop_until]: kubectl --namespace=xlou top pods 02:04:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:04:58 INFO [loop_until]: OK (rc = 0) 02:04:58 DEBUG --- stdout --- 02:04:58 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 8m 3452Mi am-55f77847b7-l482k 8m 4005Mi am-55f77847b7-qhqgg 31m 4089Mi ds-cts-0 7m 396Mi ds-cts-1 7m 386Mi ds-cts-2 7m 357Mi ds-idrepo-0 16m 13526Mi ds-idrepo-1 17m 13563Mi ds-idrepo-2 2773m 13424Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 7m 3465Mi idm-65858d8c4c-vdncx 14m 1513Mi lodemon-7b659c988b-78sgh 2m 65Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 1146m 365Mi 02:04:58 DEBUG --- stderr --- 02:04:58 DEBUG 02:04:59 INFO 02:04:59 INFO [loop_until]: kubectl --namespace=xlou top node 02:04:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:04:59 INFO [loop_until]: OK (rc = 0) 02:04:59 DEBUG --- stdout --- 02:04:59 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 83m 0% 1324Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 61m 0% 4463Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 87m 0% 5216Mi 8% gke-xlou-cdm-default-pool-f05840a3-9p4b 59m 0% 5110Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 69m 0% 4770Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2117Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 78m 0% 2765Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 74m 0% 14093Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1078Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1076Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 65m 0% 14108Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2853m 17% 13961Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1213m 7% 1884Mi 3% 02:04:59 DEBUG --- stderr --- 02:04:59 DEBUG 02:05:58 INFO 02:05:58 INFO [loop_until]: kubectl --namespace=xlou top pods 02:05:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:05:58 INFO [loop_until]: OK (rc = 0) 02:05:58 DEBUG --- stdout --- 02:05:58 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 11m 3455Mi am-55f77847b7-l482k 9m 4006Mi am-55f77847b7-qhqgg 12m 4102Mi ds-cts-0 6m 396Mi ds-cts-1 6m 386Mi ds-cts-2 6m 357Mi ds-idrepo-0 17m 13531Mi ds-idrepo-1 10m 13563Mi ds-idrepo-2 2949m 13645Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 6m 3469Mi idm-65858d8c4c-vdncx 5m 1513Mi lodemon-7b659c988b-78sgh 2m 65Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 1216m 365Mi 02:05:58 DEBUG --- stderr --- 02:05:58 DEBUG 02:05:59 INFO 02:05:59 INFO [loop_until]: kubectl --namespace=xlou top node 02:05:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:05:59 INFO [loop_until]: OK (rc = 0) 02:05:59 DEBUG --- stdout --- 02:05:59 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1325Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 4468Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 5226Mi 8% gke-xlou-cdm-default-pool-f05840a3-9p4b 61m 0% 5109Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 4777Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 120m 0% 2114Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 2767Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 79m 0% 14095Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 
1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 63m 0% 1074Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 58m 0% 14110Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3044m 19% 14171Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1314m 8% 1886Mi 3% 02:05:59 DEBUG --- stderr --- 02:05:59 DEBUG 02:06:58 INFO 02:06:58 INFO [loop_until]: kubectl --namespace=xlou top pods 02:06:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:06:58 INFO [loop_until]: OK (rc = 0) 02:06:58 DEBUG --- stdout --- 02:06:58 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 12m 3462Mi am-55f77847b7-l482k 9m 4008Mi am-55f77847b7-qhqgg 9m 4113Mi ds-cts-0 6m 397Mi ds-cts-1 6m 387Mi ds-cts-2 6m 357Mi ds-idrepo-0 18m 13536Mi ds-idrepo-1 12m 13563Mi ds-idrepo-2 12m 13684Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 7m 3469Mi idm-65858d8c4c-vdncx 7m 1514Mi lodemon-7b659c988b-78sgh 2m 65Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 1m 99Mi 02:06:58 DEBUG --- stderr --- 02:06:58 DEBUG 02:06:59 INFO 02:06:59 INFO [loop_until]: kubectl --namespace=xlou top node 02:06:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:06:59 INFO [loop_until]: OK (rc = 0) 02:06:59 DEBUG --- stdout --- 02:06:59 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1325Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 68m 0% 4474Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 67m 0% 5236Mi 8% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 5111Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 4778Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 122m 0% 2117Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 2766Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 75m 0% 14099Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1077Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1076Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 59m 0% 14109Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 61m 0% 14213Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 68m 0% 1625Mi 2% 02:06:59 DEBUG --- stderr --- 02:06:59 DEBUG 02:07:59 INFO 02:07:59 INFO [loop_until]: kubectl --namespace=xlou top pods 02:07:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:07:59 INFO [loop_until]: OK (rc = 0) 02:07:59 DEBUG --- stdout --- 02:07:59 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 18m 3472Mi am-55f77847b7-l482k 10m 4008Mi am-55f77847b7-qhqgg 53m 4170Mi ds-cts-0 9m 398Mi ds-cts-1 8m 389Mi ds-cts-2 8m 358Mi ds-idrepo-0 47m 13544Mi ds-idrepo-1 26m 13563Mi ds-idrepo-2 156m 13686Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 7m 3470Mi idm-65858d8c4c-vdncx 588m 1687Mi lodemon-7b659c988b-78sgh 2m 65Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 1977m 467Mi 02:07:59 DEBUG --- stderr --- 02:07:59 DEBUG 02:07:59 INFO 02:07:59 INFO [loop_until]: kubectl --namespace=xlou top node 02:07:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:07:59 INFO [loop_until]: OK (rc = 0) 02:07:59 DEBUG --- stdout --- 02:07:59 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1321Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 77m 0% 4483Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 108m 0% 5293Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 76m 0% 5110Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 715m 4% 4811Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 152m 0% 2122Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 806m 5% 
3035Mi 5% gke-xlou-cdm-ds-32e4dcb1-1l6p 443m 2% 14111Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 64m 0% 1074Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 109m 0% 14111Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 201m 1% 14216Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1907m 12% 1977Mi 3% 02:07:59 DEBUG --- stderr --- 02:07:59 DEBUG 02:08:59 INFO 02:08:59 INFO [loop_until]: kubectl --namespace=xlou top pods 02:08:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:08:59 INFO [loop_until]: OK (rc = 0) 02:08:59 DEBUG --- stdout --- 02:08:59 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 37m 3489Mi am-55f77847b7-l482k 34m 4010Mi am-55f77847b7-qhqgg 60m 4369Mi ds-cts-0 7m 398Mi ds-cts-1 5m 389Mi ds-cts-2 7m 359Mi ds-idrepo-0 1010m 13680Mi ds-idrepo-1 292m 13565Mi ds-idrepo-2 297m 13605Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 1112m 3561Mi idm-65858d8c4c-vdncx 1142m 3381Mi lodemon-7b659c988b-78sgh 2m 65Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 356m 487Mi 02:08:59 DEBUG --- stderr --- 02:08:59 DEBUG 02:08:59 INFO 02:08:59 INFO [loop_until]: kubectl --namespace=xlou top node 02:08:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:08:59 INFO [loop_until]: OK (rc = 0) 02:08:59 DEBUG --- stdout --- 02:08:59 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1323Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 93m 0% 4500Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 115m 0% 5490Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 89m 0% 5118Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 1227m 7% 4894Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 364m 2% 2138Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1221m 7% 4630Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 1063m 6% 14238Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 345m 2% 14109Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 306m 1% 14134Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 417m 2% 2004Mi 3% 02:08:59 DEBUG --- stderr --- 02:08:59 DEBUG 02:09:59 INFO 02:09:59 INFO [loop_until]: kubectl --namespace=xlou top pods 02:09:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:09:59 INFO [loop_until]: OK (rc = 0) 02:09:59 DEBUG --- stdout --- 02:09:59 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 41m 3492Mi am-55f77847b7-l482k 22m 4010Mi am-55f77847b7-qhqgg 27m 4580Mi ds-cts-0 6m 399Mi ds-cts-1 6m 389Mi ds-cts-2 6m 358Mi ds-idrepo-0 878m 13759Mi ds-idrepo-1 200m 13566Mi ds-idrepo-2 216m 13608Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 989m 3672Mi idm-65858d8c4c-vdncx 936m 3375Mi lodemon-7b659c988b-78sgh 2m 65Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 192m 491Mi 02:09:59 DEBUG --- stderr --- 02:09:59 DEBUG 02:09:59 INFO 02:09:59 INFO [loop_until]: kubectl --namespace=xlou top node 02:09:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:09:59 INFO [loop_until]: OK (rc = 0) 02:09:59 DEBUG --- stdout --- 02:09:59 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1325Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 97m 0% 4502Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 85m 0% 5708Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 76m 0% 5115Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 
1136m 7% 4986Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 372m 2% 2145Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1061m 6% 4622Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 913m 5% 14311Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1076Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 251m 1% 14107Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 271m 1% 14135Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 258m 1% 2007Mi 3% 02:09:59 DEBUG --- stderr --- 02:09:59 DEBUG 02:10:59 INFO 02:10:59 INFO [loop_until]: kubectl --namespace=xlou top pods 02:10:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:10:59 INFO [loop_until]: OK (rc = 0) 02:10:59 DEBUG --- stdout --- 02:10:59 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 31m 3492Mi am-55f77847b7-l482k 24m 4010Mi am-55f77847b7-qhqgg 28m 4755Mi ds-cts-0 6m 399Mi ds-cts-1 6m 389Mi ds-cts-2 7m 358Mi ds-idrepo-0 1220m 13723Mi ds-idrepo-1 202m 13566Mi ds-idrepo-2 209m 13608Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 859m 3754Mi idm-65858d8c4c-vdncx 872m 3402Mi lodemon-7b659c988b-78sgh 2m 65Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 200m 492Mi 02:10:59 DEBUG --- stderr --- 02:10:59 DEBUG 02:10:59 INFO 02:10:59 INFO [loop_until]: kubectl --namespace=xlou top node 02:10:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:10:59 INFO [loop_until]: OK (rc = 0) 02:10:59 DEBUG --- stdout --- 02:10:59 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1328Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 86m 0% 4503Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 94m 0% 5917Mi 10% gke-xlou-cdm-default-pool-f05840a3-9p4b 77m 0% 5113Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 922m 5% 5059Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 362m 2% 2143Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 981m 6% 4653Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 1116m 7% 14279Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 253m 1% 14110Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 267m 1% 14139Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 278m 1% 2022Mi 3% 02:10:59 DEBUG --- stderr --- 02:10:59 DEBUG 02:11:59 INFO 02:11:59 INFO [loop_until]: kubectl --namespace=xlou top pods 02:11:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:11:59 INFO [loop_until]: OK (rc = 0) 02:11:59 DEBUG --- stdout --- 02:11:59 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 41m 3701Mi am-55f77847b7-l482k 23m 4010Mi am-55f77847b7-qhqgg 33m 4999Mi ds-cts-0 7m 398Mi ds-cts-1 9m 389Mi ds-cts-2 6m 360Mi ds-idrepo-0 1166m 13754Mi ds-idrepo-1 211m 13566Mi ds-idrepo-2 222m 13609Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 880m 3759Mi idm-65858d8c4c-vdncx 836m 3407Mi lodemon-7b659c988b-78sgh 2m 65Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 193m 493Mi 02:11:59 DEBUG --- stderr --- 02:11:59 DEBUG 02:11:59 INFO 02:11:59 INFO [loop_until]: kubectl --namespace=xlou top node 02:11:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:11:59 INFO [loop_until]: OK (rc = 0) 02:11:59 DEBUG --- stdout --- 02:11:59 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1324Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 94m 0% 4719Mi 8% 
gke-xlou-cdm-default-pool-f05840a3-976h 89m 0% 6140Mi 10% gke-xlou-cdm-default-pool-f05840a3-9p4b 78m 0% 5112Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 970m 6% 5063Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 379m 2% 2150Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 946m 5% 4655Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 1268m 7% 14312Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 251m 1% 14111Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 270m 1% 14133Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 268m 1% 2013Mi 3% 02:11:59 DEBUG --- stderr --- 02:11:59 DEBUG 02:12:59 INFO 02:12:59 INFO [loop_until]: kubectl --namespace=xlou top pods 02:12:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:12:59 INFO [loop_until]: OK (rc = 0) 02:12:59 DEBUG --- stdout --- 02:12:59 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 34m 3939Mi am-55f77847b7-l482k 24m 4011Mi am-55f77847b7-qhqgg 30m 5195Mi ds-cts-0 6m 399Mi ds-cts-1 8m 389Mi ds-cts-2 9m 359Mi ds-idrepo-0 1001m 13754Mi ds-idrepo-1 212m 13566Mi ds-idrepo-2 462m 13613Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 869m 3763Mi idm-65858d8c4c-vdncx 889m 3412Mi lodemon-7b659c988b-78sgh 1m 65Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 201m 493Mi 02:12:59 DEBUG --- stderr --- 02:12:59 DEBUG 02:13:00 INFO 02:13:00 INFO [loop_until]: kubectl --namespace=xlou top node 02:13:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:13:00 INFO [loop_until]: OK (rc = 0) 02:13:00 DEBUG --- stdout --- 02:13:00 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1324Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 91m 0% 4934Mi 8% gke-xlou-cdm-default-pool-f05840a3-976h 87m 0% 6356Mi 10% gke-xlou-cdm-default-pool-f05840a3-9p4b 92m 0% 5117Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 973m 6% 5067Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 363m 2% 2148Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 960m 6% 4662Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 1092m 6% 14312Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 63m 0% 1076Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 258m 1% 14109Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 501m 3% 14144Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 261m 1% 2009Mi 3% 02:13:00 DEBUG --- stderr --- 02:13:00 DEBUG 02:13:59 INFO 02:13:59 INFO [loop_until]: kubectl --namespace=xlou top pods 02:13:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:13:59 INFO [loop_until]: OK (rc = 0) 02:13:59 DEBUG --- stdout --- 02:13:59 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 39m 4157Mi am-55f77847b7-l482k 27m 4019Mi am-55f77847b7-qhqgg 28m 5440Mi ds-cts-0 7m 398Mi ds-cts-1 15m 392Mi ds-cts-2 7m 359Mi ds-idrepo-0 1304m 13791Mi ds-idrepo-1 443m 13815Mi ds-idrepo-2 215m 13617Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 868m 3769Mi idm-65858d8c4c-vdncx 877m 3417Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 195m 492Mi 02:13:59 DEBUG --- stderr --- 02:13:59 DEBUG 02:14:00 INFO 02:14:00 INFO [loop_until]: kubectl --namespace=xlou top node 02:14:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:14:00 INFO [loop_until]: OK (rc = 0) 02:14:00 DEBUG --- stdout --- 02:14:00 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) 
MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1321Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 100m 0% 5168Mi 8% gke-xlou-cdm-default-pool-f05840a3-976h 89m 0% 6575Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 82m 0% 5122Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 947m 5% 5072Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 387m 2% 2148Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 978m 6% 4665Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 1258m 7% 14364Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 72m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1076Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 503m 3% 14354Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 263m 1% 14135Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 273m 1% 2005Mi 3% 02:14:00 DEBUG --- stderr --- 02:14:00 DEBUG 02:14:59 INFO 02:14:59 INFO [loop_until]: kubectl --namespace=xlou top pods 02:14:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:14:59 INFO [loop_until]: OK (rc = 0) 02:14:59 DEBUG --- stdout --- 02:14:59 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 35m 4157Mi am-55f77847b7-l482k 30m 4085Mi am-55f77847b7-qhqgg 42m 5650Mi ds-cts-0 6m 399Mi ds-cts-1 12m 394Mi ds-cts-2 8m 360Mi ds-idrepo-0 980m 13819Mi ds-idrepo-1 199m 13749Mi ds-idrepo-2 211m 13617Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 851m 3773Mi idm-65858d8c4c-vdncx 843m 3425Mi lodemon-7b659c988b-78sgh 2m 65Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 174m 493Mi 02:14:59 DEBUG --- stderr --- 02:14:59 DEBUG 02:15:00 INFO 02:15:00 INFO [loop_until]: kubectl --namespace=xlou top node 02:15:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:15:00 INFO [loop_until]: OK (rc = 0) 02:15:00 DEBUG --- stdout --- 02:15:00 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1326Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 90m 0% 5167Mi 8% gke-xlou-cdm-default-pool-f05840a3-976h 86m 0% 6771Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 83m 0% 5189Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 954m 6% 5076Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 367m 2% 2145Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 935m 5% 4671Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 1041m 6% 14369Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 64m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 254m 1% 14280Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 261m 1% 14142Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 249m 1% 2010Mi 3% 02:15:00 DEBUG --- stderr --- 02:15:00 DEBUG 02:15:59 INFO 02:15:59 INFO [loop_until]: kubectl --namespace=xlou top pods 02:15:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:15:59 INFO [loop_until]: OK (rc = 0) 02:15:59 DEBUG --- stdout --- 02:15:59 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 25m 4157Mi am-55f77847b7-l482k 21m 4085Mi am-55f77847b7-qhqgg 22m 5651Mi ds-cts-0 6m 398Mi ds-cts-1 7m 393Mi ds-cts-2 6m 360Mi ds-idrepo-0 1198m 13819Mi ds-idrepo-1 188m 13748Mi ds-idrepo-2 288m 13707Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 853m 3778Mi idm-65858d8c4c-vdncx 842m 3431Mi lodemon-7b659c988b-78sgh 2m 65Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 179m 493Mi 02:15:59 DEBUG --- stderr --- 02:15:59 DEBUG 02:16:00 INFO 02:16:00 INFO [loop_until]: kubectl --namespace=xlou top node 02:16:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 
02:16:00 INFO [loop_until]: OK (rc = 0) 02:16:00 DEBUG --- stdout --- 02:16:00 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1323Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 77m 0% 5169Mi 8% gke-xlou-cdm-default-pool-f05840a3-976h 79m 0% 6769Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 75m 0% 5191Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 968m 6% 5083Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 368m 2% 2146Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 951m 5% 4675Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 1187m 7% 14376Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1076Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 235m 1% 14286Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 377m 2% 14232Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 250m 1% 2010Mi 3% 02:16:00 DEBUG --- stderr --- 02:16:00 DEBUG 02:16:59 INFO 02:16:59 INFO [loop_until]: kubectl --namespace=xlou top pods 02:16:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:16:59 INFO [loop_until]: OK (rc = 0) 02:16:59 DEBUG --- stdout --- 02:16:59 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 25m 4158Mi am-55f77847b7-l482k 24m 4085Mi am-55f77847b7-qhqgg 30m 5650Mi ds-cts-0 6m 398Mi ds-cts-1 7m 393Mi ds-cts-2 6m 359Mi ds-idrepo-0 1339m 13779Mi ds-idrepo-1 305m 13786Mi ds-idrepo-2 268m 13738Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 902m 3782Mi idm-65858d8c4c-vdncx 843m 3435Mi lodemon-7b659c988b-78sgh 2m 65Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 179m 493Mi 02:16:59 DEBUG --- stderr --- 02:16:59 DEBUG 02:17:00 INFO 02:17:00 INFO [loop_until]: kubectl --namespace=xlou top node 02:17:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:17:00 INFO [loop_until]: OK (rc = 0) 02:17:00 DEBUG --- stdout --- 02:17:00 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1327Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 81m 0% 5169Mi 8% gke-xlou-cdm-default-pool-f05840a3-976h 90m 0% 6769Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 77m 0% 5192Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 970m 6% 5086Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 392m 2% 2143Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 929m 5% 4683Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 1269m 7% 14342Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1077Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 328m 2% 14329Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 323m 2% 14267Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 248m 1% 2008Mi 3% 02:17:00 DEBUG --- stderr --- 02:17:00 DEBUG 02:18:00 INFO 02:18:00 INFO [loop_until]: kubectl --namespace=xlou top pods 02:18:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:18:00 INFO [loop_until]: OK (rc = 0) 02:18:00 DEBUG --- stdout --- 02:18:00 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 25m 4158Mi am-55f77847b7-l482k 21m 4085Mi am-55f77847b7-qhqgg 28m 5651Mi ds-cts-0 8m 398Mi ds-cts-1 9m 393Mi ds-cts-2 7m 359Mi ds-idrepo-0 1165m 13819Mi ds-idrepo-1 180m 13787Mi ds-idrepo-2 303m 13769Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 871m 3786Mi idm-65858d8c4c-vdncx 833m 3438Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 177m 493Mi 02:18:00 DEBUG --- stderr --- 02:18:00 DEBUG 02:18:00 INFO 02:18:00 INFO 
[loop_until]: kubectl --namespace=xlou top node 02:18:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:18:00 INFO [loop_until]: OK (rc = 0) 02:18:00 DEBUG --- stdout --- 02:18:00 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1327Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 82m 0% 5170Mi 8% gke-xlou-cdm-default-pool-f05840a3-976h 89m 0% 6772Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 82m 0% 5188Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 988m 6% 5089Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 373m 2% 2149Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 895m 5% 4687Mi 7% gke-xlou-cdm-ds-32e4dcb1-1l6p 1208m 7% 14381Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 64m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 62m 0% 1074Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 242m 1% 14333Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 345m 2% 14301Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 251m 1% 2007Mi 3% 02:18:00 DEBUG --- stderr --- 02:18:00 DEBUG 02:19:00 INFO 02:19:00 INFO [loop_until]: kubectl --namespace=xlou top pods 02:19:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:19:00 INFO [loop_until]: OK (rc = 0) 02:19:00 DEBUG --- stdout --- 02:19:00 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 28m 4159Mi am-55f77847b7-l482k 27m 4090Mi am-55f77847b7-qhqgg 25m 5656Mi ds-cts-0 8m 399Mi ds-cts-1 12m 393Mi ds-cts-2 7m 360Mi ds-idrepo-0 1062m 13819Mi ds-idrepo-1 313m 13788Mi ds-idrepo-2 186m 13769Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 846m 3790Mi idm-65858d8c4c-vdncx 818m 3443Mi lodemon-7b659c988b-78sgh 2m 65Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 163m 493Mi 02:19:00 DEBUG --- stderr --- 02:19:00 DEBUG 02:19:00 INFO 02:19:00 INFO [loop_until]: kubectl --namespace=xlou top node 02:19:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:19:00 INFO [loop_until]: OK (rc = 0) 02:19:00 DEBUG --- stdout --- 02:19:00 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1327Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 86m 0% 5170Mi 8% gke-xlou-cdm-default-pool-f05840a3-976h 87m 0% 6778Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 88m 0% 5195Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 945m 5% 5093Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 354m 2% 2146Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 905m 5% 4694Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1194m 7% 14384Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 65m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1077Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 343m 2% 14334Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 246m 1% 14304Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 245m 1% 2010Mi 3% 02:19:00 DEBUG --- stderr --- 02:19:00 DEBUG 02:20:00 INFO 02:20:00 INFO [loop_until]: kubectl --namespace=xlou top pods 02:20:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:20:00 INFO [loop_until]: OK (rc = 0) 02:20:00 DEBUG --- stdout --- 02:20:00 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 23m 4158Mi am-55f77847b7-l482k 20m 4090Mi am-55f77847b7-qhqgg 25m 5662Mi ds-cts-0 6m 399Mi ds-cts-1 7m 393Mi ds-cts-2 6m 360Mi ds-idrepo-0 1061m 13819Mi ds-idrepo-1 173m 13817Mi ds-idrepo-2 188m 13770Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 784m 3794Mi idm-65858d8c4c-vdncx 815m 3450Mi lodemon-7b659c988b-78sgh 2m 65Mi 
login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 179m 494Mi 02:20:00 DEBUG --- stderr --- 02:20:00 DEBUG 02:20:00 INFO 02:20:00 INFO [loop_until]: kubectl --namespace=xlou top node 02:20:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:20:00 INFO [loop_until]: OK (rc = 0) 02:20:00 DEBUG --- stdout --- 02:20:00 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1329Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 78m 0% 5171Mi 8% gke-xlou-cdm-default-pool-f05840a3-976h 77m 0% 6786Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 71m 0% 5192Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 870m 5% 5099Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 351m 2% 2144Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 901m 5% 4701Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1146m 7% 14384Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1078Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 225m 1% 14377Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 232m 1% 14302Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 245m 1% 2011Mi 3% 02:20:00 DEBUG --- stderr --- 02:20:00 DEBUG 02:21:00 INFO 02:21:00 INFO [loop_until]: kubectl --namespace=xlou top pods 02:21:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:21:00 INFO [loop_until]: OK (rc = 0) 02:21:00 DEBUG --- stdout --- 02:21:00 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 30m 4159Mi am-55f77847b7-l482k 29m 4090Mi am-55f77847b7-qhqgg 24m 5662Mi ds-cts-0 9m 399Mi ds-cts-1 6m 393Mi ds-cts-2 6m 360Mi ds-idrepo-0 1165m 13819Mi ds-idrepo-1 303m 13818Mi ds-idrepo-2 203m 13771Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 852m 3800Mi idm-65858d8c4c-vdncx 844m 3455Mi lodemon-7b659c988b-78sgh 2m 65Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 179m 493Mi 02:21:00 DEBUG --- stderr --- 02:21:00 DEBUG 02:21:01 INFO 02:21:01 INFO [loop_until]: kubectl --namespace=xlou top node 02:21:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:21:01 INFO [loop_until]: OK (rc = 0) 02:21:01 DEBUG --- stdout --- 02:21:01 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1322Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 84m 0% 5168Mi 8% gke-xlou-cdm-default-pool-f05840a3-976h 80m 0% 6783Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 86m 0% 5195Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 941m 5% 5106Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 376m 2% 2149Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 955m 6% 4702Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1132m 7% 14384Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1078Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 355m 2% 14367Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 250m 1% 14304Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 247m 1% 2013Mi 3% 02:21:01 DEBUG --- stderr --- 02:21:01 DEBUG 02:22:00 INFO 02:22:00 INFO [loop_until]: kubectl --namespace=xlou top pods 02:22:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:22:00 INFO [loop_until]: OK (rc = 0) 02:22:00 DEBUG --- stdout --- 02:22:00 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 25m 4159Mi am-55f77847b7-l482k 23m 4090Mi am-55f77847b7-qhqgg 21m 5665Mi ds-cts-0 6m 398Mi ds-cts-1 8m 393Mi ds-cts-2 6m 360Mi ds-idrepo-0 1132m 13820Mi ds-idrepo-1 173m 13821Mi ds-idrepo-2 187m 13771Mi 
end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 842m 3823Mi idm-65858d8c4c-vdncx 880m 3460Mi lodemon-7b659c988b-78sgh 2m 65Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 172m 494Mi 02:22:00 DEBUG --- stderr --- 02:22:00 DEBUG 02:22:01 INFO 02:22:01 INFO [loop_until]: kubectl --namespace=xlou top node 02:22:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:22:01 INFO [loop_until]: OK (rc = 0) 02:22:01 DEBUG --- stdout --- 02:22:01 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1325Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 81m 0% 5170Mi 8% gke-xlou-cdm-default-pool-f05840a3-976h 80m 0% 6785Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 77m 0% 5196Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 928m 5% 5124Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 361m 2% 2148Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 959m 6% 4706Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1230m 7% 14385Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 236m 1% 14371Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 241m 1% 14309Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 245m 1% 2011Mi 3% 02:22:01 DEBUG --- stderr --- 02:22:01 DEBUG 02:23:00 INFO 02:23:00 INFO [loop_until]: kubectl --namespace=xlou top pods 02:23:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:23:00 INFO [loop_until]: OK (rc = 0) 02:23:00 DEBUG --- stdout --- 02:23:00 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 29m 4160Mi am-55f77847b7-l482k 29m 4091Mi am-55f77847b7-qhqgg 27m 5668Mi ds-cts-0 6m 399Mi ds-cts-1 6m 394Mi ds-cts-2 6m 360Mi ds-idrepo-0 1155m 13821Mi ds-idrepo-1 303m 13823Mi ds-idrepo-2 206m 13768Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 874m 3809Mi idm-65858d8c4c-vdncx 849m 3464Mi lodemon-7b659c988b-78sgh 2m 65Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 183m 493Mi 02:23:00 DEBUG --- stderr --- 02:23:00 DEBUG 02:23:01 INFO 02:23:01 INFO [loop_until]: kubectl --namespace=xlou top node 02:23:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:23:01 INFO [loop_until]: OK (rc = 0) 02:23:01 DEBUG --- stdout --- 02:23:01 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1325Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 85m 0% 5172Mi 8% gke-xlou-cdm-default-pool-f05840a3-976h 78m 0% 6792Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 87m 0% 5197Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 963m 6% 5119Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 381m 2% 2146Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 964m 6% 4711Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1360m 8% 14389Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 246m 1% 14374Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 257m 1% 14310Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 246m 1% 2011Mi 3% 02:23:01 DEBUG --- stderr --- 02:23:01 DEBUG 02:24:00 INFO 02:24:00 INFO [loop_until]: kubectl --namespace=xlou top pods 02:24:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:24:00 INFO [loop_until]: OK (rc = 0) 02:24:00 DEBUG --- stdout --- 02:24:00 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 29m 4160Mi am-55f77847b7-l482k 29m 4091Mi am-55f77847b7-qhqgg 23m 5671Mi ds-cts-0 
8m 399Mi ds-cts-1 7m 393Mi ds-cts-2 6m 360Mi ds-idrepo-0 1202m 13821Mi ds-idrepo-1 186m 13823Mi ds-idrepo-2 189m 13772Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 868m 3818Mi idm-65858d8c4c-vdncx 797m 3468Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 181m 494Mi 02:24:00 DEBUG --- stderr --- 02:24:00 DEBUG 02:24:01 INFO 02:24:01 INFO [loop_until]: kubectl --namespace=xlou top node 02:24:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:24:01 INFO [loop_until]: OK (rc = 0) 02:24:01 DEBUG --- stdout --- 02:24:01 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1327Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 84m 0% 5173Mi 8% gke-xlou-cdm-default-pool-f05840a3-976h 83m 0% 6793Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 85m 0% 5196Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 957m 6% 5120Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 356m 2% 2140Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 898m 5% 4715Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1284m 8% 14390Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 237m 1% 14372Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 243m 1% 14306Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 253m 1% 2012Mi 3% 02:24:01 DEBUG --- stderr --- 02:24:01 DEBUG 02:25:00 INFO 02:25:00 INFO [loop_until]: kubectl --namespace=xlou top pods 02:25:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:25:00 INFO [loop_until]: OK (rc = 0) 02:25:00 DEBUG --- stdout --- 02:25:00 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 27m 4162Mi am-55f77847b7-l482k 30m 4091Mi am-55f77847b7-qhqgg 22m 5681Mi ds-cts-0 10m 399Mi ds-cts-1 14m 397Mi ds-cts-2 10m 361Mi ds-idrepo-0 1118m 13821Mi ds-idrepo-1 309m 13823Mi ds-idrepo-2 196m 13773Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 833m 3823Mi idm-65858d8c4c-vdncx 848m 3473Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 179m 494Mi 02:25:00 DEBUG --- stderr --- 02:25:00 DEBUG 02:25:01 INFO 02:25:01 INFO [loop_until]: kubectl --namespace=xlou top node 02:25:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:25:01 INFO [loop_until]: OK (rc = 0) 02:25:01 DEBUG --- stdout --- 02:25:01 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1327Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 82m 0% 5173Mi 8% gke-xlou-cdm-default-pool-f05840a3-976h 80m 0% 6803Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 85m 0% 5197Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 935m 5% 5138Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 366m 2% 2143Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 950m 5% 4731Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1175m 7% 14386Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 66m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 62m 0% 1076Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 323m 2% 14378Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 254m 1% 14314Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 243m 1% 2014Mi 3% 02:25:01 DEBUG --- stderr --- 02:25:01 DEBUG 02:26:00 INFO 02:26:00 INFO [loop_until]: kubectl --namespace=xlou top pods 02:26:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:26:00 INFO [loop_until]: OK (rc = 0) 02:26:00 DEBUG --- stdout --- 02:26:00 DEBUG NAME CPU(cores) MEMORY(bytes) 
admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 27m 4162Mi am-55f77847b7-l482k 25m 4091Mi am-55f77847b7-qhqgg 26m 5683Mi ds-cts-0 8m 397Mi ds-cts-1 6m 391Mi ds-cts-2 7m 360Mi ds-idrepo-0 1011m 13822Mi ds-idrepo-1 199m 13823Mi ds-idrepo-2 195m 13773Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 879m 3828Mi idm-65858d8c4c-vdncx 849m 3481Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 181m 494Mi 02:26:00 DEBUG --- stderr --- 02:26:00 DEBUG 02:26:01 INFO 02:26:01 INFO [loop_until]: kubectl --namespace=xlou top node 02:26:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:26:01 INFO [loop_until]: OK (rc = 0) 02:26:01 DEBUG --- stdout --- 02:26:01 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1324Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 86m 0% 5175Mi 8% gke-xlou-cdm-default-pool-f05840a3-976h 81m 0% 6803Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 83m 0% 5196Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 970m 6% 5128Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 375m 2% 2161Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 975m 6% 4727Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1069m 6% 14391Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 248m 1% 14378Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 378m 2% 14314Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 246m 1% 2012Mi 3% 02:26:01 DEBUG --- stderr --- 02:26:01 DEBUG 02:27:00 INFO 02:27:00 INFO [loop_until]: kubectl --namespace=xlou top pods 02:27:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:27:01 INFO [loop_until]: OK (rc = 0) 02:27:01 DEBUG --- stdout --- 02:27:01 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 29m 4163Mi am-55f77847b7-l482k 30m 4091Mi am-55f77847b7-qhqgg 23m 5696Mi ds-cts-0 7m 397Mi ds-cts-1 7m 391Mi ds-cts-2 9m 360Mi ds-idrepo-0 1211m 13822Mi ds-idrepo-1 318m 13823Mi ds-idrepo-2 243m 13774Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 887m 3832Mi idm-65858d8c4c-vdncx 873m 3486Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 181m 494Mi 02:27:01 DEBUG --- stderr --- 02:27:01 DEBUG 02:27:01 INFO 02:27:01 INFO [loop_until]: kubectl --namespace=xlou top node 02:27:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:27:01 INFO [loop_until]: OK (rc = 0) 02:27:01 DEBUG --- stdout --- 02:27:01 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1324Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 82m 0% 5173Mi 8% gke-xlou-cdm-default-pool-f05840a3-976h 81m 0% 6815Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 88m 0% 5197Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 988m 6% 5132Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 382m 2% 2144Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 959m 6% 4733Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1275m 8% 14391Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 359m 2% 14379Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 285m 1% 14318Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 257m 1% 2012Mi 3% 02:27:01 DEBUG --- stderr --- 02:27:01 DEBUG 02:28:01 INFO 02:28:01 INFO [loop_until]: kubectl --namespace=xlou top pods 02:28:01 INFO [loop_until]: (max_time=180, interval=5, 
expected_rc=[0] 02:28:01 INFO [loop_until]: OK (rc = 0) 02:28:01 DEBUG --- stdout --- 02:28:01 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 30m 4282Mi am-55f77847b7-l482k 28m 4091Mi am-55f77847b7-qhqgg 23m 5718Mi ds-cts-0 6m 397Mi ds-cts-1 6m 391Mi ds-cts-2 15m 361Mi ds-idrepo-0 1320m 13822Mi ds-idrepo-1 192m 13824Mi ds-idrepo-2 397m 13774Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 873m 3837Mi idm-65858d8c4c-vdncx 892m 3491Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 185m 494Mi 02:28:01 DEBUG --- stderr --- 02:28:01 DEBUG 02:28:01 INFO 02:28:01 INFO [loop_until]: kubectl --namespace=xlou top node 02:28:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:28:01 INFO [loop_until]: OK (rc = 0) 02:28:01 DEBUG --- stdout --- 02:28:01 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1327Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 87m 0% 5305Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 79m 0% 6828Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 83m 0% 5197Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 964m 6% 5138Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 366m 2% 2139Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 997m 6% 4738Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1316m 8% 14394Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 69m 0% 1078Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 241m 1% 14380Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 432m 2% 14317Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 261m 1% 2013Mi 3% 02:28:01 DEBUG --- stderr --- 02:28:01 DEBUG 02:29:01 INFO 02:29:01 INFO [loop_until]: kubectl --namespace=xlou top pods 02:29:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:29:01 INFO [loop_until]: OK (rc = 0) 02:29:01 DEBUG --- stdout --- 02:29:01 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 28m 4509Mi am-55f77847b7-l482k 26m 4176Mi am-55f77847b7-qhqgg 24m 5727Mi ds-cts-0 9m 397Mi ds-cts-1 11m 391Mi ds-cts-2 6m 360Mi ds-idrepo-0 1804m 13809Mi ds-idrepo-1 406m 13822Mi ds-idrepo-2 543m 13819Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 861m 3842Mi idm-65858d8c4c-vdncx 882m 3495Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 187m 495Mi 02:29:01 DEBUG --- stderr --- 02:29:01 DEBUG 02:29:02 INFO 02:29:02 INFO [loop_until]: kubectl --namespace=xlou top node 02:29:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:29:02 INFO [loop_until]: OK (rc = 0) 02:29:02 DEBUG --- stdout --- 02:29:02 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1333Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 84m 0% 5520Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 81m 0% 6847Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 84m 0% 5297Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 946m 5% 5140Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 380m 2% 2146Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 940m 5% 4743Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1728m 10% 14381Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 65m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1076Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 472m 2% 14381Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 748m 4% 14361Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 248m 1% 2015Mi 3% 02:29:02 DEBUG --- stderr --- 02:29:02 DEBUG 02:30:01 
INFO 02:30:01 INFO [loop_until]: kubectl --namespace=xlou top pods 02:30:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:30:01 INFO [loop_until]: OK (rc = 0) 02:30:01 DEBUG --- stdout --- 02:30:01 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 30m 4706Mi am-55f77847b7-l482k 25m 4383Mi am-55f77847b7-qhqgg 25m 5732Mi ds-cts-0 6m 397Mi ds-cts-1 5m 391Mi ds-cts-2 7m 360Mi ds-idrepo-0 962m 13825Mi ds-idrepo-1 326m 13830Mi ds-idrepo-2 289m 13823Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 848m 3846Mi idm-65858d8c4c-vdncx 854m 3501Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 180m 494Mi 02:30:01 DEBUG --- stderr --- 02:30:01 DEBUG 02:30:02 INFO 02:30:02 INFO [loop_until]: kubectl --namespace=xlou top node 02:30:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:30:02 INFO [loop_until]: OK (rc = 0) 02:30:02 DEBUG --- stdout --- 02:30:02 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1328Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 86m 0% 5735Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 81m 0% 6853Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 81m 0% 5512Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 921m 5% 5146Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 354m 2% 2144Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 934m 5% 4748Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1049m 6% 14399Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 371m 2% 14385Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 290m 1% 14363Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 246m 1% 2013Mi 3% 02:30:02 DEBUG --- stderr --- 02:30:02 DEBUG 02:31:01 INFO 02:31:01 INFO [loop_until]: kubectl --namespace=xlou top pods 02:31:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:31:01 INFO [loop_until]: OK (rc = 0) 02:31:01 DEBUG --- stdout --- 02:31:01 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 30m 4923Mi am-55f77847b7-l482k 28m 4594Mi am-55f77847b7-qhqgg 23m 5733Mi ds-cts-0 8m 397Mi ds-cts-1 6m 392Mi ds-cts-2 6m 361Mi ds-idrepo-0 1019m 13827Mi ds-idrepo-1 194m 13833Mi ds-idrepo-2 356m 13826Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 831m 3851Mi idm-65858d8c4c-vdncx 923m 3505Mi lodemon-7b659c988b-78sgh 1m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 182m 494Mi 02:31:01 DEBUG --- stderr --- 02:31:01 DEBUG 02:31:02 INFO 02:31:02 INFO [loop_until]: kubectl --namespace=xlou top node 02:31:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:31:02 INFO [loop_until]: OK (rc = 0) 02:31:02 DEBUG --- stdout --- 02:31:02 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1325Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 86m 0% 5947Mi 10% gke-xlou-cdm-default-pool-f05840a3-976h 80m 0% 6855Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 79m 0% 5726Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 953m 5% 5153Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 371m 2% 2146Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 968m 6% 4751Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1098m 6% 14405Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1078Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 232m 1% 14393Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 383m 2% 
14371Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 254m 1% 2013Mi 3% 02:31:02 DEBUG --- stderr --- 02:31:02 DEBUG 02:32:01 INFO 02:32:01 INFO [loop_until]: kubectl --namespace=xlou top pods 02:32:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:32:01 INFO [loop_until]: OK (rc = 0) 02:32:01 DEBUG --- stdout --- 02:32:01 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 31m 5145Mi am-55f77847b7-l482k 25m 4830Mi am-55f77847b7-qhqgg 24m 5733Mi ds-cts-0 7m 397Mi ds-cts-1 8m 392Mi ds-cts-2 7m 361Mi ds-idrepo-0 1012m 13823Mi ds-idrepo-1 189m 13827Mi ds-idrepo-2 316m 13827Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 911m 3856Mi idm-65858d8c4c-vdncx 827m 3509Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 181m 495Mi 02:32:01 DEBUG --- stderr --- 02:32:01 DEBUG 02:32:02 INFO 02:32:02 INFO [loop_until]: kubectl --namespace=xlou top node 02:32:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:32:02 INFO [loop_until]: OK (rc = 0) 02:32:02 DEBUG --- stdout --- 02:32:02 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1327Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 89m 0% 6162Mi 10% gke-xlou-cdm-default-pool-f05840a3-976h 78m 0% 6855Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 83m 0% 5938Mi 10% gke-xlou-cdm-default-pool-f05840a3-bf2g 947m 5% 5158Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 374m 2% 2144Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 945m 5% 4754Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1095m 6% 14400Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 241m 1% 14386Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 383m 2% 14372Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 254m 1% 2015Mi 3% 02:32:02 DEBUG --- stderr --- 02:32:02 DEBUG 02:33:01 INFO 02:33:01 INFO [loop_until]: kubectl --namespace=xlou top pods 02:33:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:33:01 INFO [loop_until]: OK (rc = 0) 02:33:01 DEBUG --- stdout --- 02:33:01 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 29m 5351Mi am-55f77847b7-l482k 25m 5031Mi am-55f77847b7-qhqgg 24m 5734Mi ds-cts-0 10m 397Mi ds-cts-1 7m 393Mi ds-cts-2 7m 361Mi ds-idrepo-0 986m 13818Mi ds-idrepo-1 188m 13823Mi ds-idrepo-2 345m 13828Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 838m 3860Mi idm-65858d8c4c-vdncx 882m 3522Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 182m 495Mi 02:33:01 DEBUG --- stderr --- 02:33:01 DEBUG 02:33:02 INFO 02:33:02 INFO [loop_until]: kubectl --namespace=xlou top node 02:33:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:33:02 INFO [loop_until]: OK (rc = 0) 02:33:02 DEBUG --- stdout --- 02:33:02 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1328Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 87m 0% 6382Mi 10% gke-xlou-cdm-default-pool-f05840a3-976h 81m 0% 6856Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 80m 0% 6150Mi 10% gke-xlou-cdm-default-pool-f05840a3-bf2g 943m 5% 5164Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 370m 2% 2149Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 970m 6% 4768Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1084m 6% 14395Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 64m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 62m 0% 1079Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-b374 256m 1% 14385Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 68m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 268m 1% 14372Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 259m 1% 2015Mi 3% 02:33:02 DEBUG --- stderr --- 02:33:02 DEBUG 02:34:01 INFO 02:34:01 INFO [loop_until]: kubectl --namespace=xlou top pods 02:34:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:34:01 INFO [loop_until]: OK (rc = 0) 02:34:01 DEBUG --- stdout --- 02:34:01 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 30m 5547Mi am-55f77847b7-l482k 25m 5230Mi am-55f77847b7-qhqgg 25m 5735Mi ds-cts-0 10m 397Mi ds-cts-1 8m 392Mi ds-cts-2 7m 361Mi ds-idrepo-0 1067m 13823Mi ds-idrepo-1 198m 13824Mi ds-idrepo-2 356m 13817Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 870m 3865Mi idm-65858d8c4c-vdncx 831m 3527Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 179m 495Mi 02:34:01 DEBUG --- stderr --- 02:34:01 DEBUG 02:34:02 INFO 02:34:02 INFO [loop_until]: kubectl --namespace=xlou top node 02:34:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:34:02 INFO [loop_until]: OK (rc = 0) 02:34:02 DEBUG --- stdout --- 02:34:02 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1328Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 86m 0% 6596Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 82m 0% 6855Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 84m 0% 6363Mi 10% gke-xlou-cdm-default-pool-f05840a3-bf2g 964m 6% 5169Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 374m 2% 2153Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 933m 5% 4772Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1045m 6% 14400Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 64m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 254m 1% 14382Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 386m 2% 14362Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 252m 1% 2014Mi 3% 02:34:02 DEBUG --- stderr --- 02:34:02 DEBUG 02:35:01 INFO 02:35:01 INFO [loop_until]: kubectl --namespace=xlou top pods 02:35:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:35:01 INFO [loop_until]: OK (rc = 0) 02:35:01 DEBUG --- stdout --- 02:35:01 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 32m 5743Mi am-55f77847b7-l482k 30m 5464Mi am-55f77847b7-qhqgg 26m 5740Mi ds-cts-0 6m 397Mi ds-cts-1 7m 392Mi ds-cts-2 7m 361Mi ds-idrepo-0 1150m 13823Mi ds-idrepo-1 198m 13819Mi ds-idrepo-2 197m 13823Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 905m 3870Mi idm-65858d8c4c-vdncx 862m 3533Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 188m 495Mi 02:35:01 DEBUG --- stderr --- 02:35:01 DEBUG 02:35:02 INFO 02:35:02 INFO [loop_until]: kubectl --namespace=xlou top node 02:35:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:35:02 INFO [loop_until]: OK (rc = 0) 02:35:02 DEBUG --- stdout --- 02:35:02 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1323Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 92m 0% 6762Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 86m 0% 6860Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 81m 0% 6590Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1015m 6% 5174Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 380m 2% 2151Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 944m 5% 4778Mi 8% 
gke-xlou-cdm-ds-32e4dcb1-1l6p 1097m 6% 14401Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1077Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 259m 1% 14386Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 252m 1% 14366Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 257m 1% 2016Mi 3% 02:35:02 DEBUG --- stderr --- 02:35:02 DEBUG 02:36:01 INFO 02:36:01 INFO [loop_until]: kubectl --namespace=xlou top pods 02:36:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:36:02 INFO [loop_until]: OK (rc = 0) 02:36:02 DEBUG --- stdout --- 02:36:02 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 23m 5743Mi am-55f77847b7-l482k 31m 5696Mi am-55f77847b7-qhqgg 21m 5823Mi ds-cts-0 7m 397Mi ds-cts-1 7m 392Mi ds-cts-2 8m 361Mi ds-idrepo-0 1017m 13824Mi ds-idrepo-1 359m 13819Mi ds-idrepo-2 194m 13822Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 857m 3876Mi idm-65858d8c4c-vdncx 928m 3537Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 183m 495Mi 02:36:02 DEBUG --- stderr --- 02:36:02 DEBUG 02:36:02 INFO 02:36:02 INFO [loop_until]: kubectl --namespace=xlou top node 02:36:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:36:03 INFO [loop_until]: OK (rc = 0) 02:36:03 DEBUG --- stdout --- 02:36:03 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 85m 0% 1336Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 77m 0% 6751Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 77m 0% 6944Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 86m 0% 6797Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 962m 6% 5177Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 373m 2% 2151Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 996m 6% 4784Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1115m 7% 14409Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 386m 2% 14387Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 257m 1% 14382Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 256m 1% 2012Mi 3% 02:36:03 DEBUG --- stderr --- 02:36:03 DEBUG 02:37:02 INFO 02:37:02 INFO [loop_until]: kubectl --namespace=xlou top pods 02:37:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:37:02 INFO [loop_until]: OK (rc = 0) 02:37:02 DEBUG --- stdout --- 02:37:02 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 24m 5744Mi am-55f77847b7-l482k 22m 5697Mi am-55f77847b7-qhqgg 21m 5823Mi ds-cts-0 6m 397Mi ds-cts-1 9m 392Mi ds-cts-2 7m 361Mi ds-idrepo-0 999m 13823Mi ds-idrepo-1 340m 13825Mi ds-idrepo-2 215m 13820Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 851m 3880Mi idm-65858d8c4c-vdncx 844m 3542Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 186m 495Mi 02:37:02 DEBUG --- stderr --- 02:37:02 DEBUG 02:37:03 INFO 02:37:03 INFO [loop_until]: kubectl --namespace=xlou top node 02:37:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:37:03 INFO [loop_until]: OK (rc = 0) 02:37:03 DEBUG --- stdout --- 02:37:03 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1329Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 79m 0% 6747Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 78m 0% 6947Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 77m 0% 6795Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 975m 6% 
5184Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 371m 2% 2146Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 914m 5% 4791Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1059m 6% 14409Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 64m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 383m 2% 14391Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 264m 1% 14373Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 251m 1% 2014Mi 3% 02:37:03 DEBUG --- stderr --- 02:37:03 DEBUG 02:38:02 INFO 02:38:02 INFO [loop_until]: kubectl --namespace=xlou top pods 02:38:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:38:02 INFO [loop_until]: OK (rc = 0) 02:38:02 DEBUG --- stdout --- 02:38:02 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 13m 5744Mi am-55f77847b7-l482k 13m 5697Mi am-55f77847b7-qhqgg 15m 5823Mi ds-cts-0 8m 397Mi ds-cts-1 8m 392Mi ds-cts-2 7m 362Mi ds-idrepo-0 651m 13823Mi ds-idrepo-1 118m 13823Mi ds-idrepo-2 119m 13820Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 297m 3882Mi idm-65858d8c4c-vdncx 403m 3544Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 174m 494Mi 02:38:02 DEBUG --- stderr --- 02:38:02 DEBUG 02:38:03 INFO 02:38:03 INFO [loop_until]: kubectl --namespace=xlou top node 02:38:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:38:03 INFO [loop_until]: OK (rc = 0) 02:38:03 DEBUG --- stdout --- 02:38:03 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1326Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 70m 0% 6750Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 71m 0% 6946Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 70m 0% 6796Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 383m 2% 5185Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 241m 1% 2149Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 423m 2% 4794Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 464m 2% 14412Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 121m 0% 14390Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 170m 1% 14372Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 213m 1% 2013Mi 3% 02:38:03 DEBUG --- stderr --- 02:38:03 DEBUG 02:39:02 INFO 02:39:02 INFO [loop_until]: kubectl --namespace=xlou top pods 02:39:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:39:02 INFO [loop_until]: OK (rc = 0) 02:39:02 DEBUG --- stdout --- 02:39:02 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 7m 5744Mi am-55f77847b7-l482k 7m 5697Mi am-55f77847b7-qhqgg 7m 5823Mi ds-cts-0 5m 397Mi ds-cts-1 7m 392Mi ds-cts-2 6m 362Mi ds-idrepo-0 13m 13823Mi ds-idrepo-1 10m 13823Mi ds-idrepo-2 11m 13820Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 5m 3882Mi idm-65858d8c4c-vdncx 7m 3544Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 1m 102Mi 02:39:02 DEBUG --- stderr --- 02:39:02 DEBUG 02:39:03 INFO 02:39:03 INFO [loop_until]: kubectl --namespace=xlou top node 02:39:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:39:03 INFO [loop_until]: OK (rc = 0) 02:39:03 DEBUG --- stdout --- 02:39:03 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1331Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 62m 0% 6753Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 61m 0% 6944Mi 
11% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 6809Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 76m 0% 5185Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 117m 0% 2143Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 4799Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 68m 0% 14411Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 61m 0% 14392Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 62m 0% 14373Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 65m 0% 1629Mi 2% 02:39:03 DEBUG --- stderr --- 02:39:03 DEBUG 127.0.0.1 - - [12/Aug/2023 02:39:47] "GET /monitoring/average?start_time=23-08-12_01:09:16&stop_time=23-08-12_01:37:46 HTTP/1.1" 200 - 02:40:02 INFO 02:40:02 INFO [loop_until]: kubectl --namespace=xlou top pods 02:40:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:40:02 INFO [loop_until]: OK (rc = 0) 02:40:02 DEBUG --- stdout --- 02:40:02 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 7m 5744Mi am-55f77847b7-l482k 5m 5697Mi am-55f77847b7-qhqgg 6m 5823Mi ds-cts-0 5m 397Mi ds-cts-1 6m 392Mi ds-cts-2 7m 362Mi ds-idrepo-0 11m 13823Mi ds-idrepo-1 9m 13823Mi ds-idrepo-2 11m 13819Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 4m 3882Mi idm-65858d8c4c-vdncx 6m 3544Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 2m 102Mi 02:40:02 DEBUG --- stderr --- 02:40:02 DEBUG 02:40:03 INFO 02:40:03 INFO [loop_until]: kubectl --namespace=xlou top node 02:40:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:40:03 INFO [loop_until]: OK (rc = 0) 02:40:03 DEBUG --- stdout --- 02:40:03 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1329Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 60m 0% 6752Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 62m 0% 6944Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 60m 0% 6796Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 5186Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 128m 0% 2146Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 4794Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 14406Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 62m 0% 14394Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 61m 0% 14376Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 111m 0% 1680Mi 2% 02:40:03 DEBUG --- stderr --- 02:40:03 DEBUG 02:41:02 INFO 02:41:02 INFO [loop_until]: kubectl --namespace=xlou top pods 02:41:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:41:02 INFO [loop_until]: OK (rc = 0) 02:41:02 DEBUG --- stdout --- 02:41:02 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 21m 5744Mi am-55f77847b7-l482k 19m 5700Mi am-55f77847b7-qhqgg 20m 5824Mi ds-cts-0 6m 397Mi ds-cts-1 10m 393Mi ds-cts-2 9m 362Mi ds-idrepo-0 705m 13827Mi ds-idrepo-1 285m 13827Mi ds-idrepo-2 174m 13822Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 705m 3904Mi idm-65858d8c4c-vdncx 742m 3559Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 456m 469Mi 02:41:02 DEBUG --- stderr --- 02:41:02 DEBUG 02:41:03 INFO 02:41:03 INFO [loop_until]: kubectl --namespace=xlou top node 02:41:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:41:03 INFO [loop_until]: OK (rc = 0) 02:41:03 DEBUG --- stdout 
--- 02:41:03 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1323Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 77m 0% 6751Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 77m 0% 6944Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 74m 0% 6801Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 822m 5% 5205Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 321m 2% 2149Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 826m 5% 4808Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 903m 5% 14412Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 423m 2% 14405Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 225m 1% 14378Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 525m 3% 1988Mi 3% 02:41:03 DEBUG --- stderr --- 02:41:03 DEBUG 02:42:02 INFO 02:42:02 INFO [loop_until]: kubectl --namespace=xlou top pods 02:42:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:42:02 INFO [loop_until]: OK (rc = 0) 02:42:02 DEBUG --- stdout --- 02:42:02 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 23m 5744Mi am-55f77847b7-l482k 20m 5700Mi am-55f77847b7-qhqgg 23m 5823Mi ds-cts-0 7m 397Mi ds-cts-1 7m 392Mi ds-cts-2 6m 362Mi ds-idrepo-0 1359m 13823Mi ds-idrepo-1 153m 13832Mi ds-idrepo-2 758m 13822Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 763m 3912Mi idm-65858d8c4c-vdncx 797m 3568Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 263m 482Mi 02:42:02 DEBUG --- stderr --- 02:42:02 DEBUG 02:42:03 INFO 02:42:03 INFO [loop_until]: kubectl --namespace=xlou top node 02:42:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:42:03 INFO [loop_until]: OK (rc = 0) 02:42:03 DEBUG --- stdout --- 02:42:03 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1329Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 74m 0% 6753Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 78m 0% 6943Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 74m 0% 6803Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 871m 5% 5214Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 373m 2% 2153Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 884m 5% 4816Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1491m 9% 14412Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 427m 2% 14406Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 788m 4% 14384Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 317m 1% 2001Mi 3% 02:42:03 DEBUG --- stderr --- 02:42:03 DEBUG 02:43:02 INFO 02:43:02 INFO [loop_until]: kubectl --namespace=xlou top pods 02:43:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:43:02 INFO [loop_until]: OK (rc = 0) 02:43:02 DEBUG --- stdout --- 02:43:02 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 23m 5744Mi am-55f77847b7-l482k 20m 5700Mi am-55f77847b7-qhqgg 23m 5824Mi ds-cts-0 10m 397Mi ds-cts-1 7m 392Mi ds-cts-2 7m 363Mi ds-idrepo-0 942m 13833Mi ds-idrepo-1 505m 13816Mi ds-idrepo-2 167m 13813Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 793m 3918Mi idm-65858d8c4c-vdncx 785m 3573Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 200m 483Mi 02:43:02 DEBUG --- stderr --- 02:43:02 DEBUG 02:43:03 INFO 02:43:03 INFO [loop_until]: kubectl --namespace=xlou top node 02:43:03 INFO 
[loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:43:03 INFO [loop_until]: OK (rc = 0) 02:43:03 DEBUG --- stdout --- 02:43:03 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1326Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 81m 0% 6750Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 79m 0% 6947Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 75m 0% 6801Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 890m 5% 5221Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 374m 2% 2145Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 868m 5% 4820Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 989m 6% 14426Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 66m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 597m 3% 14393Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 294m 1% 14375Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 273m 1% 2009Mi 3% 02:43:03 DEBUG --- stderr --- 02:43:03 DEBUG 02:44:02 INFO 02:44:02 INFO [loop_until]: kubectl --namespace=xlou top pods 02:44:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:44:02 INFO [loop_until]: OK (rc = 0) 02:44:02 DEBUG --- stdout --- 02:44:02 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 23m 5744Mi am-55f77847b7-l482k 24m 5700Mi am-55f77847b7-qhqgg 22m 5824Mi ds-cts-0 10m 400Mi ds-cts-1 7m 393Mi ds-cts-2 6m 362Mi ds-idrepo-0 1019m 13837Mi ds-idrepo-1 321m 13826Mi ds-idrepo-2 163m 13816Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 780m 3923Mi idm-65858d8c4c-vdncx 798m 3578Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 193m 484Mi 02:44:02 DEBUG --- stderr --- 02:44:02 DEBUG 02:44:03 INFO 02:44:03 INFO [loop_until]: kubectl --namespace=xlou top node 02:44:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:44:03 INFO [loop_until]: OK (rc = 0) 02:44:03 DEBUG --- stdout --- 02:44:03 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1326Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 79m 0% 6750Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 80m 0% 6950Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 76m 0% 6808Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 892m 5% 5225Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 368m 2% 2149Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 872m 5% 4828Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1089m 6% 14430Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 357m 2% 14400Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 216m 1% 14374Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 268m 1% 2005Mi 3% 02:44:03 DEBUG --- stderr --- 02:44:03 DEBUG 02:45:02 INFO 02:45:02 INFO [loop_until]: kubectl --namespace=xlou top pods 02:45:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:45:02 INFO [loop_until]: OK (rc = 0) 02:45:02 DEBUG --- stdout --- 02:45:02 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 24m 5744Mi am-55f77847b7-l482k 20m 5700Mi am-55f77847b7-qhqgg 23m 5824Mi ds-cts-0 7m 397Mi ds-cts-1 7m 392Mi ds-cts-2 8m 363Mi ds-idrepo-0 927m 13772Mi ds-idrepo-1 328m 13739Mi ds-idrepo-2 297m 13759Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 771m 3928Mi idm-65858d8c4c-vdncx 800m 3582Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 191m 484Mi 02:45:02 
DEBUG --- stderr --- 02:45:02 DEBUG 02:45:04 INFO 02:45:04 INFO [loop_until]: kubectl --namespace=xlou top node 02:45:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:45:04 INFO [loop_until]: OK (rc = 0) 02:45:04 DEBUG --- stdout --- 02:45:04 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1326Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 82m 0% 6752Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 79m 0% 6945Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 75m 0% 6801Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 864m 5% 5232Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 370m 2% 2139Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 874m 5% 4827Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1023m 6% 14357Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 218m 1% 14306Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 261m 1% 14296Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 255m 1% 2005Mi 3% 02:45:04 DEBUG --- stderr --- 02:45:04 DEBUG 02:46:03 INFO 02:46:03 INFO [loop_until]: kubectl --namespace=xlou top pods 02:46:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:46:03 INFO [loop_until]: OK (rc = 0) 02:46:03 DEBUG --- stdout --- 02:46:03 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 23m 5744Mi am-55f77847b7-l482k 22m 5700Mi am-55f77847b7-qhqgg 22m 5824Mi ds-cts-0 6m 397Mi ds-cts-1 7m 392Mi ds-cts-2 6m 362Mi ds-idrepo-0 1036m 13790Mi ds-idrepo-1 347m 13763Mi ds-idrepo-2 178m 13747Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 792m 3932Mi idm-65858d8c4c-vdncx 801m 3588Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 194m 485Mi 02:46:03 DEBUG --- stderr --- 02:46:03 DEBUG 02:46:04 INFO 02:46:04 INFO [loop_until]: kubectl --namespace=xlou top node 02:46:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:46:04 INFO [loop_until]: OK (rc = 0) 02:46:04 DEBUG --- stdout --- 02:46:04 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1325Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 79m 0% 6752Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 80m 0% 6943Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 72m 0% 6800Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 862m 5% 5236Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 369m 2% 2149Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 873m 5% 4835Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1045m 6% 14381Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 65m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 351m 2% 14328Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 220m 1% 14300Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 261m 1% 2007Mi 3% 02:46:04 DEBUG --- stderr --- 02:46:04 DEBUG 02:47:03 INFO 02:47:03 INFO [loop_until]: kubectl --namespace=xlou top pods 02:47:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:47:03 INFO [loop_until]: OK (rc = 0) 02:47:03 DEBUG --- stdout --- 02:47:03 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 25m 5744Mi am-55f77847b7-l482k 20m 5700Mi am-55f77847b7-qhqgg 25m 5824Mi ds-cts-0 6m 397Mi ds-cts-1 7m 392Mi ds-cts-2 8m 361Mi ds-idrepo-0 1128m 13797Mi ds-idrepo-1 333m 13767Mi ds-idrepo-2 178m 13751Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 784m 3937Mi idm-65858d8c4c-vdncx 
778m 3591Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 200m 486Mi 02:47:03 DEBUG --- stderr --- 02:47:03 DEBUG 02:47:04 INFO 02:47:04 INFO [loop_until]: kubectl --namespace=xlou top node 02:47:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:47:04 INFO [loop_until]: OK (rc = 0) 02:47:04 DEBUG --- stdout --- 02:47:04 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1326Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 79m 0% 6748Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 86m 0% 6945Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 77m 0% 6798Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 904m 5% 5237Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 372m 2% 2151Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 869m 5% 4840Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1229m 7% 14392Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 210m 1% 14315Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 361m 2% 14312Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 265m 1% 2004Mi 3% 02:47:04 DEBUG --- stderr --- 02:47:04 DEBUG 02:48:03 INFO 02:48:03 INFO [loop_until]: kubectl --namespace=xlou top pods 02:48:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:48:03 INFO [loop_until]: OK (rc = 0) 02:48:03 DEBUG --- stdout --- 02:48:03 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 25m 5744Mi am-55f77847b7-l482k 22m 5700Mi am-55f77847b7-qhqgg 24m 5824Mi ds-cts-0 6m 397Mi ds-cts-1 7m 392Mi ds-cts-2 6m 361Mi ds-idrepo-0 940m 13810Mi ds-idrepo-1 179m 13759Mi ds-idrepo-2 355m 13782Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 822m 3941Mi idm-65858d8c4c-vdncx 758m 3597Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 204m 485Mi 02:48:03 DEBUG --- stderr --- 02:48:03 DEBUG 02:48:04 INFO 02:48:04 INFO [loop_until]: kubectl --namespace=xlou top node 02:48:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:48:04 INFO [loop_until]: OK (rc = 0) 02:48:04 DEBUG --- stdout --- 02:48:04 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1328Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 81m 0% 6753Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 80m 0% 6945Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 75m 0% 6800Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 909m 5% 5244Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 364m 2% 2150Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 853m 5% 4847Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 960m 6% 14403Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 65m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 227m 1% 14337Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 359m 2% 14343Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 267m 1% 2006Mi 3% 02:48:04 DEBUG --- stderr --- 02:48:04 DEBUG 02:49:03 INFO 02:49:03 INFO [loop_until]: kubectl --namespace=xlou top pods 02:49:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:49:03 INFO [loop_until]: OK (rc = 0) 02:49:03 DEBUG --- stdout --- 02:49:03 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 24m 5744Mi am-55f77847b7-l482k 22m 5700Mi am-55f77847b7-qhqgg 22m 5824Mi ds-cts-0 6m 397Mi ds-cts-1 9m 392Mi ds-cts-2 7m 361Mi ds-idrepo-0 895m 13815Mi ds-idrepo-1 170m 
13763Mi ds-idrepo-2 280m 13784Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 763m 3944Mi idm-65858d8c4c-vdncx 756m 3602Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 191m 487Mi 02:49:03 DEBUG --- stderr --- 02:49:03 DEBUG 02:49:04 INFO 02:49:04 INFO [loop_until]: kubectl --namespace=xlou top node 02:49:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:49:04 INFO [loop_until]: OK (rc = 0) 02:49:04 DEBUG --- stdout --- 02:49:04 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1323Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 78m 0% 6749Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 83m 0% 6945Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 79m 0% 6803Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 851m 5% 5247Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 365m 2% 2154Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 840m 5% 4853Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 999m 6% 14406Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 64m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 216m 1% 14339Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 363m 2% 14342Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 261m 1% 2007Mi 3% 02:49:04 DEBUG --- stderr --- 02:49:04 DEBUG 02:50:03 INFO 02:50:03 INFO [loop_until]: kubectl --namespace=xlou top pods 02:50:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:50:03 INFO [loop_until]: OK (rc = 0) 02:50:03 DEBUG --- stdout --- 02:50:03 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 25m 5744Mi am-55f77847b7-l482k 20m 5700Mi am-55f77847b7-qhqgg 28m 5824Mi ds-cts-0 6m 397Mi ds-cts-1 8m 393Mi ds-cts-2 7m 361Mi ds-idrepo-0 962m 13822Mi ds-idrepo-1 288m 13777Mi ds-idrepo-2 165m 13774Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 789m 3956Mi idm-65858d8c4c-vdncx 748m 3606Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 198m 488Mi 02:50:03 DEBUG --- stderr --- 02:50:03 DEBUG 02:50:04 INFO 02:50:04 INFO [loop_until]: kubectl --namespace=xlou top node 02:50:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:50:04 INFO [loop_until]: OK (rc = 0) 02:50:04 DEBUG --- stdout --- 02:50:04 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1326Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 81m 0% 6750Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 82m 0% 6944Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 72m 0% 6796Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 888m 5% 5258Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 358m 2% 2155Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 889m 5% 4857Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1034m 6% 14413Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 66m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 359m 2% 14351Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 222m 1% 14335Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 270m 1% 2009Mi 3% 02:50:04 DEBUG --- stderr --- 02:50:04 DEBUG 02:51:03 INFO 02:51:03 INFO [loop_until]: kubectl --namespace=xlou top pods 02:51:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:51:03 INFO [loop_until]: OK (rc = 0) 02:51:03 DEBUG --- stdout --- 02:51:03 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 27m 5744Mi am-55f77847b7-l482k 22m 5700Mi 
am-55f77847b7-qhqgg 23m 5824Mi ds-cts-0 6m 398Mi ds-cts-1 7m 392Mi ds-cts-2 6m 361Mi ds-idrepo-0 983m 13826Mi ds-idrepo-1 324m 13790Mi ds-idrepo-2 161m 13783Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 769m 3957Mi idm-65858d8c4c-vdncx 822m 3612Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 192m 488Mi 02:51:03 DEBUG --- stderr --- 02:51:03 DEBUG 02:51:04 INFO 02:51:04 INFO [loop_until]: kubectl --namespace=xlou top node 02:51:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:51:04 INFO [loop_until]: OK (rc = 0) 02:51:04 DEBUG --- stdout --- 02:51:04 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1323Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 81m 0% 6749Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 81m 0% 6945Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 77m 0% 6798Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 886m 5% 5256Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 369m 2% 2156Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 936m 5% 4861Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1064m 6% 14417Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 360m 2% 14363Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 214m 1% 14342Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 266m 1% 2009Mi 3% 02:51:04 DEBUG --- stderr --- 02:51:04 DEBUG 02:52:03 INFO 02:52:03 INFO [loop_until]: kubectl --namespace=xlou top pods 02:52:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:52:03 INFO [loop_until]: OK (rc = 0) 02:52:03 DEBUG --- stdout --- 02:52:03 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 27m 5750Mi am-55f77847b7-l482k 21m 5700Mi am-55f77847b7-qhqgg 23m 5824Mi ds-cts-0 6m 398Mi ds-cts-1 7m 393Mi ds-cts-2 6m 361Mi ds-idrepo-0 935m 13830Mi ds-idrepo-1 150m 13779Mi ds-idrepo-2 168m 13797Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 747m 3960Mi idm-65858d8c4c-vdncx 794m 3618Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 187m 489Mi 02:52:03 DEBUG --- stderr --- 02:52:03 DEBUG 02:52:04 INFO 02:52:04 INFO [loop_until]: kubectl --namespace=xlou top node 02:52:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:52:04 INFO [loop_until]: OK (rc = 0) 02:52:04 DEBUG --- stdout --- 02:52:04 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1324Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 81m 0% 6758Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 81m 0% 6945Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 76m 0% 6798Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 839m 5% 5261Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 360m 2% 2151Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 896m 5% 4867Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1031m 6% 14422Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 207m 1% 14356Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 222m 1% 14352Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 268m 1% 2009Mi 3% 02:52:04 DEBUG --- stderr --- 02:52:04 DEBUG 02:53:03 INFO 02:53:03 INFO [loop_until]: kubectl --namespace=xlou top pods 02:53:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:53:03 INFO [loop_until]: OK (rc = 0) 02:53:03 DEBUG --- stdout --- 02:53:03 DEBUG 
NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 27m 5751Mi am-55f77847b7-l482k 22m 5703Mi am-55f77847b7-qhqgg 24m 5824Mi ds-cts-0 5m 398Mi ds-cts-1 9m 393Mi ds-cts-2 6m 361Mi ds-idrepo-0 945m 13835Mi ds-idrepo-1 163m 13789Mi ds-idrepo-2 320m 13799Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 809m 3965Mi idm-65858d8c4c-vdncx 775m 3622Mi lodemon-7b659c988b-78sgh 1m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 194m 489Mi 02:53:03 DEBUG --- stderr --- 02:53:03 DEBUG 02:53:04 INFO 02:53:04 INFO [loop_until]: kubectl --namespace=xlou top node 02:53:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:53:05 INFO [loop_until]: OK (rc = 0) 02:53:05 DEBUG --- stdout --- 02:53:05 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1327Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 80m 0% 6756Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 82m 0% 6943Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 79m 0% 6805Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 907m 5% 5268Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 380m 2% 2142Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 893m 5% 4871Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1028m 6% 14430Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 74m 0% 1096Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 224m 1% 14368Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 377m 2% 14366Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 267m 1% 2010Mi 3% 02:53:05 DEBUG --- stderr --- 02:53:05 DEBUG 02:54:03 INFO 02:54:03 INFO [loop_until]: kubectl --namespace=xlou top pods 02:54:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:54:03 INFO [loop_until]: OK (rc = 0) 02:54:03 DEBUG --- stdout --- 02:54:03 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 27m 5750Mi am-55f77847b7-l482k 23m 5704Mi am-55f77847b7-qhqgg 25m 5824Mi ds-cts-0 6m 399Mi ds-cts-1 12m 393Mi ds-cts-2 15m 361Mi ds-idrepo-0 1225m 13823Mi ds-idrepo-1 464m 13794Mi ds-idrepo-2 557m 13790Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 787m 3970Mi idm-65858d8c4c-vdncx 769m 3626Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 193m 490Mi 02:54:03 DEBUG --- stderr --- 02:54:03 DEBUG 02:54:05 INFO 02:54:05 INFO [loop_until]: kubectl --namespace=xlou top node 02:54:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:54:05 INFO [loop_until]: OK (rc = 0) 02:54:05 DEBUG --- stdout --- 02:54:05 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1323Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 81m 0% 6758Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 81m 0% 6946Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 79m 0% 6805Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 878m 5% 5268Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 357m 2% 2150Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 865m 5% 4873Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1308m 8% 14436Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 64m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 64m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 486m 3% 14389Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 547m 3% 14367Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 258m 1% 2005Mi 3% 02:54:05 DEBUG --- stderr --- 02:54:05 DEBUG 02:55:03 INFO 02:55:03 INFO [loop_until]: kubectl --namespace=xlou top pods 02:55:03 INFO [loop_until]: 
(max_time=180, interval=5, expected_rc=[0] 02:55:03 INFO [loop_until]: OK (rc = 0) 02:55:03 DEBUG --- stdout --- 02:55:03 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 30m 5752Mi am-55f77847b7-l482k 25m 5705Mi am-55f77847b7-qhqgg 31m 5832Mi ds-cts-0 6m 397Mi ds-cts-1 9m 393Mi ds-cts-2 6m 361Mi ds-idrepo-0 997m 13830Mi ds-idrepo-1 324m 13833Mi ds-idrepo-2 388m 13802Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 812m 3974Mi idm-65858d8c4c-vdncx 776m 3630Mi lodemon-7b659c988b-78sgh 1m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 188m 491Mi 02:55:03 DEBUG --- stderr --- 02:55:03 DEBUG 02:55:05 INFO 02:55:05 INFO [loop_until]: kubectl --namespace=xlou top node 02:55:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:55:05 INFO [loop_until]: OK (rc = 0) 02:55:05 DEBUG --- stdout --- 02:55:05 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1327Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 84m 0% 6759Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 91m 0% 6952Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 80m 0% 6804Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 916m 5% 5273Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 373m 2% 2153Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 872m 5% 4876Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1045m 6% 14438Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1077Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 387m 2% 14396Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 350m 2% 14377Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 269m 1% 2006Mi 3% 02:55:05 DEBUG --- stderr --- 02:55:05 DEBUG 02:56:04 INFO 02:56:04 INFO [loop_until]: kubectl --namespace=xlou top pods 02:56:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:56:04 INFO [loop_until]: OK (rc = 0) 02:56:04 DEBUG --- stdout --- 02:56:04 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 26m 5750Mi am-55f77847b7-l482k 22m 5701Mi am-55f77847b7-qhqgg 24m 5822Mi ds-cts-0 6m 397Mi ds-cts-1 9m 393Mi ds-cts-2 6m 361Mi ds-idrepo-0 937m 13824Mi ds-idrepo-1 397m 13803Mi ds-idrepo-2 159m 13807Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 771m 3979Mi idm-65858d8c4c-vdncx 759m 3634Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 188m 491Mi 02:56:04 DEBUG --- stderr --- 02:56:04 DEBUG 02:56:05 INFO 02:56:05 INFO [loop_until]: kubectl --namespace=xlou top node 02:56:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:56:05 INFO [loop_until]: OK (rc = 0) 02:56:05 DEBUG --- stdout --- 02:56:05 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1329Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 80m 0% 6758Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 80m 0% 6941Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 79m 0% 6800Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 863m 5% 5290Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 351m 2% 2154Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 859m 5% 4882Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1026m 6% 14424Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 219m 1% 14389Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 210m 1% 14382Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 257m 1% 2011Mi 3% 02:56:05 DEBUG --- stderr 
--- 02:56:05 DEBUG 02:57:04 INFO 02:57:04 INFO [loop_until]: kubectl --namespace=xlou top pods 02:57:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:57:04 INFO [loop_until]: OK (rc = 0) 02:57:04 DEBUG --- stdout --- 02:57:04 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 26m 5754Mi am-55f77847b7-l482k 23m 5702Mi am-55f77847b7-qhqgg 24m 5821Mi ds-cts-0 5m 398Mi ds-cts-1 5m 393Mi ds-cts-2 6m 361Mi ds-idrepo-0 860m 13829Mi ds-idrepo-1 165m 13815Mi ds-idrepo-2 172m 13814Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 799m 3983Mi idm-65858d8c4c-vdncx 760m 3639Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 190m 492Mi 02:57:04 DEBUG --- stderr --- 02:57:04 DEBUG 02:57:05 INFO 02:57:05 INFO [loop_until]: kubectl --namespace=xlou top node 02:57:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:57:05 INFO [loop_until]: OK (rc = 0) 02:57:05 DEBUG --- stdout --- 02:57:05 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1329Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 83m 0% 6759Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 81m 0% 6941Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 76m 0% 6803Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 912m 5% 5282Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 374m 2% 2153Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 855m 5% 4897Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 983m 6% 14434Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 223m 1% 14402Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 224m 1% 14392Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 267m 1% 2009Mi 3% 02:57:05 DEBUG --- stderr --- 02:57:05 DEBUG 02:58:04 INFO 02:58:04 INFO [loop_until]: kubectl --namespace=xlou top pods 02:58:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:58:04 INFO [loop_until]: OK (rc = 0) 02:58:04 DEBUG --- stdout --- 02:58:04 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 26m 5756Mi am-55f77847b7-l482k 23m 5705Mi am-55f77847b7-qhqgg 24m 5821Mi ds-cts-0 9m 397Mi ds-cts-1 9m 393Mi ds-cts-2 6m 361Mi ds-idrepo-0 861m 13832Mi ds-idrepo-1 300m 13829Mi ds-idrepo-2 157m 13823Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 791m 4014Mi idm-65858d8c4c-vdncx 810m 3644Mi lodemon-7b659c988b-78sgh 1m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 202m 493Mi 02:58:04 DEBUG --- stderr --- 02:58:04 DEBUG 02:58:05 INFO 02:58:05 INFO [loop_until]: kubectl --namespace=xlou top node 02:58:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:58:05 INFO [loop_until]: OK (rc = 0) 02:58:05 DEBUG --- stdout --- 02:58:05 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1324Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 82m 0% 6765Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 83m 0% 6943Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 80m 0% 6806Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 906m 5% 5316Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 360m 2% 2153Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 908m 5% 4891Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 861m 5% 14438Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 376m 2% 14429Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 1120Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-x4wx 213m 1% 14396Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 265m 1% 2010Mi 3% 02:58:05 DEBUG --- stderr --- 02:58:05 DEBUG 02:59:04 INFO 02:59:04 INFO [loop_until]: kubectl --namespace=xlou top pods 02:59:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:59:04 INFO [loop_until]: OK (rc = 0) 02:59:04 DEBUG --- stdout --- 02:59:04 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 26m 5760Mi am-55f77847b7-l482k 24m 5706Mi am-55f77847b7-qhqgg 32m 5822Mi ds-cts-0 6m 399Mi ds-cts-1 8m 393Mi ds-cts-2 6m 361Mi ds-idrepo-0 1007m 13837Mi ds-idrepo-1 359m 13833Mi ds-idrepo-2 212m 13829Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 826m 3993Mi idm-65858d8c4c-vdncx 758m 3648Mi lodemon-7b659c988b-78sgh 1m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 195m 493Mi 02:59:04 DEBUG --- stderr --- 02:59:04 DEBUG 02:59:05 INFO 02:59:05 INFO [loop_until]: kubectl --namespace=xlou top node 02:59:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 02:59:05 INFO [loop_until]: OK (rc = 0) 02:59:05 DEBUG --- stdout --- 02:59:05 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1325Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 81m 0% 6768Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 84m 0% 6946Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 81m 0% 6808Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 913m 5% 5291Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 369m 2% 2152Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 867m 5% 4895Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1058m 6% 14440Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 420m 2% 14422Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 253m 1% 14402Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 265m 1% 2013Mi 3% 02:59:05 DEBUG --- stderr --- 02:59:05 DEBUG 03:00:04 INFO 03:00:04 INFO [loop_until]: kubectl --namespace=xlou top pods 03:00:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:00:04 INFO [loop_until]: OK (rc = 0) 03:00:04 DEBUG --- stdout --- 03:00:04 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 22m 5760Mi am-55f77847b7-l482k 24m 5714Mi am-55f77847b7-qhqgg 20m 5826Mi ds-cts-0 5m 397Mi ds-cts-1 5m 393Mi ds-cts-2 8m 363Mi ds-idrepo-0 953m 13840Mi ds-idrepo-1 160m 13824Mi ds-idrepo-2 334m 13832Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 803m 3999Mi idm-65858d8c4c-vdncx 788m 3652Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 194m 494Mi 03:00:04 DEBUG --- stderr --- 03:00:04 DEBUG 03:00:05 INFO 03:00:05 INFO [loop_until]: kubectl --namespace=xlou top node 03:00:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:00:05 INFO [loop_until]: OK (rc = 0) 03:00:05 DEBUG --- stdout --- 03:00:05 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1328Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 77m 0% 6767Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 79m 0% 6948Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 80m 0% 6815Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 887m 5% 5300Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 369m 2% 2150Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 857m 5% 4899Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 999m 6% 14450Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1086Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 213m 1% 14418Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 358m 2% 14408Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 258m 1% 2012Mi 3% 03:00:05 DEBUG --- stderr --- 03:00:05 DEBUG 03:01:04 INFO 03:01:04 INFO [loop_until]: kubectl --namespace=xlou top pods 03:01:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:01:04 INFO [loop_until]: OK (rc = 0) 03:01:04 DEBUG --- stdout --- 03:01:04 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 34m 5760Mi am-55f77847b7-l482k 20m 5714Mi am-55f77847b7-qhqgg 20m 5826Mi ds-cts-0 5m 397Mi ds-cts-1 5m 393Mi ds-cts-2 6m 363Mi ds-idrepo-0 1313m 13827Mi ds-idrepo-1 278m 13833Mi ds-idrepo-2 796m 13824Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 788m 4007Mi idm-65858d8c4c-vdncx 786m 3658Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 191m 495Mi 03:01:04 DEBUG --- stderr --- 03:01:04 DEBUG 03:01:06 INFO 03:01:06 INFO [loop_until]: kubectl --namespace=xlou top node 03:01:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:01:06 INFO [loop_until]: OK (rc = 0) 03:01:06 DEBUG --- stdout --- 03:01:06 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1331Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 85m 0% 6767Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 77m 0% 6946Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 72m 0% 6817Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 872m 5% 5310Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 368m 2% 2147Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 894m 5% 4905Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1275m 8% 14443Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 352m 2% 14438Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 848m 5% 14413Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 252m 1% 2015Mi 3% 03:01:06 DEBUG --- stderr --- 03:01:06 DEBUG 03:02:04 INFO 03:02:04 INFO [loop_until]: kubectl --namespace=xlou top pods 03:02:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:02:04 INFO [loop_until]: OK (rc = 0) 03:02:04 DEBUG --- stdout --- 03:02:04 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 23m 5760Mi am-55f77847b7-l482k 21m 5714Mi am-55f77847b7-qhqgg 20m 5826Mi ds-cts-0 6m 397Mi ds-cts-1 5m 393Mi ds-cts-2 6m 364Mi ds-idrepo-0 1012m 13831Mi ds-idrepo-1 160m 13831Mi ds-idrepo-2 306m 13838Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 786m 4011Mi idm-65858d8c4c-vdncx 817m 3662Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 194m 495Mi 03:02:04 DEBUG --- stderr --- 03:02:04 DEBUG 03:02:06 INFO 03:02:06 INFO [loop_until]: kubectl --namespace=xlou top node 03:02:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:02:06 INFO [loop_until]: OK (rc = 0) 03:02:06 DEBUG --- stdout --- 03:02:06 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1325Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 80m 0% 6768Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 77m 0% 6946Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 76m 0% 6817Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 859m 5% 5315Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 376m 2% 2149Mi 3% 
gke-xlou-cdm-default-pool-f05840a3-tnc9 887m 5% 4910Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1216m 7% 14455Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 212m 1% 14436Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 365m 2% 14422Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 259m 1% 2015Mi 3% 03:02:06 DEBUG --- stderr --- 03:02:06 DEBUG 03:03:04 INFO 03:03:04 INFO [loop_until]: kubectl --namespace=xlou top pods 03:03:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:03:04 INFO [loop_until]: OK (rc = 0) 03:03:04 DEBUG --- stdout --- 03:03:04 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 23m 5760Mi am-55f77847b7-l482k 21m 5714Mi am-55f77847b7-qhqgg 20m 5826Mi ds-cts-0 7m 401Mi ds-cts-1 5m 393Mi ds-cts-2 6m 363Mi ds-idrepo-0 1158m 13820Mi ds-idrepo-1 167m 13837Mi ds-idrepo-2 164m 13821Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 841m 4033Mi idm-65858d8c4c-vdncx 792m 3666Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 203m 495Mi 03:03:04 DEBUG --- stderr --- 03:03:04 DEBUG 03:03:06 INFO 03:03:06 INFO [loop_until]: kubectl --namespace=xlou top node 03:03:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:03:06 INFO [loop_until]: OK (rc = 0) 03:03:06 DEBUG --- stdout --- 03:03:06 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1331Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 78m 0% 6769Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 80m 0% 6948Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 76m 0% 6811Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 920m 5% 5334Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 375m 2% 2153Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 896m 5% 4915Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1198m 7% 14434Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 219m 1% 14441Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 373m 2% 14425Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 274m 1% 2014Mi 3% 03:03:06 DEBUG --- stderr --- 03:03:06 DEBUG 03:04:04 INFO 03:04:04 INFO [loop_until]: kubectl --namespace=xlou top pods 03:04:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:04:05 INFO [loop_until]: OK (rc = 0) 03:04:05 DEBUG --- stdout --- 03:04:05 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 24m 5761Mi am-55f77847b7-l482k 21m 5714Mi am-55f77847b7-qhqgg 21m 5826Mi ds-cts-0 6m 401Mi ds-cts-1 8m 393Mi ds-cts-2 6m 363Mi ds-idrepo-0 871m 13823Mi ds-idrepo-1 158m 13847Mi ds-idrepo-2 168m 13830Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 767m 4015Mi idm-65858d8c4c-vdncx 799m 3671Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 180m 496Mi 03:04:05 DEBUG --- stderr --- 03:04:05 DEBUG 03:04:06 INFO 03:04:06 INFO [loop_until]: kubectl --namespace=xlou top node 03:04:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:04:06 INFO [loop_until]: OK (rc = 0) 03:04:06 DEBUG --- stdout --- 03:04:06 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1325Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 82m 0% 6769Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 76m 0% 6948Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 78m 0% 
6814Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 865m 5% 5315Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 373m 2% 2154Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 884m 5% 4917Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 831m 5% 14439Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 209m 1% 14447Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 215m 1% 14415Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 258m 1% 2015Mi 3% 03:04:06 DEBUG --- stderr --- 03:04:06 DEBUG 03:05:05 INFO 03:05:05 INFO [loop_until]: kubectl --namespace=xlou top pods 03:05:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:05:05 INFO [loop_until]: OK (rc = 0) 03:05:05 DEBUG --- stdout --- 03:05:05 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 24m 5760Mi am-55f77847b7-l482k 21m 5714Mi am-55f77847b7-qhqgg 25m 5827Mi ds-cts-0 5m 401Mi ds-cts-1 5m 395Mi ds-cts-2 14m 361Mi ds-idrepo-0 965m 13827Mi ds-idrepo-1 168m 13844Mi ds-idrepo-2 179m 13838Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 823m 4020Mi idm-65858d8c4c-vdncx 821m 3675Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 200m 497Mi 03:05:05 DEBUG --- stderr --- 03:05:05 DEBUG 03:05:06 INFO 03:05:06 INFO [loop_until]: kubectl --namespace=xlou top node 03:05:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:05:06 INFO [loop_until]: OK (rc = 0) 03:05:06 DEBUG --- stdout --- 03:05:06 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1326Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 79m 0% 6768Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 82m 0% 6944Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 77m 0% 6812Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 883m 5% 5317Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 371m 2% 2151Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 915m 5% 4919Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1047m 6% 14446Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 67m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 216m 1% 14448Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 234m 1% 14420Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 268m 1% 2015Mi 3% 03:05:06 DEBUG --- stderr --- 03:05:06 DEBUG 03:06:05 INFO 03:06:05 INFO [loop_until]: kubectl --namespace=xlou top pods 03:06:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:06:05 INFO [loop_until]: OK (rc = 0) 03:06:05 DEBUG --- stdout --- 03:06:05 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 24m 5760Mi am-55f77847b7-l482k 21m 5714Mi am-55f77847b7-qhqgg 21m 5826Mi ds-cts-0 5m 401Mi ds-cts-1 5m 395Mi ds-cts-2 6m 361Mi ds-idrepo-0 968m 13833Mi ds-idrepo-1 269m 13848Mi ds-idrepo-2 335m 13837Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 782m 4024Mi idm-65858d8c4c-vdncx 795m 3680Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 184m 497Mi 03:06:05 DEBUG --- stderr --- 03:06:05 DEBUG 03:06:06 INFO 03:06:06 INFO [loop_until]: kubectl --namespace=xlou top node 03:06:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:06:06 INFO [loop_until]: OK (rc = 0) 03:06:06 DEBUG --- stdout --- 03:06:06 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1326Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 
80m 0% 6768Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 77m 0% 6946Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 74m 0% 6810Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 892m 5% 5327Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 385m 2% 2149Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 878m 5% 4927Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1029m 6% 14451Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 368m 2% 14453Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 403m 2% 14430Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 251m 1% 2017Mi 3% 03:06:06 DEBUG --- stderr --- 03:06:06 DEBUG 03:07:05 INFO 03:07:05 INFO [loop_until]: kubectl --namespace=xlou top pods 03:07:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:07:05 INFO [loop_until]: OK (rc = 0) 03:07:05 DEBUG --- stdout --- 03:07:05 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 26m 5760Mi am-55f77847b7-l482k 24m 5714Mi am-55f77847b7-qhqgg 21m 5826Mi ds-cts-0 6m 402Mi ds-cts-1 5m 395Mi ds-cts-2 7m 362Mi ds-idrepo-0 1152m 13840Mi ds-idrepo-1 381m 13848Mi ds-idrepo-2 1169m 13824Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 786m 4044Mi idm-65858d8c4c-vdncx 791m 3684Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 180m 498Mi 03:07:05 DEBUG --- stderr --- 03:07:05 DEBUG 03:07:06 INFO 03:07:06 INFO [loop_until]: kubectl --namespace=xlou top node 03:07:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:07:06 INFO [loop_until]: OK (rc = 0) 03:07:06 DEBUG --- stdout --- 03:07:06 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1330Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 78m 0% 6768Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 77m 0% 6946Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 79m 0% 6814Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 873m 5% 5344Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 363m 2% 2145Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 879m 5% 4931Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1338m 8% 14473Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 361m 2% 14464Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1053m 6% 14420Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 251m 1% 2018Mi 3% 03:07:06 DEBUG --- stderr --- 03:07:06 DEBUG 03:08:05 INFO 03:08:05 INFO [loop_until]: kubectl --namespace=xlou top pods 03:08:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:08:05 INFO [loop_until]: OK (rc = 0) 03:08:05 DEBUG --- stdout --- 03:08:05 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 24m 5760Mi am-55f77847b7-l482k 21m 5714Mi am-55f77847b7-qhqgg 23m 5826Mi ds-cts-0 7m 402Mi ds-cts-1 6m 395Mi ds-cts-2 6m 362Mi ds-idrepo-0 1438m 13822Mi ds-idrepo-1 152m 13845Mi ds-idrepo-2 362m 13835Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 786m 4033Mi idm-65858d8c4c-vdncx 764m 3688Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 182m 499Mi 03:08:05 DEBUG --- stderr --- 03:08:05 DEBUG 03:08:06 INFO 03:08:06 INFO [loop_until]: kubectl --namespace=xlou top node 03:08:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:08:06 INFO [loop_until]: OK (rc = 0) 03:08:06 DEBUG --- stdout --- 03:08:06 DEBUG NAME 
CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1328Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 80m 0% 6765Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 76m 0% 6946Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 76m 0% 6810Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 882m 5% 5353Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 371m 2% 2144Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 888m 5% 4938Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1362m 8% 14459Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 420m 2% 14456Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 415m 2% 14435Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 255m 1% 2017Mi 3% 03:08:06 DEBUG --- stderr --- 03:08:06 DEBUG 03:09:05 INFO 03:09:05 INFO [loop_until]: kubectl --namespace=xlou top pods 03:09:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:09:05 INFO [loop_until]: OK (rc = 0) 03:09:05 DEBUG --- stdout --- 03:09:05 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 27m 5761Mi am-55f77847b7-l482k 21m 5714Mi am-55f77847b7-qhqgg 23m 5826Mi ds-cts-0 6m 402Mi ds-cts-1 9m 397Mi ds-cts-2 6m 362Mi ds-idrepo-0 920m 13820Mi ds-idrepo-1 530m 13824Mi ds-idrepo-2 189m 13824Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 788m 4039Mi idm-65858d8c4c-vdncx 785m 3691Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 180m 499Mi 03:09:05 DEBUG --- stderr --- 03:09:05 DEBUG 03:09:07 INFO 03:09:07 INFO [loop_until]: kubectl --namespace=xlou top node 03:09:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:09:07 INFO [loop_until]: OK (rc = 0) 03:09:07 DEBUG --- stdout --- 03:09:07 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1324Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 88m 0% 6770Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 79m 0% 6944Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 78m 0% 6813Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 908m 5% 5342Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 359m 2% 2157Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 851m 5% 4947Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1001m 6% 14454Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 609m 3% 14442Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 239m 1% 14427Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 243m 1% 2019Mi 3% 03:09:07 DEBUG --- stderr --- 03:09:07 DEBUG 03:10:05 INFO 03:10:05 INFO [loop_until]: kubectl --namespace=xlou top pods 03:10:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:10:05 INFO [loop_until]: OK (rc = 0) 03:10:05 DEBUG --- stdout --- 03:10:05 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 25m 5762Mi am-55f77847b7-l482k 20m 5714Mi am-55f77847b7-qhqgg 21m 5826Mi ds-cts-0 6m 402Mi ds-cts-1 7m 398Mi ds-cts-2 5m 362Mi ds-idrepo-0 933m 13830Mi ds-idrepo-1 153m 13808Mi ds-idrepo-2 260m 13836Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 820m 4043Mi idm-65858d8c4c-vdncx 752m 3697Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 176m 499Mi 03:10:05 DEBUG --- stderr --- 03:10:05 DEBUG 03:10:07 INFO 03:10:07 INFO [loop_until]: kubectl --namespace=xlou top node 03:10:07 INFO [loop_until]: 
(max_time=180, interval=5, expected_rc=[0] 03:10:07 INFO [loop_until]: OK (rc = 0) 03:10:07 DEBUG --- stdout --- 03:10:07 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1328Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 80m 0% 6768Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 78m 0% 6943Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 77m 0% 6813Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 899m 5% 5344Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 367m 2% 2152Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 793m 4% 4946Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1008m 6% 14465Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 369m 2% 14443Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 241m 1% 14432Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 242m 1% 2018Mi 3% 03:10:07 DEBUG --- stderr --- 03:10:07 DEBUG 03:11:05 INFO 03:11:05 INFO [loop_until]: kubectl --namespace=xlou top pods 03:11:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:11:05 INFO [loop_until]: OK (rc = 0) 03:11:05 DEBUG --- stdout --- 03:11:05 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 11m 5762Mi am-55f77847b7-l482k 7m 5714Mi am-55f77847b7-qhqgg 8m 5826Mi ds-cts-0 5m 402Mi ds-cts-1 6m 397Mi ds-cts-2 6m 362Mi ds-idrepo-0 204m 13838Mi ds-idrepo-1 109m 13805Mi ds-idrepo-2 13m 13839Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 4m 4044Mi idm-65858d8c4c-vdncx 6m 3697Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 43m 118Mi 03:11:05 DEBUG --- stderr --- 03:11:05 DEBUG 03:11:07 INFO 03:11:07 INFO [loop_until]: kubectl --namespace=xlou top node 03:11:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:11:07 INFO [loop_until]: OK (rc = 0) 03:11:07 DEBUG --- stdout --- 03:11:07 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1329Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 6767Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 66m 0% 6948Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 6815Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 67m 0% 5344Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 125m 0% 2151Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 4947Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 314m 1% 14473Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 93m 0% 14422Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 68m 0% 14433Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 105m 0% 1643Mi 2% 03:11:07 DEBUG --- stderr --- 03:11:07 DEBUG 03:12:05 INFO 03:12:05 INFO [loop_until]: kubectl --namespace=xlou top pods 03:12:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:12:05 INFO [loop_until]: OK (rc = 0) 03:12:05 DEBUG --- stdout --- 03:12:05 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 9m 5762Mi am-55f77847b7-l482k 7m 5714Mi am-55f77847b7-qhqgg 8m 5826Mi ds-cts-0 5m 402Mi ds-cts-1 8m 397Mi ds-cts-2 7m 362Mi ds-idrepo-0 12m 13839Mi ds-idrepo-1 9m 13805Mi ds-idrepo-2 11m 13838Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 4m 4043Mi idm-65858d8c4c-vdncx 6m 3696Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 1m 118Mi 03:12:05 DEBUG --- stderr --- 03:12:05 DEBUG 03:12:07 
INFO 03:12:07 INFO [loop_until]: kubectl --namespace=xlou top node 03:12:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:12:07 INFO [loop_until]: OK (rc = 0) 03:12:07 DEBUG --- stdout --- 03:12:07 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1327Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 6769Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 62m 0% 6948Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 61m 0% 6813Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 70m 0% 5347Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2150Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 4944Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 14473Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1091Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 61m 0% 14421Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 64m 0% 14436Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 67m 0% 1641Mi 2% 03:12:07 DEBUG --- stderr --- 03:12:07 DEBUG 127.0.0.1 - - [12/Aug/2023 03:12:19] "GET /monitoring/average?start_time=23-08-12_01:41:47&stop_time=23-08-12_02:10:18 HTTP/1.1" 200 - 03:13:05 INFO 03:13:05 INFO [loop_until]: kubectl --namespace=xlou top pods 03:13:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:13:05 INFO [loop_until]: OK (rc = 0) 03:13:05 DEBUG --- stdout --- 03:13:05 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 19m 5762Mi am-55f77847b7-l482k 12m 5715Mi am-55f77847b7-qhqgg 22m 5827Mi ds-cts-0 7m 402Mi ds-cts-1 8m 397Mi ds-cts-2 7m 362Mi ds-idrepo-0 28m 13841Mi ds-idrepo-1 10m 13806Mi ds-idrepo-2 224m 13843Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 53m 4050Mi idm-65858d8c4c-vdncx 146m 3701Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 1782m 413Mi 03:13:05 DEBUG --- stderr --- 03:13:05 DEBUG 03:13:07 INFO 03:13:07 INFO [loop_until]: kubectl --namespace=xlou top node 03:13:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:13:07 INFO [loop_until]: OK (rc = 0) 03:13:07 DEBUG --- stdout --- 03:13:07 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1328Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 78m 0% 6768Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 79m 0% 6948Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 76m 0% 6814Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 211m 1% 5352Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 132m 0% 2144Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 199m 1% 4950Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 83m 0% 14468Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 64m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 62m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 146m 0% 14438Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 87m 0% 14449Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1939m 12% 2002Mi 3% 03:13:07 DEBUG --- stderr --- 03:13:07 DEBUG 03:14:06 INFO 03:14:06 INFO [loop_until]: kubectl --namespace=xlou top pods 03:14:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:14:06 INFO [loop_until]: OK (rc = 0) 03:14:06 DEBUG --- stdout --- 03:14:06 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 29m 5762Mi am-55f77847b7-l482k 28m 5714Mi am-55f77847b7-qhqgg 27m 5827Mi ds-cts-0 11m 402Mi ds-cts-1 6m 398Mi ds-cts-2 9m 363Mi ds-idrepo-0 913m 13840Mi ds-idrepo-1 616m 13843Mi ds-idrepo-2 681m 13805Mi 
end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 505m 4052Mi idm-65858d8c4c-vdncx 552m 3710Mi lodemon-7b659c988b-78sgh 1m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 275m 507Mi 03:14:06 DEBUG --- stderr --- 03:14:06 DEBUG 03:14:07 INFO 03:14:07 INFO [loop_until]: kubectl --namespace=xlou top node 03:14:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:14:07 INFO [loop_until]: OK (rc = 0) 03:14:07 DEBUG --- stdout --- 03:14:07 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1329Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 85m 0% 6768Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 85m 0% 6947Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 85m 0% 6814Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 600m 3% 5357Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 306m 1% 2155Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 613m 3% 4963Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1004m 6% 14463Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1091Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 64m 0% 1078Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 667m 4% 14416Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 69m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 703m 4% 14418Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 330m 2% 2026Mi 3% 03:14:07 DEBUG --- stderr --- 03:14:07 DEBUG 03:15:06 INFO 03:15:06 INFO [loop_until]: kubectl --namespace=xlou top pods 03:15:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:15:06 INFO [loop_until]: OK (rc = 0) 03:15:06 DEBUG --- stdout --- 03:15:06 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 29m 5762Mi am-55f77847b7-l482k 30m 5714Mi am-55f77847b7-qhqgg 28m 5827Mi ds-cts-0 6m 402Mi ds-cts-1 13m 394Mi ds-cts-2 9m 363Mi ds-idrepo-0 855m 13823Mi ds-idrepo-1 598m 13842Mi ds-idrepo-2 740m 13823Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 462m 4062Mi idm-65858d8c4c-vdncx 464m 3719Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 187m 519Mi 03:15:06 DEBUG --- stderr --- 03:15:06 DEBUG 03:15:07 INFO 03:15:07 INFO [loop_until]: kubectl --namespace=xlou top node 03:15:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:15:07 INFO [loop_until]: OK (rc = 0) 03:15:07 DEBUG --- stdout --- 03:15:07 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1323Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 85m 0% 6767Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 80m 0% 6946Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 81m 0% 6812Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 547m 3% 5368Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 292m 1% 2157Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 544m 3% 4967Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 908m 5% 14466Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 617m 3% 14451Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 659m 4% 14421Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 255m 1% 2039Mi 3% 03:15:07 DEBUG --- stderr --- 03:15:07 DEBUG 03:16:06 INFO 03:16:06 INFO [loop_until]: kubectl --namespace=xlou top pods 03:16:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:16:06 INFO [loop_until]: OK (rc = 0) 03:16:06 DEBUG --- stdout --- 03:16:06 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 32m 5762Mi am-55f77847b7-l482k 26m 5714Mi am-55f77847b7-qhqgg 28m 5827Mi 
ds-cts-0 10m 403Mi ds-cts-1 7m 394Mi ds-cts-2 10m 364Mi ds-idrepo-0 862m 13823Mi ds-idrepo-1 508m 13824Mi ds-idrepo-2 497m 13806Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 535m 4069Mi idm-65858d8c4c-vdncx 503m 3725Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 149m 520Mi 03:16:06 DEBUG --- stderr --- 03:16:06 DEBUG 03:16:07 INFO 03:16:07 INFO [loop_until]: kubectl --namespace=xlou top node 03:16:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:16:08 INFO [loop_until]: OK (rc = 0) 03:16:08 DEBUG --- stdout --- 03:16:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1329Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 86m 0% 6769Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 85m 0% 6948Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 80m 0% 6826Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 614m 3% 5376Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 292m 1% 2154Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 584m 3% 4971Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 907m 5% 14471Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 63m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 611m 3% 14455Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 696m 4% 14430Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 219m 1% 2039Mi 3% 03:16:08 DEBUG --- stderr --- 03:16:08 DEBUG 03:17:06 INFO 03:17:06 INFO [loop_until]: kubectl --namespace=xlou top pods 03:17:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:17:06 INFO [loop_until]: OK (rc = 0) 03:17:06 DEBUG --- stdout --- 03:17:06 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 32m 5762Mi am-55f77847b7-l482k 27m 5714Mi am-55f77847b7-qhqgg 36m 5827Mi ds-cts-0 13m 403Mi ds-cts-1 6m 394Mi ds-cts-2 7m 364Mi ds-idrepo-0 908m 13807Mi ds-idrepo-1 612m 13814Mi ds-idrepo-2 615m 13807Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 485m 4081Mi idm-65858d8c4c-vdncx 483m 3732Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 142m 521Mi 03:17:06 DEBUG --- stderr --- 03:17:06 DEBUG 03:17:08 INFO 03:17:08 INFO [loop_until]: kubectl --namespace=xlou top node 03:17:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:17:08 INFO [loop_until]: OK (rc = 0) 03:17:08 DEBUG --- stdout --- 03:17:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1326Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 86m 0% 6771Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 93m 0% 6947Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 81m 0% 6816Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 566m 3% 5385Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 307m 1% 2147Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 555m 3% 4981Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 882m 5% 14452Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 453m 2% 14455Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 728m 4% 14422Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 206m 1% 2041Mi 3% 03:17:08 DEBUG --- stderr --- 03:17:08 DEBUG 03:18:06 INFO 03:18:06 INFO [loop_until]: kubectl --namespace=xlou top pods 03:18:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:18:06 INFO [loop_until]: OK (rc = 0) 03:18:06 DEBUG --- stdout --- 03:18:06 DEBUG NAME CPU(cores) MEMORY(bytes) 
admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 30m 5762Mi am-55f77847b7-l482k 27m 5714Mi am-55f77847b7-qhqgg 31m 5827Mi ds-cts-0 6m 403Mi ds-cts-1 6m 394Mi ds-cts-2 11m 365Mi ds-idrepo-0 826m 13807Mi ds-idrepo-1 508m 13823Mi ds-idrepo-2 706m 13808Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 451m 4085Mi idm-65858d8c4c-vdncx 464m 3740Mi lodemon-7b659c988b-78sgh 1m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 136m 521Mi 03:18:06 DEBUG --- stderr --- 03:18:06 DEBUG 03:18:08 INFO 03:18:08 INFO [loop_until]: kubectl --namespace=xlou top node 03:18:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:18:08 INFO [loop_until]: OK (rc = 0) 03:18:08 DEBUG --- stdout --- 03:18:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1324Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 86m 0% 6771Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 87m 0% 6948Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 82m 0% 6813Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 541m 3% 5406Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 290m 1% 2144Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 541m 3% 4988Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 895m 5% 14452Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 69m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1025m 6% 14456Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 66m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 637m 4% 14442Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 210m 1% 2040Mi 3% 03:18:08 DEBUG --- stderr --- 03:18:08 DEBUG 03:19:06 INFO 03:19:06 INFO [loop_until]: kubectl --namespace=xlou top pods 03:19:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:19:06 INFO [loop_until]: OK (rc = 0) 03:19:06 DEBUG --- stdout --- 03:19:06 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 30m 5762Mi am-55f77847b7-l482k 27m 5715Mi am-55f77847b7-qhqgg 30m 5827Mi ds-cts-0 7m 403Mi ds-cts-1 7m 394Mi ds-cts-2 6m 365Mi ds-idrepo-0 1509m 13721Mi ds-idrepo-1 812m 13741Mi ds-idrepo-2 593m 13819Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 462m 4089Mi idm-65858d8c4c-vdncx 448m 3743Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 135m 521Mi 03:19:06 DEBUG --- stderr --- 03:19:06 DEBUG 03:19:08 INFO 03:19:08 INFO [loop_until]: kubectl --namespace=xlou top node 03:19:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:19:08 INFO [loop_until]: OK (rc = 0) 03:19:08 DEBUG --- stdout --- 03:19:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1329Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 86m 0% 6768Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 85m 0% 6951Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 83m 0% 6813Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 556m 3% 5394Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 296m 1% 2147Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 538m 3% 4993Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1575m 9% 14370Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 907m 5% 14382Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 849m 5% 14447Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 209m 1% 2042Mi 3% 03:19:08 DEBUG --- stderr --- 03:19:08 DEBUG 03:20:06 INFO 03:20:06 INFO [loop_until]: kubectl --namespace=xlou top pods 03:20:06 INFO [loop_until]: (max_time=180, interval=5, 
expected_rc=[0] 03:20:06 INFO [loop_until]: OK (rc = 0) 03:20:06 DEBUG --- stdout --- 03:20:06 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 31m 5762Mi am-55f77847b7-l482k 27m 5714Mi am-55f77847b7-qhqgg 29m 5827Mi ds-cts-0 6m 403Mi ds-cts-1 6m 394Mi ds-cts-2 13m 366Mi ds-idrepo-0 949m 13804Mi ds-idrepo-1 805m 13837Mi ds-idrepo-2 1603m 13823Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 470m 4094Mi idm-65858d8c4c-vdncx 462m 3747Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 140m 521Mi 03:20:06 DEBUG --- stderr --- 03:20:06 DEBUG 03:20:08 INFO 03:20:08 INFO [loop_until]: kubectl --namespace=xlou top node 03:20:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:20:08 INFO [loop_until]: OK (rc = 0) 03:20:08 DEBUG --- stdout --- 03:20:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1327Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 88m 0% 6769Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 85m 0% 6951Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 83m 0% 6817Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 555m 3% 5397Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 296m 1% 2147Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 530m 3% 4997Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1115m 7% 14467Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 62m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 610m 3% 14450Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1136Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1514m 9% 14450Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 200m 1% 2042Mi 3% 03:20:08 DEBUG --- stderr --- 03:20:08 DEBUG 03:21:06 INFO 03:21:06 INFO [loop_until]: kubectl --namespace=xlou top pods 03:21:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:21:06 INFO [loop_until]: OK (rc = 0) 03:21:06 DEBUG --- stdout --- 03:21:06 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 27m 5765Mi am-55f77847b7-l482k 27m 5714Mi am-55f77847b7-qhqgg 28m 5828Mi ds-cts-0 6m 403Mi ds-cts-1 6m 394Mi ds-cts-2 6m 366Mi ds-idrepo-0 885m 13811Mi ds-idrepo-1 623m 13804Mi ds-idrepo-2 649m 13844Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 464m 4100Mi idm-65858d8c4c-vdncx 469m 3751Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 146m 521Mi 03:21:06 DEBUG --- stderr --- 03:21:06 DEBUG 03:21:08 INFO 03:21:08 INFO [loop_until]: kubectl --namespace=xlou top node 03:21:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:21:08 INFO [loop_until]: OK (rc = 0) 03:21:08 DEBUG --- stdout --- 03:21:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1325Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 82m 0% 6774Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 85m 0% 6950Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 81m 0% 6816Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 565m 3% 5403Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 301m 1% 2152Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 551m 3% 5001Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1056m 6% 14492Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 658m 4% 14441Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 638m 4% 14456Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 216m 1% 2043Mi 3% 03:21:08 DEBUG --- stderr --- 03:21:08 DEBUG 
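Note: the repeated [loop_until] entries above come from the monitoring loop, which samples "kubectl top pods" and "kubectl top node" roughly once a minute and retries each command until it returns an expected exit code or a timeout expires. As a rough illustration only (this is an assumption, not lodemon's actual implementation), a minimal polling helper with the same max_time / interval / expected_rc knobs could look like this in Python:

    import shlex
    import subprocess
    import time

    def loop_until(command, max_time=180, interval=5, expected_rc=(0,)):
        # Run `command` repeatedly until its return code is in `expected_rc`
        # or `max_time` seconds have elapsed; return the last result.
        deadline = time.monotonic() + max_time
        while True:
            result = subprocess.run(shlex.split(command),
                                    capture_output=True, text=True)
            if result.returncode in expected_rc or time.monotonic() >= deadline:
                return result
            time.sleep(interval)

    # Example: take one monitoring sample, as in the log entries above
    # (requires kubectl on PATH and access to the cluster).
    sample = loop_until("kubectl --namespace=xlou top pods")
    print(sample.stdout)

The helper names and signature here are hypothetical; the log only shows the parameters (max_time=180, interval=5, expected_rc=[0]) and the commands being polled.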
03:22:06 INFO 03:22:06 INFO [loop_until]: kubectl --namespace=xlou top pods 03:22:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:22:06 INFO [loop_until]: OK (rc = 0) 03:22:06 DEBUG --- stdout --- 03:22:06 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 27m 5765Mi am-55f77847b7-l482k 27m 5718Mi am-55f77847b7-qhqgg 25m 5828Mi ds-cts-0 7m 403Mi ds-cts-1 9m 394Mi ds-cts-2 5m 366Mi ds-idrepo-0 1048m 13822Mi ds-idrepo-1 526m 13844Mi ds-idrepo-2 474m 13810Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 486m 4104Mi idm-65858d8c4c-vdncx 470m 3754Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 147m 522Mi 03:22:06 DEBUG --- stderr --- 03:22:06 DEBUG 03:22:08 INFO 03:22:08 INFO [loop_until]: kubectl --namespace=xlou top node 03:22:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:22:08 INFO [loop_until]: OK (rc = 0) 03:22:08 DEBUG --- stdout --- 03:22:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1329Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 81m 0% 6774Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 82m 0% 6952Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 82m 0% 6820Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 569m 3% 5408Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 322m 2% 2155Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 560m 3% 5004Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1177m 7% 14494Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 550m 3% 14467Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 684m 4% 14443Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 212m 1% 2039Mi 3% 03:22:08 DEBUG --- stderr --- 03:22:08 DEBUG 03:23:06 INFO 03:23:06 INFO [loop_until]: kubectl --namespace=xlou top pods 03:23:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:23:07 INFO [loop_until]: OK (rc = 0) 03:23:07 DEBUG --- stdout --- 03:23:07 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 30m 5765Mi am-55f77847b7-l482k 24m 5718Mi am-55f77847b7-qhqgg 27m 5829Mi ds-cts-0 6m 403Mi ds-cts-1 6m 394Mi ds-cts-2 5m 366Mi ds-idrepo-0 731m 13807Mi ds-idrepo-1 577m 13799Mi ds-idrepo-2 432m 13823Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 526m 4108Mi idm-65858d8c4c-vdncx 449m 3759Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 136m 521Mi 03:23:07 DEBUG --- stderr --- 03:23:07 DEBUG 03:23:08 INFO 03:23:08 INFO [loop_until]: kubectl --namespace=xlou top node 03:23:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:23:08 INFO [loop_until]: OK (rc = 0) 03:23:08 DEBUG --- stdout --- 03:23:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1329Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 85m 0% 6773Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 86m 0% 6954Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 77m 0% 6820Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 617m 3% 5411Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 301m 1% 2153Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 521m 3% 5009Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 986m 6% 14507Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 64m 0% 1097Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 645m 4% 14440Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1124Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-x4wx 637m 4% 14477Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 207m 1% 2041Mi 3% 03:23:08 DEBUG --- stderr --- 03:23:08 DEBUG 03:24:07 INFO 03:24:07 INFO [loop_until]: kubectl --namespace=xlou top pods 03:24:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:24:07 INFO [loop_until]: OK (rc = 0) 03:24:07 DEBUG --- stdout --- 03:24:07 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 27m 5766Mi am-55f77847b7-l482k 25m 5718Mi am-55f77847b7-qhqgg 28m 5829Mi ds-cts-0 6m 403Mi ds-cts-1 6m 394Mi ds-cts-2 6m 366Mi ds-idrepo-0 1241m 13830Mi ds-idrepo-1 1077m 13787Mi ds-idrepo-2 396m 13823Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 490m 4112Mi idm-65858d8c4c-vdncx 457m 3763Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 141m 521Mi 03:24:07 DEBUG --- stderr --- 03:24:07 DEBUG 03:24:08 INFO 03:24:08 INFO [loop_until]: kubectl --namespace=xlou top node 03:24:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:24:08 INFO [loop_until]: OK (rc = 0) 03:24:08 DEBUG --- stdout --- 03:24:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1328Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 83m 0% 6770Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 85m 0% 6953Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 81m 0% 6822Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 580m 3% 5416Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 297m 1% 2151Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 543m 3% 5012Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1310m 8% 14471Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1176m 7% 14479Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 452m 2% 14457Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 210m 1% 2043Mi 3% 03:24:08 DEBUG --- stderr --- 03:24:08 DEBUG 03:25:07 INFO 03:25:07 INFO [loop_until]: kubectl --namespace=xlou top pods 03:25:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:25:07 INFO [loop_until]: OK (rc = 0) 03:25:07 DEBUG --- stdout --- 03:25:07 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 28m 5766Mi am-55f77847b7-l482k 24m 5718Mi am-55f77847b7-qhqgg 27m 5829Mi ds-cts-0 6m 403Mi ds-cts-1 7m 394Mi ds-cts-2 6m 366Mi ds-idrepo-0 876m 13822Mi ds-idrepo-1 555m 13801Mi ds-idrepo-2 646m 13844Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 486m 4116Mi idm-65858d8c4c-vdncx 490m 3766Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 143m 521Mi 03:25:07 DEBUG --- stderr --- 03:25:07 DEBUG 03:25:08 INFO 03:25:08 INFO [loop_until]: kubectl --namespace=xlou top node 03:25:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:25:08 INFO [loop_until]: OK (rc = 0) 03:25:08 DEBUG --- stdout --- 03:25:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1329Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 85m 0% 6773Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 86m 0% 6948Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 80m 0% 6817Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 579m 3% 5419Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 301m 1% 2148Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 547m 3% 5018Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 938m 5% 14483Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1085Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 638m 4% 14453Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 651m 4% 14457Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 208m 1% 2043Mi 3% 03:25:08 DEBUG --- stderr --- 03:25:08 DEBUG 03:26:07 INFO 03:26:07 INFO [loop_until]: kubectl --namespace=xlou top pods 03:26:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:26:07 INFO [loop_until]: OK (rc = 0) 03:26:07 DEBUG --- stdout --- 03:26:07 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 28m 5766Mi am-55f77847b7-l482k 24m 5718Mi am-55f77847b7-qhqgg 26m 5829Mi ds-cts-0 6m 403Mi ds-cts-1 6m 394Mi ds-cts-2 6m 366Mi ds-idrepo-0 1002m 13818Mi ds-idrepo-1 527m 13858Mi ds-idrepo-2 941m 13811Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 466m 4120Mi idm-65858d8c4c-vdncx 452m 3770Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 143m 521Mi 03:26:07 DEBUG --- stderr --- 03:26:07 DEBUG 03:26:08 INFO 03:26:08 INFO [loop_until]: kubectl --namespace=xlou top node 03:26:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:26:09 INFO [loop_until]: OK (rc = 0) 03:26:09 DEBUG --- stdout --- 03:26:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1326Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 83m 0% 6774Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 80m 0% 6950Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 79m 0% 6818Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 564m 3% 5434Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 302m 1% 2152Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 579m 3% 5018Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1089m 6% 14483Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 415m 2% 14474Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 851m 5% 14450Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 207m 1% 2042Mi 3% 03:26:09 DEBUG --- stderr --- 03:26:09 DEBUG 03:27:07 INFO 03:27:07 INFO [loop_until]: kubectl --namespace=xlou top pods 03:27:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:27:07 INFO [loop_until]: OK (rc = 0) 03:27:07 DEBUG --- stdout --- 03:27:07 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 28m 5766Mi am-55f77847b7-l482k 31m 5718Mi am-55f77847b7-qhqgg 27m 5829Mi ds-cts-0 6m 403Mi ds-cts-1 7m 394Mi ds-cts-2 6m 367Mi ds-idrepo-0 907m 13830Mi ds-idrepo-1 621m 13785Mi ds-idrepo-2 669m 13791Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 470m 4123Mi idm-65858d8c4c-vdncx 465m 3775Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 140m 522Mi 03:27:07 DEBUG --- stderr --- 03:27:07 DEBUG 03:27:09 INFO 03:27:09 INFO [loop_until]: kubectl --namespace=xlou top node 03:27:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:27:09 INFO [loop_until]: OK (rc = 0) 03:27:09 DEBUG --- stdout --- 03:27:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1328Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 83m 0% 6772Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 85m 0% 6948Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 83m 0% 6813Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 545m 3% 5428Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 292m 1% 2173Mi 3% 
gke-xlou-cdm-default-pool-f05840a3-tnc9 544m 3% 5024Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 974m 6% 14490Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 617m 3% 14499Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 690m 4% 14463Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 201m 1% 2041Mi 3% 03:27:09 DEBUG --- stderr --- 03:27:09 DEBUG 03:28:07 INFO 03:28:07 INFO [loop_until]: kubectl --namespace=xlou top pods 03:28:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:28:07 INFO [loop_until]: OK (rc = 0) 03:28:07 DEBUG --- stdout --- 03:28:07 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 29m 5766Mi am-55f77847b7-l482k 25m 5718Mi am-55f77847b7-qhqgg 27m 5829Mi ds-cts-0 6m 403Mi ds-cts-1 6m 394Mi ds-cts-2 5m 366Mi ds-idrepo-0 1681m 13779Mi ds-idrepo-1 1135m 13667Mi ds-idrepo-2 1329m 13822Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 462m 4127Mi idm-65858d8c4c-vdncx 453m 3779Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 141m 521Mi 03:28:07 DEBUG --- stderr --- 03:28:07 DEBUG 03:28:09 INFO 03:28:09 INFO [loop_until]: kubectl --namespace=xlou top node 03:28:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:28:09 INFO [loop_until]: OK (rc = 0) 03:28:09 DEBUG --- stdout --- 03:28:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1327Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 84m 0% 6773Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 85m 0% 6947Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 80m 0% 6818Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 573m 3% 5431Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 300m 1% 2170Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 542m 3% 5024Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1441m 9% 14488Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1163m 7% 14320Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1116m 7% 14460Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 209m 1% 2043Mi 3% 03:28:09 DEBUG --- stderr --- 03:28:09 DEBUG 03:29:07 INFO 03:29:07 INFO [loop_until]: kubectl --namespace=xlou top pods 03:29:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:29:07 INFO [loop_until]: OK (rc = 0) 03:29:07 DEBUG --- stdout --- 03:29:07 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 28m 5766Mi am-55f77847b7-l482k 27m 5718Mi am-55f77847b7-qhqgg 27m 5829Mi ds-cts-0 6m 403Mi ds-cts-1 6m 394Mi ds-cts-2 6m 366Mi ds-idrepo-0 1145m 13834Mi ds-idrepo-1 570m 13754Mi ds-idrepo-2 621m 13828Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 467m 4130Mi idm-65858d8c4c-vdncx 444m 3782Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 131m 522Mi 03:29:07 DEBUG --- stderr --- 03:29:07 DEBUG 03:29:09 INFO 03:29:09 INFO [loop_until]: kubectl --namespace=xlou top node 03:29:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:29:09 INFO [loop_until]: OK (rc = 0) 03:29:09 DEBUG --- stdout --- 03:29:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1328Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 87m 0% 6770Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 86m 0% 6949Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 84m 0% 
6821Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 558m 3% 5436Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 288m 1% 2158Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 530m 3% 5030Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1116m 7% 14492Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 654m 4% 14408Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 698m 4% 14459Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 209m 1% 2042Mi 3% 03:29:09 DEBUG --- stderr --- 03:29:09 DEBUG 03:30:07 INFO 03:30:07 INFO [loop_until]: kubectl --namespace=xlou top pods 03:30:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:30:07 INFO [loop_until]: OK (rc = 0) 03:30:07 DEBUG --- stdout --- 03:30:07 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 30m 5766Mi am-55f77847b7-l482k 28m 5718Mi am-55f77847b7-qhqgg 26m 5829Mi ds-cts-0 6m 403Mi ds-cts-1 7m 394Mi ds-cts-2 6m 366Mi ds-idrepo-0 2370m 13822Mi ds-idrepo-1 573m 13822Mi ds-idrepo-2 774m 13754Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 478m 4134Mi idm-65858d8c4c-vdncx 454m 3787Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 138m 522Mi 03:30:07 DEBUG --- stderr --- 03:30:07 DEBUG 03:30:09 INFO 03:30:09 INFO [loop_until]: kubectl --namespace=xlou top node 03:30:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:30:09 INFO [loop_until]: OK (rc = 0) 03:30:09 DEBUG --- stdout --- 03:30:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1324Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 85m 0% 6774Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 83m 0% 6951Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 82m 0% 6820Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 546m 3% 5439Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 305m 1% 2154Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 553m 3% 5035Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 2497m 15% 14487Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 632m 3% 14476Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 665m 4% 14392Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 199m 1% 2042Mi 3% 03:30:09 DEBUG --- stderr --- 03:30:09 DEBUG 03:31:07 INFO 03:31:07 INFO [loop_until]: kubectl --namespace=xlou top pods 03:31:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:31:07 INFO [loop_until]: OK (rc = 0) 03:31:07 DEBUG --- stdout --- 03:31:07 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 29m 5766Mi am-55f77847b7-l482k 25m 5718Mi am-55f77847b7-qhqgg 27m 5829Mi ds-cts-0 6m 403Mi ds-cts-1 6m 394Mi ds-cts-2 6m 366Mi ds-idrepo-0 937m 13815Mi ds-idrepo-1 678m 13842Mi ds-idrepo-2 1223m 13815Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 466m 4138Mi idm-65858d8c4c-vdncx 446m 3790Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 135m 522Mi 03:31:07 DEBUG --- stderr --- 03:31:07 DEBUG 03:31:09 INFO 03:31:09 INFO [loop_until]: kubectl --namespace=xlou top node 03:31:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:31:09 INFO [loop_until]: OK (rc = 0) 03:31:09 DEBUG --- stdout --- 03:31:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1328Mi 2% 
gke-xlou-cdm-default-pool-f05840a3-5pbc 86m 0% 6771Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 85m 0% 6949Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 77m 0% 6818Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 534m 3% 5442Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 300m 1% 2156Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 535m 3% 5053Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 995m 6% 14485Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 54m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 627m 3% 14463Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1117m 7% 14469Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 205m 1% 2045Mi 3% 03:31:09 DEBUG --- stderr --- 03:31:09 DEBUG 03:32:07 INFO 03:32:07 INFO [loop_until]: kubectl --namespace=xlou top pods 03:32:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:32:08 INFO [loop_until]: OK (rc = 0) 03:32:08 DEBUG --- stdout --- 03:32:08 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 30m 5766Mi am-55f77847b7-l482k 27m 5718Mi am-55f77847b7-qhqgg 33m 5829Mi ds-cts-0 6m 403Mi ds-cts-1 6m 394Mi ds-cts-2 6m 366Mi ds-idrepo-0 861m 13840Mi ds-idrepo-1 374m 13808Mi ds-idrepo-2 1148m 13687Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 431m 4154Mi idm-65858d8c4c-vdncx 458m 3795Mi lodemon-7b659c988b-78sgh 4m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 139m 522Mi 03:32:08 DEBUG --- stderr --- 03:32:08 DEBUG 03:32:09 INFO 03:32:09 INFO [loop_until]: kubectl --namespace=xlou top node 03:32:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:32:09 INFO [loop_until]: OK (rc = 0) 03:32:09 DEBUG --- stdout --- 03:32:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1327Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 87m 0% 6772Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 90m 0% 6950Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 81m 0% 6820Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 533m 3% 5462Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 304m 1% 2134Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 543m 3% 5045Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 946m 5% 14491Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 65m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 401m 2% 14475Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1050m 6% 14335Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 208m 1% 2037Mi 3% 03:32:09 DEBUG --- stderr --- 03:32:09 DEBUG 03:33:08 INFO 03:33:08 INFO [loop_until]: kubectl --namespace=xlou top pods 03:33:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:33:08 INFO [loop_until]: OK (rc = 0) 03:33:08 DEBUG --- stdout --- 03:33:08 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 30m 5766Mi am-55f77847b7-l482k 30m 5718Mi am-55f77847b7-qhqgg 27m 5829Mi ds-cts-0 6m 403Mi ds-cts-1 6m 394Mi ds-cts-2 6m 366Mi ds-idrepo-0 1394m 13782Mi ds-idrepo-1 311m 13823Mi ds-idrepo-2 787m 13781Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 426m 4144Mi idm-65858d8c4c-vdncx 434m 3799Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 136m 522Mi 03:33:08 DEBUG --- stderr --- 03:33:08 DEBUG 03:33:09 INFO 03:33:09 INFO [loop_until]: kubectl --namespace=xlou top node 03:33:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:33:09 INFO [loop_until]: OK (rc = 0) 03:33:09 DEBUG --- 
stdout --- 03:33:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1328Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 84m 0% 6774Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 86m 0% 6952Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 87m 0% 6819Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 515m 3% 5447Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 301m 1% 2135Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 525m 3% 5051Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1479m 9% 14448Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 611m 3% 14503Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 658m 4% 14428Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 205m 1% 2039Mi 3% 03:33:09 DEBUG --- stderr --- 03:33:09 DEBUG 03:34:08 INFO 03:34:08 INFO [loop_until]: kubectl --namespace=xlou top pods 03:34:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:34:08 INFO [loop_until]: OK (rc = 0) 03:34:08 DEBUG --- stdout --- 03:34:08 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 30m 5766Mi am-55f77847b7-l482k 28m 5718Mi am-55f77847b7-qhqgg 29m 5829Mi ds-cts-0 6m 403Mi ds-cts-1 5m 394Mi ds-cts-2 5m 366Mi ds-idrepo-0 1702m 13723Mi ds-idrepo-1 1869m 13722Mi ds-idrepo-2 1228m 13726Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 473m 4167Mi idm-65858d8c4c-vdncx 479m 3802Mi lodemon-7b659c988b-78sgh 4m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 139m 522Mi 03:34:08 DEBUG --- stderr --- 03:34:08 DEBUG 03:34:10 INFO 03:34:10 INFO [loop_until]: kubectl --namespace=xlou top node 03:34:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:34:10 INFO [loop_until]: OK (rc = 0) 03:34:10 DEBUG --- stdout --- 03:34:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1332Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 88m 0% 6773Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 90m 0% 6948Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 85m 0% 6820Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 562m 3% 5466Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 304m 1% 2147Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 557m 3% 5054Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1430m 8% 14363Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1688m 10% 14375Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1119m 7% 14364Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 209m 1% 2043Mi 3% 03:34:10 DEBUG --- stderr --- 03:34:10 DEBUG 03:35:08 INFO 03:35:08 INFO [loop_until]: kubectl --namespace=xlou top pods 03:35:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:35:08 INFO [loop_until]: OK (rc = 0) 03:35:08 DEBUG --- stdout --- 03:35:08 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 33m 5766Mi am-55f77847b7-l482k 27m 5718Mi am-55f77847b7-qhqgg 29m 5829Mi ds-cts-0 6m 403Mi ds-cts-1 7m 394Mi ds-cts-2 6m 366Mi ds-idrepo-0 894m 13753Mi ds-idrepo-1 745m 13767Mi ds-idrepo-2 769m 13770Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 479m 4155Mi idm-65858d8c4c-vdncx 450m 3806Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 141m 520Mi 03:35:08 DEBUG --- stderr --- 03:35:08 DEBUG 03:35:10 INFO 03:35:10 INFO [loop_until]: kubectl --namespace=xlou top node 
03:35:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:35:10 INFO [loop_until]: OK (rc = 0) 03:35:10 DEBUG --- stdout --- 03:35:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1329Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 88m 0% 6773Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 85m 0% 6946Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 84m 0% 6818Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 570m 3% 5456Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 303m 1% 2147Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 545m 3% 5058Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1027m 6% 14451Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 637m 4% 14401Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 659m 4% 14401Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 212m 1% 2039Mi 3% 03:35:10 DEBUG --- stderr --- 03:35:10 DEBUG 03:36:08 INFO 03:36:08 INFO [loop_until]: kubectl --namespace=xlou top pods 03:36:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:36:08 INFO [loop_until]: OK (rc = 0) 03:36:08 DEBUG --- stdout --- 03:36:08 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 30m 5766Mi am-55f77847b7-l482k 29m 5718Mi am-55f77847b7-qhqgg 28m 5829Mi ds-cts-0 6m 403Mi ds-cts-1 5m 394Mi ds-cts-2 9m 367Mi ds-idrepo-0 884m 13794Mi ds-idrepo-1 348m 13788Mi ds-idrepo-2 530m 13807Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 461m 4158Mi idm-65858d8c4c-vdncx 443m 3811Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 140m 520Mi 03:36:08 DEBUG --- stderr --- 03:36:08 DEBUG 03:36:10 INFO 03:36:10 INFO [loop_until]: kubectl --namespace=xlou top node 03:36:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:36:10 INFO [loop_until]: OK (rc = 0) 03:36:10 DEBUG --- stdout --- 03:36:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1323Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 86m 0% 6774Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 87m 0% 6948Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 82m 0% 6818Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 551m 3% 5461Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 296m 1% 2146Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 530m 3% 5061Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 829m 5% 14469Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 396m 2% 14469Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 490m 3% 14453Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 208m 1% 2044Mi 3% 03:36:10 DEBUG --- stderr --- 03:36:10 DEBUG 03:37:08 INFO 03:37:08 INFO [loop_until]: kubectl --namespace=xlou top pods 03:37:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:37:08 INFO [loop_until]: OK (rc = 0) 03:37:08 DEBUG --- stdout --- 03:37:08 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 30m 5766Mi am-55f77847b7-l482k 26m 5718Mi am-55f77847b7-qhqgg 33m 5829Mi ds-cts-0 6m 403Mi ds-cts-1 6m 395Mi ds-cts-2 5m 366Mi ds-idrepo-0 1323m 13673Mi ds-idrepo-1 813m 13752Mi ds-idrepo-2 772m 13816Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 460m 4178Mi idm-65858d8c4c-vdncx 457m 3815Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 136m 520Mi 
03:37:08 DEBUG --- stderr --- 03:37:08 DEBUG 03:37:10 INFO 03:37:10 INFO [loop_until]: kubectl --namespace=xlou top node 03:37:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:37:10 INFO [loop_until]: OK (rc = 0) 03:37:10 DEBUG --- stdout --- 03:37:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1328Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 86m 0% 6774Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 86m 0% 6950Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 84m 0% 6817Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 545m 3% 5482Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 298m 1% 2143Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 531m 3% 5065Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1688m 10% 14359Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1090Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 954m 6% 14410Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 461m 2% 14468Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 209m 1% 2041Mi 3% 03:37:10 DEBUG --- stderr --- 03:37:10 DEBUG 03:38:08 INFO 03:38:08 INFO [loop_until]: kubectl --namespace=xlou top pods 03:38:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:38:08 INFO [loop_until]: OK (rc = 0) 03:38:08 DEBUG --- stdout --- 03:38:08 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 32m 5768Mi am-55f77847b7-l482k 27m 5718Mi am-55f77847b7-qhqgg 31m 5831Mi ds-cts-0 6m 403Mi ds-cts-1 5m 395Mi ds-cts-2 6m 366Mi ds-idrepo-0 915m 13760Mi ds-idrepo-1 320m 13802Mi ds-idrepo-2 380m 13823Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 448m 4182Mi idm-65858d8c4c-vdncx 470m 3819Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 142m 521Mi 03:38:08 DEBUG --- stderr --- 03:38:08 DEBUG 03:38:10 INFO 03:38:10 INFO [loop_until]: kubectl --namespace=xlou top node 03:38:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:38:10 INFO [loop_until]: OK (rc = 0) 03:38:10 DEBUG --- stdout --- 03:38:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1329Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 96m 0% 6777Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 86m 0% 6952Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 82m 0% 6815Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 542m 3% 5485Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 304m 1% 2146Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 563m 3% 5066Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1024m 6% 14473Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 391m 2% 14464Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 661m 4% 14504Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 215m 1% 2039Mi 3% 03:38:10 DEBUG --- stderr --- 03:38:10 DEBUG 03:39:08 INFO 03:39:08 INFO [loop_until]: kubectl --namespace=xlou top pods 03:39:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:39:08 INFO [loop_until]: OK (rc = 0) 03:39:08 DEBUG --- stdout --- 03:39:08 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 29m 5768Mi am-55f77847b7-l482k 28m 5720Mi am-55f77847b7-qhqgg 25m 5831Mi ds-cts-0 6m 403Mi ds-cts-1 6m 395Mi ds-cts-2 6m 366Mi ds-idrepo-0 1109m 13806Mi ds-idrepo-1 912m 13850Mi ds-idrepo-2 583m 13859Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 444m 4170Mi 
idm-65858d8c4c-vdncx 474m 3823Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 136m 520Mi 03:39:08 DEBUG --- stderr --- 03:39:08 DEBUG 03:39:10 INFO 03:39:10 INFO [loop_until]: kubectl --namespace=xlou top node 03:39:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:39:10 INFO [loop_until]: OK (rc = 0) 03:39:10 DEBUG --- stdout --- 03:39:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 88m 0% 1331Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 85m 0% 6777Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 86m 0% 6952Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 83m 0% 6819Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 555m 3% 5476Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 305m 1% 2143Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 562m 3% 5073Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 784m 4% 14495Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1090Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 639m 4% 14448Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 677m 4% 14463Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 204m 1% 2043Mi 3% 03:39:10 DEBUG --- stderr --- 03:39:10 DEBUG 03:40:08 INFO 03:40:08 INFO [loop_until]: kubectl --namespace=xlou top pods 03:40:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:40:09 INFO [loop_until]: OK (rc = 0) 03:40:09 DEBUG --- stdout --- 03:40:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 28m 5768Mi am-55f77847b7-l482k 24m 5720Mi am-55f77847b7-qhqgg 26m 5831Mi ds-cts-0 6m 403Mi ds-cts-1 7m 395Mi ds-cts-2 12m 371Mi ds-idrepo-0 1058m 13818Mi ds-idrepo-1 340m 13824Mi ds-idrepo-2 921m 13749Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 458m 4174Mi idm-65858d8c4c-vdncx 475m 3827Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 139m 520Mi 03:40:09 DEBUG --- stderr --- 03:40:09 DEBUG 03:40:10 INFO 03:40:10 INFO [loop_until]: kubectl --namespace=xlou top node 03:40:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:40:10 INFO [loop_until]: OK (rc = 0) 03:40:10 DEBUG --- stdout --- 03:40:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1327Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 81m 0% 6774Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 82m 0% 6953Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 78m 0% 6819Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 544m 3% 5477Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 288m 1% 2130Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 536m 3% 5075Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1099m 6% 14493Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1090Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 65m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 384m 2% 14492Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 746m 4% 14400Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 199m 1% 2044Mi 3% 03:40:10 DEBUG --- stderr --- 03:40:10 DEBUG 03:41:09 INFO 03:41:09 INFO [loop_until]: kubectl --namespace=xlou top pods 03:41:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:41:09 INFO [loop_until]: OK (rc = 0) 03:41:09 DEBUG --- stdout --- 03:41:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 30m 5768Mi am-55f77847b7-l482k 24m 5720Mi am-55f77847b7-qhqgg 26m 5831Mi ds-cts-0 13m 405Mi ds-cts-1 6m 395Mi ds-cts-2 5m 365Mi ds-idrepo-0 990m 
13850Mi ds-idrepo-1 975m 13800Mi ds-idrepo-2 1430m 13781Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 438m 4178Mi idm-65858d8c4c-vdncx 472m 3830Mi lodemon-7b659c988b-78sgh 3m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 142m 521Mi 03:41:09 DEBUG --- stderr --- 03:41:09 DEBUG 03:41:10 INFO 03:41:10 INFO [loop_until]: kubectl --namespace=xlou top node 03:41:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:41:10 INFO [loop_until]: OK (rc = 0) 03:41:10 DEBUG --- stdout --- 03:41:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1329Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 84m 0% 6774Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 85m 0% 6951Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 78m 0% 6817Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 521m 3% 5484Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 295m 1% 2147Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 563m 3% 5079Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 975m 6% 14524Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1090Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 925m 5% 14408Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 66m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1326m 8% 14427Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 209m 1% 2044Mi 3% 03:41:10 DEBUG --- stderr --- 03:41:10 DEBUG 03:42:09 INFO 03:42:09 INFO [loop_until]: kubectl --namespace=xlou top pods 03:42:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:42:09 INFO [loop_until]: OK (rc = 0) 03:42:09 DEBUG --- stdout --- 03:42:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 29m 5768Mi am-55f77847b7-l482k 24m 5720Mi am-55f77847b7-qhqgg 27m 5831Mi ds-cts-0 6m 399Mi ds-cts-1 6m 395Mi ds-cts-2 5m 364Mi ds-idrepo-0 1444m 13731Mi ds-idrepo-1 435m 13781Mi ds-idrepo-2 820m 13840Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 459m 4197Mi idm-65858d8c4c-vdncx 468m 3835Mi lodemon-7b659c988b-78sgh 1m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 146m 521Mi 03:42:09 DEBUG --- stderr --- 03:42:09 DEBUG 03:42:11 INFO 03:42:11 INFO [loop_until]: kubectl --namespace=xlou top node 03:42:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:42:11 INFO [loop_until]: OK (rc = 0) 03:42:11 DEBUG --- stdout --- 03:42:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1327Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 84m 0% 6777Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 83m 0% 6950Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 76m 0% 6817Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 548m 3% 5491Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 293m 1% 2146Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 554m 3% 5076Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1065m 6% 14448Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1090Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 402m 2% 14456Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 818m 5% 14493Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 218m 1% 2046Mi 3% 03:42:11 DEBUG --- stderr --- 03:42:11 DEBUG 03:43:09 INFO 03:43:09 INFO [loop_until]: kubectl --namespace=xlou top pods 03:43:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:43:09 INFO [loop_until]: OK (rc = 0) 03:43:09 DEBUG --- stdout --- 03:43:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 16m 5768Mi 
am-55f77847b7-l482k 16m 5720Mi am-55f77847b7-qhqgg 14m 5831Mi ds-cts-0 6m 399Mi ds-cts-1 7m 395Mi ds-cts-2 5m 364Mi ds-idrepo-0 798m 13821Mi ds-idrepo-1 171m 13796Mi ds-idrepo-2 399m 13828Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 248m 4185Mi idm-65858d8c4c-vdncx 244m 3837Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 101m 521Mi 03:43:09 DEBUG --- stderr --- 03:43:09 DEBUG 03:43:11 INFO 03:43:11 INFO [loop_until]: kubectl --namespace=xlou top node 03:43:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:43:11 INFO [loop_until]: OK (rc = 0) 03:43:11 DEBUG --- stdout --- 03:43:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1323Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 73m 0% 6777Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 76m 0% 6951Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 69m 0% 6817Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 253m 1% 5491Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 185m 1% 2144Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 299m 1% 5082Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 726m 4% 14494Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 197m 1% 14474Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 545m 3% 14480Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 179m 1% 2046Mi 3% 03:43:11 DEBUG --- stderr --- 03:43:11 DEBUG 03:44:09 INFO 03:44:09 INFO [loop_until]: kubectl --namespace=xlou top pods 03:44:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:44:09 INFO [loop_until]: OK (rc = 0) 03:44:09 DEBUG --- stdout --- 03:44:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 7m 5769Mi am-55f77847b7-l482k 6m 5720Mi am-55f77847b7-qhqgg 7m 5831Mi ds-cts-0 5m 400Mi ds-cts-1 6m 395Mi ds-cts-2 5m 364Mi ds-idrepo-0 13m 13821Mi ds-idrepo-1 9m 13796Mi ds-idrepo-2 12m 13772Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 8m 4185Mi idm-65858d8c4c-vdncx 8m 3837Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 1m 119Mi 03:44:09 DEBUG --- stderr --- 03:44:09 DEBUG 03:44:11 INFO 03:44:11 INFO [loop_until]: kubectl --namespace=xlou top node 03:44:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:44:11 INFO [loop_until]: OK (rc = 0) 03:44:11 DEBUG --- stdout --- 03:44:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1324Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 62m 0% 6774Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 61m 0% 6950Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 6818Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 5491Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 122m 0% 2144Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 73m 0% 5086Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 14494Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1091Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 60m 0% 14471Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 66m 0% 14421Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 69m 0% 1645Mi 2% 03:44:11 DEBUG --- stderr --- 03:44:11 DEBUG 127.0.0.1 - - [12/Aug/2023 03:44:50] "GET /monitoring/average?start_time=23-08-12_02:14:19&stop_time=23-08-12_02:42:49 HTTP/1.1" 200 - 03:45:09 INFO 03:45:09 INFO [loop_until]: kubectl --namespace=xlou top pods 03:45:09 INFO 
[loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:45:09 INFO [loop_until]: OK (rc = 0) 03:45:09 DEBUG --- stdout --- 03:45:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 7m 5768Mi am-55f77847b7-l482k 6m 5720Mi am-55f77847b7-qhqgg 7m 5831Mi ds-cts-0 5m 399Mi ds-cts-1 6m 395Mi ds-cts-2 6m 365Mi ds-idrepo-0 12m 13820Mi ds-idrepo-1 9m 13796Mi ds-idrepo-2 12m 13772Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 7m 4185Mi idm-65858d8c4c-vdncx 7m 3837Mi lodemon-7b659c988b-78sgh 3m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 1m 119Mi 03:45:09 DEBUG --- stderr --- 03:45:09 DEBUG 03:45:11 INFO 03:45:11 INFO [loop_until]: kubectl --namespace=xlou top node 03:45:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:45:11 INFO [loop_until]: OK (rc = 0) 03:45:11 DEBUG --- stdout --- 03:45:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1328Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 6776Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 6948Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 61m 0% 6818Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 5491Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2143Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 73m 0% 5087Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 14496Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1092Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 58m 0% 14471Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 64m 0% 14421Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 78m 0% 1649Mi 2% 03:45:11 DEBUG --- stderr --- 03:45:11 DEBUG 03:46:09 INFO 03:46:09 INFO [loop_until]: kubectl --namespace=xlou top pods 03:46:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:46:09 INFO [loop_until]: OK (rc = 0) 03:46:09 DEBUG --- stdout --- 03:46:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 119m 5770Mi am-55f77847b7-l482k 103m 5722Mi am-55f77847b7-qhqgg 81m 5834Mi ds-cts-0 6m 400Mi ds-cts-1 7m 395Mi ds-cts-2 5m 365Mi ds-idrepo-0 539m 13803Mi ds-idrepo-1 374m 13866Mi ds-idrepo-2 430m 13824Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 332m 4199Mi idm-65858d8c4c-vdncx 218m 3844Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 370m 513Mi 03:46:09 DEBUG --- stderr --- 03:46:09 DEBUG 03:46:11 INFO 03:46:11 INFO [loop_until]: kubectl --namespace=xlou top node 03:46:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:46:11 INFO [loop_until]: OK (rc = 0) 03:46:11 DEBUG --- stdout --- 03:46:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1326Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 182m 1% 6779Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 167m 1% 6956Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 163m 1% 6821Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 411m 2% 5493Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 242m 1% 2137Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 431m 2% 5092Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 850m 5% 14487Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1093Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 554m 3% 14542Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 542m 3% 14476Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 434m 2% 2030Mi 3% 03:46:11 DEBUG --- stderr --- 
03:46:11 DEBUG 03:47:09 INFO 03:47:09 INFO [loop_until]: kubectl --namespace=xlou top pods 03:47:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:47:09 INFO [loop_until]: OK (rc = 0) 03:47:09 DEBUG --- stdout --- 03:47:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 72m 5771Mi am-55f77847b7-l482k 56m 5724Mi am-55f77847b7-qhqgg 63m 5836Mi ds-cts-0 6m 400Mi ds-cts-1 7m 395Mi ds-cts-2 5m 365Mi ds-idrepo-0 1291m 13809Mi ds-idrepo-1 1740m 13823Mi ds-idrepo-2 278m 13827Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 333m 4193Mi idm-65858d8c4c-vdncx 338m 3851Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 208m 509Mi 03:47:09 DEBUG --- stderr --- 03:47:09 DEBUG 03:47:11 INFO 03:47:11 INFO [loop_until]: kubectl --namespace=xlou top node 03:47:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:47:11 INFO [loop_until]: OK (rc = 0) 03:47:11 DEBUG --- stdout --- 03:47:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1325Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 120m 0% 6776Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 116m 0% 6957Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 115m 0% 6823Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 429m 2% 5500Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 290m 1% 2146Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 426m 2% 5098Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1146m 7% 14491Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1226m 7% 14502Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 496m 3% 14485Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 293m 1% 2035Mi 3% 03:47:11 DEBUG --- stderr --- 03:47:11 DEBUG 03:48:09 INFO 03:48:09 INFO [loop_until]: kubectl --namespace=xlou top pods 03:48:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:48:09 INFO [loop_until]: OK (rc = 0) 03:48:09 DEBUG --- stdout --- 03:48:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 55m 5770Mi am-55f77847b7-l482k 51m 5724Mi am-55f77847b7-qhqgg 53m 5836Mi ds-cts-0 7m 401Mi ds-cts-1 6m 395Mi ds-cts-2 7m 365Mi ds-idrepo-0 1013m 13831Mi ds-idrepo-1 468m 13842Mi ds-idrepo-2 269m 13829Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 324m 4196Mi idm-65858d8c4c-vdncx 328m 3854Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 189m 519Mi 03:48:09 DEBUG --- stderr --- 03:48:09 DEBUG 03:48:11 INFO 03:48:11 INFO [loop_until]: kubectl --namespace=xlou top node 03:48:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:48:11 INFO [loop_until]: OK (rc = 0) 03:48:11 DEBUG --- stdout --- 03:48:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1327Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 120m 0% 6775Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 108m 0% 6957Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 106m 0% 6826Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 410m 2% 5503Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 304m 1% 2144Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 416m 2% 5098Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1067m 6% 14521Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 62m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 577m 3% 14525Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1123Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-x4wx 369m 2% 14484Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 251m 1% 2055Mi 3% 03:48:11 DEBUG --- stderr --- 03:48:11 DEBUG 03:49:09 INFO 03:49:09 INFO [loop_until]: kubectl --namespace=xlou top pods 03:49:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:49:09 INFO [loop_until]: OK (rc = 0) 03:49:09 DEBUG --- stdout --- 03:49:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 53m 5770Mi am-55f77847b7-l482k 47m 5724Mi am-55f77847b7-qhqgg 51m 5836Mi ds-cts-0 5m 401Mi ds-cts-1 6m 395Mi ds-cts-2 6m 365Mi ds-idrepo-0 952m 13841Mi ds-idrepo-1 722m 13838Mi ds-idrepo-2 1489m 13757Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 326m 4206Mi idm-65858d8c4c-vdncx 334m 3861Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 141m 520Mi 03:49:09 DEBUG --- stderr --- 03:49:09 DEBUG 03:49:11 INFO 03:49:11 INFO [loop_until]: kubectl --namespace=xlou top node 03:49:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:49:11 INFO [loop_until]: OK (rc = 0) 03:49:11 DEBUG --- stdout --- 03:49:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1328Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 110m 0% 6782Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 109m 0% 6958Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 108m 0% 6826Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 409m 2% 5511Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 276m 1% 2146Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 423m 2% 5108Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 993m 6% 14529Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1091Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 666m 4% 14515Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1641m 10% 14399Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 215m 1% 2045Mi 3% 03:49:11 DEBUG --- stderr --- 03:49:11 DEBUG 03:50:10 INFO 03:50:10 INFO [loop_until]: kubectl --namespace=xlou top pods 03:50:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:50:10 INFO [loop_until]: OK (rc = 0) 03:50:10 DEBUG --- stdout --- 03:50:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 54m 5770Mi am-55f77847b7-l482k 50m 5724Mi am-55f77847b7-qhqgg 52m 5836Mi ds-cts-0 12m 400Mi ds-cts-1 7m 395Mi ds-cts-2 5m 365Mi ds-idrepo-0 634m 13829Mi ds-idrepo-1 314m 13810Mi ds-idrepo-2 262m 13815Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 308m 4208Mi idm-65858d8c4c-vdncx 313m 3863Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 142m 520Mi 03:50:10 DEBUG --- stderr --- 03:50:10 DEBUG 03:50:12 INFO 03:50:12 INFO [loop_until]: kubectl --namespace=xlou top node 03:50:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:50:12 INFO [loop_until]: OK (rc = 0) 03:50:12 DEBUG --- stdout --- 03:50:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1327Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 110m 0% 6776Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 108m 0% 6958Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 103m 0% 6825Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 390m 2% 5514Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 286m 1% 2147Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 397m 2% 5111Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 813m 5% 14522Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1088Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 517m 3% 14547Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 67m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 319m 2% 14486Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 210m 1% 2043Mi 3% 03:50:12 DEBUG --- stderr --- 03:50:12 DEBUG 03:51:10 INFO 03:51:10 INFO [loop_until]: kubectl --namespace=xlou top pods 03:51:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:51:10 INFO [loop_until]: OK (rc = 0) 03:51:10 DEBUG --- stdout --- 03:51:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 56m 5771Mi am-55f77847b7-l482k 50m 5724Mi am-55f77847b7-qhqgg 53m 5836Mi ds-cts-0 6m 400Mi ds-cts-1 6m 395Mi ds-cts-2 5m 365Mi ds-idrepo-0 803m 13813Mi ds-idrepo-1 218m 13820Mi ds-idrepo-2 261m 13844Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 286m 4211Mi idm-65858d8c4c-vdncx 317m 3865Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 138m 520Mi 03:51:10 DEBUG --- stderr --- 03:51:10 DEBUG 03:51:12 INFO 03:51:12 INFO [loop_until]: kubectl --namespace=xlou top node 03:51:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:51:12 INFO [loop_until]: OK (rc = 0) 03:51:12 DEBUG --- stdout --- 03:51:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1328Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 111m 0% 6777Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 108m 0% 6955Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 107m 0% 6824Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 380m 2% 5517Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 284m 1% 2145Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 405m 2% 5114Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 845m 5% 14505Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 292m 1% 14510Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 321m 2% 14510Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 212m 1% 2040Mi 3% 03:51:12 DEBUG --- stderr --- 03:51:12 DEBUG 03:52:10 INFO 03:52:10 INFO [loop_until]: kubectl --namespace=xlou top pods 03:52:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:52:10 INFO [loop_until]: OK (rc = 0) 03:52:10 DEBUG --- stdout --- 03:52:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 54m 5770Mi am-55f77847b7-l482k 47m 5724Mi am-55f77847b7-qhqgg 50m 5836Mi ds-cts-0 5m 400Mi ds-cts-1 6m 395Mi ds-cts-2 6m 365Mi ds-idrepo-0 947m 13822Mi ds-idrepo-1 232m 13825Mi ds-idrepo-2 262m 13823Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 280m 4213Mi idm-65858d8c4c-vdncx 309m 3869Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 137m 520Mi 03:52:10 DEBUG --- stderr --- 03:52:10 DEBUG 03:52:12 INFO 03:52:12 INFO [loop_until]: kubectl --namespace=xlou top node 03:52:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:52:12 INFO [loop_until]: OK (rc = 0) 03:52:12 DEBUG --- stdout --- 03:52:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1327Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 110m 0% 6776Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 107m 0% 6958Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 102m 0% 6823Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 376m 2% 5519Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 283m 1% 2148Mi 3% 
gke-xlou-cdm-default-pool-f05840a3-tnc9 383m 2% 5114Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1009m 6% 14519Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1092Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 269m 1% 14514Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 322m 2% 14492Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 207m 1% 2042Mi 3% 03:52:12 DEBUG --- stderr --- 03:52:12 DEBUG 03:53:10 INFO 03:53:10 INFO [loop_until]: kubectl --namespace=xlou top pods 03:53:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:53:10 INFO [loop_until]: OK (rc = 0) 03:53:10 DEBUG --- stdout --- 03:53:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 57m 5771Mi am-55f77847b7-l482k 49m 5724Mi am-55f77847b7-qhqgg 51m 5836Mi ds-cts-0 5m 401Mi ds-cts-1 7m 395Mi ds-cts-2 5m 366Mi ds-idrepo-0 894m 13827Mi ds-idrepo-1 245m 13824Mi ds-idrepo-2 285m 13823Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 307m 4215Mi idm-65858d8c4c-vdncx 310m 3869Mi lodemon-7b659c988b-78sgh 5m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 145m 524Mi 03:53:10 DEBUG --- stderr --- 03:53:10 DEBUG 03:53:12 INFO 03:53:12 INFO [loop_until]: kubectl --namespace=xlou top node 03:53:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:53:12 INFO [loop_until]: OK (rc = 0) 03:53:12 DEBUG --- stdout --- 03:53:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1331Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 112m 0% 6777Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 110m 0% 6958Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 101m 0% 6821Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 391m 2% 5519Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 287m 1% 2150Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 392m 2% 5116Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 999m 6% 14550Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1091Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 284m 1% 14511Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 341m 2% 14493Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 207m 1% 2047Mi 3% 03:53:12 DEBUG --- stderr --- 03:53:12 DEBUG 03:54:10 INFO 03:54:10 INFO [loop_until]: kubectl --namespace=xlou top pods 03:54:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:54:10 INFO [loop_until]: OK (rc = 0) 03:54:10 DEBUG --- stdout --- 03:54:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 54m 5770Mi am-55f77847b7-l482k 50m 5724Mi am-55f77847b7-qhqgg 56m 5836Mi ds-cts-0 5m 400Mi ds-cts-1 5m 395Mi ds-cts-2 8m 367Mi ds-idrepo-0 864m 13853Mi ds-idrepo-1 253m 13808Mi ds-idrepo-2 287m 13823Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 306m 4217Mi idm-65858d8c4c-vdncx 305m 3871Mi lodemon-7b659c988b-78sgh 7m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 137m 524Mi 03:54:10 DEBUG --- stderr --- 03:54:10 DEBUG 03:54:12 INFO 03:54:12 INFO [loop_until]: kubectl --namespace=xlou top node 03:54:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:54:12 INFO [loop_until]: OK (rc = 0) 03:54:12 DEBUG --- stdout --- 03:54:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1328Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 111m 0% 6779Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 110m 0% 6955Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 105m 0% 
6826Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 396m 2% 5521Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 289m 1% 2155Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 391m 2% 5120Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 741m 4% 14521Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 53m 0% 1092Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 301m 1% 14502Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 352m 2% 14493Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 214m 1% 2046Mi 3% 03:54:12 DEBUG --- stderr --- 03:54:12 DEBUG 03:55:10 INFO 03:55:10 INFO [loop_until]: kubectl --namespace=xlou top pods 03:55:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:55:10 INFO [loop_until]: OK (rc = 0) 03:55:10 DEBUG --- stdout --- 03:55:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 55m 5770Mi am-55f77847b7-l482k 49m 5724Mi am-55f77847b7-qhqgg 50m 5836Mi ds-cts-0 9m 402Mi ds-cts-1 6m 395Mi ds-cts-2 6m 367Mi ds-idrepo-0 1251m 13822Mi ds-idrepo-1 980m 13700Mi ds-idrepo-2 1154m 13833Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 288m 4219Mi idm-65858d8c4c-vdncx 299m 3873Mi lodemon-7b659c988b-78sgh 5m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 136m 525Mi 03:55:10 DEBUG --- stderr --- 03:55:10 DEBUG 03:55:12 INFO 03:55:12 INFO [loop_until]: kubectl --namespace=xlou top node 03:55:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:55:12 INFO [loop_until]: OK (rc = 0) 03:55:12 DEBUG --- stdout --- 03:55:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1327Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 109m 0% 6777Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 104m 0% 6956Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 103m 0% 6826Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 373m 2% 5525Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 280m 1% 2152Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 389m 2% 5120Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1099m 6% 14522Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1092Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1090Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1219m 7% 14387Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1061m 6% 14484Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 207m 1% 2049Mi 3% 03:55:12 DEBUG --- stderr --- 03:55:12 DEBUG 03:56:10 INFO 03:56:10 INFO [loop_until]: kubectl --namespace=xlou top pods 03:56:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:56:10 INFO [loop_until]: OK (rc = 0) 03:56:10 DEBUG --- stdout --- 03:56:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 56m 5770Mi am-55f77847b7-l482k 51m 5724Mi am-55f77847b7-qhqgg 55m 5836Mi ds-cts-0 6m 403Mi ds-cts-1 6m 397Mi ds-cts-2 5m 367Mi ds-idrepo-0 2859m 13447Mi ds-idrepo-1 936m 13760Mi ds-idrepo-2 1221m 13863Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 292m 4220Mi idm-65858d8c4c-vdncx 332m 3876Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 148m 524Mi 03:56:10 DEBUG --- stderr --- 03:56:10 DEBUG 03:56:12 INFO 03:56:12 INFO [loop_until]: kubectl --namespace=xlou top node 03:56:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:56:12 INFO [loop_until]: OK (rc = 0) 03:56:12 DEBUG --- stdout --- 03:56:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1329Mi 2% 
gke-xlou-cdm-default-pool-f05840a3-5pbc 112m 0% 6777Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 109m 0% 6958Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 103m 0% 6825Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 361m 2% 5541Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 288m 1% 2147Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 420m 2% 5120Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 3186m 20% 14162Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1091Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1090Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 682m 4% 14461Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1285m 8% 14536Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 216m 1% 2045Mi 3% 03:56:12 DEBUG --- stderr --- 03:56:12 DEBUG 03:57:10 INFO 03:57:10 INFO [loop_until]: kubectl --namespace=xlou top pods 03:57:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:57:10 INFO [loop_until]: OK (rc = 0) 03:57:10 DEBUG --- stdout --- 03:57:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 55m 5770Mi am-55f77847b7-l482k 51m 5724Mi am-55f77847b7-qhqgg 52m 5836Mi ds-cts-0 5m 404Mi ds-cts-1 6m 395Mi ds-cts-2 5m 368Mi ds-idrepo-0 971m 13515Mi ds-idrepo-1 436m 13779Mi ds-idrepo-2 540m 13841Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 302m 4222Mi idm-65858d8c4c-vdncx 321m 3878Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 141m 524Mi 03:57:10 DEBUG --- stderr --- 03:57:10 DEBUG 03:57:12 INFO 03:57:12 INFO [loop_until]: kubectl --namespace=xlou top node 03:57:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:57:12 INFO [loop_until]: OK (rc = 0) 03:57:12 DEBUG --- stdout --- 03:57:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1332Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 111m 0% 6776Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 108m 0% 6957Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 104m 0% 6824Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 384m 2% 5527Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 291m 1% 2149Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 373m 2% 5123Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1031m 6% 14210Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1091Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1090Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 546m 3% 14469Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 549m 3% 14499Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 205m 1% 2047Mi 3% 03:57:12 DEBUG --- stderr --- 03:57:12 DEBUG 03:58:11 INFO 03:58:11 INFO [loop_until]: kubectl --namespace=xlou top pods 03:58:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:58:11 INFO [loop_until]: OK (rc = 0) 03:58:11 DEBUG --- stdout --- 03:58:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 86m 5784Mi am-55f77847b7-l482k 50m 5724Mi am-55f77847b7-qhqgg 101m 5840Mi ds-cts-0 5m 402Mi ds-cts-1 7m 395Mi ds-cts-2 5m 368Mi ds-idrepo-0 680m 13558Mi ds-idrepo-1 260m 13818Mi ds-idrepo-2 910m 13504Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 314m 4224Mi idm-65858d8c4c-vdncx 319m 3879Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 143m 524Mi 03:58:11 DEBUG --- stderr --- 03:58:11 DEBUG 03:58:12 INFO 03:58:12 INFO [loop_until]: kubectl --namespace=xlou top node 03:58:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:58:12 INFO [loop_until]: OK (rc = 0) 03:58:12 DEBUG 
--- stdout --- 03:58:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1332Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 109m 0% 6791Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 144m 0% 6965Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 105m 0% 6826Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 405m 2% 5527Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 283m 1% 2149Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 390m 2% 5127Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1053m 6% 14262Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 298m 1% 14521Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1084m 6% 14173Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 215m 1% 2048Mi 3% 03:58:12 DEBUG --- stderr --- 03:58:12 DEBUG 03:59:11 INFO 03:59:11 INFO [loop_until]: kubectl --namespace=xlou top pods 03:59:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:59:11 INFO [loop_until]: OK (rc = 0) 03:59:11 DEBUG --- stdout --- 03:59:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 52m 5784Mi am-55f77847b7-l482k 80m 5739Mi am-55f77847b7-qhqgg 47m 5839Mi ds-cts-0 5m 402Mi ds-cts-1 6m 396Mi ds-cts-2 5m 368Mi ds-idrepo-0 1019m 13618Mi ds-idrepo-1 453m 13851Mi ds-idrepo-2 505m 13559Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 298m 4226Mi idm-65858d8c4c-vdncx 296m 3881Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 132m 524Mi 03:59:11 DEBUG --- stderr --- 03:59:11 DEBUG 03:59:13 INFO 03:59:13 INFO [loop_until]: kubectl --namespace=xlou top node 03:59:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 03:59:13 INFO [loop_until]: OK (rc = 0) 03:59:13 DEBUG --- stdout --- 03:59:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1333Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 107m 0% 6790Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 109m 0% 6963Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 141m 0% 6846Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 388m 2% 5531Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 285m 1% 2138Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 380m 2% 5127Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1036m 6% 14325Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 56m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1090Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 263m 1% 14535Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 536m 3% 14229Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 206m 1% 2046Mi 3% 03:59:13 DEBUG --- stderr --- 03:59:13 DEBUG 04:00:11 INFO 04:00:11 INFO [loop_until]: kubectl --namespace=xlou top pods 04:00:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:00:11 INFO [loop_until]: OK (rc = 0) 04:00:11 DEBUG --- stdout --- 04:00:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 53m 5784Mi am-55f77847b7-l482k 48m 5739Mi am-55f77847b7-qhqgg 47m 5839Mi ds-cts-0 5m 402Mi ds-cts-1 6m 397Mi ds-cts-2 5m 368Mi ds-idrepo-0 931m 13683Mi ds-idrepo-1 228m 13823Mi ds-idrepo-2 271m 13620Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 301m 4227Mi idm-65858d8c4c-vdncx 313m 3883Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 144m 525Mi 04:00:11 DEBUG --- stderr --- 04:00:11 DEBUG 04:00:13 INFO 04:00:13 INFO [loop_until]: kubectl --namespace=xlou top 
node 04:00:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:00:13 INFO [loop_until]: OK (rc = 0) 04:00:13 DEBUG --- stdout --- 04:00:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1332Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 110m 0% 6791Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 107m 0% 6961Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 102m 0% 6842Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 395m 2% 5532Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 286m 1% 2147Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 401m 2% 5130Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 751m 4% 14383Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 55m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 283m 1% 14521Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 323m 2% 14288Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 211m 1% 2047Mi 3% 04:00:13 DEBUG --- stderr --- 04:00:13 DEBUG 04:01:11 INFO 04:01:11 INFO [loop_until]: kubectl --namespace=xlou top pods 04:01:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:01:11 INFO [loop_until]: OK (rc = 0) 04:01:11 DEBUG --- stdout --- 04:01:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 52m 5784Mi am-55f77847b7-l482k 49m 5739Mi am-55f77847b7-qhqgg 52m 5839Mi ds-cts-0 6m 402Mi ds-cts-1 6m 396Mi ds-cts-2 5m 368Mi ds-idrepo-0 1120m 13724Mi ds-idrepo-1 244m 13823Mi ds-idrepo-2 570m 13669Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 289m 4229Mi idm-65858d8c4c-vdncx 313m 3885Mi lodemon-7b659c988b-78sgh 4m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 138m 524Mi 04:01:11 DEBUG --- stderr --- 04:01:11 DEBUG 04:01:13 INFO 04:01:13 INFO [loop_until]: kubectl --namespace=xlou top node 04:01:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:01:13 INFO [loop_until]: OK (rc = 0) 04:01:13 DEBUG --- stdout --- 04:01:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1331Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 108m 0% 6791Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 105m 0% 6962Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 101m 0% 6842Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 376m 2% 5534Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 282m 1% 2147Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 388m 2% 5133Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 993m 6% 14438Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 487m 3% 14519Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 560m 3% 14346Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 205m 1% 2049Mi 3% 04:01:13 DEBUG --- stderr --- 04:01:13 DEBUG 04:02:11 INFO 04:02:11 INFO [loop_until]: kubectl --namespace=xlou top pods 04:02:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:02:11 INFO [loop_until]: OK (rc = 0) 04:02:11 DEBUG --- stdout --- 04:02:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 54m 5784Mi am-55f77847b7-l482k 49m 5739Mi am-55f77847b7-qhqgg 49m 5839Mi ds-cts-0 5m 403Mi ds-cts-1 6m 395Mi ds-cts-2 5m 369Mi ds-idrepo-0 647m 13772Mi ds-idrepo-1 266m 13806Mi ds-idrepo-2 420m 13726Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 308m 4236Mi idm-65858d8c4c-vdncx 310m 3887Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 
135m 525Mi 04:02:11 DEBUG --- stderr --- 04:02:11 DEBUG 04:02:13 INFO 04:02:13 INFO [loop_until]: kubectl --namespace=xlou top node 04:02:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:02:13 INFO [loop_until]: OK (rc = 0) 04:02:13 DEBUG --- stdout --- 04:02:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1330Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 108m 0% 6789Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 104m 0% 6964Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 104m 0% 6843Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 391m 2% 5543Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 283m 1% 2146Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 399m 2% 5134Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 949m 5% 14501Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 62m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 405m 2% 14506Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1137Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 332m 2% 14408Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 203m 1% 2044Mi 3% 04:02:13 DEBUG --- stderr --- 04:02:13 DEBUG 04:03:11 INFO 04:03:11 INFO [loop_until]: kubectl --namespace=xlou top pods 04:03:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:03:11 INFO [loop_until]: OK (rc = 0) 04:03:11 DEBUG --- stdout --- 04:03:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 56m 5784Mi am-55f77847b7-l482k 47m 5739Mi am-55f77847b7-qhqgg 51m 5840Mi ds-cts-0 5m 402Mi ds-cts-1 6m 395Mi ds-cts-2 5m 367Mi ds-idrepo-0 1938m 13799Mi ds-idrepo-1 719m 13847Mi ds-idrepo-2 472m 13768Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 301m 4239Mi idm-65858d8c4c-vdncx 318m 3889Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 144m 524Mi 04:03:11 DEBUG --- stderr --- 04:03:11 DEBUG 04:03:13 INFO 04:03:13 INFO [loop_until]: kubectl --namespace=xlou top node 04:03:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:03:13 INFO [loop_until]: OK (rc = 0) 04:03:13 DEBUG --- stdout --- 04:03:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1330Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 111m 0% 6792Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 108m 0% 6964Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 102m 0% 6842Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 399m 2% 5547Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 287m 1% 2147Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 397m 2% 5131Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1850m 11% 14506Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 63m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 1090Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 548m 3% 14551Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 564m 3% 14467Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 208m 1% 2043Mi 3% 04:03:13 DEBUG --- stderr --- 04:03:13 DEBUG 04:04:11 INFO 04:04:11 INFO [loop_until]: kubectl --namespace=xlou top pods 04:04:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:04:11 INFO [loop_until]: OK (rc = 0) 04:04:11 DEBUG --- stdout --- 04:04:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 54m 5784Mi am-55f77847b7-l482k 49m 5739Mi am-55f77847b7-qhqgg 53m 5840Mi ds-cts-0 5m 403Mi ds-cts-1 6m 396Mi ds-cts-2 10m 371Mi ds-idrepo-0 907m 13823Mi ds-idrepo-1 379m 13848Mi ds-idrepo-2 497m 13823Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 302m 
4241Mi idm-65858d8c4c-vdncx 308m 3891Mi lodemon-7b659c988b-78sgh 6m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 122m 525Mi 04:04:11 DEBUG --- stderr --- 04:04:11 DEBUG 04:04:13 INFO 04:04:13 INFO [loop_until]: kubectl --namespace=xlou top node 04:04:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:04:13 INFO [loop_until]: OK (rc = 0) 04:04:13 DEBUG --- stdout --- 04:04:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1335Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 113m 0% 6793Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 113m 0% 6965Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 106m 0% 6844Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 385m 2% 5546Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 287m 1% 2147Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 378m 2% 5142Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 968m 6% 14536Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 63m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 404m 2% 14553Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 354m 2% 14511Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 190m 1% 2046Mi 3% 04:04:13 DEBUG --- stderr --- 04:04:13 DEBUG 04:05:11 INFO 04:05:11 INFO [loop_until]: kubectl --namespace=xlou top pods 04:05:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:05:11 INFO [loop_until]: OK (rc = 0) 04:05:11 DEBUG --- stdout --- 04:05:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 53m 5784Mi am-55f77847b7-l482k 50m 5739Mi am-55f77847b7-qhqgg 51m 5840Mi ds-cts-0 5m 402Mi ds-cts-1 7m 396Mi ds-cts-2 5m 365Mi ds-idrepo-0 1197m 13368Mi ds-idrepo-1 478m 13843Mi ds-idrepo-2 2330m 13767Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 306m 4243Mi idm-65858d8c4c-vdncx 296m 3892Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 121m 525Mi 04:05:11 DEBUG --- stderr --- 04:05:11 DEBUG 04:05:13 INFO 04:05:13 INFO [loop_until]: kubectl --namespace=xlou top node 04:05:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:05:13 INFO [loop_until]: OK (rc = 0) 04:05:13 DEBUG --- stdout --- 04:05:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1332Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 111m 0% 6788Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 108m 0% 6961Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 103m 0% 6843Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 401m 2% 5549Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 285m 1% 2154Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 393m 2% 5139Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 964m 6% 14086Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 65m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 505m 3% 14552Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1499m 9% 14458Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 196m 1% 2048Mi 3% 04:05:13 DEBUG --- stderr --- 04:05:13 DEBUG 04:06:11 INFO 04:06:11 INFO [loop_until]: kubectl --namespace=xlou top pods 04:06:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:06:12 INFO [loop_until]: OK (rc = 0) 04:06:12 DEBUG --- stdout --- 04:06:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 52m 5784Mi am-55f77847b7-l482k 49m 5739Mi am-55f77847b7-qhqgg 52m 5840Mi ds-cts-0 5m 402Mi ds-cts-1 5m 395Mi ds-cts-2 5m 365Mi 
ds-idrepo-0 923m 13438Mi ds-idrepo-1 460m 13833Mi ds-idrepo-2 451m 13830Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 289m 4245Mi idm-65858d8c4c-vdncx 304m 3893Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 126m 525Mi 04:06:12 DEBUG --- stderr --- 04:06:12 DEBUG 04:06:13 INFO 04:06:13 INFO [loop_until]: kubectl --namespace=xlou top node 04:06:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:06:13 INFO [loop_until]: OK (rc = 0) 04:06:13 DEBUG --- stdout --- 04:06:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1332Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 109m 0% 6791Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 108m 0% 6961Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 104m 0% 6840Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 392m 2% 5550Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 276m 1% 2147Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 388m 2% 5141Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1086m 6% 14142Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 449m 2% 14534Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 551m 3% 14518Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 185m 1% 2049Mi 3% 04:06:13 DEBUG --- stderr --- 04:06:13 DEBUG 04:07:12 INFO 04:07:12 INFO [loop_until]: kubectl --namespace=xlou top pods 04:07:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:07:12 INFO [loop_until]: OK (rc = 0) 04:07:12 DEBUG --- stdout --- 04:07:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 56m 5784Mi am-55f77847b7-l482k 48m 5739Mi am-55f77847b7-qhqgg 50m 5840Mi ds-cts-0 5m 403Mi ds-cts-1 7m 395Mi ds-cts-2 5m 365Mi ds-idrepo-0 694m 13483Mi ds-idrepo-1 619m 13461Mi ds-idrepo-2 657m 13858Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 317m 4247Mi idm-65858d8c4c-vdncx 315m 3896Mi lodemon-7b659c988b-78sgh 4m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 122m 525Mi 04:07:12 DEBUG --- stderr --- 04:07:12 DEBUG 04:07:13 INFO 04:07:13 INFO [loop_until]: kubectl --namespace=xlou top node 04:07:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:07:14 INFO [loop_until]: OK (rc = 0) 04:07:14 DEBUG --- stdout --- 04:07:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1329Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 110m 0% 6788Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 108m 0% 6959Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 103m 0% 6841Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 381m 2% 5551Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 294m 1% 2146Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 395m 2% 5143Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1014m 6% 14210Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 537m 3% 14170Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 547m 3% 14551Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 198m 1% 2048Mi 3% 04:07:14 DEBUG --- stderr --- 04:07:14 DEBUG 04:08:12 INFO 04:08:12 INFO [loop_until]: kubectl --namespace=xlou top pods 04:08:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:08:12 INFO [loop_until]: OK (rc = 0) 04:08:12 DEBUG --- stdout --- 04:08:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 
56m 5784Mi am-55f77847b7-l482k 49m 5739Mi am-55f77847b7-qhqgg 51m 5840Mi ds-cts-0 6m 402Mi ds-cts-1 7m 396Mi ds-cts-2 5m 365Mi ds-idrepo-0 1759m 13555Mi ds-idrepo-1 244m 13514Mi ds-idrepo-2 1349m 13814Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 310m 4260Mi idm-65858d8c4c-vdncx 337m 3898Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 129m 525Mi 04:08:12 DEBUG --- stderr --- 04:08:12 DEBUG 04:08:14 INFO 04:08:14 INFO [loop_until]: kubectl --namespace=xlou top node 04:08:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:08:14 INFO [loop_until]: OK (rc = 0) 04:08:14 DEBUG --- stdout --- 04:08:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1332Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 113m 0% 6791Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 107m 0% 6963Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 104m 0% 6839Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 397m 2% 5559Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 290m 1% 2145Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 404m 2% 5147Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1312m 8% 14259Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 62m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 298m 1% 14218Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1253m 7% 14506Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 203m 1% 2050Mi 3% 04:08:14 DEBUG --- stderr --- 04:08:14 DEBUG 04:09:12 INFO 04:09:12 INFO [loop_until]: kubectl --namespace=xlou top pods 04:09:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:09:12 INFO [loop_until]: OK (rc = 0) 04:09:12 DEBUG --- stdout --- 04:09:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 53m 5784Mi am-55f77847b7-l482k 50m 5739Mi am-55f77847b7-qhqgg 51m 5839Mi ds-cts-0 5m 402Mi ds-cts-1 6m 395Mi ds-cts-2 5m 365Mi ds-idrepo-0 1106m 13473Mi ds-idrepo-1 409m 13558Mi ds-idrepo-2 237m 13816Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 297m 4262Mi idm-65858d8c4c-vdncx 316m 3900Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 123m 524Mi 04:09:12 DEBUG --- stderr --- 04:09:12 DEBUG 04:09:14 INFO 04:09:14 INFO [loop_until]: kubectl --namespace=xlou top node 04:09:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:09:14 INFO [loop_until]: OK (rc = 0) 04:09:14 DEBUG --- stdout --- 04:09:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1333Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 106m 0% 6792Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 108m 0% 6960Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 107m 0% 6841Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 378m 2% 5565Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 280m 1% 2143Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 401m 2% 5148Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1217m 7% 14182Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 58m 0% 1090Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 502m 3% 14271Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 292m 1% 14506Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 192m 1% 2051Mi 3% 04:09:14 DEBUG --- stderr --- 04:09:14 DEBUG 04:10:12 INFO 04:10:12 INFO [loop_until]: kubectl --namespace=xlou top pods 04:10:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:10:12 INFO [loop_until]: OK (rc = 
0) 04:10:12 DEBUG --- stdout --- 04:10:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 56m 5784Mi am-55f77847b7-l482k 51m 5739Mi am-55f77847b7-qhqgg 51m 5840Mi ds-cts-0 5m 403Mi ds-cts-1 7m 396Mi ds-cts-2 5m 365Mi ds-idrepo-0 1202m 13520Mi ds-idrepo-1 264m 13584Mi ds-idrepo-2 252m 13837Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 308m 4270Mi idm-65858d8c4c-vdncx 316m 3902Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 125m 525Mi 04:10:12 DEBUG --- stderr --- 04:10:12 DEBUG 04:10:14 INFO 04:10:14 INFO [loop_until]: kubectl --namespace=xlou top node 04:10:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:10:14 INFO [loop_until]: OK (rc = 0) 04:10:14 DEBUG --- stdout --- 04:10:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 82m 0% 1337Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 112m 0% 6790Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 111m 0% 6960Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 107m 0% 6839Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 395m 2% 5569Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 293m 1% 2148Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 402m 2% 5147Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1145m 7% 14237Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1091Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 282m 1% 14301Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 315m 1% 14534Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 192m 1% 2051Mi 3% 04:10:14 DEBUG --- stderr --- 04:10:14 DEBUG 04:11:12 INFO 04:11:12 INFO [loop_until]: kubectl --namespace=xlou top pods 04:11:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:11:12 INFO [loop_until]: OK (rc = 0) 04:11:12 DEBUG --- stdout --- 04:11:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 63m 5784Mi am-55f77847b7-l482k 50m 5739Mi am-55f77847b7-qhqgg 51m 5840Mi ds-cts-0 5m 402Mi ds-cts-1 5m 395Mi ds-cts-2 5m 365Mi ds-idrepo-0 968m 13552Mi ds-idrepo-1 432m 13633Mi ds-idrepo-2 612m 13857Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 329m 4263Mi idm-65858d8c4c-vdncx 303m 3904Mi lodemon-7b659c988b-78sgh 5m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 125m 525Mi 04:11:12 DEBUG --- stderr --- 04:11:12 DEBUG 04:11:14 INFO 04:11:14 INFO [loop_until]: kubectl --namespace=xlou top node 04:11:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:11:14 INFO [loop_until]: OK (rc = 0) 04:11:14 DEBUG --- stdout --- 04:11:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1326Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 114m 0% 6792Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 109m 0% 6961Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 106m 0% 6841Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 405m 2% 5562Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 284m 1% 2147Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 389m 2% 5149Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 763m 4% 14277Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 62m 0% 1092Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 284m 1% 14352Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 571m 3% 14545Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 195m 1% 2051Mi 3% 04:11:14 DEBUG --- stderr --- 04:11:14 DEBUG 04:12:12 INFO 04:12:12 INFO [loop_until]: kubectl 
--namespace=xlou top pods 04:12:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:12:12 INFO [loop_until]: OK (rc = 0) 04:12:12 DEBUG --- stdout --- 04:12:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 56m 5784Mi am-55f77847b7-l482k 51m 5739Mi am-55f77847b7-qhqgg 53m 5839Mi ds-cts-0 6m 404Mi ds-cts-1 6m 395Mi ds-cts-2 5m 366Mi ds-idrepo-0 1300m 13584Mi ds-idrepo-1 683m 13670Mi ds-idrepo-2 223m 13856Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 312m 4255Mi idm-65858d8c4c-vdncx 311m 3907Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 125m 525Mi 04:12:12 DEBUG --- stderr --- 04:12:12 DEBUG 04:12:14 INFO 04:12:14 INFO [loop_until]: kubectl --namespace=xlou top node 04:12:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:12:14 INFO [loop_until]: OK (rc = 0) 04:12:14 DEBUG --- stdout --- 04:12:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1327Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 112m 0% 6790Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 111m 0% 6962Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 104m 0% 6842Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 397m 2% 5556Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 295m 1% 2145Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 399m 2% 5154Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1422m 8% 14310Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 506m 3% 14392Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 518m 3% 14561Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 194m 1% 2046Mi 3% 04:12:14 DEBUG --- stderr --- 04:12:14 DEBUG 04:13:12 INFO 04:13:12 INFO [loop_until]: kubectl --namespace=xlou top pods 04:13:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:13:12 INFO [loop_until]: OK (rc = 0) 04:13:12 DEBUG --- stdout --- 04:13:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 58m 5784Mi am-55f77847b7-l482k 51m 5739Mi am-55f77847b7-qhqgg 55m 5840Mi ds-cts-0 6m 404Mi ds-cts-1 6m 398Mi ds-cts-2 5m 365Mi ds-idrepo-0 2313m 13682Mi ds-idrepo-1 1368m 13707Mi ds-idrepo-2 232m 13856Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 319m 4258Mi idm-65858d8c4c-vdncx 329m 3911Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 132m 525Mi 04:13:12 DEBUG --- stderr --- 04:13:12 DEBUG 04:13:14 INFO 04:13:14 INFO [loop_until]: kubectl --namespace=xlou top node 04:13:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:13:14 INFO [loop_until]: OK (rc = 0) 04:13:14 DEBUG --- stdout --- 04:13:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1328Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 113m 0% 6791Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 114m 0% 6961Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 107m 0% 6839Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 410m 2% 5560Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 290m 1% 2145Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 411m 2% 5156Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1657m 10% 14404Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1614m 10% 14399Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 272m 1% 14564Mi 24% 
gke-xlou-cdm-frontend-a8771548-k40m 201m 1% 2048Mi 3% 04:13:14 DEBUG --- stderr --- 04:13:14 DEBUG 04:14:12 INFO 04:14:12 INFO [loop_until]: kubectl --namespace=xlou top pods 04:14:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:14:12 INFO [loop_until]: OK (rc = 0) 04:14:12 DEBUG --- stdout --- 04:14:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 57m 5784Mi am-55f77847b7-l482k 51m 5739Mi am-55f77847b7-qhqgg 53m 5840Mi ds-cts-0 5m 404Mi ds-cts-1 5m 398Mi ds-cts-2 5m 366Mi ds-idrepo-0 1044m 13725Mi ds-idrepo-1 561m 13491Mi ds-idrepo-2 1337m 13570Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 302m 4259Mi idm-65858d8c4c-vdncx 321m 3911Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 122m 526Mi 04:14:12 DEBUG --- stderr --- 04:14:12 DEBUG 04:14:14 INFO 04:14:14 INFO [loop_until]: kubectl --namespace=xlou top node 04:14:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:14:15 INFO [loop_until]: OK (rc = 0) 04:14:15 DEBUG --- stdout --- 04:14:15 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1328Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 114m 0% 6789Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 111m 0% 6961Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 106m 0% 6838Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 396m 2% 5556Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 289m 1% 2149Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 389m 2% 5158Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1297m 8% 14446Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1091Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 354m 2% 14214Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 947m 5% 14274Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 193m 1% 2049Mi 3% 04:14:15 DEBUG --- stderr --- 04:14:15 DEBUG 04:15:12 INFO 04:15:12 INFO [loop_until]: kubectl --namespace=xlou top pods 04:15:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:15:12 INFO [loop_until]: OK (rc = 0) 04:15:12 DEBUG --- stdout --- 04:15:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 91m 5791Mi am-55f77847b7-l482k 49m 5739Mi am-55f77847b7-qhqgg 51m 5840Mi ds-cts-0 5m 404Mi ds-cts-1 6m 398Mi ds-cts-2 5m 366Mi ds-idrepo-0 1192m 13666Mi ds-idrepo-1 491m 13568Mi ds-idrepo-2 485m 13598Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 296m 4273Mi idm-65858d8c4c-vdncx 307m 3914Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 120m 525Mi 04:15:12 DEBUG --- stderr --- 04:15:12 DEBUG 04:15:15 INFO 04:15:15 INFO [loop_until]: kubectl --namespace=xlou top node 04:15:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:15:15 INFO [loop_until]: OK (rc = 0) 04:15:15 DEBUG --- stdout --- 04:15:15 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1329Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 149m 0% 6795Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 108m 0% 6958Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 107m 0% 6841Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 385m 2% 5575Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 287m 1% 2148Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 395m 2% 5161Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 1331m 8% 14388Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1090Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-b374 550m 3% 14290Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 287m 1% 14303Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 188m 1% 2047Mi 3% 04:15:15 DEBUG --- stderr --- 04:15:15 DEBUG 04:16:12 INFO 04:16:12 INFO [loop_until]: kubectl --namespace=xlou top pods 04:16:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:16:13 INFO [loop_until]: OK (rc = 0) 04:16:13 DEBUG --- stdout --- 04:16:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 6m 5791Mi am-55f77847b7-l482k 8m 5739Mi am-55f77847b7-qhqgg 10m 5840Mi ds-cts-0 5m 404Mi ds-cts-1 6m 398Mi ds-cts-2 5m 365Mi ds-idrepo-0 12m 13676Mi ds-idrepo-1 8m 13566Mi ds-idrepo-2 12m 13619Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 6m 4273Mi idm-65858d8c4c-vdncx 7m 3914Mi lodemon-7b659c988b-78sgh 2m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 45m 120Mi 04:16:13 DEBUG --- stderr --- 04:16:13 DEBUG 04:16:15 INFO 04:16:15 INFO [loop_until]: kubectl --namespace=xlou top node 04:16:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:16:15 INFO [loop_until]: OK (rc = 0) 04:16:15 DEBUG --- stdout --- 04:16:15 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1326Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 60m 0% 6796Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 6958Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 6840Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 5577Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 124m 0% 2149Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 72m 0% 5162Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 14399Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 1092Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 59m 0% 14291Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 60m 0% 14318Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 103m 0% 1650Mi 2% 04:16:15 DEBUG --- stderr --- 04:16:15 DEBUG 04:17:13 INFO 04:17:13 INFO [loop_until]: kubectl --namespace=xlou top pods 04:17:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:17:13 INFO [loop_until]: OK (rc = 0) 04:17:13 DEBUG --- stdout --- 04:17:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 4Mi am-55f77847b7-d6t28 6m 5791Mi am-55f77847b7-l482k 9m 5739Mi am-55f77847b7-qhqgg 9m 5839Mi ds-cts-0 6m 405Mi ds-cts-1 6m 398Mi ds-cts-2 6m 365Mi ds-idrepo-0 10m 13676Mi ds-idrepo-1 9m 13566Mi ds-idrepo-2 12m 13619Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 7m 4259Mi idm-65858d8c4c-vdncx 7m 3914Mi lodemon-7b659c988b-78sgh 4m 66Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 1m 120Mi 04:17:13 DEBUG --- stderr --- 04:17:13 DEBUG 04:17:15 INFO 04:17:15 INFO [loop_until]: kubectl --namespace=xlou top node 04:17:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:17:15 INFO [loop_until]: OK (rc = 0) 04:17:15 DEBUG --- stdout --- 04:17:15 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1326Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 6795Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 6959Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 6841Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 5563Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 121m 0% 2146Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 5162Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 14400Mi 24% 
gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1092Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 58m 0% 14288Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 63m 0% 14321Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 69m 0% 1649Mi 2% 04:17:15 DEBUG --- stderr --- 04:17:15 DEBUG 127.0.0.1 - - [12/Aug/2023 04:17:21] "GET /monitoring/average?start_time=23-08-12_02:46:50&stop_time=23-08-12_03:15:20 HTTP/1.1" 200 - 04:18:13 INFO 04:18:13 INFO [loop_until]: kubectl --namespace=xlou top pods 04:18:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:18:13 INFO [loop_until]: OK (rc = 0) 04:18:13 DEBUG --- stdout --- 04:18:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 5Mi am-55f77847b7-d6t28 7m 5791Mi am-55f77847b7-l482k 8m 5739Mi am-55f77847b7-qhqgg 56m 5845Mi ds-cts-0 5m 405Mi ds-cts-1 72m 398Mi ds-cts-2 61m 449Mi ds-idrepo-0 17m 13692Mi ds-idrepo-1 9m 13567Mi ds-idrepo-2 63m 13637Mi end-user-ui-6845bc78c7-5gwx2 1m 4Mi idm-65858d8c4c-4x2bg 5m 4259Mi idm-65858d8c4c-vdncx 7m 3913Mi lodemon-7b659c988b-78sgh 3m 67Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 220m 403Mi 04:18:13 DEBUG --- stderr --- 04:18:13 DEBUG 04:18:15 INFO 04:18:15 INFO [loop_until]: kubectl --namespace=xlou top node 04:18:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:18:15 INFO [loop_until]: OK (rc = 0) 04:18:15 DEBUG --- stdout --- 04:18:15 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 90m 0% 1329Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 106m 0% 6814Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 119m 0% 6964Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 68m 0% 6841Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 5563Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 129m 0% 2142Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 5162Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 222m 1% 14427Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 67m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 64m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 68m 0% 14290Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 134m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 182m 1% 14392Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 80m 0% 1646Mi 2% 04:18:15 DEBUG --- stderr --- 04:18:15 DEBUG 04:19:13 INFO 04:19:13 INFO [loop_until]: kubectl --namespace=xlou top pods 04:19:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:19:13 INFO [loop_until]: OK (rc = 0) 04:19:13 DEBUG --- stdout --- 04:19:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-5thnv 1m 5Mi am-55f77847b7-d6t28 6m 5808Mi am-55f77847b7-l482k 9m 5739Mi am-55f77847b7-qhqgg 7m 5845Mi ds-cts-0 5m 405Mi ds-cts-1 6m 398Mi ds-cts-2 5m 367Mi ds-idrepo-0 231m 13672Mi ds-idrepo-1 94m 13565Mi ds-idrepo-2 103m 13618Mi end-user-ui-6845bc78c7-5gwx2 1m 5Mi idm-65858d8c4c-4x2bg 6m 4258Mi idm-65858d8c4c-vdncx 7m 3913Mi lodemon-7b659c988b-78sgh 2m 67Mi login-ui-74d6fb46c-m9zk9 1m 3Mi overseer-0-88bc47db9-ntwzj 513m 192Mi 04:19:13 DEBUG --- stderr --- 04:19:13 DEBUG 04:19:15 INFO 04:19:15 INFO [loop_until]: kubectl --namespace=xlou top node 04:19:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 04:19:15 INFO [loop_until]: OK (rc = 0) 04:19:15 DEBUG --- stdout --- 04:19:15 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1331Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 6814Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 62m 0% 6966Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 
6840Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 77m 0% 5564Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 122m 0% 2142Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 76m 0% 5168Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 134m 0% 14400Mi 24% gke-xlou-cdm-ds-32e4dcb1-4z9d 57m 0% 1094Mi 1% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1091Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 118m 0% 14296Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 186m 1% 14324Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 874m 5% 1723Mi 2% 04:19:15 DEBUG --- stderr --- 04:19:15 DEBUG
04:20:01 INFO Finished: True
04:20:01 INFO Waiting for threads to register finish flag
04:20:15 INFO Done. Have a nice day! :)
127.0.0.1 - - [12/Aug/2023 04:20:15] "GET /monitoring/stop HTTP/1.1" 200 -
04:20:18 INFO File /tmp/lodemon_data-23-08-12_01:43:27/Cpu_cores_used_per_pod.json does not exist. Skipping...
04:20:18 INFO File /tmp/lodemon_data-23-08-12_01:43:27/Memory_usage_per_pod.json does not exist. Skipping...
04:20:18 INFO File /tmp/lodemon_data-23-08-12_01:43:27/Disk_tps_read_per_pod.json does not exist. Skipping...
04:20:18 INFO File /tmp/lodemon_data-23-08-12_01:43:27/Disk_tps_writes_per_pod.json does not exist. Skipping...
04:20:18 INFO File /tmp/lodemon_data-23-08-12_01:43:27/Cpu_cores_used_per_node.json does not exist. Skipping...
04:20:18 INFO File /tmp/lodemon_data-23-08-12_01:43:27/Memory_usage_used_per_node.json does not exist. Skipping...
04:20:18 INFO File /tmp/lodemon_data-23-08-12_01:43:27/Cpu_iowait_per_node.json does not exist. Skipping...
04:20:18 INFO File /tmp/lodemon_data-23-08-12_01:43:27/Network_receive_per_node.json does not exist. Skipping...
04:20:18 INFO File /tmp/lodemon_data-23-08-12_01:43:27/Network_transmit_per_node.json does not exist. Skipping...
04:20:18 INFO File /tmp/lodemon_data-23-08-12_01:43:27/am_cts_task_count_token_session.json does not exist. Skipping...
04:20:18 INFO File /tmp/lodemon_data-23-08-12_01:43:27/am_authentication_rate.json does not exist. Skipping...
04:20:18 INFO File /tmp/lodemon_data-23-08-12_01:43:27/ds_db_cache_misses_internal_nodes(backend=amCts).json does not exist. Skipping...
04:20:18 INFO File /tmp/lodemon_data-23-08-12_01:43:27/ds_db_cache_misses_internal_nodes(backend=amIdentityStore).json does not exist. Skipping...
04:20:18 INFO File /tmp/lodemon_data-23-08-12_01:43:27/ds_db_cache_misses_internal_nodes(backend=cfgStore).json does not exist. Skipping...
04:20:18 INFO File /tmp/lodemon_data-23-08-12_01:43:27/ds_db_cache_misses_internal_nodes(backend=idmRepo).json does not exist. Skipping...
04:20:18 INFO File /tmp/lodemon_data-23-08-12_01:43:27/am_authentication_count_per_pod.json does not exist. Skipping...
04:20:18 INFO File /tmp/lodemon_data-23-08-12_01:43:27/Cts_reaper_Deletion_count.json does not exist. Skipping...
04:20:18 INFO File /tmp/lodemon_data-23-08-12_01:43:27/AM_oauth2_authorization_codes.json does not exist. Skipping...
04:20:18 INFO File /tmp/lodemon_data-23-08-12_01:43:27/ds_backend_entries_deleted_amCts.json does not exist. Skipping...
04:20:18 INFO File /tmp/lodemon_data-23-08-12_01:43:27/ds_pods_replication_delay.json does not exist. Skipping...
04:20:18 INFO File /tmp/lodemon_data-23-08-12_01:43:27/am_cts_reaper_cache_size.json does not exist. Skipping...
04:20:18 INFO File /tmp/lodemon_data-23-08-12_01:43:27/am_cts_reaper_search_seconds_total.json does not exist. Skipping...
04:20:18 INFO File /tmp/lodemon_data-23-08-12_01:43:27/ds_replication_replica_replayed_updates_conflicts_resolved.json does not exist. Skipping...
04:20:18 INFO File /tmp/lodemon_data-23-08-12_01:43:27/node_disk_read_bytes_total.json does not exist. Skipping...
04:20:18 INFO File /tmp/lodemon_data-23-08-12_01:43:27/node_disk_written_bytes_total.json does not exist. Skipping...
04:20:18 INFO File /tmp/lodemon_data-23-08-12_01:43:27/ds_backend_entry_count.json does not exist. Skipping...
04:20:18 INFO File /tmp/lodemon_data-23-08-12_01:43:27/node_disk_io_time_seconds_total.json does not exist. Skipping...
127.0.0.1 - - [12/Aug/2023 04:20:20] "GET /monitoring/process HTTP/1.1" 200 -
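Note on the [loop_until] entries above: each one re-runs a kubectl command until its return code is in expected_rc (and, where a grep is piped in, until the expected pattern appears), bounded by max_time seconds and retried every interval seconds. The log does not show the lodemon implementation itself; the following is a minimal Python sketch of such a retry helper under those assumptions (the loop_until name, signature, and pattern argument are illustrative only, not the real lodemon API):

import subprocess
import time

def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,), pattern=None):
    # Re-run `cmd` (a shell string) until its return code is in `expected_rc`
    # and, if given, `pattern` appears in stdout; give up after `max_time` seconds.
    # Returns the final CompletedProcess. Illustrative sketch, not the lodemon code.
    deadline = time.monotonic() + max_time
    while True:
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        ok = result.returncode in expected_rc and (pattern is None or pattern in result.stdout)
        if ok or time.monotonic() >= deadline:
            return result
        time.sleep(interval)

# Example mirroring the log: poll pod metrics in the xlou namespace once per cycle.
top_pods = loop_until("kubectl --namespace=xlou top pods", max_time=180, interval=5)
print(top_pods.stdout)

In the run above, this style of wrapper is invoked roughly once a minute for both "kubectl top pods" and "kubectl top node"; after /monitoring/stop is received, the tool looks for the per-metric JSON files under /tmp/lodemon_data-23-08-12_01:43:27/ and logs a "does not exist. Skipping..." line for each one it cannot find.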