====================================================================================================
========================================= Pod describe =========================================
====================================================================================================
Name:             lodemon-6cd9c44bd4-vnqvr
Namespace:        xlou
Priority:         0
Node:             gke-xlou-cdm-default-pool-f05840a3-2nsn/10.142.0.46
Start Time:       Sat, 12 Aug 2023 07:35:18 +0000
Labels:           app=lodemon
                  app.kubernetes.io/name=lodemon
                  pod-template-hash=6cd9c44bd4
                  skaffold.dev/run-id=16be08cc-8094-409a-90e9-3be5e38f4441
Annotations:
Status:           Running
IP:               10.106.45.53
IPs:
  IP:  10.106.45.53
Controlled By:    ReplicaSet/lodemon-6cd9c44bd4
Containers:
  lodemon:
    Container ID:   containerd://45612709b9263933e60646a213244c7b2efffa0da514f5e6e70268b74b02c1d9
    Image:          gcr.io/engineeringpit/lodestar-images/lodestarbox:6c23848450de3f8e82f0a619a86abcd91fc890c6
    Image ID:       gcr.io/engineeringpit/lodestar-images/lodestarbox@sha256:f419b98ce988c016f788d178b318b601ed56b4ebb6e1a8df68b3ff2a986af79d
    Port:           8080/TCP
    Host Port:      0/TCP
    Command:
      python3
    Args:
      /lodestar/scripts/lodemon_run.py
      -W
      default
    State:          Running
      Started:      Sat, 12 Aug 2023 07:35:19 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  2Gi
    Requests:
      cpu:     1
      memory:  1Gi
    Liveness:   exec [cat /tmp/lodemon_alive] delay=20s timeout=1s period=10s #success=1 #failure=3
    Readiness:  exec [cat /tmp/lodemon_alive] delay=20s timeout=1s period=10s #success=1 #failure=3
    Environment:
      SKAFFOLD_PROFILE:  medium
    Mounts:
      /lodestar/config/config.yaml from config (rw,path="config.yaml")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nk2sl (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      lodemon-config
    Optional:  false
  kube-api-access-nk2sl:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:
    DownwardAPI:             true
QoS Class:        Burstable
Node-Selectors:
Tolerations:      node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                  node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
====================================================================================================
=========================================== Pod logs ===========================================
====================================================================================================
08:35:20 INFO 08:35:20 INFO --------------------- Get expected number of pods --------------------- 08:35:20 INFO 08:35:20 INFO [loop_until]: kubectl --namespace=xlou get deployments --selector app=am --output jsonpath={.items[*].spec.replicas} 08:35:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:20 INFO [loop_until]: OK (rc = 0) 08:35:20 DEBUG --- stdout --- 08:35:20 DEBUG 3 08:35:20 DEBUG --- stderr --- 08:35:20 DEBUG 08:35:20 INFO 08:35:20 INFO ---------------------------- Get pod list ---------------------------- 08:35:20 INFO 08:35:20 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=am --output jsonpath={.items[*].metadata.name} 08:35:20 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0] 08:35:20 INFO [loop_until]: OK (rc = 0) 08:35:20 DEBUG --- stdout --- 08:35:20 DEBUG am-55f77847b7-2vpdz am-55f77847b7-mbr4x am-55f77847b7-mfzwm 08:35:20 DEBUG --- stderr --- 08:35:20 DEBUG 08:35:20 INFO 08:35:20 INFO -------------- Check pod
am-55f77847b7-2vpdz is running -------------- 08:35:20 INFO 08:35:20 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-2vpdz -o=jsonpath={.status.phase} | grep "Running" 08:35:20 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 08:35:20 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 08:35:20 INFO [loop_until]: OK (rc = 0) 08:35:20 DEBUG --- stdout --- 08:35:20 DEBUG Running 08:35:20 DEBUG --- stderr --- 08:35:20 DEBUG 08:35:20 INFO 08:35:20 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-2vpdz -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 08:35:20 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 08:35:20 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 08:35:20 INFO [loop_until]: OK (rc = 0) 08:35:20 DEBUG --- stdout --- 08:35:20 DEBUG true 08:35:20 DEBUG --- stderr --- 08:35:20 DEBUG 08:35:20 INFO 08:35:20 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-2vpdz --output jsonpath={.status.startTime} 08:35:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:20 INFO [loop_until]: OK (rc = 0) 08:35:20 DEBUG --- stdout --- 08:35:20 DEBUG 2023-08-12T07:25:36Z 08:35:20 DEBUG --- stderr --- 08:35:20 DEBUG 08:35:20 INFO 08:35:20 INFO ------- Check pod am-55f77847b7-2vpdz filesystem is accessible ------- 08:35:20 INFO 08:35:20 INFO [loop_until]: kubectl --namespace=xlou exec am-55f77847b7-2vpdz --container openam -- ls / | grep "bin" 08:35:20 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 08:35:20 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 08:35:20 INFO [loop_until]: OK (rc = 0) 08:35:20 DEBUG --- stdout --- 08:35:20 DEBUG bin boot dev entrypoint.sh etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var 08:35:20 DEBUG --- stderr --- 08:35:20 DEBUG 08:35:20 INFO 08:35:20 INFO ------------- Check pod am-55f77847b7-2vpdz restart count ------------- 08:35:20 INFO 08:35:20 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-2vpdz --output jsonpath={.status.containerStatuses[*].restartCount} 08:35:20 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:20 INFO [loop_until]: OK (rc = 0) 08:35:20 DEBUG --- stdout --- 08:35:20 DEBUG 0 08:35:20 DEBUG --- stderr --- 08:35:20 DEBUG 08:35:20 INFO Pod am-55f77847b7-2vpdz has been restarted 0 times. 
08:35:20 INFO 08:35:20 INFO -------------- Check pod am-55f77847b7-mbr4x is running -------------- 08:35:20 INFO 08:35:20 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-mbr4x -o=jsonpath={.status.phase} | grep "Running" 08:35:20 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 08:35:21 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 08:35:21 INFO [loop_until]: OK (rc = 0) 08:35:21 DEBUG --- stdout --- 08:35:21 DEBUG Running 08:35:21 DEBUG --- stderr --- 08:35:21 DEBUG 08:35:21 INFO 08:35:21 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-mbr4x -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 08:35:21 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 08:35:21 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 08:35:21 INFO [loop_until]: OK (rc = 0) 08:35:21 DEBUG --- stdout --- 08:35:21 DEBUG true 08:35:21 DEBUG --- stderr --- 08:35:21 DEBUG 08:35:21 INFO 08:35:21 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-mbr4x --output jsonpath={.status.startTime} 08:35:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:21 INFO [loop_until]: OK (rc = 0) 08:35:21 DEBUG --- stdout --- 08:35:21 DEBUG 2023-08-12T07:25:36Z 08:35:21 DEBUG --- stderr --- 08:35:21 DEBUG 08:35:21 INFO 08:35:21 INFO ------- Check pod am-55f77847b7-mbr4x filesystem is accessible ------- 08:35:21 INFO 08:35:21 INFO [loop_until]: kubectl --namespace=xlou exec am-55f77847b7-mbr4x --container openam -- ls / | grep "bin" 08:35:21 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 08:35:21 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 08:35:21 INFO [loop_until]: OK (rc = 0) 08:35:21 DEBUG --- stdout --- 08:35:21 DEBUG bin boot dev entrypoint.sh etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var 08:35:21 DEBUG --- stderr --- 08:35:21 DEBUG 08:35:21 INFO 08:35:21 INFO ------------- Check pod am-55f77847b7-mbr4x restart count ------------- 08:35:21 INFO 08:35:21 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-mbr4x --output jsonpath={.status.containerStatuses[*].restartCount} 08:35:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:21 INFO [loop_until]: OK (rc = 0) 08:35:21 DEBUG --- stdout --- 08:35:21 DEBUG 0 08:35:21 DEBUG --- stderr --- 08:35:21 DEBUG 08:35:21 INFO Pod am-55f77847b7-mbr4x has been restarted 0 times. 
08:35:21 INFO 08:35:21 INFO -------------- Check pod am-55f77847b7-mfzwm is running -------------- 08:35:21 INFO 08:35:21 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-mfzwm -o=jsonpath={.status.phase} | grep "Running" 08:35:21 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 08:35:21 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 08:35:21 INFO [loop_until]: OK (rc = 0) 08:35:21 DEBUG --- stdout --- 08:35:21 DEBUG Running 08:35:21 DEBUG --- stderr --- 08:35:21 DEBUG 08:35:21 INFO 08:35:21 INFO [loop_until]: kubectl --namespace=xlou get pods am-55f77847b7-mfzwm -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 08:35:21 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 08:35:21 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 08:35:21 INFO [loop_until]: OK (rc = 0) 08:35:21 DEBUG --- stdout --- 08:35:21 DEBUG true 08:35:21 DEBUG --- stderr --- 08:35:21 DEBUG 08:35:21 INFO 08:35:21 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-mfzwm --output jsonpath={.status.startTime} 08:35:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:21 INFO [loop_until]: OK (rc = 0) 08:35:21 DEBUG --- stdout --- 08:35:21 DEBUG 2023-08-12T07:25:36Z 08:35:21 DEBUG --- stderr --- 08:35:21 DEBUG 08:35:21 INFO 08:35:21 INFO ------- Check pod am-55f77847b7-mfzwm filesystem is accessible ------- 08:35:21 INFO 08:35:21 INFO [loop_until]: kubectl --namespace=xlou exec am-55f77847b7-mfzwm --container openam -- ls / | grep "bin" 08:35:21 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 08:35:21 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 08:35:21 INFO [loop_until]: OK (rc = 0) 08:35:21 DEBUG --- stdout --- 08:35:21 DEBUG bin boot dev entrypoint.sh etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var 08:35:21 DEBUG --- stderr --- 08:35:21 DEBUG 08:35:21 INFO 08:35:21 INFO ------------- Check pod am-55f77847b7-mfzwm restart count ------------- 08:35:21 INFO 08:35:21 INFO [loop_until]: kubectl --namespace=xlou get pod am-55f77847b7-mfzwm --output jsonpath={.status.containerStatuses[*].restartCount} 08:35:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:21 INFO [loop_until]: OK (rc = 0) 08:35:21 DEBUG --- stdout --- 08:35:21 DEBUG 0 08:35:21 DEBUG --- stderr --- 08:35:21 DEBUG 08:35:21 INFO Pod am-55f77847b7-mfzwm has been restarted 0 times. 
08:35:21 INFO 08:35:21 INFO --------------------- Get expected number of pods --------------------- 08:35:21 INFO 08:35:21 INFO [loop_until]: kubectl --namespace=xlou get deployment --selector app=idm --output jsonpath={.items[*].spec.replicas} 08:35:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:21 INFO [loop_until]: OK (rc = 0) 08:35:21 DEBUG --- stdout --- 08:35:21 DEBUG 2 08:35:21 DEBUG --- stderr --- 08:35:21 DEBUG 08:35:21 INFO 08:35:21 INFO ---------------------------- Get pod list ---------------------------- 08:35:21 INFO 08:35:21 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=idm --output jsonpath={.items[*].metadata.name} 08:35:21 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0] 08:35:21 INFO [loop_until]: OK (rc = 0) 08:35:21 DEBUG --- stdout --- 08:35:21 DEBUG idm-65858d8c4c-5vh78 idm-65858d8c4c-gpz8d 08:35:21 DEBUG --- stderr --- 08:35:21 DEBUG 08:35:21 INFO 08:35:21 INFO -------------- Check pod idm-65858d8c4c-5vh78 is running -------------- 08:35:21 INFO 08:35:21 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-5vh78 -o=jsonpath={.status.phase} | grep "Running" 08:35:21 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 08:35:21 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 08:35:21 INFO [loop_until]: OK (rc = 0) 08:35:21 DEBUG --- stdout --- 08:35:21 DEBUG Running 08:35:21 DEBUG --- stderr --- 08:35:21 DEBUG 08:35:21 INFO 08:35:21 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-5vh78 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 08:35:21 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 08:35:21 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 08:35:21 INFO [loop_until]: OK (rc = 0) 08:35:21 DEBUG --- stdout --- 08:35:21 DEBUG true 08:35:21 DEBUG --- stderr --- 08:35:21 DEBUG 08:35:21 INFO 08:35:21 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-5vh78 --output jsonpath={.status.startTime} 08:35:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:21 INFO [loop_until]: OK (rc = 0) 08:35:21 DEBUG --- stdout --- 08:35:21 DEBUG 2023-08-12T07:25:36Z 08:35:21 DEBUG --- stderr --- 08:35:21 DEBUG 08:35:21 INFO 08:35:21 INFO ------- Check pod idm-65858d8c4c-5vh78 filesystem is accessible ------- 08:35:21 INFO 08:35:21 INFO [loop_until]: kubectl --namespace=xlou exec idm-65858d8c4c-5vh78 --container openidm -- ls / | grep "bin" 08:35:21 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 08:35:21 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 08:35:21 INFO [loop_until]: OK (rc = 0) 08:35:21 DEBUG --- stdout --- 08:35:21 DEBUG Dockerfile.java-11 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 08:35:21 DEBUG --- stderr --- 08:35:21 DEBUG 08:35:21 INFO 08:35:21 INFO ------------ Check pod idm-65858d8c4c-5vh78 restart count ------------ 08:35:21 INFO 08:35:21 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-5vh78 --output jsonpath={.status.containerStatuses[*].restartCount} 08:35:21 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:21 INFO [loop_until]: OK (rc = 0) 08:35:21 DEBUG --- stdout --- 08:35:21 DEBUG 0 08:35:21 DEBUG --- stderr --- 08:35:21 DEBUG 08:35:21 INFO Pod idm-65858d8c4c-5vh78 has been restarted 0 times. 
08:35:21 INFO 08:35:21 INFO -------------- Check pod idm-65858d8c4c-gpz8d is running -------------- 08:35:21 INFO 08:35:21 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-gpz8d -o=jsonpath={.status.phase} | grep "Running" 08:35:21 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 08:35:22 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 08:35:22 INFO [loop_until]: OK (rc = 0) 08:35:22 DEBUG --- stdout --- 08:35:22 DEBUG Running 08:35:22 DEBUG --- stderr --- 08:35:22 DEBUG 08:35:22 INFO 08:35:22 INFO [loop_until]: kubectl --namespace=xlou get pods idm-65858d8c4c-gpz8d -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 08:35:22 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 08:35:22 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 08:35:22 INFO [loop_until]: OK (rc = 0) 08:35:22 DEBUG --- stdout --- 08:35:22 DEBUG true 08:35:22 DEBUG --- stderr --- 08:35:22 DEBUG 08:35:22 INFO 08:35:22 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-gpz8d --output jsonpath={.status.startTime} 08:35:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:22 INFO [loop_until]: OK (rc = 0) 08:35:22 DEBUG --- stdout --- 08:35:22 DEBUG 2023-08-12T07:25:36Z 08:35:22 DEBUG --- stderr --- 08:35:22 DEBUG 08:35:22 INFO 08:35:22 INFO ------- Check pod idm-65858d8c4c-gpz8d filesystem is accessible ------- 08:35:22 INFO 08:35:22 INFO [loop_until]: kubectl --namespace=xlou exec idm-65858d8c4c-gpz8d --container openidm -- ls / | grep "bin" 08:35:22 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 08:35:22 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 08:35:22 INFO [loop_until]: OK (rc = 0) 08:35:22 DEBUG --- stdout --- 08:35:22 DEBUG Dockerfile.java-11 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 08:35:22 DEBUG --- stderr --- 08:35:22 DEBUG 08:35:22 INFO 08:35:22 INFO ------------ Check pod idm-65858d8c4c-gpz8d restart count ------------ 08:35:22 INFO 08:35:22 INFO [loop_until]: kubectl --namespace=xlou get pod idm-65858d8c4c-gpz8d --output jsonpath={.status.containerStatuses[*].restartCount} 08:35:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:22 INFO [loop_until]: OK (rc = 0) 08:35:22 DEBUG --- stdout --- 08:35:22 DEBUG 0 08:35:22 DEBUG --- stderr --- 08:35:22 DEBUG 08:35:22 INFO Pod idm-65858d8c4c-gpz8d has been restarted 0 times. 
08:35:22 INFO 08:35:22 INFO --------------------- Get expected number of pods --------------------- 08:35:22 INFO 08:35:22 INFO [loop_until]: kubectl --namespace=xlou get statefulsets --selector app=ds-idrepo --output jsonpath={.items[*].spec.replicas} 08:35:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:22 INFO [loop_until]: OK (rc = 0) 08:35:22 DEBUG --- stdout --- 08:35:22 DEBUG 3 08:35:22 DEBUG --- stderr --- 08:35:22 DEBUG 08:35:22 INFO 08:35:22 INFO ---------------------------- Get pod list ---------------------------- 08:35:22 INFO 08:35:22 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=ds-idrepo --output jsonpath={.items[*].metadata.name} 08:35:22 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0] 08:35:22 INFO [loop_until]: OK (rc = 0) 08:35:22 DEBUG --- stdout --- 08:35:22 DEBUG ds-idrepo-0 ds-idrepo-1 ds-idrepo-2 08:35:22 DEBUG --- stderr --- 08:35:22 DEBUG 08:35:22 INFO 08:35:22 INFO ------------------ Check pod ds-idrepo-0 is running ------------------ 08:35:22 INFO 08:35:22 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.phase} | grep "Running" 08:35:22 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 08:35:22 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 08:35:22 INFO [loop_until]: OK (rc = 0) 08:35:22 DEBUG --- stdout --- 08:35:22 DEBUG Running 08:35:22 DEBUG --- stderr --- 08:35:22 DEBUG 08:35:22 INFO 08:35:22 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 08:35:22 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 08:35:22 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 08:35:22 INFO [loop_until]: OK (rc = 0) 08:35:22 DEBUG --- stdout --- 08:35:22 DEBUG true 08:35:22 DEBUG --- stderr --- 08:35:22 DEBUG 08:35:22 INFO 08:35:22 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 --output jsonpath={.status.startTime} 08:35:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:22 INFO [loop_until]: OK (rc = 0) 08:35:22 DEBUG --- stdout --- 08:35:22 DEBUG 2023-08-12T06:52:43Z 08:35:22 DEBUG --- stderr --- 08:35:22 DEBUG 08:35:22 INFO 08:35:22 INFO ----------- Check pod ds-idrepo-0 filesystem is accessible ----------- 08:35:22 INFO 08:35:22 INFO [loop_until]: kubectl --namespace=xlou exec ds-idrepo-0 --container ds -- ls / | grep "bin" 08:35:22 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 08:35:22 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 08:35:22 INFO [loop_until]: OK (rc = 0) 08:35:22 DEBUG --- stdout --- 08:35:22 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 08:35:22 DEBUG --- stderr --- 08:35:22 DEBUG 08:35:22 INFO 08:35:22 INFO ----------------- Check pod ds-idrepo-0 restart count ----------------- 08:35:22 INFO 08:35:22 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-0 --output jsonpath={.status.containerStatuses[*].restartCount} 08:35:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:22 INFO [loop_until]: OK (rc = 0) 08:35:22 DEBUG --- stdout --- 08:35:22 DEBUG 0 08:35:22 DEBUG --- stderr --- 08:35:22 DEBUG 08:35:22 INFO Pod ds-idrepo-0 has been restarted 0 times. 
08:35:22 INFO 08:35:22 INFO ------------------ Check pod ds-idrepo-1 is running ------------------ 08:35:22 INFO 08:35:22 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.phase} | grep "Running" 08:35:22 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 08:35:22 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 08:35:22 INFO [loop_until]: OK (rc = 0) 08:35:22 DEBUG --- stdout --- 08:35:22 DEBUG Running 08:35:22 DEBUG --- stderr --- 08:35:22 DEBUG 08:35:22 INFO 08:35:22 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 08:35:22 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 08:35:22 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 08:35:22 INFO [loop_until]: OK (rc = 0) 08:35:22 DEBUG --- stdout --- 08:35:22 DEBUG true 08:35:22 DEBUG --- stderr --- 08:35:22 DEBUG 08:35:22 INFO 08:35:22 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 --output jsonpath={.status.startTime} 08:35:22 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:22 INFO [loop_until]: OK (rc = 0) 08:35:22 DEBUG --- stdout --- 08:35:22 DEBUG 2023-08-12T07:03:46Z 08:35:22 DEBUG --- stderr --- 08:35:22 DEBUG 08:35:22 INFO 08:35:22 INFO ----------- Check pod ds-idrepo-1 filesystem is accessible ----------- 08:35:22 INFO 08:35:22 INFO [loop_until]: kubectl --namespace=xlou exec ds-idrepo-1 --container ds -- ls / | grep "bin" 08:35:22 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 08:35:23 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 08:35:23 INFO [loop_until]: OK (rc = 0) 08:35:23 DEBUG --- stdout --- 08:35:23 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 08:35:23 DEBUG --- stderr --- 08:35:23 DEBUG 08:35:23 INFO 08:35:23 INFO ----------------- Check pod ds-idrepo-1 restart count ----------------- 08:35:23 INFO 08:35:23 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-1 --output jsonpath={.status.containerStatuses[*].restartCount} 08:35:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:23 INFO [loop_until]: OK (rc = 0) 08:35:23 DEBUG --- stdout --- 08:35:23 DEBUG 0 08:35:23 DEBUG --- stderr --- 08:35:23 DEBUG 08:35:23 INFO Pod ds-idrepo-1 has been restarted 0 times. 
08:35:23 INFO 08:35:23 INFO ------------------ Check pod ds-idrepo-2 is running ------------------ 08:35:23 INFO 08:35:23 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.phase} | grep "Running" 08:35:23 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 08:35:23 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 08:35:23 INFO [loop_until]: OK (rc = 0) 08:35:23 DEBUG --- stdout --- 08:35:23 DEBUG Running 08:35:23 DEBUG --- stderr --- 08:35:23 DEBUG 08:35:23 INFO 08:35:23 INFO [loop_until]: kubectl --namespace=xlou get pods ds-idrepo-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 08:35:23 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 08:35:23 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 08:35:23 INFO [loop_until]: OK (rc = 0) 08:35:23 DEBUG --- stdout --- 08:35:23 DEBUG true 08:35:23 DEBUG --- stderr --- 08:35:23 DEBUG 08:35:23 INFO 08:35:23 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 --output jsonpath={.status.startTime} 08:35:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:23 INFO [loop_until]: OK (rc = 0) 08:35:23 DEBUG --- stdout --- 08:35:23 DEBUG 2023-08-12T07:14:38Z 08:35:23 DEBUG --- stderr --- 08:35:23 DEBUG 08:35:23 INFO 08:35:23 INFO ----------- Check pod ds-idrepo-2 filesystem is accessible ----------- 08:35:23 INFO 08:35:23 INFO [loop_until]: kubectl --namespace=xlou exec ds-idrepo-2 --container ds -- ls / | grep "bin" 08:35:23 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 08:35:23 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 08:35:23 INFO [loop_until]: OK (rc = 0) 08:35:23 DEBUG --- stdout --- 08:35:23 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 08:35:23 DEBUG --- stderr --- 08:35:23 DEBUG 08:35:23 INFO 08:35:23 INFO ----------------- Check pod ds-idrepo-2 restart count ----------------- 08:35:23 INFO 08:35:23 INFO [loop_until]: kubectl --namespace=xlou get pod ds-idrepo-2 --output jsonpath={.status.containerStatuses[*].restartCount} 08:35:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:23 INFO [loop_until]: OK (rc = 0) 08:35:23 DEBUG --- stdout --- 08:35:23 DEBUG 0 08:35:23 DEBUG --- stderr --- 08:35:23 DEBUG 08:35:23 INFO Pod ds-idrepo-2 has been restarted 0 times. 
08:35:23 INFO 08:35:23 INFO --------------------- Get expected number of pods --------------------- 08:35:23 INFO 08:35:23 INFO [loop_until]: kubectl --namespace=xlou get statefulsets --selector app=ds-cts --output jsonpath={.items[*].spec.replicas} 08:35:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:23 INFO [loop_until]: OK (rc = 0) 08:35:23 DEBUG --- stdout --- 08:35:23 DEBUG 3 08:35:23 DEBUG --- stderr --- 08:35:23 DEBUG 08:35:23 INFO 08:35:23 INFO ---------------------------- Get pod list ---------------------------- 08:35:23 INFO 08:35:23 INFO [loop_until]: kubectl --namespace=xlou get pods --selector app=ds-cts --output jsonpath={.items[*].metadata.name} 08:35:23 INFO [loop_until]: (max_time=180, interval=10, expected_rc=[0] 08:35:23 INFO [loop_until]: OK (rc = 0) 08:35:23 DEBUG --- stdout --- 08:35:23 DEBUG ds-cts-0 ds-cts-1 ds-cts-2 08:35:23 DEBUG --- stderr --- 08:35:23 DEBUG 08:35:23 INFO 08:35:23 INFO -------------------- Check pod ds-cts-0 is running -------------------- 08:35:23 INFO 08:35:23 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.phase} | grep "Running" 08:35:23 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 08:35:23 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 08:35:23 INFO [loop_until]: OK (rc = 0) 08:35:23 DEBUG --- stdout --- 08:35:23 DEBUG Running 08:35:23 DEBUG --- stderr --- 08:35:23 DEBUG 08:35:23 INFO 08:35:23 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-0 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 08:35:23 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 08:35:23 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 08:35:23 INFO [loop_until]: OK (rc = 0) 08:35:23 DEBUG --- stdout --- 08:35:23 DEBUG true 08:35:23 DEBUG --- stderr --- 08:35:23 DEBUG 08:35:23 INFO 08:35:23 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-0 --output jsonpath={.status.startTime} 08:35:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:23 INFO [loop_until]: OK (rc = 0) 08:35:23 DEBUG --- stdout --- 08:35:23 DEBUG 2023-08-12T06:52:43Z 08:35:23 DEBUG --- stderr --- 08:35:23 DEBUG 08:35:23 INFO 08:35:23 INFO ------------- Check pod ds-cts-0 filesystem is accessible ------------- 08:35:23 INFO 08:35:23 INFO [loop_until]: kubectl --namespace=xlou exec ds-cts-0 --container ds -- ls / | grep "bin" 08:35:23 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 08:35:23 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 08:35:23 INFO [loop_until]: OK (rc = 0) 08:35:23 DEBUG --- stdout --- 08:35:23 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 08:35:23 DEBUG --- stderr --- 08:35:23 DEBUG 08:35:23 INFO 08:35:23 INFO ------------------ Check pod ds-cts-0 restart count ------------------ 08:35:23 INFO 08:35:23 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-0 --output jsonpath={.status.containerStatuses[*].restartCount} 08:35:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:23 INFO [loop_until]: OK (rc = 0) 08:35:23 DEBUG --- stdout --- 08:35:23 DEBUG 0 08:35:23 DEBUG --- stderr --- 08:35:23 DEBUG 08:35:23 INFO Pod ds-cts-0 has been restarted 0 times. 
08:35:23 INFO 08:35:23 INFO -------------------- Check pod ds-cts-1 is running -------------------- 08:35:23 INFO 08:35:23 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.phase} | grep "Running" 08:35:23 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 08:35:23 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 08:35:23 INFO [loop_until]: OK (rc = 0) 08:35:23 DEBUG --- stdout --- 08:35:23 DEBUG Running 08:35:23 DEBUG --- stderr --- 08:35:23 DEBUG 08:35:23 INFO 08:35:23 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-1 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 08:35:23 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 08:35:23 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 08:35:23 INFO [loop_until]: OK (rc = 0) 08:35:23 DEBUG --- stdout --- 08:35:23 DEBUG true 08:35:23 DEBUG --- stderr --- 08:35:23 DEBUG 08:35:23 INFO 08:35:23 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-1 --output jsonpath={.status.startTime} 08:35:23 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:23 INFO [loop_until]: OK (rc = 0) 08:35:23 DEBUG --- stdout --- 08:35:23 DEBUG 2023-08-12T06:53:06Z 08:35:23 DEBUG --- stderr --- 08:35:23 DEBUG 08:35:23 INFO 08:35:23 INFO ------------- Check pod ds-cts-1 filesystem is accessible ------------- 08:35:23 INFO 08:35:23 INFO [loop_until]: kubectl --namespace=xlou exec ds-cts-1 --container ds -- ls / | grep "bin" 08:35:23 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 08:35:24 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 08:35:24 INFO [loop_until]: OK (rc = 0) 08:35:24 DEBUG --- stdout --- 08:35:24 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 08:35:24 DEBUG --- stderr --- 08:35:24 DEBUG 08:35:24 INFO 08:35:24 INFO ------------------ Check pod ds-cts-1 restart count ------------------ 08:35:24 INFO 08:35:24 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-1 --output jsonpath={.status.containerStatuses[*].restartCount} 08:35:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:24 INFO [loop_until]: OK (rc = 0) 08:35:24 DEBUG --- stdout --- 08:35:24 DEBUG 0 08:35:24 DEBUG --- stderr --- 08:35:24 DEBUG 08:35:24 INFO Pod ds-cts-1 has been restarted 0 times. 
08:35:24 INFO 08:35:24 INFO -------------------- Check pod ds-cts-2 is running -------------------- 08:35:24 INFO 08:35:24 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.phase} | grep "Running" 08:35:24 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 08:35:24 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 08:35:24 INFO [loop_until]: OK (rc = 0) 08:35:24 DEBUG --- stdout --- 08:35:24 DEBUG Running 08:35:24 DEBUG --- stderr --- 08:35:24 DEBUG 08:35:24 INFO 08:35:24 INFO [loop_until]: kubectl --namespace=xlou get pods ds-cts-2 -o=jsonpath={.status.containerStatuses[*].ready} | grep "true" 08:35:24 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 08:35:24 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 08:35:24 INFO [loop_until]: OK (rc = 0) 08:35:24 DEBUG --- stdout --- 08:35:24 DEBUG true 08:35:24 DEBUG --- stderr --- 08:35:24 DEBUG 08:35:24 INFO 08:35:24 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-2 --output jsonpath={.status.startTime} 08:35:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:24 INFO [loop_until]: OK (rc = 0) 08:35:24 DEBUG --- stdout --- 08:35:24 DEBUG 2023-08-12T06:53:29Z 08:35:24 DEBUG --- stderr --- 08:35:24 DEBUG 08:35:24 INFO 08:35:24 INFO ------------- Check pod ds-cts-2 filesystem is accessible ------------- 08:35:24 INFO 08:35:24 INFO [loop_until]: kubectl --namespace=xlou exec ds-cts-2 --container ds -- ls / | grep "bin" 08:35:24 INFO [loop_until]: (max_time=360, interval=5, expected_rc=[0] 08:35:24 INFO [loop_until]: Function succeeded after 0s (rc=0) - expected pattern found 08:35:24 INFO [loop_until]: OK (rc = 0) 08:35:24 DEBUG --- stdout --- 08:35:24 DEBUG Dockerfile.java-17 bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 08:35:24 DEBUG --- stderr --- 08:35:24 DEBUG 08:35:24 INFO 08:35:24 INFO ------------------ Check pod ds-cts-2 restart count ------------------ 08:35:24 INFO 08:35:24 INFO [loop_until]: kubectl --namespace=xlou get pod ds-cts-2 --output jsonpath={.status.containerStatuses[*].restartCount} 08:35:24 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:24 INFO [loop_until]: OK (rc = 0) 08:35:24 DEBUG --- stdout --- 08:35:24 DEBUG 0 08:35:24 DEBUG --- stderr --- 08:35:24 DEBUG 08:35:24 INFO Pod ds-cts-2 has been restarted 0 times. * Serving Flask app 'lodemon_run' * Debug mode: off WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead. 
* Running on all addresses (0.0.0.0) * Running on http://127.0.0.1:8080 * Running on http://10.106.45.53:8080 Press CTRL+C to quit 08:35:55 INFO 08:35:55 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 08:35:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:55 INFO [loop_until]: OK (rc = 0) 08:35:55 DEBUG --- stdout --- 08:35:55 DEBUG {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 08:35:55 DEBUG --- stderr --- 08:35:55 DEBUG 08:35:55 INFO 08:35:55 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 08:35:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:55 INFO [loop_until]: OK (rc = 0) 08:35:55 DEBUG --- stdout --- 08:35:55 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 08:35:55 DEBUG --- stderr --- 08:35:55 DEBUG 08:35:55 INFO 08:35:55 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 08:35:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:55 INFO [loop_until]: OK (rc = 0) 08:35:55 DEBUG --- stdout --- 08:35:55 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 08:35:55 DEBUG --- stderr --- 08:35:55 DEBUG 08:35:55 INFO 08:35:55 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 08:35:55 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:55 INFO [loop_until]: OK (rc = 0) 08:35:55 DEBUG --- stdout --- 08:35:55 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 08:35:55 DEBUG --- stderr --- 08:35:55 DEBUG 08:35:56 INFO 08:35:56 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 08:35:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:56 INFO [loop_until]: OK (rc = 0) 08:35:56 DEBUG --- stdout --- 08:35:56 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 08:35:56 DEBUG --- stderr --- 08:35:56 DEBUG 08:35:56 INFO 08:35:56 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 08:35:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:56 INFO [loop_until]: OK (rc = 0) 08:35:56 DEBUG --- stdout --- 08:35:56 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 08:35:56 DEBUG --- stderr --- 08:35:56 DEBUG 08:35:56 INFO 08:35:56 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 08:35:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:56 INFO [loop_until]: OK (rc = 0) 08:35:56 DEBUG --- stdout --- 08:35:56 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 08:35:56 DEBUG --- stderr --- 08:35:56 DEBUG 08:35:56 INFO 08:35:56 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 08:35:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:56 INFO [loop_until]: OK (rc = 0) 08:35:56 DEBUG --- stdout --- 08:35:56 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 08:35:56 DEBUG --- stderr --- 08:35:56 DEBUG 08:35:56 INFO 08:35:56 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 08:35:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:56 INFO [loop_until]: OK (rc = 0) 08:35:56 DEBUG --- stdout --- 08:35:56 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 08:35:56 DEBUG --- stderr --- 08:35:56 DEBUG 08:35:56 INFO 08:35:56 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 08:35:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:56 INFO [loop_until]: OK (rc = 0) 08:35:56 DEBUG --- stdout --- 08:35:56 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 08:35:56 DEBUG --- stderr --- 08:35:56 DEBUG 08:35:56 INFO 08:35:56 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 08:35:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:56 INFO [loop_until]: OK (rc = 0) 08:35:56 DEBUG --- stdout --- 08:35:56 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 08:35:56 DEBUG --- stderr --- 08:35:56 DEBUG 08:35:56 INFO 08:35:56 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 08:35:56 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:57 INFO [loop_until]: OK (rc = 0) 08:35:57 DEBUG --- stdout --- 08:35:57 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 08:35:57 DEBUG --- stderr --- 08:35:57 DEBUG 08:35:57 INFO 08:35:57 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 08:35:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:57 INFO [loop_until]: OK (rc = 0) 08:35:57 DEBUG --- stdout --- 08:35:57 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 08:35:57 DEBUG --- stderr --- 08:35:57 DEBUG 08:35:57 INFO 08:35:57 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 08:35:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:57 INFO [loop_until]: OK (rc = 0) 08:35:57 DEBUG --- stdout --- 08:35:57 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 08:35:57 DEBUG --- stderr --- 08:35:57 DEBUG 08:35:57 INFO 08:35:57 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 08:35:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:57 INFO [loop_until]: OK (rc = 0) 08:35:57 DEBUG --- stdout --- 08:35:57 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 08:35:57 DEBUG --- stderr --- 08:35:57 DEBUG 08:35:57 INFO 08:35:57 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 08:35:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:57 INFO [loop_until]: OK (rc = 0) 08:35:57 DEBUG --- stdout --- 08:35:57 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 08:35:57 DEBUG --- stderr --- 08:35:57 DEBUG 08:35:57 INFO 08:35:57 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 08:35:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:57 INFO [loop_until]: OK (rc = 0) 08:35:57 DEBUG --- stdout --- 08:35:57 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 08:35:57 DEBUG --- stderr --- 08:35:57 DEBUG 08:35:57 INFO 08:35:57 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 08:35:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:57 INFO [loop_until]: OK (rc = 0) 08:35:57 DEBUG --- stdout --- 08:35:57 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 08:35:57 DEBUG --- stderr --- 08:35:57 DEBUG 08:35:57 INFO 08:35:57 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 08:35:57 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:57 INFO [loop_until]: OK (rc = 0) 08:35:57 DEBUG --- stdout --- 08:35:57 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 08:35:57 DEBUG --- stderr --- 08:35:57 DEBUG 08:35:58 INFO 08:35:58 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 08:35:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:58 INFO [loop_until]: OK (rc = 0) 08:35:58 DEBUG --- stdout --- 08:35:58 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 08:35:58 DEBUG --- stderr --- 08:35:58 DEBUG 08:35:58 INFO 08:35:58 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 08:35:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:58 INFO [loop_until]: OK (rc = 0) 08:35:58 DEBUG --- stdout --- 08:35:58 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 08:35:58 DEBUG --- stderr --- 08:35:58 DEBUG 08:35:58 INFO 08:35:58 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 08:35:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:58 INFO [loop_until]: OK (rc = 0) 08:35:58 DEBUG --- stdout --- 08:35:58 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 08:35:58 DEBUG --- stderr --- 08:35:58 DEBUG 08:35:58 INFO 08:35:58 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 08:35:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:58 INFO [loop_until]: OK (rc = 0) 08:35:58 DEBUG --- stdout --- 08:35:58 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 08:35:58 DEBUG --- stderr --- 08:35:58 DEBUG 08:35:58 INFO 08:35:58 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 08:35:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:58 INFO [loop_until]: OK (rc = 0) 08:35:58 DEBUG --- stdout --- 08:35:58 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 08:35:58 DEBUG --- stderr --- 08:35:58 DEBUG 08:35:58 INFO 08:35:58 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 08:35:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:58 INFO [loop_until]: OK (rc = 0) 08:35:58 DEBUG --- stdout --- 08:35:58 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 08:35:58 DEBUG --- stderr --- 08:35:58 DEBUG 08:35:58 INFO 08:35:58 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 08:35:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:58 INFO [loop_until]: OK (rc = 0) 08:35:58 DEBUG --- stdout --- 08:35:58 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 08:35:58 DEBUG --- stderr --- 08:35:58 DEBUG 08:35:58 INFO 08:35:58 INFO [loop_until]: kubectl get services -o=jsonpath='{.items[?(@.metadata.labels.app=="kube-prometheus-stack-prometheus")]}' --all-namespaces 08:35:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:58 INFO [loop_until]: OK (rc = 0) 08:35:58 DEBUG --- stdout --- 08:35:58 DEBUG 
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\":true}","meta.helm.sh/release-name":"prometheus-operator","meta.helm.sh/release-namespace":"monitoring"},"creationTimestamp":"2023-05-27T02:35:19Z","labels":{"app":"kube-prometheus-stack-prometheus","app.kubernetes.io/instance":"prometheus-operator","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/part-of":"kube-prometheus-stack","app.kubernetes.io/version":"46.4.1","chart":"kube-prometheus-stack-46.4.1","heritage":"Helm","release":"prometheus-operator","self-monitor":"true"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/part-of":{},"f:app.kubernetes.io/version":{},"f:chart":{},"f:heritage":{},"f:release":{},"f:self-monitor":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":9090,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}},"manager":"helm","operation":"Update","time":"2023-05-27T02:35:19Z"}],"name":"prometheus-operator-kube-p-prometheus","namespace":"monitoring","resourceVersion":"7148","uid":"eb1f35e8-cd91-4a12-8fe3-2e8bdf841da5"},"spec":{"clusterIP":"10.106.49.67","clusterIPs":["10.106.49.67"],"internalTrafficPolicy":"Cluster","ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"name":"http-web","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app.kubernetes.io/name":"prometheus","prometheus":"prometheus-operator-kube-p-prometheus"},"sessionAffinity":"None","type":"ClusterIP"},"status":{"loadBalancer":{}}} 08:35:58 DEBUG --- stderr --- 08:35:58 DEBUG 08:35:58 INFO Initializing monitoring instance threads 08:35:58 DEBUG Monitoring instance thread list: [, , , , , , , , , , , , , , , , , , , , , , , , , , , , ] 08:35:58 INFO Starting instance threads 08:35:58 INFO 08:35:58 INFO Thread started 08:35:58 INFO [loop_until]: kubectl --namespace=xlou top node 08:35:58 INFO 08:35:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:58 INFO Thread started 08:35:58 INFO [loop_until]: kubectl --namespace=xlou top pods 08:35:58 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:35:58 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825758" 08:35:58 INFO Thread started 08:35:58 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825758" 08:35:58 INFO Thread started 08:35:58 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825758" 08:35:58 INFO Thread started 08:35:58 INFO [http_cmd]: curl --insecure -L --request GET 
"http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825758" 08:35:58 INFO Thread started 08:35:58 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691825758" 08:35:58 INFO Thread started 08:35:59 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691825759" 08:35:59 INFO Thread started 08:35:59 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691825759" 08:35:59 INFO Thread started 08:35:59 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691825759" 08:35:59 INFO Thread started 08:35:59 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691825759" 08:35:59 INFO Thread started 08:35:59 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759" 08:35:59 INFO Thread started 08:35:59 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759" 08:35:59 INFO Thread started 08:35:59 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759" 08:35:59 INFO Thread started 08:35:59 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759" 08:35:59 INFO Thread started 08:35:59 INFO [http_cmd]: curl --insecure -L --request GET 
"http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759" 08:35:59 INFO Thread started 08:35:59 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759" 08:35:59 INFO Thread started 08:35:59 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759" 08:35:59 INFO Thread started 08:35:59 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759" 08:35:59 INFO Thread started 08:35:59 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759" 08:35:59 INFO Thread started 08:35:59 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759" 08:35:59 INFO Thread started 08:35:59 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759" 08:35:59 INFO Thread started Exception in thread Thread-23: 08:35:59 INFO Thread started Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner Exception in thread Thread-24: 08:35:59 INFO Thread started Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner 08:35:59 INFO Thread started Exception in thread Thread-25: 08:35:59 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691825759" self.run() 08:35:59 INFO Thread started Traceback (most recent call last): 08:35:59 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691825759" File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner File "/usr/local/lib/python3.9/threading.py", line 
910, in run 08:35:59 INFO Thread started self.run() 08:35:59 INFO Thread started Exception in thread Thread-28: 08:35:59 INFO [http_cmd]: curl --insecure -L --request GET "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090/api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691825759" 08:35:59 INFO Thread started File "/usr/local/lib/python3.9/threading.py", line 910, in run 08:35:59 INFO All threads has been started Traceback (most recent call last): self.run() File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self._target(*self._args, **self._kwargs) 127.0.0.1 - - [12/Aug/2023 08:35:59] "GET /monitoring/start HTTP/1.1" 200 - self._target(*self._args, **self._kwargs) File "/usr/local/lib/python3.9/threading.py", line 910, in run File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop self.run() File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop self._target(*self._args, **self._kwargs) instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run instance.run() self._target(*self._args, **self._kwargs) if self.prom_data['functions']: File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop KeyError: 'functions' File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run if self.prom_data['functions']: 08:35:59 INFO [loop_until]: OK (rc = 0) instance.run() 08:35:59 DEBUG --- stdout --- if self.prom_data['functions']: 08:35:59 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 23m 2350Mi am-55f77847b7-mbr4x 16m 2417Mi am-55f77847b7-mfzwm 32m 3266Mi ds-cts-0 8m 357Mi ds-cts-1 9m 367Mi ds-cts-2 8m 342Mi ds-idrepo-0 16m 10302Mi ds-idrepo-1 23m 10309Mi ds-idrepo-2 37m 10303Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 10m 1381Mi idm-65858d8c4c-gpz8d 7m 3446Mi lodemon-6cd9c44bd4-vnqvr 271m 60Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 1m 14Mi 08:35:59 DEBUG --- stderr --- File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 285, in run 08:35:59 DEBUG KeyError: 'functions' KeyError: 'functions' if self.prom_data['functions']: KeyError: 'functions' 08:35:59 INFO [loop_until]: OK (rc = 0) 08:35:59 DEBUG --- stdout --- 08:35:59 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 125m 0% 1344Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 71m 0% 4284Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 73m 0% 3549Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 74m 0% 3460Mi 5% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 4758Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 118m 0% 2113Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 72m 0% 2637Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1097Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 86m 0% 10945Mi 18% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 66m 0% 10937Mi 18% 
gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1070Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 77m 0% 10940Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 66m 0% 1623Mi 2% 08:35:59 DEBUG --- stderr --- 08:35:59 DEBUG 08:36:00 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825758 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:36:00 WARNING Response is NONE 08:36:00 DEBUG Exception is preset. Setting retry_loop to true 08:36:00 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:36:02 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825758 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:36:02 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825758 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:36:02 WARNING Response is NONE 08:36:02 WARNING Response is NONE 08:36:02 DEBUG Exception is preset. Setting retry_loop to true 08:36:02 DEBUG Exception is preset. Setting retry_loop to true 08:36:02 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:36:02 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:36:06 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825758 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:36:06 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691825758 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:36:06 WARNING Response is NONE 08:36:06 WARNING Response is NONE 08:36:06 DEBUG Exception is preset. Setting retry_loop to true 08:36:06 DEBUG Exception is preset. Setting retry_loop to true 08:36:06 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:36:06 WARNING We received known exception. 
Trying to recover, sleeping for 10 secs before retry... 08:36:06 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:36:06 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:36:06 WARNING Response is NONE 08:36:06 WARNING Response is NONE 08:36:06 DEBUG Exception is preset. Setting retry_loop to true 08:36:06 DEBUG Exception is preset. Setting retry_loop to true 08:36:06 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:36:06 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:36:11 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825758 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:36:11 WARNING Response is NONE 08:36:11 DEBUG Exception is preset. Setting retry_loop to true 08:36:11 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:36:13 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825758 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:36:13 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825758 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:36:13 WARNING Response is NONE 08:36:13 WARNING Response is NONE 08:36:13 DEBUG Exception is preset. Setting retry_loop to true 08:36:13 DEBUG Exception is preset. Setting retry_loop to true 08:36:13 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:36:13 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
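The [http_cmd] entries above, and the URLs echoed in these warnings, are ordinary Prometheus instant queries against the /api/v1/query endpoint, with the PromQL expression percent-encoded into the query parameter. As a rough illustration only (not the lodemon implementation), the same request can be issued from Python with requests, which handles the encoding:

import requests

PROM = "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090"

# Instant query: per-pod CPU usage rate over the last 60s in the xlou namespace.
resp = requests.get(
    f"{PROM}/api/v1/query",
    params={
        "query": "sum(rate(container_cpu_usage_seconds_total{namespace='xlou'}[60s])) by (pod)",
        "time": 1691825758,  # evaluation timestamp taken from the log above
    },
    timeout=10,
)
resp.raise_for_status()
for sample in resp.json()["data"]["result"]:
    pod = sample["metric"].get("pod", "<unknown>")
    value = sample["value"][1]  # value pair is [timestamp, value-as-string]
    print(pod, value)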
08:36:14 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:36:14 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:36:14 WARNING Response is NONE 08:36:14 WARNING Response is NONE 08:36:14 DEBUG Exception is preset. Setting retry_loop to true 08:36:14 DEBUG Exception is preset. Setting retry_loop to true 08:36:14 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:36:14 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:36:17 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825758 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:36:17 WARNING Response is NONE 08:36:17 DEBUG Exception is preset. Setting retry_loop to true 08:36:17 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:36:19 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691825758 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:36:19 WARNING Response is NONE 08:36:19 DEBUG Exception is preset. Setting retry_loop to true 08:36:19 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:36:19 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:36:19 WARNING Response is NONE 08:36:19 DEBUG Exception is preset. Setting retry_loop to true 08:36:19 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
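Every "Connection refused" above is treated as a known transient error: the response is discarded and the call is retried after a 10 second sleep. A minimal sketch of that guarded-retry shape, with an assumed function name, attempt limit and timeout:

import time
import requests

def query_prometheus_with_retry(url, params, max_attempts=5, delay=10):
    """Retry transient connection failures, mirroring the WARNING/sleep pattern in this log."""
    last_exc = None
    for attempt in range(1, max_attempts + 1):
        try:
            resp = requests.get(url, params=params, timeout=10)
            resp.raise_for_status()
            return resp.json()
        except requests.exceptions.ConnectionError as exc:
            last_exc = exc
            print(f"WARNING Got connection reset error: {exc}. "
                  f"Trying to recover, sleeping for {delay} secs before retry...")
            time.sleep(delay)
    raise RuntimeError("Prometheus stayed unreachable") from last_exc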
08:36:22 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825758 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:36:22 WARNING Response is NONE 08:36:22 DEBUG Exception is preset. Setting retry_loop to true 08:36:22 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:36:24 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825758 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:36:24 WARNING Response is NONE 08:36:24 DEBUG Exception is preset. Setting retry_loop to true 08:36:24 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:36:25 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:36:25 WARNING Response is NONE 08:36:25 DEBUG Exception is preset. Setting retry_loop to true 08:36:25 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:36:26 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825758 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:36:26 WARNING Response is NONE 08:36:26 DEBUG Exception is preset. Setting retry_loop to true 08:36:26 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:36:28 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825758 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:36:28 WARNING Response is NONE 08:36:28 DEBUG Exception is preset. Setting retry_loop to true 08:36:28 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
08:36:30 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691825758 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:36:30 WARNING Response is NONE 08:36:30 DEBUG Exception is preset. Setting retry_loop to true 08:36:30 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:36:31 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:36:31 WARNING Response is NONE 08:36:31 DEBUG Exception is preset. Setting retry_loop to true 08:36:31 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:36:33 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825758 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:36:33 WARNING Response is NONE 08:36:33 DEBUG Exception is preset. Setting retry_loop to true 08:36:33 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:36:35 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825758 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:36:35 WARNING Response is NONE 08:36:35 DEBUG Exception is preset. Setting retry_loop to true 08:36:35 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:36:37 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825758 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:36:37 WARNING Response is NONE 08:36:37 DEBUG Exception is preset. Setting retry_loop to true 08:36:37 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
08:36:39 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825758 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:36:39 WARNING Response is NONE 08:36:39 DEBUG Exception is preset. Setting retry_loop to true 08:36:39 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:36:41 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691825758 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:36:41 WARNING Response is NONE 08:36:41 DEBUG Exception is preset. Setting retry_loop to true 08:36:41 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:36:42 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:36:42 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:36:42 WARNING Response is NONE 08:36:42 WARNING Response is NONE 08:36:42 DEBUG Exception is preset. Setting retry_loop to true 08:36:42 DEBUG Exception is preset. Setting retry_loop to true 08:36:42 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:36:42 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:36:44 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825758 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:36:44 WARNING Response is NONE 08:36:44 DEBUG Exception is preset. Setting retry_loop to true 08:36:44 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
08:36:46 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825758 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:36:46 WARNING Response is NONE 08:36:46 DEBUG Exception is preset. Setting retry_loop to true 08:36:46 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:36:48 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:36:48 WARNING Response is NONE 08:36:48 DEBUG Exception is preset. Setting retry_loop to true 08:36:48 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:36:50 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825758 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:36:50 WARNING Response is NONE 08:36:50 DEBUG Exception is preset. Setting retry_loop to true 08:36:50 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:36:52 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691825758 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:36:52 WARNING Response is NONE 08:36:52 DEBUG Exception is preset. Setting retry_loop to true 08:36:52 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:36:53 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:36:53 WARNING Response is NONE 08:36:53 DEBUG Exception is preset. Setting retry_loop to true 08:36:53 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
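Every monitoring thread in this stretch is cycling through the same recovery path: the connection to prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090 is refused, the thread treats it as a known transient error, sleeps 10 seconds, and retries; after five failed attempts it gives up and proceeds with an empty response (the "Hit retry pattern for a 5 time" messages further down). A minimal sketch of that behaviour, assuming a plain requests client rather than the project's HttpCmd wrapper; query_prometheus and PROMETHEUS_URL are illustrative names, not the actual implementation:

# Sketch only: mirrors the retry cycle visible in the log, not the real HttpCmd code.
import time
import requests

PROMETHEUS_URL = "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090"

def query_prometheus(promql, ts, max_retries=5, sleep_secs=10):
    """Run an instant query against the Prometheus HTTP API with naive retries."""
    for attempt in range(1, max_retries + 1):
        try:
            resp = requests.get(
                f"{PROMETHEUS_URL}/api/v1/query",
                params={"query": promql, "time": ts},
                timeout=10,
            )
            return resp.json()
        except requests.exceptions.ConnectionError as exc:
            if attempt == max_retries:
                # "Hit retry pattern for a 5 time. Proceeding to check response anyway."
                return None
            # "We received known exception. Trying to recover, sleeping for 10 secs before retry..."
            print(f"WARNING transient connection error ({exc}), retrying in {sleep_secs}s")
            time.sleep(sleep_secs)

result = query_prometheus(
    "sum(rate(container_cpu_usage_seconds_total{namespace='xlou'}[60s]))by(pod)",
    1691825758,
)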
08:36:55 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_cpu_usage_seconds_total%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825758 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:36:55 WARNING Response is NONE 08:36:55 DEBUG Exception is preset. Setting retry_loop to true 08:36:55 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-3: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 08:36:57 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_writes_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825758 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:36:57 WARNING Response is NONE 08:36:57 DEBUG Exception is preset. Setting retry_loop to true 08:36:57 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-6: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
08:36:59 INFO
08:36:59 INFO [loop_until]: kubectl --namespace=xlou top pods
08:36:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
08:36:59 INFO
08:36:59 INFO [loop_until]: kubectl --namespace=xlou top node
08:36:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
08:36:59 INFO [loop_until]: OK (rc = 0)
08:36:59 DEBUG --- stdout ---
08:36:59 DEBUG NAME                           CPU(cores)   MEMORY(bytes)
               admin-ui-587fc66dd5-9h5wb      1m           4Mi
               am-55f77847b7-2vpdz            21m          2352Mi
               am-55f77847b7-mbr4x            16m          2417Mi
               am-55f77847b7-mfzwm            11m          3267Mi
               ds-cts-0                       10m          371Mi
               ds-cts-1                       12m          369Mi
               ds-cts-2                       10m          341Mi
               ds-idrepo-0                    371m         10308Mi
               ds-idrepo-1                    31m          10315Mi
               ds-idrepo-2                    131m         10310Mi
               end-user-ui-6845bc78c7-jrqhg   1m           3Mi
               idm-65858d8c4c-5vh78           11m          1393Mi
               idm-65858d8c4c-gpz8d           9m           3443Mi
               lodemon-6cd9c44bd4-vnqvr       3m           66Mi
               login-ui-74d6fb46c-jtvtc       1m           3Mi
               overseer-0-64679cf868-xscwh    187m         48Mi
08:36:59 DEBUG --- stderr ---
08:36:59 DEBUG
08:36:59 INFO [loop_until]: OK (rc = 0)
08:36:59 DEBUG --- stdout ---
08:36:59 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
               gke-xlou-cdm-default-pool-f05840a3-2nsn   76m          0%     1351Mi          2%
               gke-xlou-cdm-default-pool-f05840a3-5pbc   68m          0%     4285Mi          7%
               gke-xlou-cdm-default-pool-f05840a3-976h   69m          0%     3552Mi          6%
               gke-xlou-cdm-default-pool-f05840a3-9p4b   71m          0%     3460Mi          5%
               gke-xlou-cdm-default-pool-f05840a3-bf2g   76m          0%     4755Mi          8%
               gke-xlou-cdm-default-pool-f05840a3-h81k   123m         0%     2116Mi          3%
               gke-xlou-cdm-default-pool-f05840a3-tnc9   67m          0%     2650Mi          4%
               gke-xlou-cdm-ds-32e4dcb1-1l6p             66m          0%     1100Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-4z9d             96m          0%     10957Mi         18%
               gke-xlou-cdm-ds-32e4dcb1-8bsn             62m          0%     1091Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-b374             275m         1%     10946Mi         18%
               gke-xlou-cdm-ds-32e4dcb1-n920             59m          0%     1067Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-x4wx             82m          0%     10943Mi         18%
               gke-xlou-cdm-frontend-a8771548-k40m       250m         1%     1623Mi          2%
08:36:59 DEBUG --- stderr ---
08:36:59 DEBUG
08:36:59 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
08:36:59 WARNING Response is NONE
08:36:59 DEBUG Exception is preset. Setting retry_loop to true
08:36:59 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
08:37:01 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_memory_usage_bytes%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825758 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')).
Checking if error is transient one 08:37:01 WARNING Response is NONE 08:37:01 DEBUG Exception is preset. Setting retry_loop to true 08:37:01 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-4: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 08:37:03 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:37:03 WARNING Response is NONE 08:37:03 DEBUG Exception is preset. Setting retry_loop to true 08:37:03 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:37:04 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:37:04 WARNING Response is NONE 08:37:04 DEBUG Exception is preset. Setting retry_loop to true 08:37:04 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:37:05 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_cpu_usage_seconds_total%3Asum_irate%29by%28node%29&time=1691825758 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:37:05 WARNING Response is NONE 08:37:05 DEBUG Exception is preset. Setting retry_loop to true 08:37:05 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
Exception in thread Thread-7: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 08:37:08 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:37:08 WARNING Response is NONE 08:37:08 DEBUG Exception is preset. Setting retry_loop to true 08:37:08 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:37:10 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:37:10 WARNING Response is NONE 08:37:10 DEBUG Exception is preset. Setting retry_loop to true 08:37:10 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:37:14 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:37:14 WARNING Response is NONE 08:37:14 DEBUG Exception is preset. Setting retry_loop to true 08:37:14 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
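The recurring "Exception in thread Thread-N" tracebacks (Thread-3, Thread-6, Thread-4 and Thread-7 above, more below) show a secondary bug: once http_cmd.get() raises FailException, monitoring.py line 315 tries to report it with self.logger(f'Query: {query} failed with: {e}'), i.e. it calls the LodestarLogger instance itself. The object is evidently not callable, so the error handler raises TypeError and the monitoring thread dies. A small sketch of the failure mode and one plausible fix, assuming LodestarLogger exposes ordinary level methods; its real interface is not visible in this log:

# Minimal reproduction of the secondary failure, assuming LodestarLogger wraps a
# standard-library logger without defining __call__ (an assumption; the real
# class is not shown in this log).
import logging

class LodestarLogger:
    def __init__(self, name):
        self._log = logging.getLogger(name)

    def warning(self, msg):
        self._log.warning(msg)

logger = LodestarLogger("lodemon")

query = "sum(rate(am_authentication_count{namespace='xlou'}[60s]))by(pod)"
e = Exception("Failed to obtain response from server...")

# What the traceback shows monitoring.py doing -- calling the logger object itself:
# logger(f'Query: {query} failed with: {e}')   # TypeError: 'LodestarLogger' object is not callable

# Calling a logging method instead avoids the TypeError:
logger.warning(f"Query: {query} failed with: {e}")

Either giving LodestarLogger a __call__ method or switching the call site to a level method would let the thread log the failed query instead of dying in its own error handler.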
08:37:15 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amIdentityStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:37:15 WARNING Response is NONE 08:37:15 DEBUG Exception is preset. Setting retry_loop to true 08:37:15 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-15: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 08:37:18 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825758 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:37:18 WARNING Response is NONE 08:37:18 DEBUG Exception is preset. Setting retry_loop to true 08:37:18 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:37:19 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:37:19 WARNING Response is NONE 08:37:19 DEBUG Exception is preset. Setting retry_loop to true 08:37:19 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
08:37:21 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:37:21 WARNING Response is NONE 08:37:21 DEBUG Exception is preset. Setting retry_loop to true 08:37:21 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:37:25 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:37:25 WARNING Response is NONE 08:37:25 DEBUG Exception is preset. Setting retry_loop to true 08:37:25 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:37:29 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28container_fs_reads_total%7Bnamespace%3D%27xlou%27%2Cjob%3D%27kubelet%27%2Cmetrics_path%3D%27/metrics/cadvisor%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825758 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:37:29 WARNING Response is NONE 08:37:29 DEBUG Exception is preset. Setting retry_loop to true 08:37:29 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-5: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 08:37:30 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27cfgStore%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:37:30 WARNING Response is NONE 08:37:30 DEBUG Exception is preset. Setting retry_loop to true 08:37:30 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-16: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 08:37:32 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_cpu_seconds_total%7Bmode%3D%27iowait%27%7D%5B60s%5D%29%29by%28instance%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:37:32 WARNING Response is NONE 08:37:32 DEBUG Exception is preset. Setting retry_loop to true 08:37:32 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
Exception in thread Thread-9: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 08:37:33 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:37:33 WARNING Response is NONE 08:37:33 DEBUG Exception is preset. Setting retry_loop to true 08:37:33 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:37:36 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:37:36 WARNING Response is NONE 08:37:36 DEBUG Exception is preset. Setting retry_loop to true 08:37:36 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:37:44 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:37:44 WARNING Response is NONE 08:37:44 DEBUG Exception is preset. Setting retry_loop to true 08:37:44 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:37:47 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 08:37:47 WARNING Response is NONE 08:37:47 DEBUG Exception is preset. Setting retry_loop to true 08:37:47 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:37:55 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:37:55 WARNING Response is NONE 08:37:55 DEBUG Exception is preset. Setting retry_loop to true 08:37:55 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:37:58 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_reaper_search_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:37:58 WARNING Response is NONE 08:37:58 DEBUG Exception is preset. Setting retry_loop to true Exception in thread Thread-19: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run 08:37:58 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
08:37:59 INFO
08:37:59 INFO [loop_until]: kubectl --namespace=xlou top pods
08:37:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
08:37:59 INFO
08:37:59 INFO [loop_until]: kubectl --namespace=xlou top node
08:37:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
08:37:59 INFO [loop_until]: OK (rc = 0)
08:37:59 DEBUG --- stdout ---
08:37:59 DEBUG NAME                           CPU(cores)   MEMORY(bytes)
               admin-ui-587fc66dd5-9h5wb      1m           4Mi
               am-55f77847b7-2vpdz            20m          2353Mi
               am-55f77847b7-mbr4x            15m          2418Mi
               am-55f77847b7-mfzwm            13m          3267Mi
               ds-cts-0                       7m           372Mi
               ds-cts-1                       8m           370Mi
               ds-cts-2                       5m           343Mi
               ds-idrepo-0                    24m          10312Mi
               ds-idrepo-1                    21m          10316Mi
               ds-idrepo-2                    24m          10310Mi
               end-user-ui-6845bc78c7-jrqhg   1m           3Mi
               idm-65858d8c4c-5vh78           7m           1404Mi
               idm-65858d8c4c-gpz8d           7m           3443Mi
               lodemon-6cd9c44bd4-vnqvr       2m           65Mi
               login-ui-74d6fb46c-jtvtc       1m           3Mi
               overseer-0-64679cf868-xscwh    1m           48Mi
08:37:59 DEBUG --- stderr ---
08:37:59 DEBUG
08:37:59 INFO [loop_until]: OK (rc = 0)
08:37:59 DEBUG --- stdout ---
08:37:59 DEBUG NAME                                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
               gke-xlou-cdm-default-pool-f05840a3-2nsn   77m          0%     1350Mi          2%
               gke-xlou-cdm-default-pool-f05840a3-5pbc   66m          0%     4286Mi          7%
               gke-xlou-cdm-default-pool-f05840a3-976h   65m          0%     3552Mi          6%
               gke-xlou-cdm-default-pool-f05840a3-9p4b   68m          0%     3465Mi          5%
               gke-xlou-cdm-default-pool-f05840a3-bf2g   74m          0%     4755Mi          8%
               gke-xlou-cdm-default-pool-f05840a3-h81k   119m         0%     2111Mi          3%
               gke-xlou-cdm-default-pool-f05840a3-tnc9   72m          0%     2661Mi          4%
               gke-xlou-cdm-ds-32e4dcb1-1l6p             68m          0%     1099Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-4z9d             80m          0%     10955Mi         18%
               gke-xlou-cdm-ds-32e4dcb1-8bsn             61m          0%     1093Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-b374             75m          0%     10947Mi         18%
               gke-xlou-cdm-ds-32e4dcb1-n920             58m          0%     1069Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-x4wx             85m          0%     10946Mi         18%
               gke-xlou-cdm-frontend-a8771548-k40m       68m          0%     1625Mi          2%
08:37:59 DEBUG --- stderr ---
08:37:59 DEBUG
08:38:06 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_authentication_count%7Bnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
08:38:06 WARNING Response is NONE
08:38:06 DEBUG Exception is preset. Setting retry_loop to true
08:38:06 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-18: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 08:38:09 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 08:38:09 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 08:38:09 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 08:38:09 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). 
Checking if error is transient one 08:38:09 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 08:38:09 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 08:38:09 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 08:38:09 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 08:38:09 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 08:38:09 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). 
Checking if error is transient one 08:38:09 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 08:38:09 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 08:38:09 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out')). Checking if error is transient one 08:38:09 WARNING Response is NONE 08:38:09 WARNING Response is NONE 08:38:09 WARNING Response is NONE 08:38:09 WARNING Response is NONE 08:38:09 WARNING Response is NONE 08:38:09 WARNING Response is NONE 08:38:09 WARNING Response is NONE 08:38:09 WARNING Response is NONE 08:38:09 WARNING Response is NONE 08:38:09 WARNING Response is NONE 08:38:09 WARNING Response is NONE 08:38:09 WARNING Response is NONE 08:38:09 WARNING Response is NONE 08:38:09 DEBUG Exception is preset. Setting retry_loop to true 08:38:09 DEBUG Exception is preset. Setting retry_loop to true 08:38:09 DEBUG Exception is preset. Setting retry_loop to true 08:38:09 DEBUG Exception is preset. Setting retry_loop to true 08:38:09 DEBUG Exception is preset. Setting retry_loop to true 08:38:09 DEBUG Exception is preset. Setting retry_loop to true 08:38:09 DEBUG Exception is preset. Setting retry_loop to true 08:38:09 DEBUG Exception is preset. Setting retry_loop to true 08:38:09 DEBUG Exception is preset. Setting retry_loop to true 08:38:09 DEBUG Exception is preset. Setting retry_loop to true 08:38:09 DEBUG Exception is preset. Setting retry_loop to true 08:38:09 DEBUG Exception is preset. Setting retry_loop to true 08:38:09 DEBUG Exception is preset. Setting retry_loop to true 08:38:09 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:38:09 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:38:09 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:38:09 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:38:09 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:38:09 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:38:09 WARNING We received known exception. 
Trying to recover, sleeping for 10 secs before retry... 08:38:09 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:38:09 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:38:09 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:38:09 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:38:09 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:38:09 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:38:20 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:38:20 WARNING Response is NONE 08:38:20 DEBUG Exception is preset. Setting retry_loop to true 08:38:20 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:38:22 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:38:22 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:38:22 WARNING Response is NONE 08:38:22 WARNING Response is NONE 08:38:22 DEBUG Exception is preset. Setting retry_loop to true 08:38:22 DEBUG Exception is preset. Setting retry_loop to true 08:38:22 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:38:22 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:38:26 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 08:38:26 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:38:26 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:38:26 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:38:26 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:38:26 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:38:26 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:38:26 WARNING Response is NONE 08:38:26 WARNING Response is NONE 08:38:26 WARNING Response is NONE 08:38:26 WARNING Response is NONE 08:38:26 WARNING Response is NONE 08:38:26 WARNING Response is NONE 08:38:26 WARNING Response is NONE 08:38:26 DEBUG Exception is preset. Setting retry_loop to true 08:38:26 DEBUG Exception is preset. Setting retry_loop to true 08:38:26 DEBUG Exception is preset. 
Setting retry_loop to true 08:38:26 DEBUG Exception is preset. Setting retry_loop to true 08:38:26 DEBUG Exception is preset. Setting retry_loop to true 08:38:26 DEBUG Exception is preset. Setting retry_loop to true 08:38:26 DEBUG Exception is preset. Setting retry_loop to true 08:38:26 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:38:26 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:38:26 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:38:26 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:38:26 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:38:26 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:38:26 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:38:31 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:38:31 WARNING Response is NONE 08:38:31 DEBUG Exception is preset. Setting retry_loop to true 08:38:31 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:38:33 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:38:33 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:38:33 WARNING Response is NONE 08:38:33 WARNING Response is NONE 08:38:33 DEBUG Exception is preset. Setting retry_loop to true 08:38:33 DEBUG Exception is preset. Setting retry_loop to true 08:38:33 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:38:33 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:38:34 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 08:38:34 WARNING Response is NONE 08:38:34 DEBUG Exception is preset. Setting retry_loop to true 08:38:34 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:38:34 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:38:34 WARNING Response is NONE 08:38:34 DEBUG Exception is preset. Setting retry_loop to true 08:38:34 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:38:37 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:38:37 WARNING Response is NONE 08:38:37 DEBUG Exception is preset. Setting retry_loop to true 08:38:37 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:38:39 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:38:39 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:38:39 WARNING Response is NONE 08:38:39 WARNING Response is NONE 08:38:39 DEBUG Exception is preset. Setting retry_loop to true 08:38:39 DEBUG Exception is preset. Setting retry_loop to true 08:38:39 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:38:39 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
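The query= parameter in each of these warnings is a URL-encoded PromQL expression. A minimal standard-library Python sketch that decodes one of them for readability; the URL below is copied from one of the warnings above (host and port omitted), and the decoding itself is mechanical:

from urllib.parse import urlparse, parse_qs

# Failing request URL copied from the warnings above (host/port omitted).
url = ("/api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27"
       "%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29"
       "by%28instance%29&time=1691825759")

params = parse_qs(urlparse(url).query)
print(params["query"][0])
# sum(rate(node_disk_io_time_seconds_total{job='node-exporter',device=~'nvme.+|rbd.+|sd.+|vd.+|xvd.+|dasd.+'}[60s]))by(instance)
print(params["time"][0])
# 1691825759  (query evaluation time as a Unix timestamp)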
08:38:42 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:38:42 WARNING Response is NONE 08:38:42 DEBUG Exception is preset. Setting retry_loop to true 08:38:42 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:38:44 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:38:44 WARNING Response is NONE 08:38:44 DEBUG Exception is preset. Setting retry_loop to true 08:38:44 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:38:45 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:38:45 WARNING Response is NONE 08:38:45 DEBUG Exception is preset. Setting retry_loop to true 08:38:45 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:38:46 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:38:46 WARNING Response is NONE 08:38:46 DEBUG Exception is preset. Setting retry_loop to true 08:38:46 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:38:48 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:38:48 WARNING Response is NONE 08:38:48 DEBUG Exception is preset. Setting retry_loop to true 08:38:48 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
08:38:50 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:38:50 WARNING Response is NONE 08:38:50 DEBUG Exception is preset. Setting retry_loop to true 08:38:50 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:38:51 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:38:51 WARNING Response is NONE 08:38:51 DEBUG Exception is preset. Setting retry_loop to true 08:38:51 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:38:53 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:38:53 WARNING Response is NONE 08:38:53 DEBUG Exception is preset. Setting retry_loop to true 08:38:53 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:38:55 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:38:55 WARNING Response is NONE 08:38:55 DEBUG Exception is preset. Setting retry_loop to true 08:38:55 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:38:57 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:38:57 WARNING Response is NONE 08:38:57 DEBUG Exception is preset. Setting retry_loop to true 08:38:57 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
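The repeated WARNING/DEBUG lines come from a per-metric recovery loop: each monitoring thread catches the connection error, reports it, sleeps 10 seconds and retries, and after the fifth attempt gives up ("Hit retry pattern for a 5 time") and surfaces the FailException seen in the tracebacks further down. The helper below is a hypothetical sketch of that behaviour inferred only from these messages; the function name, the use of requests, and the FailException stand-in are assumptions, not the actual Lodestar code.

import time
import requests

class FailException(Exception):
    """Stand-in for shared.lib.utils.exception.FailException from the tracebacks below."""

def fetch_with_retries(url, retries=5, sleep_secs=10, timeout=5):
    # Hypothetical reconstruction of the behaviour behind the repeated log lines:
    # on a connection error, warn, sleep, retry; after the last attempt fall through
    # and report that no response could be obtained.
    response = None
    for attempt in range(1, retries + 1):
        try:
            response = requests.get(url, timeout=timeout)
            break
        except requests.exceptions.ConnectionError as exc:
            print(f"WARNING Got connection reset error: {exc}. Checking if error is transient one")
            print("WARNING Response is NONE")
            if attempt < retries:
                print(f"WARNING We received known exception. Trying to recover, sleeping for {sleep_secs} secs before retry...")
                time.sleep(sleep_secs)
            else:
                print(f"WARNING Hit retry pattern for a {retries} time. Proceeding to check response anyway.")
    if response is None:
        raise FailException('Failed to obtain response from server...')
    return response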
08:38:59 INFO 08:38:59 INFO [loop_until]: kubectl --namespace=xlou top pods 08:38:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:38:59 INFO 08:38:59 INFO [loop_until]: kubectl --namespace=xlou top node 08:38:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:38:59 INFO [loop_until]: OK (rc = 0) 08:38:59 DEBUG --- stdout ---
08:38:59 DEBUG NAME                           CPU(cores)   MEMORY(bytes)
               admin-ui-587fc66dd5-9h5wb      1m           4Mi
               am-55f77847b7-2vpdz            12m          2353Mi
               am-55f77847b7-mbr4x            13m          2421Mi
               am-55f77847b7-mfzwm            13m          3268Mi
               ds-cts-0                       6m           372Mi
               ds-cts-1                       12m          370Mi
               ds-cts-2                       9m           344Mi
               ds-idrepo-0                    28m          10313Mi
               ds-idrepo-1                    34m          10313Mi
               ds-idrepo-2                    49m          10311Mi
               end-user-ui-6845bc78c7-jrqhg   1m           3Mi
               idm-65858d8c4c-5vh78           8m           1412Mi
               idm-65858d8c4c-gpz8d           7m           3443Mi
               lodemon-6cd9c44bd4-vnqvr       3m           66Mi
               login-ui-74d6fb46c-jtvtc       1m           3Mi
               overseer-0-64679cf868-xscwh    273m         98Mi
08:38:59 DEBUG --- stderr --- 08:38:59 DEBUG 08:38:59 INFO [loop_until]: OK (rc = 0) 08:38:59 DEBUG --- stdout ---
08:38:59 DEBUG NAME                                       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
               gke-xlou-cdm-default-pool-f05840a3-2nsn    80m          0%     1349Mi          2%
               gke-xlou-cdm-default-pool-f05840a3-5pbc    65m          0%     4287Mi          7%
               gke-xlou-cdm-default-pool-f05840a3-976h    65m          0%     3550Mi          6%
               gke-xlou-cdm-default-pool-f05840a3-9p4b    67m          0%     3460Mi          5%
               gke-xlou-cdm-default-pool-f05840a3-bf2g    75m          0%     4754Mi          8%
               gke-xlou-cdm-default-pool-f05840a3-h81k    126m         0%     2113Mi          3%
               gke-xlou-cdm-default-pool-f05840a3-tnc9    67m          0%     2668Mi          4%
               gke-xlou-cdm-ds-32e4dcb1-1l6p              66m          0%     1097Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-4z9d              104m         0%     10960Mi         18%
               gke-xlou-cdm-ds-32e4dcb1-8bsn              60m          0%     1107Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-b374              75m          0%     10949Mi         18%
               gke-xlou-cdm-ds-32e4dcb1-n920              61m          0%     1071Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-x4wx              80m          0%     10945Mi         18%
               gke-xlou-cdm-frontend-a8771548-k40m        363m         2%     1624Mi          2%
08:38:59 DEBUG --- stderr --- 08:38:59 DEBUG 08:38:59 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:38:59 WARNING Response is NONE 08:38:59 DEBUG Exception is preset. Setting retry_loop to true 08:38:59 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:39:00 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:39:00 WARNING Response is NONE 08:39:00 DEBUG Exception is preset. Setting retry_loop to true 08:39:00 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
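Two details in these errors narrow the cause: the failures at 08:38:09 were [Errno 110] (connection timed out), while from 08:38:20 onward they are [Errno 111] (connection refused), which usually means connections to the prometheus-operator-kube-p-prometheus service in the monitoring namespace are now being actively rejected rather than silently dropped, for example because the Prometheus pod is restarting or the service has no ready endpoints. A quick in-cluster check is to hit the same query API with a trivial query; a minimal sketch, where the host, port and path come from the log and the query expression "up" and the timeout are illustrative choices:

import requests

# Endpoint taken from the failing URLs in this log; the query and timeout are illustrative.
PROM = "http://prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local:9090"

try:
    r = requests.get(f"{PROM}/api/v1/query", params={"query": "up"}, timeout=5)
    r.raise_for_status()
    print("Prometheus reachable, status:", r.json().get("status"))
except requests.exceptions.ConnectionError as exc:
    # Same failure mode as the warnings above (refused / timed out at the TCP level).
    print("Prometheus not reachable:", exc)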
08:39:01 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:39:01 WARNING Response is NONE 08:39:01 DEBUG Exception is preset. Setting retry_loop to true 08:39:01 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:39:02 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:39:02 WARNING Response is NONE 08:39:02 DEBUG Exception is preset. Setting retry_loop to true 08:39:02 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:39:04 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_read_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%29%29by%28node%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:39:04 WARNING Response is NONE 08:39:04 DEBUG Exception is preset. Setting retry_loop to true 08:39:04 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-26: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 08:39:06 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27amCts%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:39:06 WARNING Response is NONE 08:39:06 DEBUG Exception is preset. Setting retry_loop to true 08:39:06 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-14: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 08:39:08 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_ttl_entries_deleted_count%7Bnamespace%3D%27xlou%27%2Cbackend%3D~%27amCts%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:39:08 WARNING Response is NONE 08:39:08 DEBUG Exception is preset. Setting retry_loop to true 08:39:08 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
Exception in thread Thread-21: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 08:39:11 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:39:11 WARNING Response is NONE 08:39:11 DEBUG Exception is preset. Setting retry_loop to true 08:39:11 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:39:12 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_receive_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:39:12 WARNING Response is NONE 08:39:12 DEBUG Exception is preset. Setting retry_loop to true 08:39:12 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-10: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 08:39:13 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:39:13 WARNING Response is NONE 08:39:13 DEBUG Exception is preset. Setting retry_loop to true 08:39:13 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:39:14 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_oauth2_grant_count%7Bnamespace%3D%27xlou%27%2Cgrant_type%3D~%27authorization-code%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:39:14 WARNING Response is NONE 08:39:14 DEBUG Exception is preset. Setting retry_loop to true 08:39:14 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-20: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 08:39:21 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:39:21 WARNING Response is NONE 08:39:21 DEBUG Exception is preset. Setting retry_loop to true 08:39:21 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:39:22 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:39:22 WARNING Response is NONE 08:39:22 DEBUG Exception is preset. Setting retry_loop to true 08:39:22 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:39:23 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:39:23 WARNING Response is NONE 08:39:23 DEBUG Exception is preset. Setting retry_loop to true 08:39:23 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:39:24 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_network_transmit_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%21%3D%27lo%27%7D%5B60s%5D%29%29by%28instance%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:39:24 WARNING Response is NONE 08:39:24 DEBUG Exception is preset. Setting retry_loop to true 08:39:24 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
Exception in thread Thread-11: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 08:39:27 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:39:27 WARNING Response is NONE 08:39:27 DEBUG Exception is preset. Setting retry_loop to true 08:39:27 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:39:32 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:39:32 WARNING Response is NONE 08:39:32 DEBUG Exception is preset. Setting retry_loop to true 08:39:32 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:39:33 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28node_namespace_pod_container%3Acontainer_memory_working_set_bytes%29by%28node%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:39:33 WARNING Response is NONE 08:39:33 DEBUG Exception is preset. Setting retry_loop to true 08:39:33 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. 
Exception in thread Thread-8: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 08:39:34 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:39:34 WARNING Response is NONE 08:39:34 DEBUG Exception is preset. Setting retry_loop to true 08:39:34 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:39:38 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:39:38 WARNING Response is NONE 08:39:38 DEBUG Exception is preset. Setting retry_loop to true 08:39:38 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:39:40 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). 
Checking if error is transient one 08:39:40 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:39:40 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:39:40 WARNING Response is NONE 08:39:40 WARNING Response is NONE 08:39:40 WARNING Response is NONE 08:39:40 DEBUG Exception is preset. Setting retry_loop to true 08:39:40 DEBUG Exception is preset. Setting retry_loop to true 08:39:40 DEBUG Exception is preset. Setting retry_loop to true 08:39:40 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:39:40 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:39:40 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:39:43 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_replication_replica_remote_replicas_receive_delay_seconds+%7Bnamespace%3D%27xlou%27%2Cdomain_name%3D%27ou%3Dtokens%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:39:43 WARNING Response is NONE 08:39:43 DEBUG Exception is preset. Setting retry_loop to true 08:39:43 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-22: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 08:39:45 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:39:45 WARNING Response is NONE 08:39:45 DEBUG Exception is preset. Setting retry_loop to true 08:39:45 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:39:49 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28irate%28am_session_count%7Bsession_type%3D~%27authentication-.%2A%27%2Coperation%3D%27create%27%2Cnamespace%3D%27xlou%27%2Coutcome%3D%27success%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:39:49 WARNING Response is NONE 08:39:49 DEBUG Exception is preset. Setting retry_loop to true 08:39:49 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway. Exception in thread Thread-13: Traceback (most recent call last): File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run response = http_cmd.get(url=url_encoded, retries=5) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get return self.request_cmd(url=url, **kwargs) File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd raise FailException('Failed to obtain response from server...') shared.lib.utils.exception.FailException: Failed to obtain response from server... 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner self.run() File "/usr/local/lib/python3.9/threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop instance.run() File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run self.logger(f'Query: {query} failed with: {e}') TypeError: 'LodestarLogger' object is not callable 08:39:51 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:39:51 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:39:51 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:39:51 WARNING Response is NONE 08:39:51 WARNING Response is NONE 08:39:51 WARNING Response is NONE 08:39:51 DEBUG Exception is preset. Setting retry_loop to true 08:39:51 DEBUG Exception is preset. Setting retry_loop to true 08:39:51 DEBUG Exception is preset. Setting retry_loop to true 08:39:51 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:39:51 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:39:51 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 08:39:56 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:39:56 WARNING Response is NONE 08:39:56 DEBUG Exception is preset. Setting retry_loop to true 08:39:56 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry... 
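Each of the tracebacks above ends the same way: once the HTTP retries are exhausted, the error handler at monitoring.py line 315 calls the logger instance directly, self.logger(f'Query: {query} failed with: {e}'), and because a LodestarLogger object is not callable this raises a TypeError, masks the original FailException, and terminates the monitoring thread, so that metric stops being polled for the rest of the run. Below is a minimal self-contained reproduction and the likely one-line fix; the LodestarLogger stand-in is hypothetical (the real class is not shown in this log), and the only assumption is that it exposes normal level methods such as warning():

import logging

class LodestarLogger:
    # Hypothetical stand-in: has level methods but no __call__, like the object in the traceback.
    def __init__(self, name="lodemon"):
        self._log = logging.getLogger(name)

    def warning(self, msg):
        self._log.warning(msg)

logger = LodestarLogger()
query = "sum(rate(node_disk_read_bytes_total{job='node-exporter'}))by(node)"
err = "Failed to obtain response from server..."

try:
    logger(f"Query: {query} failed with: {err}")      # what monitoring.py line 315 does
except TypeError as exc:
    print(exc)                                        # 'LodestarLogger' object is not callable

logger.warning(f"Query: {query} failed with: {err}")  # the likely intended call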
08:39:59 INFO 08:39:59 INFO [loop_until]: kubectl --namespace=xlou top pods 08:39:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:39:59 INFO 08:39:59 INFO [loop_until]: kubectl --namespace=xlou top node 08:39:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:39:59 INFO [loop_until]: OK (rc = 0) 08:39:59 DEBUG --- stdout ---
08:39:59 DEBUG NAME                           CPU(cores)   MEMORY(bytes)
               admin-ui-587fc66dd5-9h5wb      1m           4Mi
               am-55f77847b7-2vpdz            11m          2362Mi
               am-55f77847b7-mbr4x            10m          2421Mi
               am-55f77847b7-mfzwm            13m          3267Mi
               ds-cts-0                       11m          372Mi
               ds-cts-1                       8m           370Mi
               ds-cts-2                       6m           344Mi
               ds-idrepo-0                    15m          10315Mi
               ds-idrepo-1                    39m          10316Mi
               ds-idrepo-2                    22m          10315Mi
               end-user-ui-6845bc78c7-jrqhg   1m           3Mi
               idm-65858d8c4c-5vh78           8m           1426Mi
               idm-65858d8c4c-gpz8d           7m           3443Mi
               lodemon-6cd9c44bd4-vnqvr       3m           65Mi
               login-ui-74d6fb46c-jtvtc       1m           3Mi
               overseer-0-64679cf868-xscwh    1m           98Mi
08:39:59 DEBUG --- stderr --- 08:39:59 DEBUG 08:39:59 INFO [loop_until]: OK (rc = 0) 08:39:59 DEBUG --- stdout ---
08:39:59 DEBUG NAME                                       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
               gke-xlou-cdm-default-pool-f05840a3-2nsn    76m          0%     1353Mi          2%
               gke-xlou-cdm-default-pool-f05840a3-5pbc    68m          0%     4284Mi          7%
               gke-xlou-cdm-default-pool-f05840a3-976h    64m          0%     3554Mi          6%
               gke-xlou-cdm-default-pool-f05840a3-9p4b    65m          0%     3472Mi          5%
               gke-xlou-cdm-default-pool-f05840a3-bf2g    73m          0%     4753Mi          8%
               gke-xlou-cdm-default-pool-f05840a3-h81k    124m         0%     2115Mi          3%
               gke-xlou-cdm-default-pool-f05840a3-tnc9    69m          0%     2682Mi          4%
               gke-xlou-cdm-ds-32e4dcb1-1l6p              65m          0%     1095Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-4z9d              71m          0%     10961Mi         18%
               gke-xlou-cdm-ds-32e4dcb1-8bsn              65m          0%     1095Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-b374              65m          0%     10953Mi         18%
               gke-xlou-cdm-ds-32e4dcb1-n920              55m          0%     1072Mi          1%
               gke-xlou-cdm-ds-32e4dcb1-x4wx              92m          0%     10944Mi         18%
               gke-xlou-cdm-frontend-a8771548-k40m        67m          0%     1626Mi          2%
08:39:59 DEBUG --- stderr --- 08:39:59 DEBUG 08:40:02 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:40:02 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:40:02 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one 08:40:02 WARNING Response is NONE 08:40:02 WARNING Response is NONE 08:40:02 WARNING Response is NONE 08:40:02 DEBUG Exception is preset. 
Setting retry_loop to true
08:40:02 DEBUG Exception is preset. Setting retry_loop to true
08:40:02 DEBUG Exception is preset. Setting retry_loop to true
08:40:02 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
08:40:02 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
08:40:02 WARNING We received known exception. Trying to recover, sleeping for 10 secs before retry...
08:40:07 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28ds_backend_db_cache_misses_internal_nodes%7Bbackend%3D%27idmRepo%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
08:40:07 WARNING Response is NONE
08:40:07 DEBUG Exception is preset. Setting retry_loop to true
08:40:07 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-17:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
08:40:13 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28node_disk_io_time_seconds_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28instance%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
08:40:13 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28rate%28am_cts_task_count%7Btoken_type%3D%27session%27%2Cnamespace%3D%27xlou%27%7D%5B60s%5D%29%29by%28pod%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
08:40:13 WARNING Got connection reset error: HTTPConnectionPool(host='prometheus-operator-kube-p-prometheus.monitoring.svc.cluster.local', port=9090): Max retries exceeded with url: /api/v1/query?query=sum%28avg_over_time%28node_disk_written_bytes_total%7Bjob%3D%27node-exporter%27%2Cdevice%3D~%27nvme.%2B%7Crbd.%2B%7Csd.%2B%7Cvd.%2B%7Cxvd.%2B%7Cdasd.%2B%27%7D%5B60s%5D%29%29by%28node%29&time=1691825759 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused')). Checking if error is transient one
08:40:13 WARNING Response is NONE
08:40:13 WARNING Response is NONE
08:40:13 WARNING Response is NONE
08:40:13 DEBUG Exception is preset. Setting retry_loop to true
08:40:13 DEBUG Exception is preset. Setting retry_loop to true
08:40:13 DEBUG Exception is preset. Setting retry_loop to true
08:40:13 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
08:40:13 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
08:40:13 WARNING Hit retry pattern for a 5 time. Proceeding to check response anyway.
Exception in thread Thread-29:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
Exception in thread Thread-12:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
Exception in thread Thread-27:
Traceback (most recent call last):
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 299, in run
    response = http_cmd.get(url=url_encoded, retries=5)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 277, in get
    return self.request_cmd(url=url, **kwargs)
  File "/home/jenkins/lodestar/shared/lib/utils/HttpCmd.py", line 381, in request_cmd
    raise FailException('Failed to obtain response from server...')
shared.lib.utils.exception.FailException: Failed to obtain response from server...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jenkins/lodestar/shared/lib/monitoring/lodemon_service.py", line 152, in execute_monitoring_instance_in_loop
    instance.run()
  File "/home/jenkins/lodestar/shared/lib/monitoring/monitoring.py", line 315, in run
    self.logger(f'Query: {query} failed with: {e}')
TypeError: 'LodestarLogger' object is not callable
08:40:59 INFO
08:40:59 INFO [loop_until]: kubectl --namespace=xlou top pods
08:40:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
08:40:59 INFO [loop_until]: OK (rc = 0)
08:40:59 DEBUG --- stdout ---
08:40:59 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 11m 2373Mi am-55f77847b7-mbr4x 117m 2649Mi am-55f77847b7-mfzwm 83m 3293Mi ds-cts-0 81m 373Mi ds-cts-1 94m 371Mi ds-cts-2 79m 345Mi ds-idrepo-0 946m 10788Mi ds-idrepo-1 28m 10317Mi ds-idrepo-2 110m 10320Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 12m 1434Mi idm-65858d8c4c-gpz8d 6m 3444Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 479m 373Mi
08:40:59 DEBUG --- stderr ---
08:40:59 DEBUG
08:40:59 INFO
08:40:59 INFO [loop_until]: kubectl --namespace=xlou top node
08:40:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
08:40:59 INFO [loop_until]: OK (rc = 0)
08:40:59 DEBUG --- stdout ---
08:40:59 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1349Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 151m 0% 4324Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 199m 1% 3776Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 3482Mi 5% gke-xlou-cdm-default-pool-f05840a3-bf2g 124m 0% 4774Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 122m 0% 2120Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 110m 0% 2735Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 145m 0% 1096Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 180m 1% 10978Mi 18% gke-xlou-cdm-ds-32e4dcb1-8bsn 210m 1% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 951m 5% 11340Mi 19% gke-xlou-cdm-ds-32e4dcb1-n920 135m 0% 1073Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 176m 1% 10949Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 604m 3% 1910Mi 3%
08:40:59 DEBUG --- stderr ---
08:40:59 DEBUG
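Note: the TypeError that kills each monitoring thread above comes from monitoring.py line 315, where the LodestarLogger instance is called as if it were a function. The following minimal Python sketch reproduces the failure and shows the likely one-line fix; the LodestarLogger stand-in and the error() method name are illustrative assumptions, not the actual lodestar source.

import logging

class LodestarLogger:
    # Illustrative stand-in: assumed to wrap a standard logging.Logger and
    # not define __call__, which is why calling the instance raises TypeError.
    def __init__(self, name="lodemon"):
        self._log = logging.getLogger(name)

    def error(self, msg):
        self._log.error(msg)

logger = LodestarLogger()
query = "sum(rate(am_cts_task_count[60s]))by(pod)"
e = "Failed to obtain response from server..."

try:
    logger(f'Query: {query} failed with: {e}')         # pattern from the traceback
except TypeError:
    # 'LodestarLogger' object is not callable -- this is where the thread dies.
    logger.error(f'Query: {query} failed with: {e}')   # assumed fix: call a log method

With a change along those lines the query failure would be logged and the monitoring loop could continue instead of the thread terminating.

08:41:59 INFO
08:41:59 INFO [loop_until]: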
kubectl --namespace=xlou top pods 08:41:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:41:59 INFO [loop_until]: OK (rc = 0) 08:41:59 DEBUG --- stdout --- 08:41:59 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 12m 2382Mi am-55f77847b7-mbr4x 12m 2657Mi am-55f77847b7-mfzwm 15m 3294Mi ds-cts-0 172m 374Mi ds-cts-1 103m 377Mi ds-cts-2 96m 346Mi ds-idrepo-0 3075m 13311Mi ds-idrepo-1 212m 10318Mi ds-idrepo-2 267m 10320Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 11m 1489Mi idm-65858d8c4c-gpz8d 7m 3458Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 1109m 381Mi 08:41:59 DEBUG --- stderr --- 08:41:59 DEBUG 08:41:59 INFO 08:41:59 INFO [loop_until]: kubectl --namespace=xlou top node 08:41:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:41:59 INFO [loop_until]: OK (rc = 0) 08:41:59 DEBUG --- stdout --- 08:41:59 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1349Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 70m 0% 4312Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 3782Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 3496Mi 5% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 4770Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 122m 0% 2119Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 2744Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 157m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 234m 1% 10969Mi 18% gke-xlou-cdm-ds-32e4dcb1-8bsn 361m 2% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3237m 20% 13823Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 144m 0% 1074Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 261m 1% 10948Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1182m 7% 1902Mi 3% 08:41:59 DEBUG --- stderr --- 08:41:59 DEBUG 08:42:59 INFO 08:42:59 INFO [loop_until]: kubectl --namespace=xlou top pods 08:42:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:42:59 INFO [loop_until]: OK (rc = 0) 08:42:59 DEBUG --- stdout --- 08:42:59 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 9m 2394Mi am-55f77847b7-mbr4x 18m 2666Mi am-55f77847b7-mfzwm 15m 3294Mi ds-cts-0 7m 376Mi ds-cts-1 11m 382Mi ds-cts-2 10m 346Mi ds-idrepo-0 2822m 13311Mi ds-idrepo-1 31m 10318Mi ds-idrepo-2 18m 10322Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 9m 1501Mi idm-65858d8c4c-gpz8d 9m 3459Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 1135m 382Mi 08:42:59 DEBUG --- stderr --- 08:42:59 DEBUG 08:42:59 INFO 08:42:59 INFO [loop_until]: kubectl --namespace=xlou top node 08:42:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:42:59 INFO [loop_until]: OK (rc = 0) 08:42:59 DEBUG --- stdout --- 08:42:59 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1348Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 74m 0% 4315Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 74m 0% 3794Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 3506Mi 5% gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 4771Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2112Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 2759Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 73m 0% 10970Mi 18% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2875m 18% 13870Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1076Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 83m 0% 10947Mi 18% 
gke-xlou-cdm-frontend-a8771548-k40m 1249m 7% 1902Mi 3% 08:42:59 DEBUG --- stderr --- 08:42:59 DEBUG 08:43:59 INFO 08:43:59 INFO [loop_until]: kubectl --namespace=xlou top pods 08:43:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:43:59 INFO [loop_until]: OK (rc = 0) 08:43:59 DEBUG --- stdout --- 08:43:59 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 11m 2406Mi am-55f77847b7-mbr4x 11m 2672Mi am-55f77847b7-mfzwm 10m 3294Mi ds-cts-0 7m 376Mi ds-cts-1 9m 382Mi ds-cts-2 8m 346Mi ds-idrepo-0 2827m 13473Mi ds-idrepo-1 19m 10323Mi ds-idrepo-2 30m 10324Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 9m 1513Mi idm-65858d8c4c-gpz8d 8m 3459Mi lodemon-6cd9c44bd4-vnqvr 1m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 1231m 385Mi 08:43:59 DEBUG --- stderr --- 08:43:59 DEBUG 08:43:59 INFO 08:43:59 INFO [loop_until]: kubectl --namespace=xlou top node 08:43:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:44:00 INFO [loop_until]: OK (rc = 0) 08:44:00 DEBUG --- stdout --- 08:44:00 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1352Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 61m 0% 4310Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 3803Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 3516Mi 5% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 4770Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 125m 0% 2119Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 68m 0% 2769Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 67m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 83m 0% 10975Mi 18% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2992m 18% 14082Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1076Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 72m 0% 10954Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1254m 7% 1906Mi 3% 08:44:00 DEBUG --- stderr --- 08:44:00 DEBUG 08:44:59 INFO 08:44:59 INFO [loop_until]: kubectl --namespace=xlou top pods 08:44:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:44:59 INFO [loop_until]: OK (rc = 0) 08:44:59 DEBUG --- stdout --- 08:44:59 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 9m 2415Mi am-55f77847b7-mbr4x 11m 2685Mi am-55f77847b7-mfzwm 12m 3295Mi ds-cts-0 6m 376Mi ds-cts-1 9m 382Mi ds-cts-2 7m 347Mi ds-idrepo-0 3070m 13649Mi ds-idrepo-1 30m 10323Mi ds-idrepo-2 21m 10324Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 13m 1525Mi idm-65858d8c4c-gpz8d 9m 3461Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 1306m 385Mi 08:44:59 DEBUG --- stderr --- 08:44:59 DEBUG 08:45:00 INFO 08:45:00 INFO [loop_until]: kubectl --namespace=xlou top node 08:45:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:45:00 INFO [loop_until]: OK (rc = 0) 08:45:00 DEBUG --- stdout --- 08:45:00 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1349Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 4313Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 3813Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 3528Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 4773Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2118Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 74m 0% 2780Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 73m 0% 10973Mi 18% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2990m 18% 
14200Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 84m 0% 10951Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1388m 8% 1907Mi 3% 08:45:00 DEBUG --- stderr --- 08:45:00 DEBUG 08:45:59 INFO 08:45:59 INFO [loop_until]: kubectl --namespace=xlou top pods 08:45:59 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:46:00 INFO [loop_until]: OK (rc = 0) 08:46:00 DEBUG --- stdout --- 08:46:00 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 10m 2426Mi am-55f77847b7-mbr4x 13m 2694Mi am-55f77847b7-mfzwm 11m 3296Mi ds-cts-0 8m 376Mi ds-cts-1 9m 383Mi ds-cts-2 7m 347Mi ds-idrepo-0 2963m 13650Mi ds-idrepo-1 23m 10323Mi ds-idrepo-2 18m 10325Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 9m 1536Mi idm-65858d8c4c-gpz8d 12m 3461Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 1148m 98Mi 08:46:00 DEBUG --- stderr --- 08:46:00 DEBUG 08:46:00 INFO 08:46:00 INFO [loop_until]: kubectl --namespace=xlou top node 08:46:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:46:00 INFO [loop_until]: OK (rc = 0) 08:46:00 DEBUG --- stdout --- 08:46:00 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1350Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 4314Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 3826Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 3539Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 4770Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2123Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 72m 0% 2792Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 71m 0% 10974Mi 18% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2705m 17% 14190Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 78m 0% 10954Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 1256m 7% 1622Mi 2% 08:46:00 DEBUG --- stderr --- 08:46:00 DEBUG 08:47:00 INFO 08:47:00 INFO [loop_until]: kubectl --namespace=xlou top pods 08:47:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:47:00 INFO [loop_until]: OK (rc = 0) 08:47:00 DEBUG --- stdout --- 08:47:00 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 11m 2437Mi am-55f77847b7-mbr4x 11m 2706Mi am-55f77847b7-mfzwm 11m 3296Mi ds-cts-0 7m 376Mi ds-cts-1 11m 384Mi ds-cts-2 6m 347Mi ds-idrepo-0 11m 13649Mi ds-idrepo-1 16m 10323Mi ds-idrepo-2 20m 10327Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 15m 1545Mi idm-65858d8c4c-gpz8d 14m 3466Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 1m 98Mi 08:47:00 DEBUG --- stderr --- 08:47:00 DEBUG 08:47:00 INFO 08:47:00 INFO [loop_until]: kubectl --namespace=xlou top node 08:47:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:47:00 INFO [loop_until]: OK (rc = 0) 08:47:00 DEBUG --- stdout --- 08:47:00 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 71m 0% 1353Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 4314Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 3837Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 66m 0% 3550Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 79m 0% 4776Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 121m 0% 2122Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 77m 0% 2802Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 80m 
0% 10974Mi 18% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 60m 0% 14191Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 77m 0% 10955Mi 18% gke-xlou-cdm-frontend-a8771548-k40m 72m 0% 1624Mi 2% 08:47:00 DEBUG --- stderr --- 08:47:00 DEBUG 08:48:00 INFO 08:48:00 INFO [loop_until]: kubectl --namespace=xlou top pods 08:48:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:48:00 INFO [loop_until]: OK (rc = 0) 08:48:00 DEBUG --- stdout --- 08:48:00 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 9m 2448Mi am-55f77847b7-mbr4x 9m 2715Mi am-55f77847b7-mfzwm 10m 3298Mi ds-cts-0 7m 376Mi ds-cts-1 9m 383Mi ds-cts-2 6m 347Mi ds-idrepo-0 16m 13650Mi ds-idrepo-1 3000m 12245Mi ds-idrepo-2 23m 10330Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 8m 1558Mi idm-65858d8c4c-gpz8d 7m 3466Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 1017m 384Mi 08:48:00 DEBUG --- stderr --- 08:48:00 DEBUG 08:48:00 INFO 08:48:00 INFO [loop_until]: kubectl --namespace=xlou top node 08:48:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:48:00 INFO [loop_until]: OK (rc = 0) 08:48:00 DEBUG --- stdout --- 08:48:00 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1352Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 4316Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 3847Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 60m 0% 3563Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 4777Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 120m 0% 2120Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 66m 0% 2814Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 73m 0% 10975Mi 18% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 60m 0% 14195Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1090Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2979m 18% 13315Mi 22% gke-xlou-cdm-frontend-a8771548-k40m 1180m 7% 1908Mi 3% 08:48:00 DEBUG --- stderr --- 08:48:00 DEBUG 08:49:00 INFO 08:49:00 INFO [loop_until]: kubectl --namespace=xlou top pods 08:49:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:49:00 INFO [loop_until]: OK (rc = 0) 08:49:00 DEBUG --- stdout --- 08:49:00 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 13m 2458Mi am-55f77847b7-mbr4x 12m 2727Mi am-55f77847b7-mfzwm 8m 3298Mi ds-cts-0 8m 376Mi ds-cts-1 7m 383Mi ds-cts-2 7m 348Mi ds-idrepo-0 17m 13649Mi ds-idrepo-1 2992m 13365Mi ds-idrepo-2 18m 10330Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 11m 1567Mi idm-65858d8c4c-gpz8d 6m 3466Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 1245m 384Mi 08:49:00 DEBUG --- stderr --- 08:49:00 DEBUG 08:49:00 INFO 08:49:00 INFO [loop_until]: kubectl --namespace=xlou top node 08:49:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:49:00 INFO [loop_until]: OK (rc = 0) 08:49:00 DEBUG --- stdout --- 08:49:00 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1350Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 66m 0% 4316Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 72m 0% 3866Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 3574Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 76m 0% 4776Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 121m 0% 2119Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 
72m 0% 2828Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 57m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 73m 0% 10979Mi 18% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 65m 0% 14193Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1078Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2923m 18% 13857Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1309m 8% 1915Mi 3% 08:49:00 DEBUG --- stderr --- 08:49:00 DEBUG 08:50:00 INFO 08:50:00 INFO [loop_until]: kubectl --namespace=xlou top pods 08:50:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:50:00 INFO [loop_until]: OK (rc = 0) 08:50:00 DEBUG --- stdout --- 08:50:00 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 10m 2468Mi am-55f77847b7-mbr4x 12m 2736Mi am-55f77847b7-mfzwm 9m 3298Mi ds-cts-0 6m 376Mi ds-cts-1 12m 383Mi ds-cts-2 8m 347Mi ds-idrepo-0 18m 13649Mi ds-idrepo-1 2924m 13401Mi ds-idrepo-2 16m 10331Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 8m 1580Mi idm-65858d8c4c-gpz8d 7m 3466Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 1267m 384Mi 08:50:00 DEBUG --- stderr --- 08:50:00 DEBUG 08:50:00 INFO 08:50:00 INFO [loop_until]: kubectl --namespace=xlou top node 08:50:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:50:00 INFO [loop_until]: OK (rc = 0) 08:50:00 DEBUG --- stdout --- 08:50:00 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1351Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 4316Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 68m 0% 3869Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 64m 0% 3583Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 70m 0% 4777Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 117m 0% 2121Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 2837Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 71m 0% 10980Mi 18% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1098Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 68m 0% 14192Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1076Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3142m 19% 13942Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1376m 8% 1906Mi 3% 08:50:00 DEBUG --- stderr --- 08:50:00 DEBUG 08:51:00 INFO 08:51:00 INFO [loop_until]: kubectl --namespace=xlou top pods 08:51:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:51:00 INFO [loop_until]: OK (rc = 0) 08:51:00 DEBUG --- stdout --- 08:51:00 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 8m 2483Mi am-55f77847b7-mbr4x 9m 2747Mi am-55f77847b7-mfzwm 8m 3298Mi ds-cts-0 9m 376Mi ds-cts-1 7m 383Mi ds-cts-2 5m 347Mi ds-idrepo-0 10m 13649Mi ds-idrepo-1 3082m 13427Mi ds-idrepo-2 22m 10332Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 9m 1592Mi idm-65858d8c4c-gpz8d 10m 3467Mi lodemon-6cd9c44bd4-vnqvr 1m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 1333m 384Mi 08:51:00 DEBUG --- stderr --- 08:51:00 DEBUG 08:51:00 INFO 08:51:00 INFO [loop_until]: kubectl --namespace=xlou top node 08:51:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:51:00 INFO [loop_until]: OK (rc = 0) 08:51:00 DEBUG --- stdout --- 08:51:00 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1346Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 62m 0% 4316Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 3876Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 3592Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 
4778Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 120m 0% 2122Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 2848Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 75m 0% 10984Mi 18% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 58m 0% 14194Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1074Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3217m 20% 13970Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 1404m 8% 1906Mi 3% 08:51:00 DEBUG --- stderr --- 08:51:00 DEBUG 08:52:00 INFO 08:52:00 INFO [loop_until]: kubectl --namespace=xlou top pods 08:52:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:52:00 INFO [loop_until]: OK (rc = 0) 08:52:00 DEBUG --- stdout --- 08:52:00 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 10m 2492Mi am-55f77847b7-mbr4x 22m 2761Mi am-55f77847b7-mfzwm 10m 3299Mi ds-cts-0 7m 376Mi ds-cts-1 7m 384Mi ds-cts-2 5m 347Mi ds-idrepo-0 10m 13649Mi ds-idrepo-1 3245m 13654Mi ds-idrepo-2 14m 10333Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 9m 1601Mi idm-65858d8c4c-gpz8d 7m 3467Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 1396m 384Mi 08:52:00 DEBUG --- stderr --- 08:52:00 DEBUG 08:52:00 INFO 08:52:00 INFO [loop_until]: kubectl --namespace=xlou top node 08:52:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:52:00 INFO [loop_until]: OK (rc = 0) 08:52:00 DEBUG --- stdout --- 08:52:00 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1345Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 64m 0% 4317Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 75m 0% 3892Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 60m 0% 3610Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 4779Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 124m 0% 2118Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 2858Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 66m 0% 10982Mi 18% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 59m 0% 14196Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3312m 20% 14200Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1501m 9% 1907Mi 3% 08:52:00 DEBUG --- stderr --- 08:52:00 DEBUG 08:53:00 INFO 08:53:00 INFO [loop_until]: kubectl --namespace=xlou top pods 08:53:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:53:00 INFO [loop_until]: OK (rc = 0) 08:53:00 DEBUG --- stdout --- 08:53:00 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 10m 2501Mi am-55f77847b7-mbr4x 8m 2771Mi am-55f77847b7-mfzwm 9m 3298Mi ds-cts-0 7m 376Mi ds-cts-1 7m 384Mi ds-cts-2 6m 348Mi ds-idrepo-0 13m 13649Mi ds-idrepo-1 18m 13686Mi ds-idrepo-2 18m 10334Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 8m 1615Mi idm-65858d8c4c-gpz8d 6m 3467Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 1m 98Mi 08:53:00 DEBUG --- stderr --- 08:53:00 DEBUG 08:53:01 INFO 08:53:01 INFO [loop_until]: kubectl --namespace=xlou top node 08:53:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:53:01 INFO [loop_until]: OK (rc = 0) 08:53:01 DEBUG --- stdout --- 08:53:01 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1348Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 62m 0% 4316Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 3902Mi 6% 
gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 3615Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 4777Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 120m 0% 2118Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 2871Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 73m 0% 10982Mi 18% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1100Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 62m 0% 14193Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 74m 0% 14221Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 66m 0% 1622Mi 2% 08:53:01 DEBUG --- stderr --- 08:53:01 DEBUG 08:54:00 INFO 08:54:00 INFO [loop_until]: kubectl --namespace=xlou top pods 08:54:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:54:00 INFO [loop_until]: OK (rc = 0) 08:54:00 DEBUG --- stdout --- 08:54:00 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 9m 2513Mi am-55f77847b7-mbr4x 9m 2782Mi am-55f77847b7-mfzwm 9m 3299Mi ds-cts-0 7m 376Mi ds-cts-1 8m 383Mi ds-cts-2 6m 349Mi ds-idrepo-0 18m 13650Mi ds-idrepo-1 16m 13686Mi ds-idrepo-2 2610m 12084Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 7m 1625Mi idm-65858d8c4c-gpz8d 7m 3467Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 946m 371Mi 08:54:00 DEBUG --- stderr --- 08:54:00 DEBUG 08:54:01 INFO 08:54:01 INFO [loop_until]: kubectl --namespace=xlou top node 08:54:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:54:01 INFO [loop_until]: OK (rc = 0) 08:54:01 DEBUG --- stdout --- 08:54:01 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1353Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 65m 0% 4318Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 3917Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 61m 0% 3626Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 4777Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 120m 0% 2117Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 2885Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2819m 17% 12681Mi 21% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 67m 0% 14198Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1077Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 78m 0% 14222Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1031m 6% 1894Mi 3% 08:54:01 DEBUG --- stderr --- 08:54:01 DEBUG 08:55:00 INFO 08:55:00 INFO [loop_until]: kubectl --namespace=xlou top pods 08:55:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:55:00 INFO [loop_until]: OK (rc = 0) 08:55:00 DEBUG --- stdout --- 08:55:00 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 9m 2524Mi am-55f77847b7-mbr4x 10m 2792Mi am-55f77847b7-mfzwm 10m 3299Mi ds-cts-0 7m 376Mi ds-cts-1 7m 384Mi ds-cts-2 6m 349Mi ds-idrepo-0 15m 13649Mi ds-idrepo-1 16m 13682Mi ds-idrepo-2 2741m 13396Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 7m 1636Mi idm-65858d8c4c-gpz8d 7m 3467Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 1113m 371Mi 08:55:00 DEBUG --- stderr --- 08:55:00 DEBUG 08:55:01 INFO 08:55:01 INFO [loop_until]: kubectl --namespace=xlou top node 08:55:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:55:01 INFO [loop_until]: OK (rc = 0) 08:55:01 DEBUG --- stdout --- 08:55:01 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1347Mi 2% 
gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 4317Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 64m 0% 3923Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 3634Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 4776Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 121m 0% 2114Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 68m 0% 2894Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 58m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2820m 17% 13915Mi 23% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1099Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 64m 0% 14195Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 71m 0% 14219Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1181m 7% 1894Mi 3% 08:55:01 DEBUG --- stderr --- 08:55:01 DEBUG 08:56:00 INFO 08:56:00 INFO [loop_until]: kubectl --namespace=xlou top pods 08:56:00 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:56:01 INFO [loop_until]: OK (rc = 0) 08:56:01 DEBUG --- stdout --- 08:56:01 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 8m 2533Mi am-55f77847b7-mbr4x 9m 2808Mi am-55f77847b7-mfzwm 7m 3299Mi ds-cts-0 6m 378Mi ds-cts-1 9m 383Mi ds-cts-2 5m 349Mi ds-idrepo-0 11m 13650Mi ds-idrepo-1 17m 13682Mi ds-idrepo-2 3034m 13402Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 8m 1648Mi idm-65858d8c4c-gpz8d 11m 3467Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 1194m 373Mi 08:56:01 DEBUG --- stderr --- 08:56:01 DEBUG 08:56:01 INFO 08:56:01 INFO [loop_until]: kubectl --namespace=xlou top node 08:56:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:56:01 INFO [loop_until]: OK (rc = 0) 08:56:01 DEBUG --- stdout --- 08:56:01 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1349Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 60m 0% 4317Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 62m 0% 3939Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 58m 0% 3642Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 4778Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 125m 0% 2115Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 2905Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 67m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2979m 18% 13965Mi 23% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 59m 0% 14198Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1078Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 74m 0% 14218Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1285m 8% 1895Mi 3% 08:56:01 DEBUG --- stderr --- 08:56:01 DEBUG 08:57:01 INFO 08:57:01 INFO [loop_until]: kubectl --namespace=xlou top pods 08:57:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:57:01 INFO [loop_until]: OK (rc = 0) 08:57:01 DEBUG --- stdout --- 08:57:01 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 9m 2546Mi am-55f77847b7-mbr4x 10m 2839Mi am-55f77847b7-mfzwm 9m 3305Mi ds-cts-0 7m 377Mi ds-cts-1 9m 385Mi ds-cts-2 5m 350Mi ds-idrepo-0 10m 13649Mi ds-idrepo-1 16m 13682Mi ds-idrepo-2 2809m 13441Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 18m 1658Mi idm-65858d8c4c-gpz8d 7m 3468Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 1237m 373Mi 08:57:01 DEBUG --- stderr --- 08:57:01 DEBUG 08:57:01 INFO 08:57:01 INFO [loop_until]: kubectl --namespace=xlou top node 08:57:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:57:01 INFO [loop_until]: OK (rc = 0) 08:57:01 DEBUG --- stdout --- 08:57:01 
DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1351Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 4323Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 62m 0% 3968Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 3654Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 4776Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 123m 0% 2118Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 79m 0% 2911Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2996m 18% 14002Mi 23% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1097Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 59m 0% 14197Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1078Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 71m 0% 14220Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1318m 8% 1895Mi 3% 08:57:01 DEBUG --- stderr --- 08:57:01 DEBUG 08:58:01 INFO 08:58:01 INFO [loop_until]: kubectl --namespace=xlou top pods 08:58:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:58:01 INFO [loop_until]: OK (rc = 0) 08:58:01 DEBUG --- stdout --- 08:58:01 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 16m 2582Mi am-55f77847b7-mbr4x 8m 2854Mi am-55f77847b7-mfzwm 8m 3305Mi ds-cts-0 8m 377Mi ds-cts-1 9m 385Mi ds-cts-2 7m 350Mi ds-idrepo-0 11m 13649Mi ds-idrepo-1 16m 13680Mi ds-idrepo-2 3094m 13676Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 8m 1671Mi idm-65858d8c4c-gpz8d 6m 3468Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 1315m 373Mi 08:58:01 DEBUG --- stderr --- 08:58:01 DEBUG 08:58:01 INFO 08:58:01 INFO [loop_until]: kubectl --namespace=xlou top node 08:58:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:58:01 INFO [loop_until]: OK (rc = 0) 08:58:01 DEBUG --- stdout --- 08:58:01 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1349Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 62m 0% 4325Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 3987Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 67m 0% 3691Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 4779Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 117m 0% 2121Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 2929Mi 4% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 3099m 19% 14225Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 62m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 64m 0% 14198Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1078Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 72m 0% 14218Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1361m 8% 1894Mi 3% 08:58:01 DEBUG --- stderr --- 08:58:01 DEBUG 08:59:01 INFO 08:59:01 INFO [loop_until]: kubectl --namespace=xlou top pods 08:59:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 08:59:01 INFO [loop_until]: OK (rc = 0) 08:59:01 DEBUG --- stdout --- 08:59:01 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 10m 2594Mi am-55f77847b7-mbr4x 9m 2867Mi am-55f77847b7-mfzwm 8m 3305Mi ds-cts-0 6m 376Mi ds-cts-1 8m 385Mi ds-cts-2 5m 350Mi ds-idrepo-0 11m 13649Mi ds-idrepo-1 22m 13683Mi ds-idrepo-2 17m 13662Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 7m 1681Mi idm-65858d8c4c-gpz8d 6m 3468Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 1m 98Mi 08:59:01 DEBUG --- stderr --- 08:59:01 DEBUG 08:59:01 INFO 08:59:01 INFO [loop_until]: kubectl --namespace=xlou top node 08:59:01 INFO [loop_until]: (max_time=180, interval=5, 
expected_rc=[0] 08:59:01 INFO [loop_until]: OK (rc = 0) 08:59:01 DEBUG --- stdout --- 08:59:01 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1346Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 61m 0% 4322Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 65m 0% 3997Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 65m 0% 3707Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 73m 0% 4778Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 137m 0% 2120Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 72m 0% 2943Mi 5% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 69m 0% 14209Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1101Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 63m 0% 14198Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1078Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 72m 0% 14219Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 66m 0% 1625Mi 2% 08:59:01 DEBUG --- stderr --- 08:59:01 DEBUG 09:00:01 INFO 09:00:01 INFO [loop_until]: kubectl --namespace=xlou top pods 09:00:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:00:01 INFO [loop_until]: OK (rc = 0) 09:00:01 DEBUG --- stdout --- 09:00:01 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 8m 2599Mi am-55f77847b7-mbr4x 68m 2930Mi am-55f77847b7-mfzwm 60m 3332Mi ds-cts-0 8m 378Mi ds-cts-1 9m 386Mi ds-cts-2 8m 350Mi ds-idrepo-0 11m 13651Mi ds-idrepo-1 76m 13688Mi ds-idrepo-2 23m 13661Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 1086m 3569Mi idm-65858d8c4c-gpz8d 999m 3608Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 1518m 404Mi 09:00:01 DEBUG --- stderr --- 09:00:01 DEBUG 09:00:01 INFO 09:00:01 INFO [loop_until]: kubectl --namespace=xlou top node 09:00:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:00:01 INFO [loop_until]: OK (rc = 0) 09:00:01 DEBUG --- stdout --- 09:00:01 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1349Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 110m 0% 4330Mi 7% gke-xlou-cdm-default-pool-f05840a3-976h 123m 0% 4058Mi 6% gke-xlou-cdm-default-pool-f05840a3-9p4b 141m 0% 3823Mi 6% gke-xlou-cdm-default-pool-f05840a3-bf2g 1191m 7% 4919Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 119m 0% 2121Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1605m 10% 4820Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 333m 2% 14155Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 641m 4% 14199Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 303m 1% 14228Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1705m 10% 2011Mi 3% 09:00:01 DEBUG --- stderr --- 09:00:01 DEBUG 09:01:01 INFO 09:01:01 INFO [loop_until]: kubectl --namespace=xlou top pods 09:01:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:01:01 INFO [loop_until]: OK (rc = 0) 09:01:01 DEBUG --- stdout --- 09:01:01 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 51m 3157Mi am-55f77847b7-mbr4x 60m 3384Mi am-55f77847b7-mfzwm 50m 3774Mi ds-cts-0 6m 378Mi ds-cts-1 8m 387Mi ds-cts-2 8m 351Mi ds-idrepo-0 2447m 13651Mi ds-idrepo-1 703m 13624Mi ds-idrepo-2 644m 13648Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 3238m 3793Mi idm-65858d8c4c-gpz8d 2705m 3706Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 492m 514Mi 09:01:01 DEBUG --- stderr --- 09:01:01 DEBUG 09:01:01 
INFO 09:01:01 INFO [loop_until]: kubectl --namespace=xlou top node 09:01:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:01:02 INFO [loop_until]: OK (rc = 0) 09:01:02 DEBUG --- stdout --- 09:01:02 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1351Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 106m 0% 4867Mi 8% gke-xlou-cdm-default-pool-f05840a3-976h 118m 0% 4623Mi 7% gke-xlou-cdm-default-pool-f05840a3-9p4b 104m 0% 4354Mi 7% gke-xlou-cdm-default-pool-f05840a3-bf2g 2900m 18% 5044Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 809m 5% 2151Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3358m 21% 5043Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 691m 4% 14195Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2502m 15% 14199Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1077Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 772m 4% 14158Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 518m 3% 2034Mi 3% 09:01:02 DEBUG --- stderr --- 09:01:02 DEBUG 09:02:01 INFO 09:02:01 INFO [loop_until]: kubectl --namespace=xlou top pods 09:02:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:02:01 INFO [loop_until]: OK (rc = 0) 09:02:01 DEBUG --- stdout --- 09:02:01 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 63m 3786Mi am-55f77847b7-mbr4x 51m 3952Mi am-55f77847b7-mfzwm 65m 4451Mi ds-cts-0 7m 378Mi ds-cts-1 8m 388Mi ds-cts-2 7m 350Mi ds-idrepo-0 2512m 13653Mi ds-idrepo-1 813m 13629Mi ds-idrepo-2 654m 13654Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2653m 3797Mi idm-65858d8c4c-gpz8d 2164m 3720Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 439m 515Mi 09:02:01 DEBUG --- stderr --- 09:02:01 DEBUG 09:02:02 INFO 09:02:02 INFO [loop_until]: kubectl --namespace=xlou top node 09:02:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:02:02 INFO [loop_until]: OK (rc = 0) 09:02:02 DEBUG --- stdout --- 09:02:02 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1350Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 113m 0% 5452Mi 9% gke-xlou-cdm-default-pool-f05840a3-976h 106m 0% 5202Mi 8% gke-xlou-cdm-default-pool-f05840a3-9p4b 118m 0% 4935Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 2309m 14% 5028Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 786m 4% 2153Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2809m 17% 5049Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 814m 5% 14202Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2648m 16% 14196Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1075Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 905m 5% 14153Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 496m 3% 2031Mi 3% 09:02:02 DEBUG --- stderr --- 09:02:02 DEBUG 09:03:01 INFO 09:03:01 INFO [loop_until]: kubectl --namespace=xlou top pods 09:03:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:03:01 INFO [loop_until]: OK (rc = 0) 09:03:01 DEBUG --- stdout --- 09:03:01 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 39m 4152Mi am-55f77847b7-mbr4x 56m 4632Mi am-55f77847b7-mfzwm 57m 4919Mi ds-cts-0 6m 378Mi ds-cts-1 11m 387Mi ds-cts-2 7m 350Mi ds-idrepo-0 3196m 13784Mi ds-idrepo-1 1288m 13670Mi ds-idrepo-2 1190m 13660Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2785m 3806Mi idm-65858d8c4c-gpz8d 2288m 3729Mi 
lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 424m 515Mi 09:03:01 DEBUG --- stderr --- 09:03:01 DEBUG 09:03:02 INFO 09:03:02 INFO [loop_until]: kubectl --namespace=xlou top node 09:03:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:03:02 INFO [loop_until]: OK (rc = 0) 09:03:02 DEBUG --- stdout --- 09:03:02 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1351Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 117m 0% 6078Mi 10% gke-xlou-cdm-default-pool-f05840a3-976h 115m 0% 5780Mi 9% gke-xlou-cdm-default-pool-f05840a3-9p4b 90m 0% 5261Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 2407m 15% 5037Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 806m 5% 2148Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2947m 18% 5057Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 68m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1192m 7% 14193Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3296m 20% 14315Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1078Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1243m 7% 14152Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 501m 3% 2034Mi 3% 09:03:02 DEBUG --- stderr --- 09:03:02 DEBUG 09:04:01 INFO 09:04:01 INFO [loop_until]: kubectl --namespace=xlou top pods 09:04:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:04:01 INFO [loop_until]: OK (rc = 0) 09:04:01 DEBUG --- stdout --- 09:04:01 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 45m 4153Mi am-55f77847b7-mbr4x 49m 5220Mi am-55f77847b7-mfzwm 50m 5063Mi ds-cts-0 8m 379Mi ds-cts-1 7m 387Mi ds-cts-2 6m 350Mi ds-idrepo-0 2488m 13788Mi ds-idrepo-1 847m 13778Mi ds-idrepo-2 703m 13776Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2649m 3810Mi idm-65858d8c4c-gpz8d 2075m 3736Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 400m 515Mi 09:04:01 DEBUG --- stderr --- 09:04:01 DEBUG 09:04:02 INFO 09:04:02 INFO [loop_until]: kubectl --namespace=xlou top node 09:04:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:04:02 INFO [loop_until]: OK (rc = 0) 09:04:02 DEBUG --- stdout --- 09:04:02 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1346Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 100m 0% 6078Mi 10% gke-xlou-cdm-default-pool-f05840a3-976h 107m 0% 6354Mi 10% gke-xlou-cdm-default-pool-f05840a3-9p4b 106m 0% 5264Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 2268m 14% 5043Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 772m 4% 2148Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2831m 17% 5061Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 738m 4% 14315Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2513m 15% 14324Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 875m 5% 14306Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 463m 2% 2033Mi 3% 09:04:02 DEBUG --- stderr --- 09:04:02 DEBUG 09:05:01 INFO 09:05:01 INFO [loop_until]: kubectl --namespace=xlou top pods 09:05:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:05:01 INFO [loop_until]: OK (rc = 0) 09:05:01 DEBUG --- stdout --- 09:05:01 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 47m 4158Mi am-55f77847b7-mbr4x 64m 5771Mi am-55f77847b7-mfzwm 45m 5064Mi ds-cts-0 7m 379Mi ds-cts-1 12m 387Mi ds-cts-2 7m 351Mi ds-idrepo-0 2672m 13820Mi 
ds-idrepo-1 833m 13782Mi ds-idrepo-2 530m 13783Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2721m 3816Mi idm-65858d8c4c-gpz8d 2321m 3742Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 440m 516Mi 09:05:01 DEBUG --- stderr --- 09:05:01 DEBUG 09:05:02 INFO 09:05:02 INFO [loop_until]: kubectl --namespace=xlou top node 09:05:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:05:02 INFO [loop_until]: OK (rc = 0) 09:05:02 DEBUG --- stdout --- 09:05:02 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1351Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 99m 0% 6079Mi 10% gke-xlou-cdm-default-pool-f05840a3-976h 119m 0% 6897Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 99m 0% 5267Mi 8% gke-xlou-cdm-default-pool-f05840a3-bf2g 2409m 15% 5050Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 817m 5% 2152Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2916m 18% 5066Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 689m 4% 14327Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2768m 17% 14359Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1076Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 908m 5% 14310Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 504m 3% 2035Mi 3% 09:05:02 DEBUG --- stderr --- 09:05:02 DEBUG 09:06:01 INFO 09:06:01 INFO [loop_until]: kubectl --namespace=xlou top pods 09:06:01 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:06:02 INFO [loop_until]: OK (rc = 0) 09:06:02 DEBUG --- stdout --- 09:06:02 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 42m 4169Mi am-55f77847b7-mbr4x 44m 5771Mi am-55f77847b7-mfzwm 44m 5064Mi ds-cts-0 7m 379Mi ds-cts-1 9m 387Mi ds-cts-2 7m 351Mi ds-idrepo-0 2550m 13821Mi ds-idrepo-1 831m 13784Mi ds-idrepo-2 838m 13785Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2695m 3823Mi idm-65858d8c4c-gpz8d 2212m 3749Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 421m 516Mi 09:06:02 DEBUG --- stderr --- 09:06:02 DEBUG 09:06:02 INFO 09:06:02 INFO [loop_until]: kubectl --namespace=xlou top node 09:06:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:06:02 INFO [loop_until]: OK (rc = 0) 09:06:02 DEBUG --- stdout --- 09:06:02 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1348Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 100m 0% 6078Mi 10% gke-xlou-cdm-default-pool-f05840a3-976h 96m 0% 6898Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 96m 0% 5280Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 2354m 14% 5058Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 774m 4% 2152Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2878m 18% 5074Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 784m 4% 14331Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2738m 17% 14359Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 832m 5% 14313Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 492m 3% 2035Mi 3% 09:06:02 DEBUG --- stderr --- 09:06:02 DEBUG 09:07:02 INFO 09:07:02 INFO [loop_until]: kubectl --namespace=xlou top pods 09:07:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:07:02 INFO [loop_until]: OK (rc = 0) 09:07:02 DEBUG --- stdout --- 09:07:02 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 45m 4171Mi 
am-55f77847b7-mbr4x 44m 5771Mi am-55f77847b7-mfzwm 44m 5067Mi ds-cts-0 6m 379Mi ds-cts-1 9m 387Mi ds-cts-2 7m 351Mi ds-idrepo-0 2842m 13822Mi ds-idrepo-1 868m 13785Mi ds-idrepo-2 556m 13788Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2889m 3829Mi idm-65858d8c4c-gpz8d 2232m 3755Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 416m 516Mi 09:07:02 DEBUG --- stderr --- 09:07:02 DEBUG 09:07:02 INFO 09:07:02 INFO [loop_until]: kubectl --namespace=xlou top node 09:07:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:07:02 INFO [loop_until]: OK (rc = 0) 09:07:02 DEBUG --- stdout --- 09:07:02 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1348Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 103m 0% 6081Mi 10% gke-xlou-cdm-default-pool-f05840a3-976h 98m 0% 6896Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 97m 0% 5279Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 2412m 15% 5063Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 812m 5% 2149Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2927m 18% 5083Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 796m 5% 14331Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2942m 18% 14358Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 936m 5% 14313Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 477m 3% 2035Mi 3% 09:07:02 DEBUG --- stderr --- 09:07:02 DEBUG 09:08:02 INFO 09:08:02 INFO [loop_until]: kubectl --namespace=xlou top pods 09:08:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:08:02 INFO [loop_until]: OK (rc = 0) 09:08:02 DEBUG --- stdout --- 09:08:02 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 43m 4305Mi am-55f77847b7-mbr4x 51m 5799Mi am-55f77847b7-mfzwm 45m 5067Mi ds-cts-0 6m 379Mi ds-cts-1 10m 387Mi ds-cts-2 7m 352Mi ds-idrepo-0 2766m 13823Mi ds-idrepo-1 1051m 13793Mi ds-idrepo-2 937m 13822Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2781m 3837Mi idm-65858d8c4c-gpz8d 2274m 3761Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 435m 516Mi 09:08:02 DEBUG --- stderr --- 09:08:02 DEBUG 09:08:02 INFO 09:08:02 INFO [loop_until]: kubectl --namespace=xlou top node 09:08:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:08:02 INFO [loop_until]: OK (rc = 0) 09:08:02 DEBUG --- stdout --- 09:08:02 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1348Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 102m 0% 6082Mi 10% gke-xlou-cdm-default-pool-f05840a3-976h 108m 0% 6925Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 98m 0% 5442Mi 9% gke-xlou-cdm-default-pool-f05840a3-bf2g 2394m 15% 5070Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 790m 4% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2893m 18% 5089Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 887m 5% 14364Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3052m 19% 14361Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1078Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1235m 7% 14318Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 503m 3% 2035Mi 3% 09:08:02 DEBUG --- stderr --- 09:08:02 DEBUG 09:09:02 INFO 09:09:02 INFO [loop_until]: kubectl --namespace=xlou top pods 09:09:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:09:02 INFO [loop_until]: OK (rc 
= 0) 09:09:02 DEBUG --- stdout --- 09:09:02 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 50m 4906Mi am-55f77847b7-mbr4x 54m 5801Mi am-55f77847b7-mfzwm 46m 5068Mi ds-cts-0 7m 379Mi ds-cts-1 7m 387Mi ds-cts-2 8m 351Mi ds-idrepo-0 2790m 13823Mi ds-idrepo-1 919m 13829Mi ds-idrepo-2 1048m 13794Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2699m 3846Mi idm-65858d8c4c-gpz8d 2237m 3768Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 415m 517Mi 09:09:02 DEBUG --- stderr --- 09:09:02 DEBUG 09:09:02 INFO 09:09:02 INFO [loop_until]: kubectl --namespace=xlou top node 09:09:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:09:03 INFO [loop_until]: OK (rc = 0) 09:09:03 DEBUG --- stdout --- 09:09:03 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1349Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 104m 0% 6083Mi 10% gke-xlou-cdm-default-pool-f05840a3-976h 120m 0% 6931Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 105m 0% 6015Mi 10% gke-xlou-cdm-default-pool-f05840a3-bf2g 2426m 15% 5075Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 806m 5% 2154Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2777m 17% 5096Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1035m 6% 14330Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2970m 18% 14362Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1078Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1181m 7% 14346Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 484m 3% 2034Mi 3% 09:09:03 DEBUG --- stderr --- 09:09:03 DEBUG 09:10:02 INFO 09:10:02 INFO [loop_until]: kubectl --namespace=xlou top pods 09:10:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:10:02 INFO [loop_until]: OK (rc = 0) 09:10:02 DEBUG --- stdout --- 09:10:02 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 114m 5546Mi am-55f77847b7-mbr4x 45m 5802Mi am-55f77847b7-mfzwm 57m 5201Mi ds-cts-0 7m 379Mi ds-cts-1 11m 387Mi ds-cts-2 7m 352Mi ds-idrepo-0 2843m 13823Mi ds-idrepo-1 930m 13828Mi ds-idrepo-2 751m 13822Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2721m 3856Mi idm-65858d8c4c-gpz8d 2223m 3773Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 421m 516Mi 09:10:02 DEBUG --- stderr --- 09:10:02 DEBUG 09:10:03 INFO 09:10:03 INFO [loop_until]: kubectl --namespace=xlou top node 09:10:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:10:03 INFO [loop_until]: OK (rc = 0) 09:10:03 DEBUG --- stdout --- 09:10:03 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1349Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 113m 0% 6310Mi 10% gke-xlou-cdm-default-pool-f05840a3-976h 98m 0% 6932Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 165m 1% 6617Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2450m 15% 5079Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 787m 4% 2156Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2826m 17% 5105Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 770m 4% 14363Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2950m 18% 14362Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1078Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 950m 5% 14348Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 476m 2% 2034Mi 3% 09:10:03 DEBUG --- stderr --- 09:10:03 DEBUG 09:11:02 INFO 09:11:02 INFO 
[loop_until]: kubectl --namespace=xlou top pods 09:11:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:11:02 INFO [loop_until]: OK (rc = 0) 09:11:02 DEBUG --- stdout --- 09:11:02 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 43m 5703Mi am-55f77847b7-mbr4x 50m 5785Mi am-55f77847b7-mfzwm 58m 5746Mi ds-cts-0 8m 380Mi ds-cts-1 18m 385Mi ds-cts-2 7m 352Mi ds-idrepo-0 2817m 13824Mi ds-idrepo-1 977m 13819Mi ds-idrepo-2 704m 13823Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2812m 3863Mi idm-65858d8c4c-gpz8d 2219m 3781Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 422m 517Mi 09:11:02 DEBUG --- stderr --- 09:11:02 DEBUG 09:11:03 INFO 09:11:03 INFO [loop_until]: kubectl --namespace=xlou top node 09:11:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:11:03 INFO [loop_until]: OK (rc = 0) 09:11:03 DEBUG --- stdout --- 09:11:03 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1350Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 119m 0% 6758Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 111m 0% 6913Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 96m 0% 6812Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2371m 14% 5087Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 809m 5% 2158Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2896m 18% 5111Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 77m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1017m 6% 14368Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2937m 18% 14357Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 950m 5% 14349Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 490m 3% 2034Mi 3% 09:11:03 DEBUG --- stderr --- 09:11:03 DEBUG 09:12:02 INFO 09:12:02 INFO [loop_until]: kubectl --namespace=xlou top pods 09:12:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:12:02 INFO [loop_until]: OK (rc = 0) 09:12:02 DEBUG --- stdout --- 09:12:02 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 39m 5704Mi am-55f77847b7-mbr4x 45m 5786Mi am-55f77847b7-mfzwm 45m 5746Mi ds-cts-0 6m 380Mi ds-cts-1 9m 384Mi ds-cts-2 7m 352Mi ds-idrepo-0 3814m 13823Mi ds-idrepo-1 1294m 13812Mi ds-idrepo-2 1346m 13800Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2655m 3868Mi idm-65858d8c4c-gpz8d 2248m 3788Mi lodemon-6cd9c44bd4-vnqvr 1m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 402m 517Mi 09:12:02 DEBUG --- stderr --- 09:12:02 DEBUG 09:12:03 INFO 09:12:03 INFO [loop_until]: kubectl --namespace=xlou top node 09:12:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:12:03 INFO [loop_until]: OK (rc = 0) 09:12:03 DEBUG --- stdout --- 09:12:03 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1349Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 100m 0% 6761Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 104m 0% 6912Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 93m 0% 6812Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2315m 14% 5092Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 786m 4% 2159Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2757m 17% 5118Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1380m 8% 14368Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3935m 24% 14361Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1079Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-x4wx 1394m 8% 14343Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 475m 2% 2033Mi 3% 09:12:03 DEBUG --- stderr --- 09:12:03 DEBUG 09:13:02 INFO 09:13:02 INFO [loop_until]: kubectl --namespace=xlou top pods 09:13:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:13:02 INFO [loop_until]: OK (rc = 0) 09:13:02 DEBUG --- stdout --- 09:13:02 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 39m 5704Mi am-55f77847b7-mbr4x 48m 5826Mi am-55f77847b7-mfzwm 42m 5746Mi ds-cts-0 7m 380Mi ds-cts-1 11m 385Mi ds-cts-2 7m 352Mi ds-idrepo-0 3413m 13763Mi ds-idrepo-1 1125m 13777Mi ds-idrepo-2 876m 13750Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2733m 3877Mi idm-65858d8c4c-gpz8d 2211m 3795Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 431m 517Mi 09:13:02 DEBUG --- stderr --- 09:13:02 DEBUG 09:13:03 INFO 09:13:03 INFO [loop_until]: kubectl --namespace=xlou top node 09:13:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:13:03 INFO [loop_until]: OK (rc = 0) 09:13:03 DEBUG --- stdout --- 09:13:03 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1355Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 100m 0% 6758Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 106m 0% 6951Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 96m 0% 6808Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2384m 15% 5100Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 800m 5% 2160Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2915m 18% 5133Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 930m 5% 14286Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3489m 21% 14292Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1018m 6% 14287Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 489m 3% 2033Mi 3% 09:13:03 DEBUG --- stderr --- 09:13:03 DEBUG 09:14:02 INFO 09:14:02 INFO [loop_until]: kubectl --namespace=xlou top pods 09:14:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:14:02 INFO [loop_until]: OK (rc = 0) 09:14:02 DEBUG --- stdout --- 09:14:02 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 44m 5705Mi am-55f77847b7-mbr4x 39m 5825Mi am-55f77847b7-mfzwm 43m 5746Mi ds-cts-0 6m 380Mi ds-cts-1 8m 385Mi ds-cts-2 6m 352Mi ds-idrepo-0 2947m 13824Mi ds-idrepo-1 778m 13813Mi ds-idrepo-2 973m 13808Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2615m 3884Mi idm-65858d8c4c-gpz8d 2207m 3801Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 404m 518Mi 09:14:02 DEBUG --- stderr --- 09:14:02 DEBUG 09:14:03 INFO 09:14:03 INFO [loop_until]: kubectl --namespace=xlou top node 09:14:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:14:03 INFO [loop_until]: OK (rc = 0) 09:14:03 DEBUG --- stdout --- 09:14:03 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1350Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 105m 0% 6757Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 92m 0% 6953Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 93m 0% 6808Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2366m 14% 5108Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 772m 4% 2156Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2837m 17% 5137Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1049m 6% 14368Mi 
24% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2857m 17% 14364Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1031m 6% 14368Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 480m 3% 2035Mi 3% 09:14:03 DEBUG --- stderr --- 09:14:03 DEBUG 09:15:02 INFO 09:15:02 INFO [loop_until]: kubectl --namespace=xlou top pods 09:15:02 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:15:03 INFO [loop_until]: OK (rc = 0) 09:15:03 DEBUG --- stdout --- 09:15:03 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 48m 5706Mi am-55f77847b7-mbr4x 38m 5825Mi am-55f77847b7-mfzwm 42m 5746Mi ds-cts-0 6m 380Mi ds-cts-1 8m 385Mi ds-cts-2 8m 352Mi ds-idrepo-0 3439m 13821Mi ds-idrepo-1 1468m 13821Mi ds-idrepo-2 1477m 13827Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2750m 3891Mi idm-65858d8c4c-gpz8d 2295m 3806Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 405m 518Mi 09:15:03 DEBUG --- stderr --- 09:15:03 DEBUG 09:15:03 INFO 09:15:03 INFO [loop_until]: kubectl --namespace=xlou top node 09:15:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:15:03 INFO [loop_until]: OK (rc = 0) 09:15:03 DEBUG --- stdout --- 09:15:03 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1351Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 102m 0% 6757Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 94m 0% 6948Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 101m 0% 6813Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2338m 14% 5116Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 811m 5% 2156Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2848m 17% 5144Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1248m 7% 14378Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 54m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3734m 23% 14367Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1385m 8% 14339Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 486m 3% 2035Mi 3% 09:15:03 DEBUG --- stderr --- 09:15:03 DEBUG 09:16:03 INFO 09:16:03 INFO [loop_until]: kubectl --namespace=xlou top pods 09:16:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:16:03 INFO [loop_until]: OK (rc = 0) 09:16:03 DEBUG --- stdout --- 09:16:03 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 43m 5711Mi am-55f77847b7-mbr4x 40m 5825Mi am-55f77847b7-mfzwm 43m 5746Mi ds-cts-0 6m 380Mi ds-cts-1 10m 385Mi ds-cts-2 7m 352Mi ds-idrepo-0 2926m 13827Mi ds-idrepo-1 775m 13829Mi ds-idrepo-2 941m 13843Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2704m 3899Mi idm-65858d8c4c-gpz8d 2166m 3815Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 413m 518Mi 09:16:03 DEBUG --- stderr --- 09:16:03 DEBUG 09:16:03 INFO 09:16:03 INFO [loop_until]: kubectl --namespace=xlou top node 09:16:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:16:03 INFO [loop_until]: OK (rc = 0) 09:16:03 DEBUG --- stdout --- 09:16:03 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1352Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 100m 0% 6761Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 94m 0% 6950Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 97m 0% 6820Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2369m 14% 5124Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 801m 5% 2148Mi 3% 
gke-xlou-cdm-default-pool-f05840a3-tnc9 2808m 17% 5149Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 805m 5% 14382Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2946m 18% 14365Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 875m 5% 14354Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 482m 3% 2033Mi 3% 09:16:03 DEBUG --- stderr --- 09:16:03 DEBUG 09:17:03 INFO 09:17:03 INFO [loop_until]: kubectl --namespace=xlou top pods 09:17:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:17:03 INFO [loop_until]: OK (rc = 0) 09:17:03 DEBUG --- stdout --- 09:17:03 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 41m 5711Mi am-55f77847b7-mbr4x 42m 5825Mi am-55f77847b7-mfzwm 49m 5746Mi ds-cts-0 8m 380Mi ds-cts-1 7m 385Mi ds-cts-2 8m 352Mi ds-idrepo-0 3092m 13824Mi ds-idrepo-1 1043m 13837Mi ds-idrepo-2 890m 13835Mi end-user-ui-6845bc78c7-jrqhg 1m 4Mi idm-65858d8c4c-5vh78 2798m 3906Mi idm-65858d8c4c-gpz8d 2278m 3821Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 406m 518Mi 09:17:03 DEBUG --- stderr --- 09:17:03 DEBUG 09:17:03 INFO 09:17:03 INFO [loop_until]: kubectl --namespace=xlou top node 09:17:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:17:04 INFO [loop_until]: OK (rc = 0) 09:17:04 DEBUG --- stdout --- 09:17:04 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1349Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 104m 0% 6760Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 99m 0% 6950Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 96m 0% 6820Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2399m 15% 5129Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 822m 5% 2159Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2930m 18% 5157Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 958m 6% 14380Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3167m 19% 14365Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1040m 6% 14368Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 489m 3% 2034Mi 3% 09:17:04 DEBUG --- stderr --- 09:17:04 DEBUG 09:18:03 INFO 09:18:03 INFO [loop_until]: kubectl --namespace=xlou top pods 09:18:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:18:03 INFO [loop_until]: OK (rc = 0) 09:18:03 DEBUG --- stdout --- 09:18:03 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 43m 5714Mi am-55f77847b7-mbr4x 40m 5825Mi am-55f77847b7-mfzwm 44m 5752Mi ds-cts-0 6m 380Mi ds-cts-1 8m 387Mi ds-cts-2 8m 352Mi ds-idrepo-0 3765m 13847Mi ds-idrepo-1 1531m 13826Mi ds-idrepo-2 1937m 13830Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2680m 3913Mi idm-65858d8c4c-gpz8d 2275m 3827Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 418m 519Mi 09:18:03 DEBUG --- stderr --- 09:18:03 DEBUG 09:18:04 INFO 09:18:04 INFO [loop_until]: kubectl --namespace=xlou top node 09:18:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:18:04 INFO [loop_until]: OK (rc = 0) 09:18:04 DEBUG --- stdout --- 09:18:04 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1351Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 106m 0% 6774Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 95m 0% 6951Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-9p4b 92m 0% 6820Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2391m 15% 5135Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 797m 5% 2157Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2864m 18% 5164Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 68m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1444m 9% 14367Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 62m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3727m 23% 14334Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 66m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1378m 8% 14347Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 493m 3% 2034Mi 3% 09:18:04 DEBUG --- stderr --- 09:18:04 DEBUG 09:19:03 INFO 09:19:03 INFO [loop_until]: kubectl --namespace=xlou top pods 09:19:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:19:03 INFO [loop_until]: OK (rc = 0) 09:19:03 DEBUG --- stdout --- 09:19:03 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 42m 5719Mi am-55f77847b7-mbr4x 41m 5825Mi am-55f77847b7-mfzwm 43m 5756Mi ds-cts-0 6m 380Mi ds-cts-1 7m 387Mi ds-cts-2 8m 354Mi ds-idrepo-0 2994m 13837Mi ds-idrepo-1 857m 13838Mi ds-idrepo-2 770m 13845Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2649m 3921Mi idm-65858d8c4c-gpz8d 2135m 3834Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 410m 519Mi 09:19:03 DEBUG --- stderr --- 09:19:03 DEBUG 09:19:04 INFO 09:19:04 INFO [loop_until]: kubectl --namespace=xlou top node 09:19:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:19:04 INFO [loop_until]: OK (rc = 0) 09:19:04 DEBUG --- stdout --- 09:19:04 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1349Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 99m 0% 6770Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 97m 0% 6951Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 100m 0% 6827Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2340m 14% 5142Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 787m 4% 2159Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2802m 17% 5171Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 820m 5% 14400Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3024m 19% 14349Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 905m 5% 14368Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 476m 2% 2038Mi 3% 09:19:04 DEBUG --- stderr --- 09:19:04 DEBUG 09:20:03 INFO 09:20:03 INFO [loop_until]: kubectl --namespace=xlou top pods 09:20:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:20:03 INFO [loop_until]: OK (rc = 0) 09:20:03 DEBUG --- stdout --- 09:20:03 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 43m 5719Mi am-55f77847b7-mbr4x 41m 5825Mi am-55f77847b7-mfzwm 40m 5756Mi ds-cts-0 8m 380Mi ds-cts-1 8m 387Mi ds-cts-2 11m 352Mi ds-idrepo-0 3315m 13806Mi ds-idrepo-1 1275m 13818Mi ds-idrepo-2 1191m 13824Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2648m 3928Mi idm-65858d8c4c-gpz8d 2145m 3841Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 392m 519Mi 09:20:03 DEBUG --- stderr --- 09:20:03 DEBUG 09:20:04 INFO 09:20:04 INFO [loop_until]: kubectl --namespace=xlou top node 09:20:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:20:04 INFO [loop_until]: OK (rc = 0) 09:20:04 DEBUG --- stdout --- 09:20:04 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% 
gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1347Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 95m 0% 6766Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 98m 0% 6950Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 94m 0% 6828Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2283m 14% 5145Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 798m 5% 2162Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2832m 17% 5181Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1387m 8% 14364Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3381m 21% 14345Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1305m 8% 14346Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 477m 3% 2048Mi 3% 09:20:04 DEBUG --- stderr --- 09:20:04 DEBUG 09:21:03 INFO 09:21:03 INFO [loop_until]: kubectl --namespace=xlou top pods 09:21:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:21:03 INFO [loop_until]: OK (rc = 0) 09:21:03 DEBUG --- stdout --- 09:21:03 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 42m 5719Mi am-55f77847b7-mbr4x 39m 5829Mi am-55f77847b7-mfzwm 41m 5756Mi ds-cts-0 9m 380Mi ds-cts-1 7m 387Mi ds-cts-2 7m 352Mi ds-idrepo-0 2960m 13823Mi ds-idrepo-1 1006m 13842Mi ds-idrepo-2 673m 13835Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2655m 3935Mi idm-65858d8c4c-gpz8d 2275m 3848Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 409m 519Mi 09:21:03 DEBUG --- stderr --- 09:21:03 DEBUG 09:21:04 INFO 09:21:04 INFO [loop_until]: kubectl --namespace=xlou top node 09:21:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:21:04 INFO [loop_until]: OK (rc = 0) 09:21:04 DEBUG --- stdout --- 09:21:04 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1350Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 93m 0% 6771Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 94m 0% 6957Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 98m 0% 6824Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2318m 14% 5157Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 808m 5% 2156Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2834m 17% 5187Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 901m 5% 14386Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3160m 19% 14358Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 882m 5% 14370Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 468m 2% 2036Mi 3% 09:21:04 DEBUG --- stderr --- 09:21:04 DEBUG 09:22:03 INFO 09:22:03 INFO [loop_until]: kubectl --namespace=xlou top pods 09:22:03 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:22:03 INFO [loop_until]: OK (rc = 0) 09:22:03 DEBUG --- stdout --- 09:22:03 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 41m 5720Mi am-55f77847b7-mbr4x 39m 5829Mi am-55f77847b7-mfzwm 40m 5756Mi ds-cts-0 7m 380Mi ds-cts-1 7m 387Mi ds-cts-2 7m 352Mi ds-idrepo-0 3140m 13815Mi ds-idrepo-1 1077m 13839Mi ds-idrepo-2 861m 13844Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2625m 3941Mi idm-65858d8c4c-gpz8d 2219m 3854Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 409m 519Mi 09:22:03 DEBUG --- stderr --- 09:22:03 DEBUG 09:22:04 INFO 09:22:04 INFO [loop_until]: kubectl --namespace=xlou top node 09:22:04 INFO [loop_until]: (max_time=180, interval=5, 
expected_rc=[0] 09:22:04 INFO [loop_until]: OK (rc = 0) 09:22:04 DEBUG --- stdout --- 09:22:04 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1350Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 98m 0% 6770Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 95m 0% 6956Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 96m 0% 6826Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2371m 14% 5164Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 815m 5% 2161Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2892m 18% 5194Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 826m 5% 14387Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3021m 19% 14365Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1003m 6% 14364Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 482m 3% 2037Mi 3% 09:22:04 DEBUG --- stderr --- 09:22:04 DEBUG 09:23:04 INFO 09:23:04 INFO [loop_until]: kubectl --namespace=xlou top pods 09:23:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:23:04 INFO [loop_until]: OK (rc = 0) 09:23:04 DEBUG --- stdout --- 09:23:04 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 39m 5719Mi am-55f77847b7-mbr4x 39m 5829Mi am-55f77847b7-mfzwm 41m 5756Mi ds-cts-0 11m 380Mi ds-cts-1 7m 387Mi ds-cts-2 6m 352Mi ds-idrepo-0 4373m 13851Mi ds-idrepo-1 1561m 13815Mi ds-idrepo-2 1217m 13822Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2750m 3951Mi idm-65858d8c4c-gpz8d 2135m 3861Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 405m 520Mi 09:23:04 DEBUG --- stderr --- 09:23:04 DEBUG 09:23:04 INFO 09:23:04 INFO [loop_until]: kubectl --namespace=xlou top node 09:23:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:23:04 INFO [loop_until]: OK (rc = 0) 09:23:04 DEBUG --- stdout --- 09:23:04 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1346Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 96m 0% 6770Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 94m 0% 6955Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 95m 0% 6824Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2318m 14% 5172Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 805m 5% 2157Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2922m 18% 5200Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1614m 10% 14389Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 64m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4008m 25% 14360Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1725m 10% 14372Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 484m 3% 2039Mi 3% 09:23:04 DEBUG --- stderr --- 09:23:04 DEBUG 09:24:04 INFO 09:24:04 INFO [loop_until]: kubectl --namespace=xlou top pods 09:24:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:24:04 INFO [loop_until]: OK (rc = 0) 09:24:04 DEBUG --- stdout --- 09:24:04 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 42m 5720Mi am-55f77847b7-mbr4x 39m 5829Mi am-55f77847b7-mfzwm 42m 5756Mi ds-cts-0 6m 380Mi ds-cts-1 8m 387Mi ds-cts-2 9m 353Mi ds-idrepo-0 3472m 13828Mi ds-idrepo-1 1030m 13826Mi ds-idrepo-2 708m 13833Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2737m 3958Mi idm-65858d8c4c-gpz8d 2156m 3869Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 397m 520Mi 09:24:04 DEBUG --- stderr 
--- 09:24:04 DEBUG 09:24:04 INFO 09:24:04 INFO [loop_until]: kubectl --namespace=xlou top node 09:24:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:24:04 INFO [loop_until]: OK (rc = 0) 09:24:04 DEBUG --- stdout --- 09:24:04 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1354Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 99m 0% 6769Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 95m 0% 6963Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 100m 0% 6827Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2357m 14% 5179Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 822m 5% 2156Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2930m 18% 5213Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 761m 4% 14379Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3338m 21% 14379Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1079Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1052m 6% 14372Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 469m 2% 2036Mi 3% 09:24:04 DEBUG --- stderr --- 09:24:04 DEBUG 09:25:04 INFO 09:25:04 INFO [loop_until]: kubectl --namespace=xlou top pods 09:25:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:25:04 INFO [loop_until]: OK (rc = 0) 09:25:04 DEBUG --- stdout --- 09:25:04 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 42m 5720Mi am-55f77847b7-mbr4x 40m 5829Mi am-55f77847b7-mfzwm 49m 5756Mi ds-cts-0 6m 380Mi ds-cts-1 7m 388Mi ds-cts-2 7m 353Mi ds-idrepo-0 2960m 13822Mi ds-idrepo-1 1357m 13821Mi ds-idrepo-2 1450m 13798Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2685m 3965Mi idm-65858d8c4c-gpz8d 2184m 3876Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 400m 520Mi 09:25:04 DEBUG --- stderr --- 09:25:04 DEBUG 09:25:04 INFO 09:25:04 INFO [loop_until]: kubectl --namespace=xlou top node 09:25:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:25:05 INFO [loop_until]: OK (rc = 0) 09:25:05 DEBUG --- stdout --- 09:25:05 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1350Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 102m 0% 6768Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 96m 0% 6968Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 96m 0% 6825Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2339m 14% 5186Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 798m 5% 2155Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2725m 17% 5215Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1275m 8% 14368Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3185m 20% 14375Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1395m 8% 14359Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 465m 2% 2036Mi 3% 09:25:05 DEBUG --- stderr --- 09:25:05 DEBUG 09:26:04 INFO 09:26:04 INFO [loop_until]: kubectl --namespace=xlou top pods 09:26:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:26:04 INFO [loop_until]: OK (rc = 0) 09:26:04 DEBUG --- stdout --- 09:26:04 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 46m 5720Mi am-55f77847b7-mbr4x 41m 5829Mi am-55f77847b7-mfzwm 44m 5756Mi ds-cts-0 7m 380Mi ds-cts-1 8m 387Mi ds-cts-2 10m 353Mi ds-idrepo-0 3397m 13806Mi ds-idrepo-1 664m 13821Mi ds-idrepo-2 794m 13854Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2882m 3976Mi 
idm-65858d8c4c-gpz8d 2273m 3887Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 413m 520Mi 09:26:04 DEBUG --- stderr --- 09:26:04 DEBUG 09:26:05 INFO 09:26:05 INFO [loop_until]: kubectl --namespace=xlou top node 09:26:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:26:05 INFO [loop_until]: OK (rc = 0) 09:26:05 DEBUG --- stdout --- 09:26:05 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1347Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 101m 0% 6768Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 95m 0% 6956Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 100m 0% 6828Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2346m 14% 5200Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 812m 5% 2148Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 3000m 18% 5230Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 723m 4% 14395Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3664m 23% 14359Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 957m 6% 14392Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 474m 2% 2036Mi 3% 09:26:05 DEBUG --- stderr --- 09:26:05 DEBUG 09:27:04 INFO 09:27:04 INFO [loop_until]: kubectl --namespace=xlou top pods 09:27:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:27:04 INFO [loop_until]: OK (rc = 0) 09:27:04 DEBUG --- stdout --- 09:27:04 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 39m 5723Mi am-55f77847b7-mbr4x 40m 5829Mi am-55f77847b7-mfzwm 44m 5759Mi ds-cts-0 8m 380Mi ds-cts-1 7m 387Mi ds-cts-2 8m 353Mi ds-idrepo-0 2851m 13850Mi ds-idrepo-1 795m 13835Mi ds-idrepo-2 698m 13854Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2681m 3986Mi idm-65858d8c4c-gpz8d 2252m 3895Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 407m 520Mi 09:27:04 DEBUG --- stderr --- 09:27:04 DEBUG 09:27:05 INFO 09:27:05 INFO [loop_until]: kubectl --namespace=xlou top node 09:27:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:27:05 INFO [loop_until]: OK (rc = 0) 09:27:05 DEBUG --- stdout --- 09:27:05 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1351Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 98m 0% 6775Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 97m 0% 6954Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 92m 0% 6831Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2254m 14% 5208Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 808m 5% 2158Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2884m 18% 5238Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 723m 4% 14406Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2926m 18% 14385Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 891m 5% 14372Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 474m 2% 2035Mi 3% 09:27:05 DEBUG --- stderr --- 09:27:05 DEBUG 09:28:04 INFO 09:28:04 INFO [loop_until]: kubectl --namespace=xlou top pods 09:28:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:28:04 INFO [loop_until]: OK (rc = 0) 09:28:04 DEBUG --- stdout --- 09:28:04 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 40m 5723Mi am-55f77847b7-mbr4x 43m 5829Mi am-55f77847b7-mfzwm 41m 5759Mi ds-cts-0 7m 381Mi ds-cts-1 7m 387Mi ds-cts-2 8m 353Mi 
ds-idrepo-0 3029m 13837Mi ds-idrepo-1 772m 13828Mi ds-idrepo-2 628m 13833Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2742m 3992Mi idm-65858d8c4c-gpz8d 2295m 3902Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 415m 521Mi 09:28:04 DEBUG --- stderr --- 09:28:04 DEBUG 09:28:05 INFO 09:28:05 INFO [loop_until]: kubectl --namespace=xlou top node 09:28:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:28:05 INFO [loop_until]: OK (rc = 0) 09:28:05 DEBUG --- stdout --- 09:28:05 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1353Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 97m 0% 6775Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 99m 0% 6958Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 95m 0% 6829Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2425m 15% 5210Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 817m 5% 2156Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2918m 18% 5248Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 714m 4% 14388Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3192m 20% 14392Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 844m 5% 14368Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 478m 3% 2039Mi 3% 09:28:05 DEBUG --- stderr --- 09:28:05 DEBUG 09:29:04 INFO 09:29:04 INFO [loop_until]: kubectl --namespace=xlou top pods 09:29:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:29:04 INFO [loop_until]: OK (rc = 0) 09:29:04 DEBUG --- stdout --- 09:29:04 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 42m 5723Mi am-55f77847b7-mbr4x 40m 5831Mi am-55f77847b7-mfzwm 42m 5759Mi ds-cts-0 6m 380Mi ds-cts-1 7m 387Mi ds-cts-2 7m 352Mi ds-idrepo-0 3104m 13822Mi ds-idrepo-1 893m 13823Mi ds-idrepo-2 1294m 13796Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2750m 4002Mi idm-65858d8c4c-gpz8d 2295m 3908Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 407m 521Mi 09:29:04 DEBUG --- stderr --- 09:29:04 DEBUG 09:29:05 INFO 09:29:05 INFO [loop_until]: kubectl --namespace=xlou top node 09:29:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:29:05 INFO [loop_until]: OK (rc = 0) 09:29:05 DEBUG --- stdout --- 09:29:05 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1356Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 98m 0% 6773Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 92m 0% 6962Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 91m 0% 6829Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2401m 15% 5216Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 792m 4% 2146Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2879m 18% 5256Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1280m 8% 14361Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2976m 18% 14392Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 68m 0% 1092Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 910m 5% 14367Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 469m 2% 2037Mi 3% 09:29:05 DEBUG --- stderr --- 09:29:05 DEBUG 09:30:04 INFO 09:30:04 INFO [loop_until]: kubectl --namespace=xlou top pods 09:30:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:30:04 INFO [loop_until]: OK (rc = 0) 09:30:04 DEBUG --- stdout --- 09:30:04 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi 
am-55f77847b7-2vpdz 30m 5723Mi am-55f77847b7-mbr4x 29m 5831Mi am-55f77847b7-mfzwm 23m 5759Mi ds-cts-0 11m 381Mi ds-cts-1 8m 387Mi ds-cts-2 8m 354Mi ds-idrepo-0 2829m 13853Mi ds-idrepo-1 792m 13820Mi ds-idrepo-2 468m 13841Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 1869m 4007Mi idm-65858d8c4c-gpz8d 1463m 3913Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 243m 520Mi 09:30:04 DEBUG --- stderr --- 09:30:04 DEBUG 09:30:05 INFO 09:30:05 INFO [loop_until]: kubectl --namespace=xlou top node 09:30:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:30:05 INFO [loop_until]: OK (rc = 0) 09:30:05 DEBUG --- stdout --- 09:30:05 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1352Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 80m 0% 6774Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 76m 0% 6957Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 85m 0% 6827Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1035m 6% 5223Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 454m 2% 2149Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1291m 8% 5261Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 307m 1% 14386Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 62m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1911m 12% 14372Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 656m 4% 14368Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 266m 1% 2039Mi 3% 09:30:05 DEBUG --- stderr --- 09:30:05 DEBUG 09:31:04 INFO 09:31:04 INFO [loop_until]: kubectl --namespace=xlou top pods 09:31:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:31:04 INFO [loop_until]: OK (rc = 0) 09:31:04 DEBUG --- stdout --- 09:31:04 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 7m 5723Mi am-55f77847b7-mbr4x 6m 5831Mi am-55f77847b7-mfzwm 7m 5759Mi ds-cts-0 18m 379Mi ds-cts-1 8m 387Mi ds-cts-2 5m 354Mi ds-idrepo-0 11m 13792Mi ds-idrepo-1 14m 13771Mi ds-idrepo-2 9m 13841Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 9m 4007Mi idm-65858d8c4c-gpz8d 7m 3913Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 1m 104Mi 09:31:04 DEBUG --- stderr --- 09:31:04 DEBUG 09:31:05 INFO 09:31:05 INFO [loop_until]: kubectl --namespace=xlou top node 09:31:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:31:05 INFO [loop_until]: OK (rc = 0) 09:31:05 DEBUG --- stdout --- 09:31:05 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1363Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 62m 0% 6773Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 61m 0% 6956Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 61m 0% 6830Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 71m 0% 5224Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 121m 0% 2151Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 72m 0% 5262Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 60m 0% 14395Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 65m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 64m 0% 14349Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 67m 0% 14317Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 66m 0% 1632Mi 2% 09:31:05 DEBUG --- stderr --- 09:31:05 DEBUG 127.0.0.1 - - [12/Aug/2023 09:31:48] "GET /monitoring/average?start_time=23-08-12_08:01:17&stop_time=23-08-12_08:29:47 HTTP/1.1" 200 - 09:32:04 INFO 09:32:04 INFO [loop_until]: 
kubectl --namespace=xlou top pods 09:32:04 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:32:04 INFO [loop_until]: OK (rc = 0) 09:32:04 DEBUG --- stdout --- 09:32:04 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 7m 5723Mi am-55f77847b7-mbr4x 6m 5831Mi am-55f77847b7-mfzwm 6m 5759Mi ds-cts-0 6m 380Mi ds-cts-1 7m 387Mi ds-cts-2 5m 354Mi ds-idrepo-0 10m 13791Mi ds-idrepo-1 14m 13772Mi ds-idrepo-2 11m 13842Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 8m 4007Mi idm-65858d8c4c-gpz8d 11m 3913Mi lodemon-6cd9c44bd4-vnqvr 2m 65Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 2m 104Mi 09:32:04 DEBUG --- stderr --- 09:32:04 DEBUG 09:32:05 INFO 09:32:05 INFO [loop_until]: kubectl --namespace=xlou top node 09:32:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:32:05 INFO [loop_until]: OK (rc = 0) 09:32:05 DEBUG --- stdout --- 09:32:05 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 80m 0% 1350Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 59m 0% 6773Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 60m 0% 6959Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 61m 0% 6830Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 74m 0% 5225Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 126m 0% 2162Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 5260Mi 8% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 61m 0% 14399Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 69m 0% 14349Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 63m 0% 14318Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 598m 3% 1985Mi 3% 09:32:05 DEBUG --- stderr --- 09:32:05 DEBUG 09:33:05 INFO 09:33:05 INFO [loop_until]: kubectl --namespace=xlou top pods 09:33:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:33:05 INFO [loop_until]: OK (rc = 0) 09:33:05 DEBUG --- stdout --- 09:33:05 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 37m 5723Mi am-55f77847b7-mbr4x 35m 5831Mi am-55f77847b7-mfzwm 36m 5759Mi ds-cts-0 6m 380Mi ds-cts-1 7m 387Mi ds-cts-2 9m 354Mi ds-idrepo-0 2258m 13826Mi ds-idrepo-1 523m 13787Mi ds-idrepo-2 516m 13839Mi end-user-ui-6845bc78c7-jrqhg 1m 4Mi idm-65858d8c4c-5vh78 2326m 4031Mi idm-65858d8c4c-gpz8d 1938m 3932Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 754m 509Mi 09:33:05 DEBUG --- stderr --- 09:33:05 DEBUG 09:33:05 INFO 09:33:05 INFO [loop_until]: kubectl --namespace=xlou top node 09:33:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:33:05 INFO [loop_until]: OK (rc = 0) 09:33:05 DEBUG --- stdout --- 09:33:05 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1349Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 92m 0% 6772Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 94m 0% 6957Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 84m 0% 6830Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2011m 12% 5243Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 705m 4% 2161Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2474m 15% 5278Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 629m 3% 14392Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1103Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2681m 16% 14380Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 671m 4% 14328Mi 24% 
gke-xlou-cdm-frontend-a8771548-k40m 838m 5% 2023Mi 3% 09:33:05 DEBUG --- stderr --- 09:33:05 DEBUG 09:34:05 INFO 09:34:05 INFO [loop_until]: kubectl --namespace=xlou top pods 09:34:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:34:05 INFO [loop_until]: OK (rc = 0) 09:34:05 DEBUG --- stdout --- 09:34:05 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 42m 5723Mi am-55f77847b7-mbr4x 40m 5831Mi am-55f77847b7-mfzwm 44m 5759Mi ds-cts-0 6m 381Mi ds-cts-1 7m 387Mi ds-cts-2 6m 354Mi ds-idrepo-0 2788m 13847Mi ds-idrepo-1 656m 13819Mi ds-idrepo-2 685m 13853Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2489m 4041Mi idm-65858d8c4c-gpz8d 2071m 3941Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 410m 519Mi 09:34:05 DEBUG --- stderr --- 09:34:05 DEBUG 09:34:05 INFO 09:34:05 INFO [loop_until]: kubectl --namespace=xlou top node 09:34:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:34:06 INFO [loop_until]: OK (rc = 0) 09:34:06 DEBUG --- stdout --- 09:34:06 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1348Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 98m 0% 6773Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 98m 0% 6961Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 96m 0% 6833Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2236m 14% 5251Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 816m 5% 2160Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2670m 16% 5292Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 645m 4% 14405Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1102Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2752m 17% 14408Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 727m 4% 14358Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 467m 2% 2035Mi 3% 09:34:06 DEBUG --- stderr --- 09:34:06 DEBUG 09:35:05 INFO 09:35:05 INFO [loop_until]: kubectl --namespace=xlou top pods 09:35:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:35:05 INFO [loop_until]: OK (rc = 0) 09:35:05 DEBUG --- stdout --- 09:35:05 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 43m 5723Mi am-55f77847b7-mbr4x 42m 5831Mi am-55f77847b7-mfzwm 42m 5759Mi ds-cts-0 9m 381Mi ds-cts-1 7m 387Mi ds-cts-2 7m 354Mi ds-idrepo-0 3412m 13827Mi ds-idrepo-1 1269m 13799Mi ds-idrepo-2 1062m 13827Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2548m 4053Mi idm-65858d8c4c-gpz8d 1993m 3948Mi lodemon-6cd9c44bd4-vnqvr 1m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 409m 521Mi 09:35:05 DEBUG --- stderr --- 09:35:05 DEBUG 09:35:06 INFO 09:35:06 INFO [loop_until]: kubectl --namespace=xlou top node 09:35:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:35:06 INFO [loop_until]: OK (rc = 0) 09:35:06 DEBUG --- stdout --- 09:35:06 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1350Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 97m 0% 6775Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 96m 0% 6959Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 96m 0% 6830Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2169m 13% 5257Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 808m 5% 2160Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2720m 17% 5306Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1002m 6% 14389Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1106Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-b374 3411m 21% 14393Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1260m 7% 14348Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 485m 3% 2035Mi 3% 09:35:06 DEBUG --- stderr --- 09:35:06 DEBUG 09:36:05 INFO 09:36:05 INFO [loop_until]: kubectl --namespace=xlou top pods 09:36:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:36:05 INFO [loop_until]: OK (rc = 0) 09:36:05 DEBUG --- stdout --- 09:36:05 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 41m 5723Mi am-55f77847b7-mbr4x 41m 5831Mi am-55f77847b7-mfzwm 44m 5759Mi ds-cts-0 6m 380Mi ds-cts-1 9m 387Mi ds-cts-2 6m 354Mi ds-idrepo-0 2753m 13849Mi ds-idrepo-1 734m 13827Mi ds-idrepo-2 609m 13851Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2486m 4058Mi idm-65858d8c4c-gpz8d 2112m 3954Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 408m 522Mi 09:36:05 DEBUG --- stderr --- 09:36:05 DEBUG 09:36:06 INFO 09:36:06 INFO [loop_until]: kubectl --namespace=xlou top node 09:36:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:36:06 INFO [loop_until]: OK (rc = 0) 09:36:06 DEBUG --- stdout --- 09:36:06 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1348Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 96m 0% 6774Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 98m 0% 6960Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 95m 0% 6826Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2152m 13% 5262Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 808m 5% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2606m 16% 5312Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 614m 3% 14408Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2727m 17% 14415Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 689m 4% 14377Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 468m 2% 2037Mi 3% 09:36:06 DEBUG --- stderr --- 09:36:06 DEBUG 09:37:05 INFO 09:37:05 INFO [loop_until]: kubectl --namespace=xlou top pods 09:37:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:37:05 INFO [loop_until]: OK (rc = 0) 09:37:05 DEBUG --- stdout --- 09:37:05 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 41m 5723Mi am-55f77847b7-mbr4x 40m 5831Mi am-55f77847b7-mfzwm 44m 5759Mi ds-cts-0 7m 382Mi ds-cts-1 6m 388Mi ds-cts-2 7m 354Mi ds-idrepo-0 2726m 13848Mi ds-idrepo-1 624m 13844Mi ds-idrepo-2 526m 13851Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2446m 4066Mi idm-65858d8c4c-gpz8d 2031m 3961Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 400m 525Mi 09:37:05 DEBUG --- stderr --- 09:37:05 DEBUG 09:37:06 INFO 09:37:06 INFO [loop_until]: kubectl --namespace=xlou top node 09:37:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:37:06 INFO [loop_until]: OK (rc = 0) 09:37:06 DEBUG --- stdout --- 09:37:06 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1355Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 98m 0% 6772Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 95m 0% 6961Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 96m 0% 6829Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2153m 13% 5271Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 804m 5% 2160Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2629m 16% 5319Mi 9% 
gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 620m 3% 14415Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2674m 16% 14419Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 708m 4% 14389Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 446m 2% 2041Mi 3% 09:37:06 DEBUG --- stderr --- 09:37:06 DEBUG 09:38:05 INFO 09:38:05 INFO [loop_until]: kubectl --namespace=xlou top pods 09:38:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:38:05 INFO [loop_until]: OK (rc = 0) 09:38:05 DEBUG --- stdout --- 09:38:05 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 37m 5724Mi am-55f77847b7-mbr4x 41m 5831Mi am-55f77847b7-mfzwm 43m 5761Mi ds-cts-0 7m 381Mi ds-cts-1 6m 388Mi ds-cts-2 7m 354Mi ds-idrepo-0 2654m 13848Mi ds-idrepo-1 654m 13836Mi ds-idrepo-2 668m 13848Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2523m 4072Mi idm-65858d8c4c-gpz8d 1942m 3965Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 401m 526Mi 09:38:05 DEBUG --- stderr --- 09:38:05 DEBUG 09:38:06 INFO 09:38:06 INFO [loop_until]: kubectl --namespace=xlou top node 09:38:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:38:06 INFO [loop_until]: OK (rc = 0) 09:38:06 DEBUG --- stdout --- 09:38:06 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1354Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 97m 0% 6774Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 96m 0% 6954Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 90m 0% 6829Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2102m 13% 5276Mi 8% gke-xlou-cdm-default-pool-f05840a3-h81k 798m 5% 2158Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2666m 16% 5328Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 646m 4% 14406Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2646m 16% 14419Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 703m 4% 14385Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 475m 2% 2045Mi 3% 09:38:06 DEBUG --- stderr --- 09:38:06 DEBUG 09:39:05 INFO 09:39:05 INFO [loop_until]: kubectl --namespace=xlou top pods 09:39:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:39:05 INFO [loop_until]: OK (rc = 0) 09:39:05 DEBUG --- stdout --- 09:39:05 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 38m 5724Mi am-55f77847b7-mbr4x 43m 5831Mi am-55f77847b7-mfzwm 40m 5761Mi ds-cts-0 6m 380Mi ds-cts-1 6m 388Mi ds-cts-2 7m 355Mi ds-idrepo-0 2908m 13585Mi ds-idrepo-1 1093m 13554Mi ds-idrepo-2 881m 13818Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2483m 4081Mi idm-65858d8c4c-gpz8d 2043m 3974Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 400m 528Mi 09:39:05 DEBUG --- stderr --- 09:39:05 DEBUG 09:39:06 INFO 09:39:06 INFO [loop_until]: kubectl --namespace=xlou top node 09:39:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:39:06 INFO [loop_until]: OK (rc = 0) 09:39:06 DEBUG --- stdout --- 09:39:06 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1356Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 95m 0% 6774Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 100m 0% 6957Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 95m 0% 6832Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-bf2g 2135m 13% 5285Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 805m 5% 2158Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2624m 16% 5333Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 842m 5% 14395Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3133m 19% 14161Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1501m 9% 14144Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 486m 3% 2044Mi 3% 09:39:06 DEBUG --- stderr --- 09:39:06 DEBUG 09:40:05 INFO 09:40:05 INFO [loop_until]: kubectl --namespace=xlou top pods 09:40:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:40:05 INFO [loop_until]: OK (rc = 0) 09:40:05 DEBUG --- stdout --- 09:40:05 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 40m 5724Mi am-55f77847b7-mbr4x 39m 5832Mi am-55f77847b7-mfzwm 41m 5761Mi ds-cts-0 8m 381Mi ds-cts-1 7m 388Mi ds-cts-2 6m 355Mi ds-idrepo-0 2624m 13663Mi ds-idrepo-1 521m 13629Mi ds-idrepo-2 1397m 13819Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2577m 4088Mi idm-65858d8c4c-gpz8d 1966m 3980Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 403m 530Mi 09:40:05 DEBUG --- stderr --- 09:40:05 DEBUG 09:40:06 INFO 09:40:06 INFO [loop_until]: kubectl --namespace=xlou top node 09:40:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:40:06 INFO [loop_until]: OK (rc = 0) 09:40:06 DEBUG --- stdout --- 09:40:06 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1351Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 96m 0% 6775Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 89m 0% 6960Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 95m 0% 6827Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2105m 13% 5291Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 800m 5% 2158Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2677m 16% 5343Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1178m 7% 14368Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2899m 18% 14246Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 730m 4% 14221Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 454m 2% 2047Mi 3% 09:40:06 DEBUG --- stderr --- 09:40:06 DEBUG 09:41:05 INFO 09:41:05 INFO [loop_until]: kubectl --namespace=xlou top pods 09:41:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:41:05 INFO [loop_until]: OK (rc = 0) 09:41:05 DEBUG --- stdout --- 09:41:05 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 41m 5724Mi am-55f77847b7-mbr4x 40m 5832Mi am-55f77847b7-mfzwm 42m 5761Mi ds-cts-0 17m 382Mi ds-cts-1 7m 387Mi ds-cts-2 8m 355Mi ds-idrepo-0 2854m 13721Mi ds-idrepo-1 510m 13663Mi ds-idrepo-2 428m 13833Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2564m 4098Mi idm-65858d8c4c-gpz8d 2012m 3987Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 402m 531Mi 09:41:05 DEBUG --- stderr --- 09:41:05 DEBUG 09:41:06 INFO 09:41:06 INFO [loop_until]: kubectl --namespace=xlou top node 09:41:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:41:06 INFO [loop_until]: OK (rc = 0) 09:41:06 DEBUG --- stdout --- 09:41:06 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1351Mi 2% 
gke-xlou-cdm-default-pool-f05840a3-5pbc 97m 0% 6773Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 97m 0% 6959Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 97m 0% 6826Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2160m 13% 5301Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 822m 5% 2155Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2694m 16% 5349Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 624m 3% 14420Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 62m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2636m 16% 14307Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 744m 4% 14253Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 478m 3% 2051Mi 3% 09:41:06 DEBUG --- stderr --- 09:41:06 DEBUG 09:42:05 INFO 09:42:05 INFO [loop_until]: kubectl --namespace=xlou top pods 09:42:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:42:05 INFO [loop_until]: OK (rc = 0) 09:42:05 DEBUG --- stdout --- 09:42:05 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 41m 5724Mi am-55f77847b7-mbr4x 39m 5832Mi am-55f77847b7-mfzwm 45m 5761Mi ds-cts-0 7m 383Mi ds-cts-1 7m 388Mi ds-cts-2 6m 355Mi ds-idrepo-0 2509m 13764Mi ds-idrepo-1 651m 13715Mi ds-idrepo-2 668m 13838Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2478m 4105Mi idm-65858d8c4c-gpz8d 2027m 3995Mi lodemon-6cd9c44bd4-vnqvr 1m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 402m 532Mi 09:42:05 DEBUG --- stderr --- 09:42:05 DEBUG 09:42:06 INFO 09:42:06 INFO [loop_until]: kubectl --namespace=xlou top node 09:42:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:42:07 INFO [loop_until]: OK (rc = 0) 09:42:07 DEBUG --- stdout --- 09:42:07 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1354Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 100m 0% 6774Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 96m 0% 6955Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 94m 0% 6827Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2176m 13% 5304Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 806m 5% 2159Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2509m 15% 5356Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 825m 5% 14420Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2705m 17% 14354Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 669m 4% 14275Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 465m 2% 2049Mi 3% 09:42:07 DEBUG --- stderr --- 09:42:07 DEBUG 09:43:05 INFO 09:43:05 INFO [loop_until]: kubectl --namespace=xlou top pods 09:43:05 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:43:06 INFO [loop_until]: OK (rc = 0) 09:43:06 DEBUG --- stdout --- 09:43:06 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 41m 5724Mi am-55f77847b7-mbr4x 39m 5832Mi am-55f77847b7-mfzwm 44m 5765Mi ds-cts-0 8m 383Mi ds-cts-1 7m 388Mi ds-cts-2 6m 355Mi ds-idrepo-0 2626m 13812Mi ds-idrepo-1 926m 13755Mi ds-idrepo-2 804m 13764Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2612m 4110Mi idm-65858d8c4c-gpz8d 2038m 4002Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 389m 535Mi 09:43:06 DEBUG --- stderr --- 09:43:06 DEBUG 09:43:07 INFO 09:43:07 INFO [loop_until]: kubectl --namespace=xlou top node 09:43:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:43:07 INFO [loop_until]: OK (rc = 0) 
09:43:07 DEBUG --- stdout --- 09:43:07 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1349Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 98m 0% 6779Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 96m 0% 6961Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 96m 0% 6827Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2177m 13% 5314Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 809m 5% 2157Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2671m 16% 5361Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 722m 4% 14344Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2966m 18% 14402Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1114m 7% 14310Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 452m 2% 2050Mi 3% 09:43:07 DEBUG --- stderr --- 09:43:07 DEBUG 09:44:06 INFO 09:44:06 INFO [loop_until]: kubectl --namespace=xlou top pods 09:44:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:44:06 INFO [loop_until]: OK (rc = 0) 09:44:06 DEBUG --- stdout --- 09:44:06 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 42m 5724Mi am-55f77847b7-mbr4x 41m 5832Mi am-55f77847b7-mfzwm 43m 5765Mi ds-cts-0 6m 383Mi ds-cts-1 8m 388Mi ds-cts-2 6m 356Mi ds-idrepo-0 2750m 13849Mi ds-idrepo-1 716m 13799Mi ds-idrepo-2 626m 13792Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2585m 4118Mi idm-65858d8c4c-gpz8d 2091m 4007Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 409m 536Mi 09:44:06 DEBUG --- stderr --- 09:44:06 DEBUG 09:44:07 INFO 09:44:07 INFO [loop_until]: kubectl --namespace=xlou top node 09:44:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:44:07 INFO [loop_until]: OK (rc = 0) 09:44:07 DEBUG --- stdout --- 09:44:07 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1343Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 100m 0% 6777Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 94m 0% 6958Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 94m 0% 6829Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2199m 13% 5313Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 817m 5% 2157Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2677m 16% 5371Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 814m 5% 14375Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2706m 17% 14435Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 747m 4% 14357Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 483m 3% 2055Mi 3% 09:44:07 DEBUG --- stderr --- 09:44:07 DEBUG 09:45:06 INFO 09:45:06 INFO [loop_until]: kubectl --namespace=xlou top pods 09:45:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:45:06 INFO [loop_until]: OK (rc = 0) 09:45:06 DEBUG --- stdout --- 09:45:06 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 42m 5725Mi am-55f77847b7-mbr4x 44m 5832Mi am-55f77847b7-mfzwm 44m 5765Mi ds-cts-0 6m 383Mi ds-cts-1 10m 389Mi ds-cts-2 6m 356Mi ds-idrepo-0 2682m 13574Mi ds-idrepo-1 869m 13797Mi ds-idrepo-2 657m 13820Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2601m 4123Mi idm-65858d8c4c-gpz8d 2009m 4015Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 401m 537Mi 09:45:06 DEBUG --- stderr --- 09:45:06 DEBUG 09:45:07 INFO 09:45:07 INFO [loop_until]: 
kubectl --namespace=xlou top node 09:45:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:45:07 INFO [loop_until]: OK (rc = 0) 09:45:07 DEBUG --- stdout --- 09:45:07 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1346Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 103m 0% 6778Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 99m 0% 6954Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 97m 0% 6830Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2155m 13% 5325Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 815m 5% 2155Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2690m 16% 5373Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 639m 4% 14403Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2627m 16% 14158Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1199m 7% 14357Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 467m 2% 2055Mi 3% 09:45:07 DEBUG --- stderr --- 09:45:07 DEBUG 09:46:06 INFO 09:46:06 INFO [loop_until]: kubectl --namespace=xlou top pods 09:46:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:46:06 INFO [loop_until]: OK (rc = 0) 09:46:06 DEBUG --- stdout --- 09:46:06 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 40m 5725Mi am-55f77847b7-mbr4x 41m 5832Mi am-55f77847b7-mfzwm 40m 5765Mi ds-cts-0 6m 383Mi ds-cts-1 14m 388Mi ds-cts-2 6m 355Mi ds-idrepo-0 2726m 13606Mi ds-idrepo-1 1245m 13816Mi ds-idrepo-2 550m 13845Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2479m 4132Mi idm-65858d8c4c-gpz8d 2050m 4021Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 392m 539Mi 09:46:06 DEBUG --- stderr --- 09:46:06 DEBUG 09:46:07 INFO 09:46:07 INFO [loop_until]: kubectl --namespace=xlou top node 09:46:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:46:07 INFO [loop_until]: OK (rc = 0) 09:46:07 DEBUG --- stdout --- 09:46:07 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1345Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 96m 0% 6778Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 98m 0% 6956Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 94m 0% 6830Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2189m 13% 5329Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 808m 5% 2157Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2649m 16% 5393Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 67m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 466m 2% 14424Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2678m 16% 14192Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1153m 7% 14386Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 452m 2% 2056Mi 3% 09:46:07 DEBUG --- stderr --- 09:46:07 DEBUG 09:47:06 INFO 09:47:06 INFO [loop_until]: kubectl --namespace=xlou top pods 09:47:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:47:06 INFO [loop_until]: OK (rc = 0) 09:47:06 DEBUG --- stdout --- 09:47:06 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 37m 5725Mi am-55f77847b7-mbr4x 44m 5834Mi am-55f77847b7-mfzwm 48m 5779Mi ds-cts-0 6m 383Mi ds-cts-1 6m 389Mi ds-cts-2 6m 356Mi ds-idrepo-0 3313m 13654Mi ds-idrepo-1 697m 13849Mi ds-idrepo-2 747m 13839Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2483m 4138Mi idm-65858d8c4c-gpz8d 2093m 4029Mi lodemon-6cd9c44bd4-vnqvr 1m 66Mi 
login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 415m 541Mi 09:47:06 DEBUG --- stderr --- 09:47:06 DEBUG 09:47:07 INFO 09:47:07 INFO [loop_until]: kubectl --namespace=xlou top node 09:47:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:47:07 INFO [loop_until]: OK (rc = 0) 09:47:07 DEBUG --- stdout --- 09:47:07 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1345Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 108m 0% 6794Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 92m 0% 6958Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 92m 0% 6832Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2230m 14% 5335Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 813m 5% 2156Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2642m 16% 5386Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 808m 5% 14429Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3463m 21% 14282Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 755m 4% 14416Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 485m 3% 2059Mi 3% 09:47:07 DEBUG --- stderr --- 09:47:07 DEBUG 09:48:06 INFO 09:48:06 INFO [loop_until]: kubectl --namespace=xlou top pods 09:48:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:48:06 INFO [loop_until]: OK (rc = 0) 09:48:06 DEBUG --- stdout --- 09:48:06 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 40m 5719Mi am-55f77847b7-mbr4x 36m 5836Mi am-55f77847b7-mfzwm 43m 5765Mi ds-cts-0 6m 383Mi ds-cts-1 18m 388Mi ds-cts-2 6m 355Mi ds-idrepo-0 2771m 13746Mi ds-idrepo-1 907m 13752Mi ds-idrepo-2 640m 13853Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2496m 4147Mi idm-65858d8c4c-gpz8d 2048m 4035Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 423m 543Mi 09:48:06 DEBUG --- stderr --- 09:48:06 DEBUG 09:48:07 INFO 09:48:07 INFO [loop_until]: kubectl --namespace=xlou top node 09:48:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:48:07 INFO [loop_until]: OK (rc = 0) 09:48:07 DEBUG --- stdout --- 09:48:07 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1348Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 99m 0% 6780Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 95m 0% 6960Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 96m 0% 6824Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2141m 13% 5339Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 773m 4% 2159Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2664m 16% 5396Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 641m 4% 14443Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2905m 18% 14347Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1058m 6% 14321Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 494m 3% 2060Mi 3% 09:48:07 DEBUG --- stderr --- 09:48:07 DEBUG 09:49:06 INFO 09:49:06 INFO [loop_until]: kubectl --namespace=xlou top pods 09:49:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:49:06 INFO [loop_until]: OK (rc = 0) 09:49:06 DEBUG --- stdout --- 09:49:06 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 39m 5719Mi am-55f77847b7-mbr4x 38m 5836Mi am-55f77847b7-mfzwm 40m 5765Mi ds-cts-0 6m 383Mi ds-cts-1 6m 388Mi ds-cts-2 6m 356Mi ds-idrepo-0 2929m 13773Mi ds-idrepo-1 679m 13780Mi ds-idrepo-2 596m 
13642Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2469m 4153Mi idm-65858d8c4c-gpz8d 2046m 4042Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 411m 543Mi 09:49:06 DEBUG --- stderr --- 09:49:06 DEBUG 09:49:07 INFO 09:49:07 INFO [loop_until]: kubectl --namespace=xlou top node 09:49:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:49:08 INFO [loop_until]: OK (rc = 0) 09:49:08 DEBUG --- stdout --- 09:49:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1350Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 99m 0% 6779Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 93m 0% 6962Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 97m 0% 6824Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2190m 13% 5352Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 803m 5% 2161Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2610m 16% 5406Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 624m 3% 14233Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2659m 16% 14364Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 594m 3% 14346Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 481m 3% 2063Mi 3% 09:49:08 DEBUG --- stderr --- 09:49:08 DEBUG 09:50:06 INFO 09:50:06 INFO [loop_until]: kubectl --namespace=xlou top pods 09:50:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:50:06 INFO [loop_until]: OK (rc = 0) 09:50:06 DEBUG --- stdout --- 09:50:06 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 41m 5719Mi am-55f77847b7-mbr4x 40m 5836Mi am-55f77847b7-mfzwm 48m 5769Mi ds-cts-0 7m 383Mi ds-cts-1 7m 388Mi ds-cts-2 6m 355Mi ds-idrepo-0 2563m 13806Mi ds-idrepo-1 957m 13818Mi ds-idrepo-2 602m 13667Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2436m 4160Mi idm-65858d8c4c-gpz8d 2022m 4049Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 409m 546Mi 09:50:06 DEBUG --- stderr --- 09:50:06 DEBUG 09:50:08 INFO 09:50:08 INFO [loop_until]: kubectl --namespace=xlou top node 09:50:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:50:08 INFO [loop_until]: OK (rc = 0) 09:50:08 DEBUG --- stdout --- 09:50:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 71m 0% 1349Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 94m 0% 6782Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 99m 0% 6963Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 97m 0% 6824Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2211m 13% 5354Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 801m 5% 2161Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2600m 16% 5414Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 627m 3% 14256Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2855m 17% 14409Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1060m 6% 14390Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 492m 3% 2072Mi 3% 09:50:08 DEBUG --- stderr --- 09:50:08 DEBUG 09:51:06 INFO 09:51:06 INFO [loop_until]: kubectl --namespace=xlou top pods 09:51:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:51:06 INFO [loop_until]: OK (rc = 0) 09:51:06 DEBUG --- stdout --- 09:51:06 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 39m 5719Mi am-55f77847b7-mbr4x 40m 5836Mi 
am-55f77847b7-mfzwm 41m 5768Mi ds-cts-0 6m 383Mi ds-cts-1 9m 388Mi ds-cts-2 8m 357Mi ds-idrepo-0 2746m 13835Mi ds-idrepo-1 848m 13835Mi ds-idrepo-2 1084m 13699Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2481m 4168Mi idm-65858d8c4c-gpz8d 2011m 4055Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 408m 548Mi 09:51:06 DEBUG --- stderr --- 09:51:06 DEBUG 09:51:08 INFO 09:51:08 INFO [loop_until]: kubectl --namespace=xlou top node 09:51:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:51:08 INFO [loop_until]: OK (rc = 0) 09:51:08 DEBUG --- stdout --- 09:51:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1353Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 98m 0% 6782Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 97m 0% 6967Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 93m 0% 6825Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2109m 13% 5362Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 805m 5% 2157Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2683m 16% 5420Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1232m 7% 14306Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2903m 18% 14447Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 862m 5% 14417Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 480m 3% 2065Mi 3% 09:51:08 DEBUG --- stderr --- 09:51:08 DEBUG 09:52:06 INFO 09:52:06 INFO [loop_until]: kubectl --namespace=xlou top pods 09:52:06 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:52:06 INFO [loop_until]: OK (rc = 0) 09:52:06 DEBUG --- stdout --- 09:52:06 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 40m 5719Mi am-55f77847b7-mbr4x 42m 5836Mi am-55f77847b7-mfzwm 43m 5768Mi ds-cts-0 6m 383Mi ds-cts-1 6m 388Mi ds-cts-2 6m 356Mi ds-idrepo-0 3036m 13745Mi ds-idrepo-1 836m 13705Mi ds-idrepo-2 638m 13727Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2675m 4174Mi idm-65858d8c4c-gpz8d 2054m 4062Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 403m 549Mi 09:52:06 DEBUG --- stderr --- 09:52:06 DEBUG 09:52:08 INFO 09:52:08 INFO [loop_until]: kubectl --namespace=xlou top node 09:52:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:52:08 INFO [loop_until]: OK (rc = 0) 09:52:08 DEBUG --- stdout --- 09:52:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1348Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 97m 0% 6783Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 98m 0% 6964Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 96m 0% 6824Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2205m 13% 5370Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 788m 4% 2159Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2621m 16% 5429Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 871m 5% 14327Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2926m 18% 14346Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 859m 5% 14271Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 458m 2% 2067Mi 3% 09:52:08 DEBUG --- stderr --- 09:52:08 DEBUG 09:53:07 INFO 09:53:07 INFO [loop_until]: kubectl --namespace=xlou top pods 09:53:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:53:07 INFO [loop_until]: OK (rc = 0) 09:53:07 DEBUG --- stdout 
--- 09:53:07 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 41m 5719Mi am-55f77847b7-mbr4x 43m 5837Mi am-55f77847b7-mfzwm 45m 5769Mi ds-cts-0 6m 383Mi ds-cts-1 6m 388Mi ds-cts-2 6m 356Mi ds-idrepo-0 2672m 13784Mi ds-idrepo-1 717m 13734Mi ds-idrepo-2 635m 13637Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2579m 4182Mi idm-65858d8c4c-gpz8d 2047m 4067Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 403m 551Mi 09:53:07 DEBUG --- stderr --- 09:53:07 DEBUG 09:53:08 INFO 09:53:08 INFO [loop_until]: kubectl --namespace=xlou top node 09:53:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:53:08 INFO [loop_until]: OK (rc = 0) 09:53:08 DEBUG --- stdout --- 09:53:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1354Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 101m 0% 6780Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 94m 0% 6964Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 95m 0% 6827Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2226m 14% 5375Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 818m 5% 2160Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2689m 16% 5435Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 665m 4% 14224Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2684m 16% 14372Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 544m 3% 14294Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 479m 3% 2070Mi 3% 09:53:08 DEBUG --- stderr --- 09:53:08 DEBUG 09:54:07 INFO 09:54:07 INFO [loop_until]: kubectl --namespace=xlou top pods 09:54:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:54:07 INFO [loop_until]: OK (rc = 0) 09:54:07 DEBUG --- stdout --- 09:54:07 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 38m 5719Mi am-55f77847b7-mbr4x 42m 5836Mi am-55f77847b7-mfzwm 40m 5769Mi ds-cts-0 6m 383Mi ds-cts-1 8m 388Mi ds-cts-2 8m 355Mi ds-idrepo-0 2849m 13823Mi ds-idrepo-1 965m 13744Mi ds-idrepo-2 584m 13657Mi end-user-ui-6845bc78c7-jrqhg 1m 4Mi idm-65858d8c4c-5vh78 2522m 4187Mi idm-65858d8c4c-gpz8d 2123m 4073Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 404m 553Mi 09:54:07 DEBUG --- stderr --- 09:54:07 DEBUG 09:54:08 INFO 09:54:08 INFO [loop_until]: kubectl --namespace=xlou top node 09:54:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:54:08 INFO [loop_until]: OK (rc = 0) 09:54:08 DEBUG --- stdout --- 09:54:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1352Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 96m 0% 6794Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 93m 0% 6964Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 94m 0% 6825Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2206m 13% 5383Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 782m 4% 2157Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2654m 16% 5441Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 625m 3% 14248Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2909m 18% 14410Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 887m 5% 14312Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 476m 2% 2070Mi 3% 09:54:08 DEBUG --- stderr --- 09:54:08 DEBUG 09:55:07 INFO 09:55:07 INFO [loop_until]: kubectl --namespace=xlou top 
pods 09:55:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:55:07 INFO [loop_until]: OK (rc = 0) 09:55:07 DEBUG --- stdout --- 09:55:07 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 40m 5719Mi am-55f77847b7-mbr4x 43m 5838Mi am-55f77847b7-mfzwm 42m 5769Mi ds-cts-0 5m 383Mi ds-cts-1 6m 388Mi ds-cts-2 6m 356Mi ds-idrepo-0 3381m 13766Mi ds-idrepo-1 858m 13812Mi ds-idrepo-2 1004m 13719Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2557m 4200Mi idm-65858d8c4c-gpz8d 2045m 4082Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 404m 554Mi 09:55:07 DEBUG --- stderr --- 09:55:07 DEBUG 09:55:08 INFO 09:55:08 INFO [loop_until]: kubectl --namespace=xlou top node 09:55:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:55:08 INFO [loop_until]: OK (rc = 0) 09:55:08 DEBUG --- stdout --- 09:55:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1350Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 98m 0% 6783Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 98m 0% 6961Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 92m 0% 6827Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2216m 13% 5388Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 819m 5% 2156Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2746m 17% 5452Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1117Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1060m 6% 14315Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3678m 23% 14377Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1049m 6% 14396Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 462m 2% 2070Mi 3% 09:55:08 DEBUG --- stderr --- 09:55:08 DEBUG 09:56:07 INFO 09:56:07 INFO [loop_until]: kubectl --namespace=xlou top pods 09:56:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:56:07 INFO [loop_until]: OK (rc = 0) 09:56:07 DEBUG --- stdout --- 09:56:07 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 40m 5719Mi am-55f77847b7-mbr4x 41m 5837Mi am-55f77847b7-mfzwm 49m 5769Mi ds-cts-0 6m 384Mi ds-cts-1 6m 389Mi ds-cts-2 7m 356Mi ds-idrepo-0 2828m 13808Mi ds-idrepo-1 806m 13783Mi ds-idrepo-2 1187m 13690Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2484m 4206Mi idm-65858d8c4c-gpz8d 2088m 4087Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 406m 555Mi 09:56:07 DEBUG --- stderr --- 09:56:07 DEBUG 09:56:08 INFO 09:56:08 INFO [loop_until]: kubectl --namespace=xlou top node 09:56:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:56:08 INFO [loop_until]: OK (rc = 0) 09:56:08 DEBUG --- stdout --- 09:56:08 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1353Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 101m 0% 6784Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 94m 0% 6964Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 94m 0% 6827Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2183m 13% 5397Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 780m 4% 2159Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2668m 16% 5459Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 59m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 852m 5% 14293Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2714m 17% 14415Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 676m 4% 14358Mi 24% 
gke-xlou-cdm-frontend-a8771548-k40m 470m 2% 2074Mi 3% 09:56:08 DEBUG --- stderr --- 09:56:08 DEBUG 09:57:07 INFO 09:57:07 INFO [loop_until]: kubectl --namespace=xlou top pods 09:57:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:57:07 INFO [loop_until]: OK (rc = 0) 09:57:07 DEBUG --- stdout --- 09:57:07 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 40m 5719Mi am-55f77847b7-mbr4x 38m 5837Mi am-55f77847b7-mfzwm 42m 5769Mi ds-cts-0 6m 383Mi ds-cts-1 6m 388Mi ds-cts-2 6m 356Mi ds-idrepo-0 2905m 13822Mi ds-idrepo-1 577m 13811Mi ds-idrepo-2 581m 13725Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2408m 4212Mi idm-65858d8c4c-gpz8d 2073m 4097Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 393m 557Mi 09:57:07 DEBUG --- stderr --- 09:57:07 DEBUG 09:57:08 INFO 09:57:08 INFO [loop_until]: kubectl --namespace=xlou top node 09:57:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:57:09 INFO [loop_until]: OK (rc = 0) 09:57:09 DEBUG --- stdout --- 09:57:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1351Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 97m 0% 6784Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 93m 0% 6967Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 94m 0% 6826Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2204m 13% 5411Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 802m 5% 2156Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2619m 16% 5468Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 645m 4% 14323Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2889m 18% 14430Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 657m 4% 14383Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 462m 2% 2074Mi 3% 09:57:09 DEBUG --- stderr --- 09:57:09 DEBUG 09:58:07 INFO 09:58:07 INFO [loop_until]: kubectl --namespace=xlou top pods 09:58:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:58:07 INFO [loop_until]: OK (rc = 0) 09:58:07 DEBUG --- stdout --- 09:58:07 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 40m 5719Mi am-55f77847b7-mbr4x 39m 5837Mi am-55f77847b7-mfzwm 42m 5769Mi ds-cts-0 6m 384Mi ds-cts-1 6m 388Mi ds-cts-2 8m 354Mi ds-idrepo-0 2662m 13836Mi ds-idrepo-1 791m 13822Mi ds-idrepo-2 826m 13737Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2454m 4220Mi idm-65858d8c4c-gpz8d 2150m 4106Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 406m 559Mi 09:58:07 DEBUG --- stderr --- 09:58:07 DEBUG 09:58:09 INFO 09:58:09 INFO [loop_until]: kubectl --namespace=xlou top node 09:58:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:58:09 INFO [loop_until]: OK (rc = 0) 09:58:09 DEBUG --- stdout --- 09:58:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1346Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 101m 0% 6784Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 91m 0% 6966Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 96m 0% 6822Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2238m 14% 5416Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 774m 4% 2156Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2567m 16% 5473Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 699m 4% 14341Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 61m 0% 1120Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-b374 3129m 19% 14359Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 825m 5% 14403Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 471m 2% 2077Mi 3% 09:58:09 DEBUG --- stderr --- 09:58:09 DEBUG 09:59:07 INFO 09:59:07 INFO [loop_until]: kubectl --namespace=xlou top pods 09:59:07 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:59:08 INFO [loop_until]: OK (rc = 0) 09:59:08 DEBUG --- stdout --- 09:59:08 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 42m 5719Mi am-55f77847b7-mbr4x 42m 5837Mi am-55f77847b7-mfzwm 44m 5769Mi ds-cts-0 5m 381Mi ds-cts-1 6m 388Mi ds-cts-2 6m 354Mi ds-idrepo-0 2552m 13782Mi ds-idrepo-1 737m 13770Mi ds-idrepo-2 654m 13738Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2497m 4227Mi idm-65858d8c4c-gpz8d 2029m 4112Mi lodemon-6cd9c44bd4-vnqvr 5m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 376m 560Mi 09:59:08 DEBUG --- stderr --- 09:59:08 DEBUG 09:59:09 INFO 09:59:09 INFO [loop_until]: kubectl --namespace=xlou top node 09:59:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 09:59:09 INFO [loop_until]: OK (rc = 0) 09:59:09 DEBUG --- stdout --- 09:59:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1353Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 100m 0% 6785Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 99m 0% 6968Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 103m 0% 6822Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2175m 13% 5424Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 803m 5% 2151Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2671m 16% 5482Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 653m 4% 14350Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2598m 16% 14393Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 800m 5% 14348Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 456m 2% 2077Mi 3% 09:59:09 DEBUG --- stderr --- 09:59:09 DEBUG 10:00:08 INFO 10:00:08 INFO [loop_until]: kubectl --namespace=xlou top pods 10:00:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:00:08 INFO [loop_until]: OK (rc = 0) 10:00:08 DEBUG --- stdout --- 10:00:08 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 40m 5719Mi am-55f77847b7-mbr4x 46m 5837Mi am-55f77847b7-mfzwm 42m 5769Mi ds-cts-0 6m 382Mi ds-cts-1 16m 389Mi ds-cts-2 6m 354Mi ds-idrepo-0 2673m 13809Mi ds-idrepo-1 641m 13788Mi ds-idrepo-2 559m 13714Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2475m 4237Mi idm-65858d8c4c-gpz8d 2005m 4117Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 398m 562Mi 10:00:08 DEBUG --- stderr --- 10:00:08 DEBUG 10:00:09 INFO 10:00:09 INFO [loop_until]: kubectl --namespace=xlou top node 10:00:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:00:09 INFO [loop_until]: OK (rc = 0) 10:00:09 DEBUG --- stdout --- 10:00:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1350Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 99m 0% 6785Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 101m 0% 6963Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 95m 0% 6822Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2162m 13% 5430Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 780m 4% 2158Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2659m 16% 5483Mi 9% 
gke-xlou-cdm-ds-32e4dcb1-1l6p 73m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 652m 4% 14317Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2795m 17% 14421Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1080Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 704m 4% 14364Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 458m 2% 2078Mi 3% 10:00:09 DEBUG --- stderr --- 10:00:09 DEBUG 10:01:08 INFO 10:01:08 INFO [loop_until]: kubectl --namespace=xlou top pods 10:01:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:01:08 INFO [loop_until]: OK (rc = 0) 10:01:08 DEBUG --- stdout --- 10:01:08 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 44m 5722Mi am-55f77847b7-mbr4x 41m 5837Mi am-55f77847b7-mfzwm 45m 5769Mi ds-cts-0 6m 382Mi ds-cts-1 7m 389Mi ds-cts-2 7m 354Mi ds-idrepo-0 2775m 13847Mi ds-idrepo-1 730m 13802Mi ds-idrepo-2 601m 13727Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2538m 4242Mi idm-65858d8c4c-gpz8d 2045m 4124Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 392m 564Mi 10:01:08 DEBUG --- stderr --- 10:01:08 DEBUG 10:01:09 INFO 10:01:09 INFO [loop_until]: kubectl --namespace=xlou top node 10:01:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:01:09 INFO [loop_until]: OK (rc = 0) 10:01:09 DEBUG --- stdout --- 10:01:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1351Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 95m 0% 6784Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 95m 0% 6961Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 99m 0% 6825Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 2197m 13% 5433Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 807m 5% 2159Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2601m 16% 5493Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1118Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 640m 4% 14334Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2697m 16% 14464Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 725m 4% 14391Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 463m 2% 2080Mi 3% 10:01:09 DEBUG --- stderr --- 10:01:09 DEBUG 10:02:08 INFO 10:02:08 INFO [loop_until]: kubectl --namespace=xlou top pods 10:02:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:02:08 INFO [loop_until]: OK (rc = 0) 10:02:08 DEBUG --- stdout --- 10:02:08 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 39m 5721Mi am-55f77847b7-mbr4x 41m 5837Mi am-55f77847b7-mfzwm 39m 5769Mi ds-cts-0 6m 382Mi ds-cts-1 8m 393Mi ds-cts-2 6m 354Mi ds-idrepo-0 2583m 13833Mi ds-idrepo-1 1224m 13822Mi ds-idrepo-2 799m 13750Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 2514m 4252Mi idm-65858d8c4c-gpz8d 2061m 4131Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 397m 566Mi 10:02:08 DEBUG --- stderr --- 10:02:08 DEBUG 10:02:09 INFO 10:02:09 INFO [loop_until]: kubectl --namespace=xlou top node 10:02:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:02:09 INFO [loop_until]: OK (rc = 0) 10:02:09 DEBUG --- stdout --- 10:02:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1351Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 97m 0% 6787Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 100m 0% 6962Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 90m 0% 6825Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-bf2g 2187m 13% 5438Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 803m 5% 2156Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 2652m 16% 5503Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 810m 5% 14358Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2687m 16% 14448Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1307m 8% 14410Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 464m 2% 2080Mi 3% 10:02:09 DEBUG --- stderr --- 10:02:09 DEBUG 10:03:08 INFO 10:03:08 INFO [loop_until]: kubectl --namespace=xlou top pods 10:03:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:03:08 INFO [loop_until]: OK (rc = 0) 10:03:08 DEBUG --- stdout --- 10:03:08 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 6m 5721Mi am-55f77847b7-mbr4x 9m 5837Mi am-55f77847b7-mfzwm 6m 5769Mi ds-cts-0 6m 382Mi ds-cts-1 7m 393Mi ds-cts-2 5m 354Mi ds-idrepo-0 90m 13832Mi ds-idrepo-1 274m 13721Mi ds-idrepo-2 672m 13715Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 6m 4252Mi idm-65858d8c4c-gpz8d 6m 4132Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 63m 149Mi 10:03:08 DEBUG --- stderr --- 10:03:08 DEBUG 10:03:09 INFO 10:03:09 INFO [loop_until]: kubectl --namespace=xlou top node 10:03:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:03:09 INFO [loop_until]: OK (rc = 0) 10:03:09 DEBUG --- stdout --- 10:03:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1353Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 59m 0% 6782Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 63m 0% 6966Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 61m 0% 6825Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 69m 0% 5441Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 120m 0% 2156Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 5502Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 415m 2% 14330Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 60m 0% 14452Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 592m 3% 14324Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 126m 0% 1674Mi 2% 10:03:09 DEBUG --- stderr --- 10:03:09 DEBUG 10:04:08 INFO 10:04:08 INFO [loop_until]: kubectl --namespace=xlou top pods 10:04:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:04:08 INFO [loop_until]: OK (rc = 0) 10:04:08 DEBUG --- stdout --- 10:04:08 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 6m 5721Mi am-55f77847b7-mbr4x 17m 5837Mi am-55f77847b7-mfzwm 6m 5769Mi ds-cts-0 6m 382Mi ds-cts-1 6m 393Mi ds-cts-2 5m 354Mi ds-idrepo-0 10m 13831Mi ds-idrepo-1 10m 13737Mi ds-idrepo-2 10m 13716Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 6m 4252Mi idm-65858d8c4c-gpz8d 6m 4132Mi lodemon-6cd9c44bd4-vnqvr 4m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 1m 149Mi 10:04:08 DEBUG --- stderr --- 10:04:08 DEBUG 10:04:09 INFO 10:04:09 INFO [loop_until]: kubectl --namespace=xlou top node 10:04:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:04:09 INFO [loop_until]: OK (rc = 0) 10:04:09 DEBUG --- stdout --- 10:04:09 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1351Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 62m 0% 6785Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-976h 69m 0% 6964Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 6826Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 75m 0% 5440Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 120m 0% 2156Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 71m 0% 5506Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 60m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 59m 0% 14334Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 68m 0% 14452Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 59m 0% 14327Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 63m 0% 1677Mi 2% 10:04:09 DEBUG --- stderr --- 10:04:09 DEBUG 127.0.0.1 - - [12/Aug/2023 10:04:19] "GET /monitoring/average?start_time=23-08-12_08:33:48&stop_time=23-08-12_09:02:18 HTTP/1.1" 200 - 10:05:08 INFO 10:05:08 INFO [loop_until]: kubectl --namespace=xlou top pods 10:05:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:05:08 INFO [loop_until]: OK (rc = 0) 10:05:08 DEBUG --- stdout --- 10:05:08 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 8m 5722Mi am-55f77847b7-mbr4x 18m 5837Mi am-55f77847b7-mfzwm 14m 5769Mi ds-cts-0 7m 383Mi ds-cts-1 7m 393Mi ds-cts-2 5m 354Mi ds-idrepo-0 188m 13840Mi ds-idrepo-1 37m 13741Mi ds-idrepo-2 237m 13724Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 6m 4252Mi idm-65858d8c4c-gpz8d 5m 4131Mi lodemon-6cd9c44bd4-vnqvr 3m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 1311m 510Mi 10:05:08 DEBUG --- stderr --- 10:05:08 DEBUG 10:05:10 INFO 10:05:10 INFO [loop_until]: kubectl --namespace=xlou top node 10:05:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:05:10 INFO [loop_until]: OK (rc = 0) 10:05:10 DEBUG --- stdout --- 10:05:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1349Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 97m 0% 6818Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 74m 0% 6965Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 62m 0% 6832Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 258m 1% 5446Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 125m 0% 2155Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 328m 2% 5518Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1119Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 146m 0% 14347Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 96m 0% 14457Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 104m 0% 14331Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 1888m 11% 1946Mi 3% 10:05:10 DEBUG --- stderr --- 10:05:10 DEBUG 10:06:08 INFO 10:06:08 INFO [loop_until]: kubectl --namespace=xlou top pods 10:06:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:06:08 INFO [loop_until]: OK (rc = 0) 10:06:08 DEBUG --- stdout --- 10:06:08 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 53m 5722Mi am-55f77847b7-mbr4x 53m 5837Mi am-55f77847b7-mfzwm 57m 5789Mi ds-cts-0 6m 382Mi ds-cts-1 13m 395Mi ds-cts-2 7m 354Mi ds-idrepo-0 3640m 13827Mi ds-idrepo-1 1998m 13801Mi ds-idrepo-2 2038m 13822Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 1589m 4280Mi idm-65858d8c4c-gpz8d 1264m 4160Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 532m 538Mi 10:06:08 DEBUG --- stderr --- 10:06:08 DEBUG 10:06:10 INFO 10:06:10 INFO [loop_until]: kubectl --namespace=xlou top node 10:06:10 INFO [loop_until]: (max_time=180, interval=5, 
expected_rc=[0] 10:06:10 INFO [loop_until]: OK (rc = 0) 10:06:10 DEBUG --- stdout --- 10:06:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1349Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 116m 0% 6805Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 109m 0% 6965Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 104m 0% 6839Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1423m 8% 5472Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 648m 4% 2154Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1785m 11% 5543Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 67m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2210m 13% 14443Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3635m 22% 14451Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1083Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2131m 13% 14419Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 600m 3% 2054Mi 3% 10:06:10 DEBUG --- stderr --- 10:06:10 DEBUG 10:07:08 INFO 10:07:08 INFO [loop_until]: kubectl --namespace=xlou top pods 10:07:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:07:08 INFO [loop_until]: OK (rc = 0) 10:07:08 DEBUG --- stdout --- 10:07:08 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 60m 5721Mi am-55f77847b7-mbr4x 52m 5838Mi am-55f77847b7-mfzwm 56m 5789Mi ds-cts-0 6m 383Mi ds-cts-1 6m 395Mi ds-cts-2 8m 354Mi ds-idrepo-0 4285m 13822Mi ds-idrepo-1 2495m 13846Mi ds-idrepo-2 2331m 13814Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 1549m 4295Mi idm-65858d8c4c-gpz8d 1198m 4178Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 339m 539Mi 10:07:08 DEBUG --- stderr --- 10:07:08 DEBUG 10:07:10 INFO 10:07:10 INFO [loop_until]: kubectl --namespace=xlou top node 10:07:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:07:10 INFO [loop_until]: OK (rc = 0) 10:07:10 DEBUG --- stdout --- 10:07:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1348Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 113m 0% 6805Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 105m 0% 6962Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 109m 0% 6829Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1337m 8% 5487Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 642m 4% 2155Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1668m 10% 5550Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 62m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2312m 14% 14459Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1105Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3709m 23% 14462Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1082Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2621m 16% 14412Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 416m 2% 2056Mi 3% 10:07:10 DEBUG --- stderr --- 10:07:10 DEBUG 10:08:08 INFO 10:08:08 INFO [loop_until]: kubectl --namespace=xlou top pods 10:08:08 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:08:09 INFO [loop_until]: OK (rc = 0) 10:08:09 DEBUG --- stdout --- 10:08:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 55m 5722Mi am-55f77847b7-mbr4x 53m 5838Mi am-55f77847b7-mfzwm 58m 5790Mi ds-cts-0 6m 383Mi ds-cts-1 6m 395Mi ds-cts-2 6m 354Mi ds-idrepo-0 2969m 13809Mi ds-idrepo-1 1775m 13756Mi ds-idrepo-2 2121m 13776Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 1516m 4301Mi idm-65858d8c4c-gpz8d 1221m 4182Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 357m 539Mi 10:08:09 DEBUG --- 
stderr --- 10:08:09 DEBUG 10:08:10 INFO 10:08:10 INFO [loop_until]: kubectl --namespace=xlou top node 10:08:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:08:10 INFO [loop_until]: OK (rc = 0) 10:08:10 DEBUG --- stdout --- 10:08:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1350Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 112m 0% 6803Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 110m 0% 6963Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 108m 0% 6827Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1347m 8% 5493Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 647m 4% 2154Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1616m 10% 5558Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2062m 12% 14408Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1106Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3228m 20% 14455Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1845m 11% 14371Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 421m 2% 2056Mi 3% 10:08:10 DEBUG --- stderr --- 10:08:10 DEBUG 10:09:09 INFO 10:09:09 INFO [loop_until]: kubectl --namespace=xlou top pods 10:09:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:09:09 INFO [loop_until]: OK (rc = 0) 10:09:09 DEBUG --- stdout --- 10:09:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 55m 5721Mi am-55f77847b7-mbr4x 53m 5838Mi am-55f77847b7-mfzwm 57m 5789Mi ds-cts-0 6m 382Mi ds-cts-1 6m 395Mi ds-cts-2 8m 356Mi ds-idrepo-0 3008m 13814Mi ds-idrepo-1 2357m 13815Mi ds-idrepo-2 1861m 13804Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 1494m 4308Mi idm-65858d8c4c-gpz8d 1205m 4187Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 341m 539Mi 10:09:09 DEBUG --- stderr --- 10:09:09 DEBUG 10:09:10 INFO 10:09:10 INFO [loop_until]: kubectl --namespace=xlou top node 10:09:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:09:10 INFO [loop_until]: OK (rc = 0) 10:09:10 DEBUG --- stdout --- 10:09:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1349Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 111m 0% 6806Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 110m 0% 6962Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 108m 0% 6828Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1321m 8% 5496Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 613m 3% 2157Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1604m 10% 5561Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1927m 12% 14488Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1104Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3258m 20% 14477Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2484m 15% 14436Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 403m 2% 2054Mi 3% 10:09:10 DEBUG --- stderr --- 10:09:10 DEBUG 10:10:09 INFO 10:10:09 INFO [loop_until]: kubectl --namespace=xlou top pods 10:10:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:10:09 INFO [loop_until]: OK (rc = 0) 10:10:09 DEBUG --- stdout --- 10:10:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 53m 5722Mi am-55f77847b7-mbr4x 63m 5838Mi am-55f77847b7-mfzwm 57m 5790Mi ds-cts-0 6m 382Mi ds-cts-1 6m 395Mi ds-cts-2 8m 356Mi ds-idrepo-0 3119m 13822Mi ds-idrepo-1 1625m 13790Mi ds-idrepo-2 2057m 13801Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 1469m 4313Mi 
idm-65858d8c4c-gpz8d 1199m 4192Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 360m 541Mi 10:10:09 DEBUG --- stderr --- 10:10:09 DEBUG 10:10:10 INFO 10:10:10 INFO [loop_until]: kubectl --namespace=xlou top node 10:10:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:10:10 INFO [loop_until]: OK (rc = 0) 10:10:10 DEBUG --- stdout --- 10:10:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1346Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 110m 0% 6806Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 118m 0% 6959Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 106m 0% 6828Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1299m 8% 5504Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 638m 4% 2155Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1641m 10% 5568Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 63m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2125m 13% 14436Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3391m 21% 14458Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 66m 0% 1097Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1615m 10% 14427Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 435m 2% 2058Mi 3% 10:10:10 DEBUG --- stderr --- 10:10:10 DEBUG 10:11:09 INFO 10:11:09 INFO [loop_until]: kubectl --namespace=xlou top pods 10:11:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:11:09 INFO [loop_until]: OK (rc = 0) 10:11:09 DEBUG --- stdout --- 10:11:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 50m 5722Mi am-55f77847b7-mbr4x 54m 5838Mi am-55f77847b7-mfzwm 54m 5790Mi ds-cts-0 6m 382Mi ds-cts-1 7m 395Mi ds-cts-2 9m 357Mi ds-idrepo-0 3669m 13759Mi ds-idrepo-1 3716m 13693Mi ds-idrepo-2 2715m 13849Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 1457m 4318Mi idm-65858d8c4c-gpz8d 1210m 4197Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 340m 542Mi 10:11:09 DEBUG --- stderr --- 10:11:09 DEBUG 10:11:10 INFO 10:11:10 INFO [loop_until]: kubectl --namespace=xlou top node 10:11:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:11:10 INFO [loop_until]: OK (rc = 0) 10:11:10 DEBUG --- stdout --- 10:11:10 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1347Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 109m 0% 6806Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 110m 0% 6960Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 104m 0% 6828Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1343m 8% 5506Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 614m 3% 2162Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1598m 10% 5572Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2662m 16% 14411Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3687m 23% 14396Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3779m 23% 14245Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 407m 2% 2058Mi 3% 10:11:10 DEBUG --- stderr --- 10:11:10 DEBUG 10:12:09 INFO 10:12:09 INFO [loop_until]: kubectl --namespace=xlou top pods 10:12:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:12:09 INFO [loop_until]: OK (rc = 0) 10:12:09 DEBUG --- stdout --- 10:12:09 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 54m 5722Mi am-55f77847b7-mbr4x 50m 5839Mi am-55f77847b7-mfzwm 55m 5790Mi ds-cts-0 6m 382Mi ds-cts-1 7m 395Mi ds-cts-2 
6m 358Mi
ds-idrepo-0 2527m 13823Mi
ds-idrepo-1 1891m 13853Mi
ds-idrepo-2 3368m 13789Mi
end-user-ui-6845bc78c7-jrqhg 1m 3Mi
idm-65858d8c4c-5vh78 1485m 4324Mi
idm-65858d8c4c-gpz8d 1240m 4203Mi
lodemon-6cd9c44bd4-vnqvr 2m 66Mi
login-ui-74d6fb46c-jtvtc 1m 3Mi
overseer-0-64679cf868-xscwh 354m 542Mi
10:12:09 DEBUG --- stderr ---
10:12:09 DEBUG
10:12:10 INFO
10:12:10 INFO [loop_until]: kubectl --namespace=xlou top node
10:12:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
10:12:11 INFO [loop_until]: OK (rc = 0)
10:12:11 DEBUG --- stdout ---
10:12:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 71m 0% 1354Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 106m 0% 6806Mi 11%
gke-xlou-cdm-default-pool-f05840a3-976h 105m 0% 6961Mi 11%
gke-xlou-cdm-default-pool-f05840a3-9p4b 107m 0% 6828Mi 11%
gke-xlou-cdm-default-pool-f05840a3-bf2g 1369m 8% 5507Mi 9%
gke-xlou-cdm-default-pool-f05840a3-h81k 637m 4% 2158Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 1597m 10% 5576Mi 9%
gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1126Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 3098m 19% 14444Mi 24%
gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1110Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 2562m 16% 14475Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1083Mi 1%
gke-xlou-cdm-ds-32e4dcb1-x4wx 1748m 11% 14433Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 417m 2% 2058Mi 3%
10:12:11 DEBUG --- stderr ---
10:12:11 DEBUG
10:13:09 INFO
10:13:09 INFO [loop_until]: kubectl --namespace=xlou top pods
10:13:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
10:13:09 INFO [loop_until]: OK (rc = 0)
10:13:09 DEBUG --- stdout ---
10:13:09 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-9h5wb 1m 4Mi
am-55f77847b7-2vpdz 59m 5722Mi
am-55f77847b7-mbr4x 52m 5839Mi
am-55f77847b7-mfzwm 54m 5790Mi
ds-cts-0 7m 382Mi
ds-cts-1 8m 395Mi
ds-cts-2 7m 357Mi
ds-idrepo-0 4196m 13827Mi
ds-idrepo-1 1977m 13766Mi
ds-idrepo-2 2083m 13757Mi
end-user-ui-6845bc78c7-jrqhg 1m 3Mi
idm-65858d8c4c-5vh78 1558m 4329Mi
idm-65858d8c4c-gpz8d 1187m 4208Mi
lodemon-6cd9c44bd4-vnqvr 2m 66Mi
login-ui-74d6fb46c-jtvtc 1m 3Mi
overseer-0-64679cf868-xscwh 345m 542Mi
10:13:09 DEBUG --- stderr ---
10:13:09 DEBUG
10:13:11 INFO
10:13:11 INFO [loop_until]: kubectl --namespace=xlou top node
10:13:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
10:13:11 INFO [loop_until]: OK (rc = 0)
10:13:11 DEBUG --- stdout ---
10:13:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1348Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 110m 0% 6806Mi 11%
gke-xlou-cdm-default-pool-f05840a3-976h 108m 0% 6962Mi 11%
gke-xlou-cdm-default-pool-f05840a3-9p4b 109m 0% 6827Mi 11%
gke-xlou-cdm-default-pool-f05840a3-bf2g 1255m 7% 5512Mi 9%
gke-xlou-cdm-default-pool-f05840a3-h81k 614m 3% 2156Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 1634m 10% 5580Mi 9%
gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 1125Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 1862m 11% 14460Mi 24%
gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1109Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 4308m 27% 14454Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1086Mi 1%
gke-xlou-cdm-ds-32e4dcb1-x4wx 1503m 9% 14377Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 402m 2% 2060Mi 3%
10:13:11 DEBUG --- stderr ---
10:13:11 DEBUG
10:14:09 INFO
10:14:09 INFO [loop_until]: kubectl --namespace=xlou top pods
10:14:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
10:14:09 INFO [loop_until]: OK (rc = 0)
10:14:09 DEBUG --- stdout ---
10:14:09 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-9h5wb 1m 4Mi
am-55f77847b7-2vpdz 55m 5722Mi
am-55f77847b7-mbr4x 53m 5839Mi
am-55f77847b7-mfzwm 58m 5790Mi
ds-cts-0 10m 384Mi
ds-cts-1 7m 395Mi
ds-cts-2 6m 357Mi
ds-idrepo-0 3320m 13773Mi
ds-idrepo-1 2422m 13824Mi
ds-idrepo-2 1786m 13863Mi
end-user-ui-6845bc78c7-jrqhg 1m 3Mi
idm-65858d8c4c-5vh78 1505m 4335Mi
idm-65858d8c4c-gpz8d 1239m 4213Mi
lodemon-6cd9c44bd4-vnqvr 2m 66Mi
login-ui-74d6fb46c-jtvtc 1m 3Mi
overseer-0-64679cf868-xscwh 350m 542Mi
10:14:09 DEBUG --- stderr ---
10:14:09 DEBUG
10:14:11 INFO
10:14:11 INFO [loop_until]: kubectl --namespace=xlou top node
10:14:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
10:14:11 INFO [loop_until]: OK (rc = 0)
10:14:11 DEBUG --- stdout ---
10:14:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1353Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 112m 0% 6807Mi 11%
gke-xlou-cdm-default-pool-f05840a3-976h 112m 0% 6964Mi 11%
gke-xlou-cdm-default-pool-f05840a3-9p4b 110m 0% 6832Mi 11%
gke-xlou-cdm-default-pool-f05840a3-bf2g 1343m 8% 5521Mi 9%
gke-xlou-cdm-default-pool-f05840a3-h81k 640m 4% 2158Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 1627m 10% 5587Mi 9%
gke-xlou-cdm-ds-32e4dcb1-1l6p 72m 0% 1126Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 1905m 11% 14466Mi 24%
gke-xlou-cdm-ds-32e4dcb1-8bsn 62m 0% 1107Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 3242m 20% 14381Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1082Mi 1%
gke-xlou-cdm-ds-32e4dcb1-x4wx 2480m 15% 14470Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 416m 2% 2059Mi 3%
10:14:11 DEBUG --- stderr ---
10:14:11 DEBUG
10:15:09 INFO
10:15:09 INFO [loop_until]: kubectl --namespace=xlou top pods
10:15:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
10:15:09 INFO [loop_until]: OK (rc = 0)
10:15:09 DEBUG --- stdout ---
10:15:09 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-9h5wb 1m 4Mi
am-55f77847b7-2vpdz 54m 5722Mi
am-55f77847b7-mbr4x 52m 5839Mi
am-55f77847b7-mfzwm 58m 5790Mi
ds-cts-0 7m 384Mi
ds-cts-1 7m 395Mi
ds-cts-2 6m 357Mi
ds-idrepo-0 2672m 13823Mi
ds-idrepo-1 2445m 13734Mi
ds-idrepo-2 2059m 13823Mi
end-user-ui-6845bc78c7-jrqhg 1m 3Mi
idm-65858d8c4c-5vh78 1467m 4340Mi
idm-65858d8c4c-gpz8d 1208m 4218Mi
lodemon-6cd9c44bd4-vnqvr 2m 66Mi
login-ui-74d6fb46c-jtvtc 1m 3Mi
overseer-0-64679cf868-xscwh 343m 542Mi
10:15:09 DEBUG --- stderr ---
10:15:09 DEBUG
10:15:11 INFO
10:15:11 INFO [loop_until]: kubectl --namespace=xlou top node
10:15:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
10:15:11 INFO [loop_until]: OK (rc = 0)
10:15:11 DEBUG --- stdout ---
10:15:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1351Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 113m 0% 6804Mi 11%
gke-xlou-cdm-default-pool-f05840a3-976h 109m 0% 6959Mi 11%
gke-xlou-cdm-default-pool-f05840a3-9p4b 110m 0% 6826Mi 11%
gke-xlou-cdm-default-pool-f05840a3-bf2g 1344m 8% 5527Mi 9%
gke-xlou-cdm-default-pool-f05840a3-h81k 620m 3% 2160Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 1620m 10% 5592Mi 9%
gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1127Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 2193m 13% 14451Mi 24%
gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1112Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 2539m 15% 14489Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1086Mi 1%
gke-xlou-cdm-ds-32e4dcb1-x4wx 2546m 16% 14384Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 422m 2% 2057Mi 3%
10:15:11 DEBUG --- stderr ---
10:15:11 DEBUG
10:16:09 INFO
10:16:09 INFO [loop_until]: kubectl --namespace=xlou top pods
10:16:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
10:16:09 INFO [loop_until]: OK (rc = 0)
10:16:09 DEBUG --- stdout ---
10:16:09 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-9h5wb 1m 4Mi
am-55f77847b7-2vpdz 53m 5722Mi
am-55f77847b7-mbr4x 53m 5839Mi
am-55f77847b7-mfzwm 53m 5791Mi
ds-cts-0 8m 384Mi
ds-cts-1 8m 395Mi
ds-cts-2 6m 357Mi
ds-idrepo-0 3119m 13741Mi
ds-idrepo-1 1599m 13820Mi
ds-idrepo-2 2494m 13789Mi
end-user-ui-6845bc78c7-jrqhg 1m 3Mi
idm-65858d8c4c-5vh78 1473m 4346Mi
idm-65858d8c4c-gpz8d 1234m 4224Mi
lodemon-6cd9c44bd4-vnqvr 2m 66Mi
login-ui-74d6fb46c-jtvtc 1m 3Mi
overseer-0-64679cf868-xscwh 345m 542Mi
10:16:09 DEBUG --- stderr ---
10:16:09 DEBUG
10:16:11 INFO
10:16:11 INFO [loop_until]: kubectl --namespace=xlou top node
10:16:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
10:16:11 INFO [loop_until]: OK (rc = 0)
10:16:11 DEBUG --- stdout ---
10:16:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1352Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 110m 0% 6803Mi 11%
gke-xlou-cdm-default-pool-f05840a3-976h 106m 0% 6965Mi 11%
gke-xlou-cdm-default-pool-f05840a3-9p4b 106m 0% 6828Mi 11%
gke-xlou-cdm-default-pool-f05840a3-bf2g 1293m 8% 5533Mi 9%
gke-xlou-cdm-default-pool-f05840a3-h81k 637m 4% 2159Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 1608m 10% 5598Mi 9%
gke-xlou-cdm-ds-32e4dcb1-1l6p 72m 0% 1125Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 2558m 16% 14437Mi 24%
gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1109Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 2736m 17% 14398Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1086Mi 1%
gke-xlou-cdm-ds-32e4dcb1-x4wx 1625m 10% 14468Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 397m 2% 2058Mi 3%
10:16:11 DEBUG --- stderr ---
10:16:11 DEBUG
10:17:09 INFO
10:17:09 INFO [loop_until]: kubectl --namespace=xlou top pods
10:17:09 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
10:17:09 INFO [loop_until]: OK (rc = 0)
10:17:09 DEBUG --- stdout ---
10:17:09 DEBUG NAME CPU(cores) MEMORY(bytes)
admin-ui-587fc66dd5-9h5wb 1m 4Mi
am-55f77847b7-2vpdz 54m 5722Mi
am-55f77847b7-mbr4x 58m 5840Mi
am-55f77847b7-mfzwm 55m 5791Mi
ds-cts-0 6m 384Mi
ds-cts-1 8m 395Mi
ds-cts-2 6m 357Mi
ds-idrepo-0 2890m 13860Mi
ds-idrepo-1 2479m 13816Mi
ds-idrepo-2 1741m 13785Mi
end-user-ui-6845bc78c7-jrqhg 1m 3Mi
idm-65858d8c4c-5vh78 1484m 4350Mi
idm-65858d8c4c-gpz8d 1229m 4229Mi
lodemon-6cd9c44bd4-vnqvr 2m 66Mi
login-ui-74d6fb46c-jtvtc 1m 3Mi
overseer-0-64679cf868-xscwh 343m 543Mi
10:17:09 DEBUG --- stderr ---
10:17:09 DEBUG
10:17:11 INFO
10:17:11 INFO [loop_until]: kubectl --namespace=xlou top node
10:17:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0]
10:17:11 INFO [loop_until]: OK (rc = 0)
10:17:11 DEBUG --- stdout ---
10:17:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1357Mi 2%
gke-xlou-cdm-default-pool-f05840a3-5pbc 112m 0% 6804Mi 11%
gke-xlou-cdm-default-pool-f05840a3-976h 111m 0% 6967Mi 11%
gke-xlou-cdm-default-pool-f05840a3-9p4b 103m 0% 6828Mi 11%
gke-xlou-cdm-default-pool-f05840a3-bf2g 1335m 8% 5534Mi 9%
gke-xlou-cdm-default-pool-f05840a3-h81k 619m 3% 2156Mi 3%
gke-xlou-cdm-default-pool-f05840a3-tnc9 1639m 10% 5601Mi 9%
gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1125Mi 1%
gke-xlou-cdm-ds-32e4dcb1-4z9d 1887m 11% 14437Mi 24%
gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1112Mi 1%
gke-xlou-cdm-ds-32e4dcb1-b374 2828m 17% 14461Mi 24%
gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1085Mi 1%
gke-xlou-cdm-ds-32e4dcb1-x4wx 2760m 17% 14426Mi 24%
gke-xlou-cdm-frontend-a8771548-k40m 417m 2%
2058Mi 3% 10:17:11 DEBUG --- stderr --- 10:17:11 DEBUG 10:18:10 INFO 10:18:10 INFO [loop_until]: kubectl --namespace=xlou top pods 10:18:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:18:10 INFO [loop_until]: OK (rc = 0) 10:18:10 DEBUG --- stdout --- 10:18:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 53m 5722Mi am-55f77847b7-mbr4x 51m 5839Mi am-55f77847b7-mfzwm 54m 5791Mi ds-cts-0 10m 381Mi ds-cts-1 8m 395Mi ds-cts-2 6m 358Mi ds-idrepo-0 3397m 13816Mi ds-idrepo-1 1979m 13824Mi ds-idrepo-2 2856m 13823Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 1448m 4356Mi idm-65858d8c4c-gpz8d 1180m 4234Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 339m 543Mi 10:18:10 DEBUG --- stderr --- 10:18:10 DEBUG 10:18:11 INFO 10:18:11 INFO [loop_until]: kubectl --namespace=xlou top node 10:18:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:18:11 INFO [loop_until]: OK (rc = 0) 10:18:11 DEBUG --- stdout --- 10:18:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1351Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 109m 0% 6805Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 106m 0% 6966Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 106m 0% 6828Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1283m 8% 5543Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 626m 3% 2157Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1542m 9% 5607Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 75m 0% 1121Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2737m 17% 14468Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 68m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3501m 22% 14492Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2015m 12% 14425Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 415m 2% 2062Mi 3% 10:18:11 DEBUG --- stderr --- 10:18:11 DEBUG 10:19:10 INFO 10:19:10 INFO [loop_until]: kubectl --namespace=xlou top pods 10:19:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:19:10 INFO [loop_until]: OK (rc = 0) 10:19:10 DEBUG --- stdout --- 10:19:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 56m 5722Mi am-55f77847b7-mbr4x 52m 5839Mi am-55f77847b7-mfzwm 60m 5791Mi ds-cts-0 6m 381Mi ds-cts-1 7m 395Mi ds-cts-2 6m 358Mi ds-idrepo-0 2781m 13825Mi ds-idrepo-1 1885m 13776Mi ds-idrepo-2 1969m 13827Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 1491m 4361Mi idm-65858d8c4c-gpz8d 1196m 4238Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 350m 543Mi 10:19:10 DEBUG --- stderr --- 10:19:10 DEBUG 10:19:11 INFO 10:19:11 INFO [loop_until]: kubectl --namespace=xlou top node 10:19:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:19:11 INFO [loop_until]: OK (rc = 0) 10:19:11 DEBUG --- stdout --- 10:19:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1353Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 113m 0% 6804Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 107m 0% 6964Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 112m 0% 6833Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1305m 8% 5544Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 622m 3% 2158Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1644m 10% 5625Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 68m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1984m 12% 14479Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3009m 18% 
14376Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2026m 12% 14382Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 409m 2% 2059Mi 3% 10:19:11 DEBUG --- stderr --- 10:19:11 DEBUG 10:20:10 INFO 10:20:10 INFO [loop_until]: kubectl --namespace=xlou top pods 10:20:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:20:10 INFO [loop_until]: OK (rc = 0) 10:20:10 DEBUG --- stdout --- 10:20:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 58m 5723Mi am-55f77847b7-mbr4x 52m 5839Mi am-55f77847b7-mfzwm 58m 5791Mi ds-cts-0 6m 384Mi ds-cts-1 7m 395Mi ds-cts-2 6m 357Mi ds-idrepo-0 2749m 13815Mi ds-idrepo-1 2855m 13827Mi ds-idrepo-2 2274m 13802Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 1456m 4366Mi idm-65858d8c4c-gpz8d 1210m 4245Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 352m 543Mi 10:20:10 DEBUG --- stderr --- 10:20:10 DEBUG 10:20:11 INFO 10:20:11 INFO [loop_until]: kubectl --namespace=xlou top node 10:20:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:20:11 INFO [loop_until]: OK (rc = 0) 10:20:11 DEBUG --- stdout --- 10:20:11 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1351Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 113m 0% 6807Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 106m 0% 6966Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 109m 0% 6830Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1351m 8% 5553Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 635m 3% 2157Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1613m 10% 5619Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 68m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 3146m 19% 14411Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1107Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2873m 18% 14459Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2081m 13% 14426Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 420m 2% 2068Mi 3% 10:20:11 DEBUG --- stderr --- 10:20:11 DEBUG 10:21:10 INFO 10:21:10 INFO [loop_until]: kubectl --namespace=xlou top pods 10:21:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:21:10 INFO [loop_until]: OK (rc = 0) 10:21:10 DEBUG --- stdout --- 10:21:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 55m 5722Mi am-55f77847b7-mbr4x 55m 5839Mi am-55f77847b7-mfzwm 64m 5793Mi ds-cts-0 5m 384Mi ds-cts-1 7m 395Mi ds-cts-2 6m 357Mi ds-idrepo-0 2518m 13824Mi ds-idrepo-1 1473m 13823Mi ds-idrepo-2 1992m 13839Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 1492m 4374Mi idm-65858d8c4c-gpz8d 1176m 4249Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 339m 543Mi 10:21:10 DEBUG --- stderr --- 10:21:10 DEBUG 10:21:12 INFO 10:21:12 INFO [loop_until]: kubectl --namespace=xlou top node 10:21:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:21:12 INFO [loop_until]: OK (rc = 0) 10:21:12 DEBUG --- stdout --- 10:21:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1350Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 121m 0% 6807Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 109m 0% 6967Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 110m 0% 6827Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1300m 8% 5557Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 608m 3% 2157Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1610m 10% 5627Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 
65m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2055m 12% 14460Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2573m 16% 14458Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1648m 10% 14420Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 412m 2% 2061Mi 3% 10:21:12 DEBUG --- stderr --- 10:21:12 DEBUG 10:22:10 INFO 10:22:10 INFO [loop_until]: kubectl --namespace=xlou top pods 10:22:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:22:10 INFO [loop_until]: OK (rc = 0) 10:22:10 DEBUG --- stdout --- 10:22:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 52m 5723Mi am-55f77847b7-mbr4x 55m 5839Mi am-55f77847b7-mfzwm 57m 5793Mi ds-cts-0 6m 384Mi ds-cts-1 6m 395Mi ds-cts-2 6m 357Mi ds-idrepo-0 2485m 13822Mi ds-idrepo-1 2627m 13750Mi ds-idrepo-2 1960m 13823Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 1492m 4379Mi idm-65858d8c4c-gpz8d 1150m 4255Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 341m 543Mi 10:22:10 DEBUG --- stderr --- 10:22:10 DEBUG 10:22:12 INFO 10:22:12 INFO [loop_until]: kubectl --namespace=xlou top node 10:22:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:22:12 INFO [loop_until]: OK (rc = 0) 10:22:12 DEBUG --- stdout --- 10:22:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1353Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 110m 0% 6809Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 110m 0% 6965Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 108m 0% 6828Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1263m 7% 5564Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 633m 3% 2157Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1653m 10% 5632Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 61m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2056m 12% 14467Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2605m 16% 14456Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2013m 12% 14377Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 392m 2% 2060Mi 3% 10:22:12 DEBUG --- stderr --- 10:22:12 DEBUG 10:23:10 INFO 10:23:10 INFO [loop_until]: kubectl --namespace=xlou top pods 10:23:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:23:10 INFO [loop_until]: OK (rc = 0) 10:23:10 DEBUG --- stdout --- 10:23:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 51m 5723Mi am-55f77847b7-mbr4x 64m 5840Mi am-55f77847b7-mfzwm 55m 5793Mi ds-cts-0 6m 385Mi ds-cts-1 7m 395Mi ds-cts-2 6m 357Mi ds-idrepo-0 4370m 13818Mi ds-idrepo-1 2327m 13823Mi ds-idrepo-2 1847m 13819Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 1531m 4384Mi idm-65858d8c4c-gpz8d 1175m 4259Mi lodemon-6cd9c44bd4-vnqvr 7m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 345m 544Mi 10:23:10 DEBUG --- stderr --- 10:23:10 DEBUG 10:23:12 INFO 10:23:12 INFO [loop_until]: kubectl --namespace=xlou top node 10:23:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:23:12 INFO [loop_until]: OK (rc = 0) 10:23:12 DEBUG --- stdout --- 10:23:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1355Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 113m 0% 6808Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 123m 0% 6966Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 107m 0% 6829Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1325m 8% 
5569Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 622m 3% 2160Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1659m 10% 5640Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 67m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2066m 13% 14456Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3812m 23% 14494Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2475m 15% 14403Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 420m 2% 2057Mi 3% 10:23:12 DEBUG --- stderr --- 10:23:12 DEBUG 10:24:10 INFO 10:24:10 INFO [loop_until]: kubectl --namespace=xlou top pods 10:24:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:24:10 INFO [loop_until]: OK (rc = 0) 10:24:10 DEBUG --- stdout --- 10:24:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 52m 5723Mi am-55f77847b7-mbr4x 61m 5840Mi am-55f77847b7-mfzwm 60m 5793Mi ds-cts-0 6m 384Mi ds-cts-1 8m 396Mi ds-cts-2 6m 357Mi ds-idrepo-0 2660m 13863Mi ds-idrepo-1 1859m 13866Mi ds-idrepo-2 3334m 13610Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 1484m 4388Mi idm-65858d8c4c-gpz8d 1254m 4263Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 352m 544Mi 10:24:10 DEBUG --- stderr --- 10:24:10 DEBUG 10:24:12 INFO 10:24:12 INFO [loop_until]: kubectl --namespace=xlou top node 10:24:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:24:12 INFO [loop_until]: OK (rc = 0) 10:24:12 DEBUG --- stdout --- 10:24:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1354Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 116m 0% 6809Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 120m 0% 6965Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 108m 0% 6831Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1352m 8% 5572Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 633m 3% 2164Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1622m 10% 5642Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 68m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2977m 18% 14251Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3703m 23% 14498Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1636m 10% 14432Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 419m 2% 2061Mi 3% 10:24:12 DEBUG --- stderr --- 10:24:12 DEBUG 10:25:10 INFO 10:25:10 INFO [loop_until]: kubectl --namespace=xlou top pods 10:25:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:25:10 INFO [loop_until]: OK (rc = 0) 10:25:10 DEBUG --- stdout --- 10:25:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 62m 5724Mi am-55f77847b7-mbr4x 70m 5840Mi am-55f77847b7-mfzwm 65m 5793Mi ds-cts-0 5m 385Mi ds-cts-1 8m 395Mi ds-cts-2 6m 357Mi ds-idrepo-0 2430m 13835Mi ds-idrepo-1 3348m 13844Mi ds-idrepo-2 2348m 13770Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 1354m 4393Mi idm-65858d8c4c-gpz8d 1197m 4269Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 331m 544Mi 10:25:10 DEBUG --- stderr --- 10:25:10 DEBUG 10:25:12 INFO 10:25:12 INFO [loop_until]: kubectl --namespace=xlou top node 10:25:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:25:12 INFO [loop_until]: OK (rc = 0) 10:25:12 DEBUG --- stdout --- 10:25:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1355Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 117m 0% 6808Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-976h 128m 0% 6965Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 114m 0% 6830Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1313m 8% 5578Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 617m 3% 2161Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1591m 10% 5650Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 70m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2390m 15% 14464Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3579m 22% 14401Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3918m 24% 14278Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 396m 2% 2061Mi 3% 10:25:12 DEBUG --- stderr --- 10:25:12 DEBUG 10:26:10 INFO 10:26:10 INFO [loop_until]: kubectl --namespace=xlou top pods 10:26:10 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:26:10 INFO [loop_until]: OK (rc = 0) 10:26:10 DEBUG --- stdout --- 10:26:10 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 58m 5724Mi am-55f77847b7-mbr4x 63m 5840Mi am-55f77847b7-mfzwm 60m 5794Mi ds-cts-0 6m 385Mi ds-cts-1 7m 396Mi ds-cts-2 6m 357Mi ds-idrepo-0 2669m 13836Mi ds-idrepo-1 1758m 13748Mi ds-idrepo-2 1912m 13813Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 1482m 4403Mi idm-65858d8c4c-gpz8d 1183m 4274Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 349m 544Mi 10:26:10 DEBUG --- stderr --- 10:26:10 DEBUG 10:26:12 INFO 10:26:12 INFO [loop_until]: kubectl --namespace=xlou top node 10:26:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:26:12 INFO [loop_until]: OK (rc = 0) 10:26:12 DEBUG --- stdout --- 10:26:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1357Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 113m 0% 6811Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 124m 0% 6965Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 109m 0% 6832Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1333m 8% 5585Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 638m 4% 2162Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1631m 10% 5657Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 70m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1703m 10% 14478Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3813m 23% 14423Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1605m 10% 14369Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 414m 2% 2059Mi 3% 10:26:12 DEBUG --- stderr --- 10:26:12 DEBUG 10:27:11 INFO 10:27:11 INFO [loop_until]: kubectl --namespace=xlou top pods 10:27:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:27:11 INFO [loop_until]: OK (rc = 0) 10:27:11 DEBUG --- stdout --- 10:27:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 54m 5724Mi am-55f77847b7-mbr4x 59m 5840Mi am-55f77847b7-mfzwm 59m 5794Mi ds-cts-0 5m 384Mi ds-cts-1 8m 395Mi ds-cts-2 6m 357Mi ds-idrepo-0 2516m 13826Mi ds-idrepo-1 2637m 13861Mi ds-idrepo-2 2823m 13801Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 1468m 4408Mi idm-65858d8c4c-gpz8d 1188m 4281Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 347m 544Mi 10:27:11 DEBUG --- stderr --- 10:27:11 DEBUG 10:27:12 INFO 10:27:12 INFO [loop_until]: kubectl --namespace=xlou top node 10:27:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:27:12 INFO [loop_until]: OK (rc = 0) 10:27:12 DEBUG --- stdout --- 10:27:12 DEBUG 
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1355Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 113m 0% 6811Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 116m 0% 6965Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 109m 0% 6827Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1320m 8% 5594Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 636m 4% 2160Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1633m 10% 5664Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2753m 17% 14452Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2585m 16% 14501Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2102m 13% 14444Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 404m 2% 2058Mi 3% 10:27:12 DEBUG --- stderr --- 10:27:12 DEBUG 10:28:11 INFO 10:28:11 INFO [loop_until]: kubectl --namespace=xlou top pods 10:28:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:28:11 INFO [loop_until]: OK (rc = 0) 10:28:11 DEBUG --- stdout --- 10:28:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 51m 5724Mi am-55f77847b7-mbr4x 58m 5840Mi am-55f77847b7-mfzwm 56m 5794Mi ds-cts-0 6m 385Mi ds-cts-1 8m 395Mi ds-cts-2 6m 357Mi ds-idrepo-0 4109m 13819Mi ds-idrepo-1 2150m 13788Mi ds-idrepo-2 2685m 13794Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 1492m 4414Mi idm-65858d8c4c-gpz8d 1291m 4286Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 345m 544Mi 10:28:11 DEBUG --- stderr --- 10:28:11 DEBUG 10:28:12 INFO 10:28:12 INFO [loop_until]: kubectl --namespace=xlou top node 10:28:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:28:12 INFO [loop_until]: OK (rc = 0) 10:28:12 DEBUG --- stdout --- 10:28:12 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1355Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 112m 0% 6807Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 114m 0% 6965Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 105m 0% 6831Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1344m 8% 5599Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 648m 4% 2162Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1599m 10% 5670Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 72m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2567m 16% 14443Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4078m 25% 14458Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2163m 13% 14396Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 404m 2% 2061Mi 3% 10:28:12 DEBUG --- stderr --- 10:28:12 DEBUG 10:29:11 INFO 10:29:11 INFO [loop_until]: kubectl --namespace=xlou top pods 10:29:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:29:11 INFO [loop_until]: OK (rc = 0) 10:29:11 DEBUG --- stdout --- 10:29:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 57m 5724Mi am-55f77847b7-mbr4x 55m 5840Mi am-55f77847b7-mfzwm 55m 5795Mi ds-cts-0 6m 384Mi ds-cts-1 9m 395Mi ds-cts-2 7m 358Mi ds-idrepo-0 3840m 13728Mi ds-idrepo-1 3993m 13857Mi ds-idrepo-2 2068m 13829Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 1481m 4420Mi idm-65858d8c4c-gpz8d 1189m 4291Mi lodemon-6cd9c44bd4-vnqvr 6m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 348m 545Mi 10:29:11 DEBUG --- stderr --- 10:29:11 DEBUG 10:29:13 INFO 10:29:13 INFO [loop_until]: kubectl --namespace=xlou top node 
10:29:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:29:13 INFO [loop_until]: OK (rc = 0) 10:29:13 DEBUG --- stdout --- 10:29:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1360Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 111m 0% 6808Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 110m 0% 6965Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 113m 0% 6828Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1355m 8% 5606Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 634m 3% 2157Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1615m 10% 5676Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 72m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2760m 17% 14482Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3758m 23% 14339Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3286m 20% 14419Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 410m 2% 2062Mi 3% 10:29:13 DEBUG --- stderr --- 10:29:13 DEBUG 10:30:11 INFO 10:30:11 INFO [loop_until]: kubectl --namespace=xlou top pods 10:30:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:30:11 INFO [loop_until]: OK (rc = 0) 10:30:11 DEBUG --- stdout --- 10:30:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 55m 5724Mi am-55f77847b7-mbr4x 57m 5840Mi am-55f77847b7-mfzwm 60m 5795Mi ds-cts-0 6m 384Mi ds-cts-1 8m 395Mi ds-cts-2 6m 357Mi ds-idrepo-0 2513m 13839Mi ds-idrepo-1 1954m 13795Mi ds-idrepo-2 1792m 13824Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 1444m 4424Mi idm-65858d8c4c-gpz8d 1212m 4298Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 367m 545Mi 10:30:11 DEBUG --- stderr --- 10:30:11 DEBUG 10:30:13 INFO 10:30:13 INFO [loop_until]: kubectl --namespace=xlou top node 10:30:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:30:13 INFO [loop_until]: OK (rc = 0) 10:30:13 DEBUG --- stdout --- 10:30:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1351Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 108m 0% 6808Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 110m 0% 6966Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 109m 0% 6828Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1375m 8% 5611Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 635m 3% 2159Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1587m 9% 5677Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1799m 11% 14481Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2594m 16% 14463Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2097m 13% 14401Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 426m 2% 2059Mi 3% 10:30:13 DEBUG --- stderr --- 10:30:13 DEBUG 10:31:11 INFO 10:31:11 INFO [loop_until]: kubectl --namespace=xlou top pods 10:31:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:31:11 INFO [loop_until]: OK (rc = 0) 10:31:11 DEBUG --- stdout --- 10:31:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 54m 5724Mi am-55f77847b7-mbr4x 53m 5840Mi am-55f77847b7-mfzwm 55m 5795Mi ds-cts-0 6m 385Mi ds-cts-1 8m 395Mi ds-cts-2 6m 357Mi ds-idrepo-0 2384m 13828Mi ds-idrepo-1 2315m 13877Mi ds-idrepo-2 2869m 13807Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 1474m 4429Mi idm-65858d8c4c-gpz8d 1208m 4303Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi 
overseer-0-64679cf868-xscwh 351m 545Mi 10:31:11 DEBUG --- stderr --- 10:31:11 DEBUG 10:31:13 INFO 10:31:13 INFO [loop_until]: kubectl --namespace=xlou top node 10:31:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:31:13 INFO [loop_until]: OK (rc = 0) 10:31:13 DEBUG --- stdout --- 10:31:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1354Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 115m 0% 6810Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 107m 0% 6971Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 109m 0% 6831Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1302m 8% 5613Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 633m 3% 2154Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1569m 9% 5684Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 71m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 3107m 19% 14516Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2579m 16% 14484Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1085Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1873m 11% 14404Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 398m 2% 2062Mi 3% 10:31:13 DEBUG --- stderr --- 10:31:13 DEBUG 10:32:11 INFO 10:32:11 INFO [loop_until]: kubectl --namespace=xlou top pods 10:32:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:32:11 INFO [loop_until]: OK (rc = 0) 10:32:11 DEBUG --- stdout --- 10:32:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 56m 5724Mi am-55f77847b7-mbr4x 53m 5840Mi am-55f77847b7-mfzwm 60m 5795Mi ds-cts-0 6m 384Mi ds-cts-1 8m 395Mi ds-cts-2 7m 357Mi ds-idrepo-0 3787m 13811Mi ds-idrepo-1 1638m 13825Mi ds-idrepo-2 2497m 13799Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 1531m 4436Mi idm-65858d8c4c-gpz8d 1205m 4307Mi lodemon-6cd9c44bd4-vnqvr 4m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 347m 545Mi 10:32:11 DEBUG --- stderr --- 10:32:11 DEBUG 10:32:13 INFO 10:32:13 INFO [loop_until]: kubectl --namespace=xlou top node 10:32:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:32:13 INFO [loop_until]: OK (rc = 0) 10:32:13 DEBUG --- stdout --- 10:32:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1357Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 117m 0% 6808Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 109m 0% 6970Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 112m 0% 6829Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1391m 8% 5621Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 655m 4% 2154Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1640m 10% 5689Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1744m 10% 14469Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4213m 26% 14434Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 64m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2142m 13% 14450Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 423m 2% 2064Mi 3% 10:32:13 DEBUG --- stderr --- 10:32:13 DEBUG 10:33:11 INFO 10:33:11 INFO [loop_until]: kubectl --namespace=xlou top pods 10:33:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:33:11 INFO [loop_until]: OK (rc = 0) 10:33:11 DEBUG --- stdout --- 10:33:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 57m 5724Mi am-55f77847b7-mbr4x 55m 5840Mi am-55f77847b7-mfzwm 58m 5795Mi ds-cts-0 7m 386Mi ds-cts-1 8m 396Mi ds-cts-2 6m 358Mi ds-idrepo-0 2542m 13831Mi ds-idrepo-1 2546m 13843Mi ds-idrepo-2 1764m 13823Mi 
end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 1520m 4442Mi idm-65858d8c4c-gpz8d 1196m 4314Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 357m 545Mi 10:33:11 DEBUG --- stderr --- 10:33:11 DEBUG 10:33:13 INFO 10:33:13 INFO [loop_until]: kubectl --namespace=xlou top node 10:33:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:33:13 INFO [loop_until]: OK (rc = 0) 10:33:13 DEBUG --- stdout --- 10:33:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1357Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 114m 0% 6807Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 107m 0% 6969Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 111m 0% 6833Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1345m 8% 5624Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 640m 4% 2157Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1660m 10% 5694Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 74m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1831m 11% 14479Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3057m 19% 14462Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2695m 16% 14459Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 434m 2% 2064Mi 3% 10:33:13 DEBUG --- stderr --- 10:33:13 DEBUG 10:34:11 INFO 10:34:11 INFO [loop_until]: kubectl --namespace=xlou top pods 10:34:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:34:11 INFO [loop_until]: OK (rc = 0) 10:34:11 DEBUG --- stdout --- 10:34:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 52m 5725Mi am-55f77847b7-mbr4x 55m 5840Mi am-55f77847b7-mfzwm 55m 5795Mi ds-cts-0 6m 384Mi ds-cts-1 8m 395Mi ds-cts-2 6m 357Mi ds-idrepo-0 2592m 13863Mi ds-idrepo-1 1876m 13840Mi ds-idrepo-2 2765m 13802Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 1472m 4447Mi idm-65858d8c4c-gpz8d 1178m 4317Mi lodemon-6cd9c44bd4-vnqvr 1m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 350m 546Mi 10:34:11 DEBUG --- stderr --- 10:34:11 DEBUG 10:34:13 INFO 10:34:13 INFO [loop_until]: kubectl --namespace=xlou top node 10:34:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:34:13 INFO [loop_until]: OK (rc = 0) 10:34:13 DEBUG --- stdout --- 10:34:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1365Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 106m 0% 6810Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 108m 0% 6968Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 110m 0% 6833Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 1316m 8% 5630Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 645m 4% 2159Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1665m 10% 5701Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 71m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2956m 18% 14529Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2555m 16% 14505Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2174m 13% 14365Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 406m 2% 2062Mi 3% 10:34:13 DEBUG --- stderr --- 10:34:13 DEBUG 10:35:11 INFO 10:35:11 INFO [loop_until]: kubectl --namespace=xlou top pods 10:35:11 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:35:11 INFO [loop_until]: OK (rc = 0) 10:35:11 DEBUG --- stdout --- 10:35:11 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 42m 5725Mi am-55f77847b7-mbr4x 46m 5841Mi 
am-55f77847b7-mfzwm 38m 5795Mi ds-cts-0 5m 384Mi ds-cts-1 9m 395Mi ds-cts-2 6m 355Mi ds-idrepo-0 1433m 13864Mi ds-idrepo-1 849m 13834Mi ds-idrepo-2 1266m 13809Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 926m 4450Mi idm-65858d8c4c-gpz8d 627m 4322Mi lodemon-6cd9c44bd4-vnqvr 1m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 260m 545Mi 10:35:11 DEBUG --- stderr --- 10:35:11 DEBUG 10:35:13 INFO 10:35:13 INFO [loop_until]: kubectl --namespace=xlou top node 10:35:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:35:13 INFO [loop_until]: OK (rc = 0) 10:35:13 DEBUG --- stdout --- 10:35:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1353Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 93m 0% 6809Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 85m 0% 6968Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 77m 0% 6831Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 632m 3% 5632Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 374m 2% 2156Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 792m 4% 5704Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 73m 0% 1122Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1105m 6% 14468Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 1904m 11% 14505Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1091Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 975m 6% 14456Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 331m 2% 2063Mi 3% 10:35:13 DEBUG --- stderr --- 10:35:13 DEBUG 10:36:12 INFO 10:36:12 INFO [loop_until]: kubectl --namespace=xlou top pods 10:36:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:36:12 INFO [loop_until]: OK (rc = 0) 10:36:12 DEBUG --- stdout --- 10:36:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 6m 5725Mi am-55f77847b7-mbr4x 5m 5841Mi am-55f77847b7-mfzwm 6m 5795Mi ds-cts-0 6m 385Mi ds-cts-1 8m 395Mi ds-cts-2 5m 356Mi ds-idrepo-0 10m 13864Mi ds-idrepo-1 10m 13821Mi ds-idrepo-2 13m 13876Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 6m 4449Mi idm-65858d8c4c-gpz8d 7m 4321Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 1m 153Mi 10:36:12 DEBUG --- stderr --- 10:36:12 DEBUG 10:36:13 INFO 10:36:13 INFO [loop_until]: kubectl --namespace=xlou top node 10:36:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:36:13 INFO [loop_until]: OK (rc = 0) 10:36:13 DEBUG --- stdout --- 10:36:13 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1350Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 61m 0% 6809Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 58m 0% 6968Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 60m 0% 6833Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 69m 0% 5629Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 119m 0% 2157Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 70m 0% 5704Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 70m 0% 14534Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 57m 0% 14510Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 63m 0% 14440Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 66m 0% 1676Mi 2% 10:36:13 DEBUG --- stderr --- 10:36:13 DEBUG 127.0.0.1 - - [12/Aug/2023 10:36:50] "GET /monitoring/average?start_time=23-08-12_09:06:19&stop_time=23-08-12_09:34:50 HTTP/1.1" 200 - 10:37:12 INFO 10:37:12 INFO [loop_until]: kubectl --namespace=xlou top pods 10:37:12 INFO [loop_until]: 
(max_time=180, interval=5, expected_rc=[0] 10:37:12 INFO [loop_until]: OK (rc = 0) 10:37:12 DEBUG --- stdout --- 10:37:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 7m 5725Mi am-55f77847b7-mbr4x 6m 5841Mi am-55f77847b7-mfzwm 6m 5795Mi ds-cts-0 7m 385Mi ds-cts-1 8m 395Mi ds-cts-2 4m 355Mi ds-idrepo-0 10m 13864Mi ds-idrepo-1 12m 13820Mi ds-idrepo-2 14m 13876Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 6m 4449Mi idm-65858d8c4c-gpz8d 6m 4321Mi lodemon-6cd9c44bd4-vnqvr 3m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 1m 153Mi 10:37:12 DEBUG --- stderr --- 10:37:12 DEBUG 10:37:13 INFO 10:37:13 INFO [loop_until]: kubectl --namespace=xlou top node 10:37:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:37:14 INFO [loop_until]: OK (rc = 0) 10:37:14 DEBUG --- stdout --- 10:37:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1352Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 60m 0% 6808Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 61m 0% 6965Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 60m 0% 6831Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 68m 0% 5630Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 122m 0% 2160Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 5702Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 70m 0% 14537Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 57m 0% 14509Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1084Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 68m 0% 14442Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 69m 0% 1677Mi 2% 10:37:14 DEBUG --- stderr --- 10:37:14 DEBUG 10:38:12 INFO 10:38:12 INFO [loop_until]: kubectl --namespace=xlou top pods 10:38:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:38:12 INFO [loop_until]: OK (rc = 0) 10:38:12 DEBUG --- stdout --- 10:38:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 162m 5726Mi am-55f77847b7-mbr4x 145m 5843Mi am-55f77847b7-mfzwm 180m 5800Mi ds-cts-0 5m 385Mi ds-cts-1 8m 395Mi ds-cts-2 6m 356Mi ds-idrepo-0 2814m 13861Mi ds-idrepo-1 1025m 13858Mi ds-idrepo-2 1275m 13822Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 891m 4462Mi idm-65858d8c4c-gpz8d 814m 4333Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 656m 626Mi 10:38:12 DEBUG --- stderr --- 10:38:12 DEBUG 10:38:14 INFO 10:38:14 INFO [loop_until]: kubectl --namespace=xlou top node 10:38:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:38:14 INFO [loop_until]: OK (rc = 0) 10:38:14 DEBUG --- stdout --- 10:38:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1355Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 248m 1% 6814Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 241m 1% 6970Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 225m 1% 6834Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 945m 5% 5642Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 524m 3% 2160Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1067m 6% 5715Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1297m 8% 14464Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 63m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2239m 14% 14477Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 62m 0% 1081Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1034m 6% 14490Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 716m 4% 2147Mi 3% 10:38:14 DEBUG --- stderr --- 
10:38:14 DEBUG 10:39:12 INFO 10:39:12 INFO [loop_until]: kubectl --namespace=xlou top pods 10:39:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:39:12 INFO [loop_until]: OK (rc = 0) 10:39:12 DEBUG --- stdout --- 10:39:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 134m 5728Mi am-55f77847b7-mbr4x 133m 5843Mi am-55f77847b7-mfzwm 135m 5800Mi ds-cts-0 7m 385Mi ds-cts-1 8m 395Mi ds-cts-2 7m 357Mi ds-idrepo-0 3596m 13829Mi ds-idrepo-1 2134m 13817Mi ds-idrepo-2 5070m 13824Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 1076m 4482Mi idm-65858d8c4c-gpz8d 867m 4346Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 326m 654Mi 10:39:12 DEBUG --- stderr --- 10:39:12 DEBUG 10:39:14 INFO 10:39:14 INFO [loop_until]: kubectl --namespace=xlou top node 10:39:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:39:14 INFO [loop_until]: OK (rc = 0) 10:39:14 DEBUG --- stdout --- 10:39:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1354Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 189m 1% 6817Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 192m 1% 6971Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 192m 1% 6835Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 970m 6% 5658Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 619m 3% 2155Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1202m 7% 5735Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 3352m 21% 14488Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3270m 20% 14479Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2213m 13% 14443Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 390m 2% 2172Mi 3% 10:39:14 DEBUG --- stderr --- 10:39:14 DEBUG 10:40:12 INFO 10:40:12 INFO [loop_until]: kubectl --namespace=xlou top pods 10:40:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:40:12 INFO [loop_until]: OK (rc = 0) 10:40:12 DEBUG --- stdout --- 10:40:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 123m 5728Mi am-55f77847b7-mbr4x 121m 5843Mi am-55f77847b7-mfzwm 124m 5800Mi ds-cts-0 6m 385Mi ds-cts-1 8m 395Mi ds-cts-2 7m 357Mi ds-idrepo-0 5022m 13781Mi ds-idrepo-1 3287m 13792Mi ds-idrepo-2 1358m 13823Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 1011m 4487Mi idm-65858d8c4c-gpz8d 814m 4350Mi lodemon-6cd9c44bd4-vnqvr 1m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 312m 658Mi 10:40:12 DEBUG --- stderr --- 10:40:12 DEBUG 10:40:14 INFO 10:40:14 INFO [loop_until]: kubectl --namespace=xlou top node 10:40:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:40:14 INFO [loop_until]: OK (rc = 0) 10:40:14 DEBUG --- stdout --- 10:40:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1354Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 180m 1% 6815Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 179m 1% 6970Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 180m 1% 6835Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 887m 5% 5659Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 587m 3% 2155Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1152m 7% 5738Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 68m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1427m 8% 14489Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 5031m 31% 14447Mi 24% 
gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3389m 21% 14413Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 379m 2% 2175Mi 3% 10:40:14 DEBUG --- stderr --- 10:40:14 DEBUG 10:41:12 INFO 10:41:12 INFO [loop_until]: kubectl --namespace=xlou top pods 10:41:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:41:12 INFO [loop_until]: OK (rc = 0) 10:41:12 DEBUG --- stdout --- 10:41:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 120m 5729Mi am-55f77847b7-mbr4x 123m 5843Mi am-55f77847b7-mfzwm 128m 5800Mi ds-cts-0 6m 385Mi ds-cts-1 8m 395Mi ds-cts-2 6m 357Mi ds-idrepo-0 3087m 13827Mi ds-idrepo-1 1433m 13750Mi ds-idrepo-2 1766m 13791Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 986m 4491Mi idm-65858d8c4c-gpz8d 807m 4353Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 332m 659Mi 10:41:12 DEBUG --- stderr --- 10:41:12 DEBUG 10:41:14 INFO 10:41:14 INFO [loop_until]: kubectl --namespace=xlou top node 10:41:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:41:14 INFO [loop_until]: OK (rc = 0) 10:41:14 DEBUG --- stdout --- 10:41:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1350Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 183m 1% 6813Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 177m 1% 6970Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 179m 1% 6835Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 944m 5% 5665Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 617m 3% 2145Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1100m 6% 5742Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 68m 0% 1120Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1589m 10% 14483Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3247m 20% 14488Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1083m 6% 14396Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 395m 2% 2174Mi 3% 10:41:14 DEBUG --- stderr --- 10:41:14 DEBUG 10:42:12 INFO 10:42:12 INFO [loop_until]: kubectl --namespace=xlou top pods 10:42:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:42:12 INFO [loop_until]: OK (rc = 0) 10:42:12 DEBUG --- stdout --- 10:42:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 153m 5743Mi am-55f77847b7-mbr4x 122m 5843Mi am-55f77847b7-mfzwm 125m 5815Mi ds-cts-0 6m 385Mi ds-cts-1 8m 395Mi ds-cts-2 6m 357Mi ds-idrepo-0 3069m 13746Mi ds-idrepo-1 1909m 13829Mi ds-idrepo-2 1619m 13824Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 971m 4493Mi idm-65858d8c4c-gpz8d 835m 4357Mi lodemon-6cd9c44bd4-vnqvr 3m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 309m 659Mi 10:42:12 DEBUG --- stderr --- 10:42:12 DEBUG 10:42:14 INFO 10:42:14 INFO [loop_until]: kubectl --namespace=xlou top node 10:42:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:42:14 INFO [loop_until]: OK (rc = 0) 10:42:14 DEBUG --- stdout --- 10:42:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1350Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 178m 1% 6830Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 179m 1% 6971Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 202m 1% 6849Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 947m 5% 5665Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 595m 3% 2156Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1116m 7% 5747Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 67m 0% 1126Mi 
1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1773m 11% 14495Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 55m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3280m 20% 14432Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1865m 11% 14462Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 378m 2% 2176Mi 3% 10:42:14 DEBUG --- stderr --- 10:42:14 DEBUG 10:43:12 INFO 10:43:12 INFO [loop_until]: kubectl --namespace=xlou top pods 10:43:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:43:12 INFO [loop_until]: OK (rc = 0) 10:43:12 DEBUG --- stdout --- 10:43:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 119m 5743Mi am-55f77847b7-mbr4x 128m 5843Mi am-55f77847b7-mfzwm 122m 5814Mi ds-cts-0 5m 385Mi ds-cts-1 8m 397Mi ds-cts-2 7m 357Mi ds-idrepo-0 2749m 13851Mi ds-idrepo-1 1208m 13870Mi ds-idrepo-2 1625m 13821Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 987m 4496Mi idm-65858d8c4c-gpz8d 791m 4359Mi lodemon-6cd9c44bd4-vnqvr 5m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 303m 660Mi 10:43:12 DEBUG --- stderr --- 10:43:12 DEBUG 10:43:14 INFO 10:43:14 INFO [loop_until]: kubectl --namespace=xlou top node 10:43:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:43:14 INFO [loop_until]: OK (rc = 0) 10:43:14 DEBUG --- stdout --- 10:43:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1352Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 177m 1% 6830Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 221m 1% 6974Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 176m 1% 6852Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 943m 5% 5669Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 603m 3% 2159Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1132m 7% 5750Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 70m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1831m 11% 14500Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2957m 18% 14516Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1186m 7% 14502Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 374m 2% 2177Mi 3% 10:43:14 DEBUG --- stderr --- 10:43:14 DEBUG 10:44:12 INFO 10:44:12 INFO [loop_until]: kubectl --namespace=xlou top pods 10:44:12 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:44:12 INFO [loop_until]: OK (rc = 0) 10:44:12 DEBUG --- stdout --- 10:44:12 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 121m 5743Mi am-55f77847b7-mbr4x 119m 5847Mi am-55f77847b7-mfzwm 123m 5814Mi ds-cts-0 6m 385Mi ds-cts-1 8m 395Mi ds-cts-2 7m 358Mi ds-idrepo-0 2638m 13861Mi ds-idrepo-1 1527m 13823Mi ds-idrepo-2 1709m 13826Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 1009m 4499Mi idm-65858d8c4c-gpz8d 805m 4361Mi lodemon-6cd9c44bd4-vnqvr 6m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 295m 660Mi 10:44:12 DEBUG --- stderr --- 10:44:12 DEBUG 10:44:14 INFO 10:44:14 INFO [loop_until]: kubectl --namespace=xlou top node 10:44:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:44:14 INFO [loop_until]: OK (rc = 0) 10:44:14 DEBUG --- stdout --- 10:44:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 79m 0% 1352Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 177m 1% 6830Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 177m 1% 6973Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 175m 1% 6850Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 923m 5% 5673Mi 9% 
gke-xlou-cdm-default-pool-f05840a3-h81k 605m 3% 2157Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1105m 6% 5750Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 68m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1578m 9% 14513Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2641m 16% 14487Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1618m 10% 14472Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 366m 2% 2177Mi 3% 10:44:14 DEBUG --- stderr --- 10:44:14 DEBUG 10:45:13 INFO 10:45:13 INFO [loop_until]: kubectl --namespace=xlou top pods 10:45:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:45:13 INFO [loop_until]: OK (rc = 0) 10:45:13 DEBUG --- stdout --- 10:45:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 118m 5743Mi am-55f77847b7-mbr4x 119m 5847Mi am-55f77847b7-mfzwm 123m 5814Mi ds-cts-0 5m 385Mi ds-cts-1 8m 395Mi ds-cts-2 6m 357Mi ds-idrepo-0 5021m 13748Mi ds-idrepo-1 1612m 13651Mi ds-idrepo-2 2059m 13664Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 998m 4504Mi idm-65858d8c4c-gpz8d 805m 4366Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 299m 660Mi 10:45:13 DEBUG --- stderr --- 10:45:13 DEBUG 10:45:14 INFO 10:45:14 INFO [loop_until]: kubectl --namespace=xlou top node 10:45:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:45:14 INFO [loop_until]: OK (rc = 0) 10:45:14 DEBUG --- stdout --- 10:45:14 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1353Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 175m 1% 6827Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 176m 1% 6973Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 174m 1% 6851Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 942m 5% 5676Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 603m 3% 2158Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1114m 7% 5756Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 68m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2221m 13% 14348Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4455m 28% 14394Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1758m 11% 14262Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 364m 2% 2176Mi 3% 10:45:14 DEBUG --- stderr --- 10:45:14 DEBUG 10:46:13 INFO 10:46:13 INFO [loop_until]: kubectl --namespace=xlou top pods 10:46:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:46:13 INFO [loop_until]: OK (rc = 0) 10:46:13 DEBUG --- stdout --- 10:46:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 126m 5743Mi am-55f77847b7-mbr4x 123m 5847Mi am-55f77847b7-mfzwm 130m 5814Mi ds-cts-0 6m 389Mi ds-cts-1 7m 395Mi ds-cts-2 7m 357Mi ds-idrepo-0 2648m 13828Mi ds-idrepo-1 1834m 13786Mi ds-idrepo-2 1330m 13722Mi end-user-ui-6845bc78c7-jrqhg 1m 4Mi idm-65858d8c4c-5vh78 999m 4507Mi idm-65858d8c4c-gpz8d 840m 4368Mi lodemon-6cd9c44bd4-vnqvr 4m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 322m 661Mi 10:46:13 DEBUG --- stderr --- 10:46:13 DEBUG 10:46:15 INFO 10:46:15 INFO [loop_until]: kubectl --namespace=xlou top node 10:46:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:46:15 INFO [loop_until]: OK (rc = 0) 10:46:15 DEBUG --- stdout --- 10:46:15 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1353Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 181m 1% 6827Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-976h 180m 1% 6972Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 173m 1% 6851Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 937m 5% 5676Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 617m 3% 2158Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1131m 7% 5756Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 67m 0% 1123Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1579m 9% 14402Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2682m 16% 14498Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1972m 12% 14436Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 390m 2% 2177Mi 3% 10:46:15 DEBUG --- stderr --- 10:46:15 DEBUG 10:47:13 INFO 10:47:13 INFO [loop_until]: kubectl --namespace=xlou top pods 10:47:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:47:13 INFO [loop_until]: OK (rc = 0) 10:47:13 DEBUG --- stdout --- 10:47:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 120m 5743Mi am-55f77847b7-mbr4x 123m 5847Mi am-55f77847b7-mfzwm 129m 5814Mi ds-cts-0 9m 383Mi ds-cts-1 8m 395Mi ds-cts-2 6m 357Mi ds-idrepo-0 3461m 13671Mi ds-idrepo-1 2036m 13746Mi ds-idrepo-2 1403m 13824Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 983m 4510Mi idm-65858d8c4c-gpz8d 820m 4372Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 311m 661Mi 10:47:13 DEBUG --- stderr --- 10:47:13 DEBUG 10:47:15 INFO 10:47:15 INFO [loop_until]: kubectl --namespace=xlou top node 10:47:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:47:15 INFO [loop_until]: OK (rc = 0) 10:47:15 DEBUG --- stdout --- 10:47:15 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1355Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 183m 1% 6829Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 179m 1% 6971Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 178m 1% 6848Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 962m 6% 5681Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 618m 3% 2160Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1123m 7% 5760Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1126Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1620m 10% 14503Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3823m 24% 14429Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1839m 11% 14391Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 362m 2% 2176Mi 3% 10:47:15 DEBUG --- stderr --- 10:47:15 DEBUG 10:48:13 INFO 10:48:13 INFO [loop_until]: kubectl --namespace=xlou top pods 10:48:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:48:13 INFO [loop_until]: OK (rc = 0) 10:48:13 DEBUG --- stdout --- 10:48:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 162m 5751Mi am-55f77847b7-mbr4x 124m 5847Mi am-55f77847b7-mfzwm 124m 5823Mi ds-cts-0 6m 383Mi ds-cts-1 8m 396Mi ds-cts-2 7m 357Mi ds-idrepo-0 3682m 13678Mi ds-idrepo-1 1057m 13837Mi ds-idrepo-2 1728m 13645Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 1013m 4515Mi idm-65858d8c4c-gpz8d 834m 4374Mi lodemon-6cd9c44bd4-vnqvr 7m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 320m 661Mi 10:48:13 DEBUG --- stderr --- 10:48:13 DEBUG 10:48:15 INFO 10:48:15 INFO [loop_until]: kubectl --namespace=xlou top node 10:48:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:48:15 INFO [loop_until]: OK (rc = 0) 10:48:15 DEBUG --- stdout --- 10:48:15 DEBUG NAME 
CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1353Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 180m 1% 6836Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 177m 1% 6976Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 223m 1% 6860Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 958m 6% 5685Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 615m 3% 2157Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1146m 7% 5764Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1818m 11% 14349Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 60m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4038m 25% 14356Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 63m 0% 1090Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1116m 7% 14490Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 378m 2% 2178Mi 3% 10:48:15 DEBUG --- stderr --- 10:48:15 DEBUG 10:49:13 INFO 10:49:13 INFO [loop_until]: kubectl --namespace=xlou top pods 10:49:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:49:13 INFO [loop_until]: OK (rc = 0) 10:49:13 DEBUG --- stdout --- 10:49:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 123m 5751Mi am-55f77847b7-mbr4x 156m 5850Mi am-55f77847b7-mfzwm 122m 5823Mi ds-cts-0 5m 382Mi ds-cts-1 7m 396Mi ds-cts-2 6m 357Mi ds-idrepo-0 3168m 13605Mi ds-idrepo-1 2726m 13697Mi ds-idrepo-2 1589m 13786Mi end-user-ui-6845bc78c7-jrqhg 1m 4Mi idm-65858d8c4c-5vh78 972m 4518Mi idm-65858d8c4c-gpz8d 814m 4376Mi lodemon-6cd9c44bd4-vnqvr 4m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 306m 661Mi 10:49:13 DEBUG --- stderr --- 10:49:13 DEBUG 10:49:15 INFO 10:49:15 INFO [loop_until]: kubectl --namespace=xlou top node 10:49:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:49:15 INFO [loop_until]: OK (rc = 0) 10:49:15 DEBUG --- stdout --- 10:49:15 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1353Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 178m 1% 6839Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 227m 1% 6977Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 178m 1% 6863Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 919m 5% 5687Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 613m 3% 2163Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1125m 7% 5767Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1595m 10% 14490Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3521m 22% 14352Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2543m 16% 14351Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 376m 2% 2178Mi 3% 10:49:15 DEBUG --- stderr --- 10:49:15 DEBUG 10:50:13 INFO 10:50:13 INFO [loop_until]: kubectl --namespace=xlou top pods 10:50:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:50:13 INFO [loop_until]: OK (rc = 0) 10:50:13 DEBUG --- stdout --- 10:50:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 118m 5751Mi am-55f77847b7-mbr4x 121m 5850Mi am-55f77847b7-mfzwm 122m 5823Mi ds-cts-0 6m 383Mi ds-cts-1 8m 395Mi ds-cts-2 6m 358Mi ds-idrepo-0 2642m 13618Mi ds-idrepo-1 1354m 13722Mi ds-idrepo-2 1544m 13750Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 1001m 4520Mi idm-65858d8c4c-gpz8d 782m 4379Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 306m 661Mi 10:50:13 DEBUG --- stderr --- 10:50:13 DEBUG 10:50:15 INFO 10:50:15 INFO [loop_until]: kubectl --namespace=xlou top node 
10:50:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:50:15 INFO [loop_until]: OK (rc = 0) 10:50:15 DEBUG --- stdout --- 10:50:15 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1349Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 178m 1% 6838Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 175m 1% 6977Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 174m 1% 6858Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 927m 5% 5689Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 603m 3% 2163Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1121m 7% 5772Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1860m 11% 14449Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2923m 18% 14293Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 65m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1429m 8% 14369Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 367m 2% 2179Mi 3% 10:50:15 DEBUG --- stderr --- 10:50:15 DEBUG 10:51:13 INFO 10:51:13 INFO [loop_until]: kubectl --namespace=xlou top pods 10:51:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:51:13 INFO [loop_until]: OK (rc = 0) 10:51:13 DEBUG --- stdout --- 10:51:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 123m 5751Mi am-55f77847b7-mbr4x 121m 5850Mi am-55f77847b7-mfzwm 122m 5823Mi ds-cts-0 5m 383Mi ds-cts-1 7m 395Mi ds-cts-2 6m 358Mi ds-idrepo-0 2540m 13751Mi ds-idrepo-1 2147m 13823Mi ds-idrepo-2 2470m 13727Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 1013m 4523Mi idm-65858d8c4c-gpz8d 808m 4382Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 315m 661Mi 10:51:13 DEBUG --- stderr --- 10:51:13 DEBUG 10:51:15 INFO 10:51:15 INFO [loop_until]: kubectl --namespace=xlou top node 10:51:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:51:15 INFO [loop_until]: OK (rc = 0) 10:51:15 DEBUG --- stdout --- 10:51:15 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1353Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 178m 1% 6836Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 179m 1% 6975Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 177m 1% 6858Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 940m 5% 5692Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 618m 3% 2160Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1156m 7% 5784Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 67m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2919m 18% 14451Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1108Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2693m 16% 14454Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1091Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2897m 18% 14458Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 379m 2% 2189Mi 3% 10:51:15 DEBUG --- stderr --- 10:51:15 DEBUG 10:52:13 INFO 10:52:13 INFO [loop_until]: kubectl --namespace=xlou top pods 10:52:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:52:13 INFO [loop_until]: OK (rc = 0) 10:52:13 DEBUG --- stdout --- 10:52:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 124m 5751Mi am-55f77847b7-mbr4x 124m 5850Mi am-55f77847b7-mfzwm 129m 5823Mi ds-cts-0 5m 383Mi ds-cts-1 8m 396Mi ds-cts-2 6m 358Mi ds-idrepo-0 2877m 13822Mi ds-idrepo-1 1406m 13711Mi ds-idrepo-2 2311m 13750Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 991m 4525Mi idm-65858d8c4c-gpz8d 857m 4384Mi lodemon-6cd9c44bd4-vnqvr 1m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi 
overseer-0-64679cf868-xscwh 280m 661Mi 10:52:13 DEBUG --- stderr --- 10:52:13 DEBUG 10:52:15 INFO 10:52:15 INFO [loop_until]: kubectl --namespace=xlou top node 10:52:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:52:15 INFO [loop_until]: OK (rc = 0) 10:52:15 DEBUG --- stdout --- 10:52:15 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1350Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 175m 1% 6837Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 180m 1% 6976Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 175m 1% 6860Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 936m 5% 5694Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 621m 3% 2154Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1124m 7% 5777Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 69m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1576m 9% 14455Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2665m 16% 14511Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1437m 9% 14366Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 379m 2% 2175Mi 3% 10:52:15 DEBUG --- stderr --- 10:52:15 DEBUG 10:53:13 INFO 10:53:13 INFO [loop_until]: kubectl --namespace=xlou top pods 10:53:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:53:13 INFO [loop_until]: OK (rc = 0) 10:53:13 DEBUG --- stdout --- 10:53:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 121m 5751Mi am-55f77847b7-mbr4x 124m 5850Mi am-55f77847b7-mfzwm 127m 5823Mi ds-cts-0 10m 386Mi ds-cts-1 8m 395Mi ds-cts-2 6m 358Mi ds-idrepo-0 2498m 13833Mi ds-idrepo-1 1294m 13840Mi ds-idrepo-2 1305m 13823Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 983m 4528Mi idm-65858d8c4c-gpz8d 821m 4387Mi lodemon-6cd9c44bd4-vnqvr 6m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 305m 661Mi 10:53:13 DEBUG --- stderr --- 10:53:13 DEBUG 10:53:15 INFO 10:53:15 INFO [loop_until]: kubectl --namespace=xlou top node 10:53:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:53:15 INFO [loop_until]: OK (rc = 0) 10:53:15 DEBUG --- stdout --- 10:53:15 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 71m 0% 1348Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 181m 1% 6837Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 178m 1% 6977Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 171m 1% 6857Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 931m 5% 5696Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 611m 3% 2158Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1117m 7% 5781Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 74m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1566m 9% 14520Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 62m 0% 1114Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2607m 16% 14531Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1320m 8% 14501Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 374m 2% 2179Mi 3% 10:53:15 DEBUG --- stderr --- 10:53:15 DEBUG 10:54:13 INFO 10:54:13 INFO [loop_until]: kubectl --namespace=xlou top pods 10:54:13 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:54:13 INFO [loop_until]: OK (rc = 0) 10:54:13 DEBUG --- stdout --- 10:54:13 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 119m 5753Mi am-55f77847b7-mbr4x 123m 5850Mi am-55f77847b7-mfzwm 123m 5826Mi ds-cts-0 5m 386Mi ds-cts-1 8m 395Mi ds-cts-2 6m 359Mi ds-idrepo-0 2668m 13826Mi ds-idrepo-1 1124m 13866Mi ds-idrepo-2 1601m 13507Mi 
end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 1023m 4532Mi idm-65858d8c4c-gpz8d 821m 4389Mi lodemon-6cd9c44bd4-vnqvr 5m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 299m 661Mi 10:54:13 DEBUG --- stderr --- 10:54:13 DEBUG 10:54:15 INFO 10:54:15 INFO [loop_until]: kubectl --namespace=xlou top node 10:54:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:54:16 INFO [loop_until]: OK (rc = 0) 10:54:16 DEBUG --- stdout --- 10:54:16 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1351Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 178m 1% 6840Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 180m 1% 6977Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 177m 1% 6859Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 959m 6% 5699Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 619m 3% 2158Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1118m 7% 5786Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1483m 9% 14247Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2894m 18% 14514Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1021m 6% 14513Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 369m 2% 2179Mi 3% 10:54:16 DEBUG --- stderr --- 10:54:16 DEBUG 10:55:14 INFO 10:55:14 INFO [loop_until]: kubectl --namespace=xlou top pods 10:55:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:55:14 INFO [loop_until]: OK (rc = 0) 10:55:14 DEBUG --- stdout --- 10:55:14 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 119m 5753Mi am-55f77847b7-mbr4x 165m 5852Mi am-55f77847b7-mfzwm 127m 5826Mi ds-cts-0 6m 386Mi ds-cts-1 8m 395Mi ds-cts-2 6m 358Mi ds-idrepo-0 3003m 13645Mi ds-idrepo-1 1336m 13822Mi ds-idrepo-2 1468m 13669Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 929m 4535Mi idm-65858d8c4c-gpz8d 818m 4394Mi lodemon-6cd9c44bd4-vnqvr 1m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 302m 661Mi 10:55:14 DEBUG --- stderr --- 10:55:14 DEBUG 10:55:16 INFO 10:55:16 INFO [loop_until]: kubectl --namespace=xlou top node 10:55:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:55:16 INFO [loop_until]: OK (rc = 0) 10:55:16 DEBUG --- stdout --- 10:55:16 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 72m 0% 1350Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 178m 1% 6841Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 216m 1% 6980Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 175m 1% 6857Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 941m 5% 5702Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 610m 3% 2158Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1100m 6% 5790Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 68m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1381m 8% 14398Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3404m 21% 14399Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1325m 8% 14494Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 372m 2% 2180Mi 3% 10:55:16 DEBUG --- stderr --- 10:55:16 DEBUG 10:56:14 INFO 10:56:14 INFO [loop_until]: kubectl --namespace=xlou top pods 10:56:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:56:14 INFO [loop_until]: OK (rc = 0) 10:56:14 DEBUG --- stdout --- 10:56:14 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 123m 5753Mi am-55f77847b7-mbr4x 125m 5852Mi 
am-55f77847b7-mfzwm 124m 5826Mi ds-cts-0 12m 384Mi ds-cts-1 7m 396Mi ds-cts-2 6m 358Mi ds-idrepo-0 2781m 13721Mi ds-idrepo-1 2509m 13813Mi ds-idrepo-2 2596m 13549Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 1012m 4538Mi idm-65858d8c4c-gpz8d 808m 4396Mi lodemon-6cd9c44bd4-vnqvr 1m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 302m 662Mi 10:56:14 DEBUG --- stderr --- 10:56:14 DEBUG 10:56:16 INFO 10:56:16 INFO [loop_until]: kubectl --namespace=xlou top node 10:56:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:56:16 INFO [loop_until]: OK (rc = 0) 10:56:16 DEBUG --- stdout --- 10:56:16 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 70m 0% 1356Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 180m 1% 6837Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 181m 1% 6976Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 178m 1% 6856Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 945m 5% 5705Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 615m 3% 2154Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1136m 7% 5791Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 68m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1518m 9% 14237Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2624m 16% 14405Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1086Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2643m 16% 14475Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 371m 2% 2180Mi 3% 10:56:16 DEBUG --- stderr --- 10:56:16 DEBUG 10:57:14 INFO 10:57:14 INFO [loop_until]: kubectl --namespace=xlou top pods 10:57:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:57:14 INFO [loop_until]: OK (rc = 0) 10:57:14 DEBUG --- stdout --- 10:57:14 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 118m 5753Mi am-55f77847b7-mbr4x 123m 5852Mi am-55f77847b7-mfzwm 122m 5826Mi ds-cts-0 5m 384Mi ds-cts-1 8m 396Mi ds-cts-2 6m 358Mi ds-idrepo-0 2541m 13806Mi ds-idrepo-1 1119m 13823Mi ds-idrepo-2 1247m 13681Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 971m 4540Mi idm-65858d8c4c-gpz8d 790m 4399Mi lodemon-6cd9c44bd4-vnqvr 5m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 297m 662Mi 10:57:14 DEBUG --- stderr --- 10:57:14 DEBUG 10:57:16 INFO 10:57:16 INFO [loop_until]: kubectl --namespace=xlou top node 10:57:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:57:16 INFO [loop_until]: OK (rc = 0) 10:57:16 DEBUG --- stdout --- 10:57:16 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 73m 0% 1355Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 180m 1% 6840Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 175m 1% 6980Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 165m 1% 6860Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 903m 5% 5705Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 604m 3% 2155Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1087m 6% 5794Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1237m 7% 14374Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2604m 16% 14501Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 55m 0% 1090Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1381m 8% 14503Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 360m 2% 2179Mi 3% 10:57:16 DEBUG --- stderr --- 10:57:16 DEBUG 10:58:14 INFO 10:58:14 INFO [loop_until]: kubectl --namespace=xlou top pods 10:58:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:58:14 INFO [loop_until]: OK (rc = 0) 10:58:14 DEBUG --- 
stdout --- 10:58:14 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 127m 5753Mi am-55f77847b7-mbr4x 123m 5852Mi am-55f77847b7-mfzwm 125m 5826Mi ds-cts-0 5m 384Mi ds-cts-1 8m 396Mi ds-cts-2 6m 358Mi ds-idrepo-0 2827m 13828Mi ds-idrepo-1 1415m 13775Mi ds-idrepo-2 1429m 13781Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 1015m 4543Mi idm-65858d8c4c-gpz8d 823m 4401Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 296m 662Mi 10:58:14 DEBUG --- stderr --- 10:58:14 DEBUG 10:58:16 INFO 10:58:16 INFO [loop_until]: kubectl --namespace=xlou top node 10:58:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:58:16 INFO [loop_until]: OK (rc = 0) 10:58:16 DEBUG --- stdout --- 10:58:16 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1350Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 180m 1% 6840Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 177m 1% 6978Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 179m 1% 6861Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 893m 5% 5712Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 611m 3% 2159Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1130m 7% 5797Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 68m 0% 1125Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1629m 10% 14501Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2608m 16% 14512Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1090Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1637m 10% 14441Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 377m 2% 2179Mi 3% 10:58:16 DEBUG --- stderr --- 10:58:16 DEBUG 10:59:14 INFO 10:59:14 INFO [loop_until]: kubectl --namespace=xlou top pods 10:59:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:59:14 INFO [loop_until]: OK (rc = 0) 10:59:14 DEBUG --- stdout --- 10:59:14 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 120m 5753Mi am-55f77847b7-mbr4x 123m 5852Mi am-55f77847b7-mfzwm 121m 5826Mi ds-cts-0 6m 384Mi ds-cts-1 7m 396Mi ds-cts-2 6m 358Mi ds-idrepo-0 2875m 13759Mi ds-idrepo-1 1102m 13824Mi ds-idrepo-2 4422m 13592Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 992m 4546Mi idm-65858d8c4c-gpz8d 802m 4405Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 303m 663Mi 10:59:14 DEBUG --- stderr --- 10:59:14 DEBUG 10:59:16 INFO 10:59:16 INFO [loop_until]: kubectl --namespace=xlou top node 10:59:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 10:59:16 INFO [loop_until]: OK (rc = 0) 10:59:16 DEBUG --- stdout --- 10:59:16 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1354Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 174m 1% 6841Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 178m 1% 6977Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 178m 1% 6858Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 927m 5% 5714Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 606m 3% 2155Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1124m 7% 5796Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 67m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 4766m 29% 14286Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2546m 16% 14444Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 58m 0% 1091Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1282m 8% 14510Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 352m 2% 2180Mi 3% 10:59:16 DEBUG --- stderr --- 10:59:16 DEBUG 11:00:14 INFO 11:00:14 INFO [loop_until]: kubectl 
--namespace=xlou top pods 11:00:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 11:00:14 INFO [loop_until]: OK (rc = 0) 11:00:14 DEBUG --- stdout --- 11:00:14 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 128m 5756Mi am-55f77847b7-mbr4x 123m 5852Mi am-55f77847b7-mfzwm 122m 5829Mi ds-cts-0 5m 384Mi ds-cts-1 7m 395Mi ds-cts-2 6m 358Mi ds-idrepo-0 3323m 13774Mi ds-idrepo-1 1196m 13851Mi ds-idrepo-2 1202m 13758Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 967m 4549Mi idm-65858d8c4c-gpz8d 782m 4408Mi lodemon-6cd9c44bd4-vnqvr 4m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 305m 663Mi 11:00:14 DEBUG --- stderr --- 11:00:14 DEBUG 11:00:16 INFO 11:00:16 INFO [loop_until]: kubectl --namespace=xlou top node 11:00:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 11:00:16 INFO [loop_until]: OK (rc = 0) 11:00:16 DEBUG --- stdout --- 11:00:16 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1351Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 178m 1% 6841Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 178m 1% 6980Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 173m 1% 6860Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 919m 5% 5718Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 600m 3% 2157Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1123m 7% 5798Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1127Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1475m 9% 14443Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2826m 17% 14471Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1090Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1298m 8% 14518Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 362m 2% 2179Mi 3% 11:00:16 DEBUG --- stderr --- 11:00:16 DEBUG 11:01:14 INFO 11:01:14 INFO [loop_until]: kubectl --namespace=xlou top pods 11:01:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 11:01:14 INFO [loop_until]: OK (rc = 0) 11:01:14 DEBUG --- stdout --- 11:01:14 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 122m 5756Mi am-55f77847b7-mbr4x 158m 5855Mi am-55f77847b7-mfzwm 126m 5828Mi ds-cts-0 6m 384Mi ds-cts-1 7m 396Mi ds-cts-2 6m 358Mi ds-idrepo-0 3714m 13798Mi ds-idrepo-1 3821m 13526Mi ds-idrepo-2 1651m 13825Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 1022m 4551Mi idm-65858d8c4c-gpz8d 794m 4411Mi lodemon-6cd9c44bd4-vnqvr 4m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 316m 660Mi 11:01:14 DEBUG --- stderr --- 11:01:14 DEBUG 11:01:16 INFO 11:01:16 INFO [loop_until]: kubectl --namespace=xlou top node 11:01:16 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 11:01:16 INFO [loop_until]: OK (rc = 0) 11:01:16 DEBUG --- stdout --- 11:01:16 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1350Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 181m 1% 6839Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 219m 1% 6982Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 175m 1% 6858Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 912m 5% 5720Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 614m 3% 2159Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1156m 7% 5805Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1508m 9% 14530Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4608m 28% 14499Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 59m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 3736m 23% 14210Mi 
24% gke-xlou-cdm-frontend-a8771548-k40m 376m 2% 2176Mi 3% 11:01:16 DEBUG --- stderr --- 11:01:16 DEBUG 11:02:14 INFO 11:02:14 INFO [loop_until]: kubectl --namespace=xlou top pods 11:02:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 11:02:14 INFO [loop_until]: OK (rc = 0) 11:02:14 DEBUG --- stdout --- 11:02:14 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 120m 5756Mi am-55f77847b7-mbr4x 118m 5855Mi am-55f77847b7-mfzwm 122m 5829Mi ds-cts-0 6m 384Mi ds-cts-1 7m 395Mi ds-cts-2 6m 358Mi ds-idrepo-0 2617m 13848Mi ds-idrepo-1 900m 13220Mi ds-idrepo-2 1410m 13848Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 981m 4556Mi idm-65858d8c4c-gpz8d 793m 4413Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 302m 660Mi 11:02:14 DEBUG --- stderr --- 11:02:14 DEBUG 11:02:17 INFO 11:02:17 INFO [loop_until]: kubectl --namespace=xlou top node 11:02:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 11:02:17 INFO [loop_until]: OK (rc = 0) 11:02:17 DEBUG --- stdout --- 11:02:17 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1352Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 176m 1% 6841Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 169m 1% 6980Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 175m 1% 6861Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 930m 5% 5724Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 610m 3% 2157Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1105m 6% 5806Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1124Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1495m 9% 14560Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3192m 20% 14295Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 953m 5% 13892Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 355m 2% 2179Mi 3% 11:02:17 DEBUG --- stderr --- 11:02:17 DEBUG 11:03:14 INFO 11:03:14 INFO [loop_until]: kubectl --namespace=xlou top pods 11:03:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 11:03:14 INFO [loop_until]: OK (rc = 0) 11:03:14 DEBUG --- stdout --- 11:03:14 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 121m 5756Mi am-55f77847b7-mbr4x 122m 5855Mi am-55f77847b7-mfzwm 122m 5829Mi ds-cts-0 5m 384Mi ds-cts-1 7m 396Mi ds-cts-2 6m 358Mi ds-idrepo-0 3149m 13557Mi ds-idrepo-1 737m 13346Mi ds-idrepo-2 1928m 13771Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 994m 4559Mi idm-65858d8c4c-gpz8d 797m 4416Mi lodemon-6cd9c44bd4-vnqvr 1m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 296m 660Mi 11:03:14 DEBUG --- stderr --- 11:03:14 DEBUG 11:03:17 INFO 11:03:17 INFO [loop_until]: kubectl --namespace=xlou top node 11:03:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 11:03:17 INFO [loop_until]: OK (rc = 0) 11:03:17 DEBUG --- stdout --- 11:03:17 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1353Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 179m 1% 6841Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 177m 1% 6982Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 176m 1% 6865Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 931m 5% 5729Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 617m 3% 2158Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1136m 7% 5811Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 70m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1336m 8% 14487Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 59m 0% 1113Mi 1% 
gke-xlou-cdm-ds-32e4dcb1-b374 2931m 18% 14284Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 61m 0% 1091Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1546m 9% 14035Mi 23% gke-xlou-cdm-frontend-a8771548-k40m 370m 2% 2177Mi 3% 11:03:17 DEBUG --- stderr --- 11:03:17 DEBUG 11:04:14 INFO 11:04:14 INFO [loop_until]: kubectl --namespace=xlou top pods 11:04:14 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 11:04:15 INFO [loop_until]: OK (rc = 0) 11:04:15 DEBUG --- stdout --- 11:04:15 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 125m 5756Mi am-55f77847b7-mbr4x 124m 5855Mi am-55f77847b7-mfzwm 128m 5829Mi ds-cts-0 5m 385Mi ds-cts-1 8m 397Mi ds-cts-2 6m 358Mi ds-idrepo-0 3280m 13759Mi ds-idrepo-1 1313m 13512Mi ds-idrepo-2 2343m 13683Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 1030m 4561Mi idm-65858d8c4c-gpz8d 821m 4419Mi lodemon-6cd9c44bd4-vnqvr 7m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 303m 660Mi 11:04:15 DEBUG --- stderr --- 11:04:15 DEBUG 11:04:17 INFO 11:04:17 INFO [loop_until]: kubectl --namespace=xlou top node 11:04:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 11:04:17 INFO [loop_until]: OK (rc = 0) 11:04:17 DEBUG --- stdout --- 11:04:17 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1356Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 186m 1% 6842Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 180m 1% 6984Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 176m 1% 6863Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 932m 5% 5729Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 624m 3% 2161Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1164m 7% 5819Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 2073m 13% 14370Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1112Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 3511m 22% 14473Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1090Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2198m 13% 14234Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 382m 2% 2177Mi 3% 11:04:17 DEBUG --- stderr --- 11:04:17 DEBUG 11:05:15 INFO 11:05:15 INFO [loop_until]: kubectl --namespace=xlou top pods 11:05:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 11:05:15 INFO [loop_until]: OK (rc = 0) 11:05:15 DEBUG --- stdout --- 11:05:15 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 122m 5756Mi am-55f77847b7-mbr4x 120m 5855Mi am-55f77847b7-mfzwm 157m 5831Mi ds-cts-0 5m 386Mi ds-cts-1 7m 397Mi ds-cts-2 6m 360Mi ds-idrepo-0 2688m 13823Mi ds-idrepo-1 1012m 13466Mi ds-idrepo-2 1265m 13802Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 1003m 4566Mi idm-65858d8c4c-gpz8d 838m 4422Mi lodemon-6cd9c44bd4-vnqvr 2m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 306m 660Mi 11:05:15 DEBUG --- stderr --- 11:05:15 DEBUG 11:05:17 INFO 11:05:17 INFO [loop_until]: kubectl --namespace=xlou top node 11:05:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 11:05:17 INFO [loop_until]: OK (rc = 0) 11:05:17 DEBUG --- stdout --- 11:05:17 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 71m 0% 1358Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 214m 1% 6845Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 176m 1% 6984Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 180m 1% 6861Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 954m 6% 5732Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 588m 3% 2154Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1115m 7% 
5819Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 64m 0% 1131Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1179m 7% 14509Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2719m 17% 14526Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1088Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1062m 6% 14157Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 356m 2% 2178Mi 3% 11:05:17 DEBUG --- stderr --- 11:05:17 DEBUG 11:06:15 INFO 11:06:15 INFO [loop_until]: kubectl --namespace=xlou top pods 11:06:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 11:06:15 INFO [loop_until]: OK (rc = 0) 11:06:15 DEBUG --- stdout --- 11:06:15 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 120m 5759Mi am-55f77847b7-mbr4x 124m 5855Mi am-55f77847b7-mfzwm 121m 5831Mi ds-cts-0 5m 385Mi ds-cts-1 7m 397Mi ds-cts-2 6m 359Mi ds-idrepo-0 4375m 13544Mi ds-idrepo-1 1972m 13645Mi ds-idrepo-2 1556m 13717Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 982m 4571Mi idm-65858d8c4c-gpz8d 778m 4425Mi lodemon-6cd9c44bd4-vnqvr 4m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 297m 660Mi 11:06:15 DEBUG --- stderr --- 11:06:15 DEBUG 11:06:17 INFO 11:06:17 INFO [loop_until]: kubectl --namespace=xlou top node 11:06:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 11:06:17 INFO [loop_until]: OK (rc = 0) 11:06:17 DEBUG --- stdout --- 11:06:17 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 76m 0% 1353Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 173m 1% 6858Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 169m 1% 6979Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 170m 1% 6864Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 926m 5% 5736Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 602m 3% 2160Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1131m 7% 5820Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 67m 0% 1128Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1455m 9% 14451Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 56m 0% 1111Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 4215m 26% 14244Mi 24% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1093Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 2200m 13% 14333Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 368m 2% 2177Mi 3% 11:06:17 DEBUG --- stderr --- 11:06:17 DEBUG 11:07:15 INFO 11:07:15 INFO [loop_until]: kubectl --namespace=xlou top pods 11:07:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 11:07:15 INFO [loop_until]: OK (rc = 0) 11:07:15 DEBUG --- stdout --- 11:07:15 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 122m 5759Mi am-55f77847b7-mbr4x 118m 5857Mi am-55f77847b7-mfzwm 126m 5831Mi ds-cts-0 6m 385Mi ds-cts-1 7m 397Mi ds-cts-2 6m 360Mi ds-idrepo-0 2776m 13221Mi ds-idrepo-1 1291m 13772Mi ds-idrepo-2 1214m 13755Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 1013m 4573Mi idm-65858d8c4c-gpz8d 795m 4428Mi lodemon-6cd9c44bd4-vnqvr 5m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 312m 660Mi 11:07:15 DEBUG --- stderr --- 11:07:15 DEBUG 11:07:17 INFO 11:07:17 INFO [loop_until]: kubectl --namespace=xlou top node 11:07:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 11:07:17 INFO [loop_until]: OK (rc = 0) 11:07:17 DEBUG --- stdout --- 11:07:17 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 74m 0% 1356Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 180m 1% 6841Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 220m 1% 6985Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 176m 1% 6867Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-bf2g 898m 5% 5740Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 591m 3% 2158Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 1128m 7% 5824Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 67m 0% 1130Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 1648m 10% 14467Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1113Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 2774m 17% 13910Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 60m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1167m 7% 14463Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 373m 2% 2178Mi 3% 11:07:17 DEBUG --- stderr --- 11:07:17 DEBUG 11:08:15 INFO 11:08:15 INFO [loop_until]: kubectl --namespace=xlou top pods 11:08:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 11:08:15 INFO [loop_until]: OK (rc = 0) 11:08:15 DEBUG --- stdout --- 11:08:15 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 7m 5759Mi am-55f77847b7-mbr4x 8m 5857Mi am-55f77847b7-mfzwm 14m 5831Mi ds-cts-0 6m 385Mi ds-cts-1 9m 397Mi ds-cts-2 5m 359Mi ds-idrepo-0 319m 13144Mi ds-idrepo-1 1193m 13615Mi ds-idrepo-2 561m 13723Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 5m 4574Mi idm-65858d8c4c-gpz8d 8m 4428Mi lodemon-6cd9c44bd4-vnqvr 6m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 67m 660Mi 11:08:15 DEBUG --- stderr --- 11:08:15 DEBUG 11:08:17 INFO 11:08:17 INFO [loop_until]: kubectl --namespace=xlou top node 11:08:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 11:08:17 INFO [loop_until]: OK (rc = 0) 11:08:17 DEBUG --- stdout --- 11:08:17 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 75m 0% 1348Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 6845Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 60m 0% 6984Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 59m 0% 6864Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 5741Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 122m 0% 2163Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 66m 0% 5823Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 66m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 418m 2% 14432Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1109Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 228m 1% 13838Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 56m 0% 1091Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 1370m 8% 14298Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 125m 0% 1682Mi 2% 11:08:17 DEBUG --- stderr --- 11:08:17 DEBUG 11:09:15 INFO 11:09:15 INFO [loop_until]: kubectl --namespace=xlou top pods 11:09:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 11:09:15 INFO [loop_until]: OK (rc = 0) 11:09:15 DEBUG --- stdout --- 11:09:15 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 6m 5759Mi am-55f77847b7-mbr4x 6m 5857Mi am-55f77847b7-mfzwm 9m 5831Mi ds-cts-0 6m 386Mi ds-cts-1 7m 397Mi ds-cts-2 5m 359Mi ds-idrepo-0 10m 13144Mi ds-idrepo-1 12m 13519Mi ds-idrepo-2 13m 13723Mi end-user-ui-6845bc78c7-jrqhg 1m 3Mi idm-65858d8c4c-5vh78 4m 4573Mi idm-65858d8c4c-gpz8d 8m 4428Mi lodemon-6cd9c44bd4-vnqvr 6m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 1m 155Mi 11:09:15 DEBUG --- stderr --- 11:09:15 DEBUG 11:09:17 INFO 11:09:17 INFO [loop_until]: kubectl --namespace=xlou top node 11:09:17 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 11:09:17 INFO [loop_until]: OK (rc = 0) 11:09:17 DEBUG --- stdout --- 11:09:17 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1353Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 63m 0% 6843Mi 11% 
gke-xlou-cdm-default-pool-f05840a3-976h 60m 0% 6988Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 61m 0% 6864Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 76m 0% 5738Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 121m 0% 2162Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 5826Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 67m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 68m 0% 14435Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1110Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 59m 0% 13839Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 53m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 66m 0% 14203Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 65m 0% 1681Mi 2% 11:09:17 DEBUG --- stderr --- 11:09:17 DEBUG 127.0.0.1 - - [12/Aug/2023 11:09:21] "GET /monitoring/average?start_time=23-08-12_09:38:50&stop_time=23-08-12_10:07:21 HTTP/1.1" 200 - 11:10:15 INFO 11:10:15 INFO [loop_until]: kubectl --namespace=xlou top pods 11:10:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 11:10:15 INFO [loop_until]: OK (rc = 0) 11:10:15 DEBUG --- stdout --- 11:10:15 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 4Mi am-55f77847b7-2vpdz 10m 5765Mi am-55f77847b7-mbr4x 6m 5857Mi am-55f77847b7-mfzwm 7m 5831Mi ds-cts-0 5m 385Mi ds-cts-1 7m 397Mi ds-cts-2 6m 359Mi ds-idrepo-0 10m 13145Mi ds-idrepo-1 12m 13519Mi ds-idrepo-2 12m 13723Mi end-user-ui-6845bc78c7-jrqhg 1m 4Mi idm-65858d8c4c-5vh78 4m 4573Mi idm-65858d8c4c-gpz8d 7m 4428Mi lodemon-6cd9c44bd4-vnqvr 6m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 2m 155Mi 11:10:15 DEBUG --- stderr --- 11:10:15 DEBUG 11:10:18 INFO 11:10:18 INFO [loop_until]: kubectl --namespace=xlou top node 11:10:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 11:10:18 INFO [loop_until]: OK (rc = 0) 11:10:18 DEBUG --- stdout --- 11:10:18 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 81m 0% 1361Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 67m 0% 6844Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 71m 0% 6983Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 74m 0% 6870Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 79m 0% 5738Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 140m 0% 2148Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 74m 0% 5825Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1129Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 73m 0% 14434Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 112m 0% 1188Mi 2% gke-xlou-cdm-ds-32e4dcb1-b374 194m 1% 13839Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1090Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 68m 0% 14203Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 71m 0% 1681Mi 2% 11:10:18 DEBUG --- stderr --- 11:10:18 DEBUG 11:11:15 INFO 11:11:15 INFO [loop_until]: kubectl --namespace=xlou top pods 11:11:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 11:11:15 INFO [loop_until]: OK (rc = 0) 11:11:15 DEBUG --- stdout --- 11:11:15 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 5Mi am-55f77847b7-2vpdz 7m 5764Mi am-55f77847b7-mbr4x 6m 5857Mi am-55f77847b7-mfzwm 8m 5831Mi ds-cts-0 6m 386Mi ds-cts-1 7m 397Mi ds-cts-2 5m 359Mi ds-idrepo-0 246m 13144Mi ds-idrepo-1 103m 13519Mi ds-idrepo-2 84m 13723Mi end-user-ui-6845bc78c7-jrqhg 1m 5Mi idm-65858d8c4c-5vh78 4m 4573Mi idm-65858d8c4c-gpz8d 7m 4428Mi lodemon-6cd9c44bd4-vnqvr 6m 66Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 461m 351Mi 11:11:15 DEBUG --- stderr --- 11:11:15 DEBUG 11:11:18 INFO 11:11:18 INFO [loop_until]: kubectl --namespace=xlou top node 11:11:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 11:11:18 INFO 
[loop_until]: OK (rc = 0) 11:11:18 DEBUG --- stdout --- 11:11:18 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 78m 0% 1355Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 61m 0% 6845Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 60m 0% 6984Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 61m 0% 6871Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 70m 0% 5740Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 125m 0% 2157Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 67m 0% 5826Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 65m 0% 1133Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 131m 0% 14441Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 57m 0% 1115Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 124m 0% 13842Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 54m 0% 1089Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 163m 1% 14202Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 540m 3% 1894Mi 3% 11:11:18 DEBUG --- stderr --- 11:11:18 DEBUG 11:12:15 INFO 11:12:15 INFO [loop_until]: kubectl --namespace=xlou top pods 11:12:15 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 11:12:15 INFO [loop_until]: OK (rc = 0) 11:12:15 DEBUG --- stdout --- 11:12:15 DEBUG NAME CPU(cores) MEMORY(bytes) admin-ui-587fc66dd5-9h5wb 1m 5Mi am-55f77847b7-2vpdz 6m 5764Mi am-55f77847b7-mbr4x 6m 5857Mi am-55f77847b7-mfzwm 7m 5831Mi ds-cts-0 5m 386Mi ds-cts-1 7m 397Mi ds-cts-2 4m 359Mi ds-idrepo-0 9m 13145Mi ds-idrepo-1 10m 13519Mi ds-idrepo-2 13m 13724Mi end-user-ui-6845bc78c7-jrqhg 1m 5Mi idm-65858d8c4c-5vh78 5m 4573Mi idm-65858d8c4c-gpz8d 7m 4428Mi lodemon-6cd9c44bd4-vnqvr 4m 67Mi login-ui-74d6fb46c-jtvtc 1m 3Mi overseer-0-64679cf868-xscwh 415m 555Mi 11:12:15 DEBUG --- stderr --- 11:12:15 DEBUG 11:12:18 INFO 11:12:18 INFO [loop_until]: kubectl --namespace=xlou top node 11:12:18 INFO [loop_until]: (max_time=180, interval=5, expected_rc=[0] 11:12:18 INFO [loop_until]: OK (rc = 0) 11:12:18 DEBUG --- stdout --- 11:12:18 DEBUG NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-xlou-cdm-default-pool-f05840a3-2nsn 77m 0% 1359Mi 2% gke-xlou-cdm-default-pool-f05840a3-5pbc 62m 0% 6846Mi 11% gke-xlou-cdm-default-pool-f05840a3-976h 61m 0% 6986Mi 11% gke-xlou-cdm-default-pool-f05840a3-9p4b 63m 0% 6871Mi 11% gke-xlou-cdm-default-pool-f05840a3-bf2g 72m 0% 5741Mi 9% gke-xlou-cdm-default-pool-f05840a3-h81k 132m 0% 2157Mi 3% gke-xlou-cdm-default-pool-f05840a3-tnc9 69m 0% 5824Mi 9% gke-xlou-cdm-ds-32e4dcb1-1l6p 67m 0% 1141Mi 1% gke-xlou-cdm-ds-32e4dcb1-4z9d 66m 0% 14435Mi 24% gke-xlou-cdm-ds-32e4dcb1-8bsn 58m 0% 1116Mi 1% gke-xlou-cdm-ds-32e4dcb1-b374 60m 0% 13841Mi 23% gke-xlou-cdm-ds-32e4dcb1-n920 57m 0% 1087Mi 1% gke-xlou-cdm-ds-32e4dcb1-x4wx 59m 0% 14203Mi 24% gke-xlou-cdm-frontend-a8771548-k40m 736m 4% 2164Mi 3% 11:12:18 DEBUG --- stderr --- 11:12:18 DEBUG 11:13:10 INFO Finished: True 11:13:10 INFO Waiting for threads to register finish flag 11:13:18 INFO Done. Have a nice day! :) 127.0.0.1 - - [12/Aug/2023 11:13:18] "GET /monitoring/stop HTTP/1.1" 200 - 11:13:21 INFO File /tmp/lodemon_data-23-08-12_08:35:20/Cpu_cores_used_per_pod.json does not exist. Skipping... 11:13:21 INFO File /tmp/lodemon_data-23-08-12_08:35:20/Memory_usage_per_pod.json does not exist. Skipping... 11:13:21 INFO File /tmp/lodemon_data-23-08-12_08:35:20/Disk_tps_read_per_pod.json does not exist. Skipping... 11:13:21 INFO File /tmp/lodemon_data-23-08-12_08:35:20/Disk_tps_writes_per_pod.json does not exist. Skipping... 11:13:21 INFO File /tmp/lodemon_data-23-08-12_08:35:20/Cpu_cores_used_per_node.json does not exist. Skipping... 
11:13:21 INFO File /tmp/lodemon_data-23-08-12_08:35:20/Memory_usage_used_per_node.json does not exist. Skipping... 11:13:21 INFO File /tmp/lodemon_data-23-08-12_08:35:20/Cpu_iowait_per_node.json does not exist. Skipping... 11:13:21 INFO File /tmp/lodemon_data-23-08-12_08:35:20/Network_receive_per_node.json does not exist. Skipping... 11:13:21 INFO File /tmp/lodemon_data-23-08-12_08:35:20/Network_transmit_per_node.json does not exist. Skipping... 11:13:21 INFO File /tmp/lodemon_data-23-08-12_08:35:20/am_cts_task_count_token_session.json does not exist. Skipping... 11:13:21 INFO File /tmp/lodemon_data-23-08-12_08:35:20/am_authentication_rate.json does not exist. Skipping... 11:13:21 INFO File /tmp/lodemon_data-23-08-12_08:35:20/ds_db_cache_misses_internal_nodes(backend=amCts).json does not exist. Skipping... 11:13:21 INFO File /tmp/lodemon_data-23-08-12_08:35:20/ds_db_cache_misses_internal_nodes(backend=amIdentityStore).json does not exist. Skipping... 11:13:21 INFO File /tmp/lodemon_data-23-08-12_08:35:20/ds_db_cache_misses_internal_nodes(backend=cfgStore).json does not exist. Skipping... 11:13:21 INFO File /tmp/lodemon_data-23-08-12_08:35:20/ds_db_cache_misses_internal_nodes(backend=idmRepo).json does not exist. Skipping... 11:13:21 INFO File /tmp/lodemon_data-23-08-12_08:35:20/am_authentication_count_per_pod.json does not exist. Skipping... 11:13:21 INFO File /tmp/lodemon_data-23-08-12_08:35:20/Cts_reaper_Deletion_count.json does not exist. Skipping... 11:13:21 INFO File /tmp/lodemon_data-23-08-12_08:35:20/AM_oauth2_authorization_codes.json does not exist. Skipping... 11:13:21 INFO File /tmp/lodemon_data-23-08-12_08:35:20/ds_backend_entries_deleted_amCts.json does not exist. Skipping... 11:13:21 INFO File /tmp/lodemon_data-23-08-12_08:35:20/ds_pods_replication_delay.json does not exist. Skipping... 11:13:21 INFO File /tmp/lodemon_data-23-08-12_08:35:20/am_cts_reaper_cache_size.json does not exist. Skipping... 11:13:21 INFO File /tmp/lodemon_data-23-08-12_08:35:20/am_cts_reaper_search_seconds_total.json does not exist. Skipping... 11:13:21 INFO File /tmp/lodemon_data-23-08-12_08:35:20/ds_replication_replica_replayed_updates_conflicts_resolved.json does not exist. Skipping... 11:13:21 INFO File /tmp/lodemon_data-23-08-12_08:35:20/node_disk_read_bytes_total.json does not exist. Skipping... 11:13:21 INFO File /tmp/lodemon_data-23-08-12_08:35:20/node_disk_written_bytes_total.json does not exist. Skipping... 11:13:21 INFO File /tmp/lodemon_data-23-08-12_08:35:20/ds_backend_entry_count.json does not exist. Skipping... 11:13:21 INFO File /tmp/lodemon_data-23-08-12_08:35:20/node_disk_io_time_seconds_total.json does not exist. Skipping... 127.0.0.1 - - [12/Aug/2023 11:13:23] "GET /monitoring/process HTTP/1.1" 200 -
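The log above shows lodemon's sampling loop: roughly once a minute it wraps "kubectl --namespace=xlou top pods" and "kubectl --namespace=xlou top node" in a loop_until call (max_time=180, interval=5, expected_rc=[0]) and records stdout/stderr. The lodemon_run.py source itself is not part of this log, so the following is only a minimal Python sketch of that polling pattern; loop_until's parameters are taken from the log output, while monitor(), its period, and its duration are assumptions.

import subprocess
import time

# Minimal sketch, not lodemon's actual implementation: retry a shell command
# every `interval` seconds until its return code is in `expected_rc` or
# `max_time` seconds have elapsed (mirrors the "[loop_until]" entries above).
def loop_until(cmd, max_time=180, interval=5, expected_rc=(0,)):
    deadline = time.monotonic() + max_time
    while True:
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        if result.returncode in expected_rc:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"no rc in {expected_rc} within {max_time}s: {cmd}")
        time.sleep(interval)

# Sample pod and node usage once per minute, the cadence visible in the log.
# The namespace comes from the log; period and duration are assumptions.
def monitor(namespace="xlou", period=60, duration=3600):
    stop_at = time.monotonic() + duration
    while time.monotonic() < stop_at:
        pods = loop_until(f"kubectl --namespace={namespace} top pods")
        nodes = loop_until(f"kubectl --namespace={namespace} top node")
        print(pods.stdout)
        print(nodes.stdout)
        time.sleep(period)

if __name__ == "__main__":
    monitor()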
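At shutdown (11:13:10 onward) the monitor waits for its threads, then the /monitoring/process step walks the expected metric JSON files under /tmp/lodemon_data-23-08-12_08:35:20/ and skips any that were never written, which is what produces the "does not exist. Skipping..." lines. A hypothetical sketch of that skip-if-missing step, assuming a process_file() helper; only the file names are copied from the log.

import os

# Hypothetical sketch of the post-run processing implied by the
# "does not exist. Skipping..." messages: only files that were actually
# written get processed. The list below is a subset of the names in the log;
# process_file() and whatever aggregation it performs are assumptions.
EXPECTED_FILES = [
    "Cpu_cores_used_per_pod.json",
    "Memory_usage_per_pod.json",
    "Cpu_cores_used_per_node.json",
    "ds_backend_entry_count.json",
]

def process_file(path):
    # Placeholder: the real processing is not shown in the log.
    print(f"Processing {path}")

def process_results(data_dir="/tmp/lodemon_data-23-08-12_08:35:20"):
    for name in EXPECTED_FILES:
        path = os.path.join(data_dir, name)
        if not os.path.exists(path):
            print(f"File {path} does not exist. Skipping...")
            continue
        process_file(path)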